We are thrilled to announce Mosscap, the much-anticipated spin-off of our world-renowned AI security game, Gandalf, developed together with our partners at the AI Village for DEFCON.
Created by the talented Lakera AI team, this fun AI security challenge is now available to attendees of DEFCON, the AI Village, and the GRT Challenge. Mosscap gives participants a hands-on opportunity to sharpen their AI security expertise and learn to defend against prompt injection vulnerabilities.
Building on the success of Gandalf, which has earned global recognition and acclaim (over 20 million interactions to date), Mosscap takes AI security gaming to new heights. Inspired by the captivating "Monk and Robot" series, Lakera's team has worked tirelessly to create a fun and educational experience that sheds light on the prompt injection vulnerabilities present in Large Language Models (LLMs) such as OpenAI's ChatGPT, Anthropic's Claude, and Google's Bard.
Prompt injection is the number one vulnerability in the OWASP Top 10 for LLM Applications, underscoring how important it is to equip practitioners with practical knowledge of LLM security risks.
In line with our commitment to the research community, the data collected through Mosscap will be made available for research purposes, further advancing the understanding and mitigation of prompt injection vulnerabilities in LLMs.
Get ready to embark on an exciting AI security adventure with Mosscap! Stay tuned for updates and further details as we continue to make AI security, and the education around it, accessible and engaging for all.
Lakera is an AI security company based in Zurich, Switzerland. Founded by former Google, Meta, and Daedalean engineers in 2021, the company is on a mission to put safety and security expertise in any AI developer’s toolkit.
For media inquiries, please contact: firstname.lastname@example.org