Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
DEFCON Welcomes Mosscap: Lakeraās AI Security Game to Tackle Top LLM Vulnerabilities
Get ready to embark on an exciting AI security adventure with Mosscap! Inspired by the captivating "Monk and Robot" series, Lakeraās team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in LLMs.
We are thrilled to announce Mosscap, the much-anticipated spin-off of our world-renowned AI security game, Gandalf, developed together with our partners - the AI Village for DEFCON.
Created by the talented Lakera AI team, this fun AI security challenge is now available for DEFCON, the AI Village, and the GRT Challenge attendees. Mosscap promises to provide participants with an invaluable opportunity to enhance their expertise in AI security and effectively safeguard against prompt injection vulnerabilities.
Building on the success of Gandalf, which has garnered global recognition and acclaim (over 20 million interactions to date), Mosscap takes AI security gaming to new heights. Inspired by the captivating "Monk and Robot" series, Lakeraās team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in Large Language Models (LLMs) like OpenAIās ChatGPT, Anthropicās Claude, or Googleās Bard.
Prompt injection stands out as the number one vulnerability on OWASP's Top 10 LLM Vulnerabilities Report, underscoring the critical importance of this game in equipping participants with essential knowledge of LLM security risks.
Aligned with our commitment to the research community, we are pleased to announce that the data collected through Mosscap will be made available for research purposes, further advancing the understanding and mitigation of prompt injection vulnerabilities in LLMs.
Get ready to embark on an exciting AI security adventure with Mosscap! Stay tuned for updates and further details as we continue to make AI security and corresponding education accessible and engaging for all.
Lakera is an AI security company based in Zurich, Switzerland. Founded by former Google, Meta, and Daedalean engineers in 2021, the company is on a mission to put safety and security expertise in any AI developerās toolkit.
Ninety-five percent of cybersecurity experts express low confidence in GenAI security measures while red team data shows anyone can easily hack GenAI models
Our findings reveal a rapid adoption of GenAI technologies, with nearly 90% of organizations actively implementing or planning to explore LLM use cases. Despite this high adoption rate, only about 5% of organizations feel highly confident in their AI security preparedness.