DEFCON Welcomes Mosscap: Lakera’s AI Security Game to Tackle Top LLM Vulnerabilities

Get ready to embark on an exciting AI security adventure with Mosscap! Inspired by the captivating "Monk and Robot" series, Lakera’s team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in LLMs.

Lakera Team
November 14, 2023
Hide table of contents
Show table of contents

We are thrilled to announce Mosscap, the much-anticipated spin-off of our world-renowned AI security game, Gandalf, developed together with our partners - the AI Village for DEFCON.

Created by the talented Lakera AI team, this fun AI security challenge is now available for DEFCON, the AI Village, and the GRT Challenge attendees. Mosscap promises to provide participants with an invaluable opportunity to enhance their expertise in AI security and effectively safeguard against prompt injection vulnerabilities.

Mosscap AI security game

Building on the success of Gandalf, which has garnered global recognition and acclaim (over 20 million interactions to date), Mosscap takes AI security gaming to new heights. Inspired by the captivating "Monk and Robot" series, Lakera’s team has worked tirelessly to create a fun and educational experience, shedding light on prompt injection vulnerabilities present in Large Language Models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Bard.

Prompt injection stands out as the number one vulnerability on OWASP's Top 10 LLM Vulnerabilities Report, underscoring the critical importance of this game in equipping participants with essential knowledge of LLM security risks.

**Pro Tip: Read OWASP Top 10 for Large Language Model Applications Explained: A Practical Guide**

Aligned with our commitment to the research community, we are pleased to announce that the data collected through Mosscap will be made available for research purposes, further advancing the understanding and mitigation of prompt injection vulnerabilities in LLMs.

Get ready to embark on an exciting AI security adventure with Mosscap! Stay tuned for updates and further details as we continue to make AI security and corresponding education accessible and engaging for all.

For more information and sneak peeks, please visit Mosscap's official website and follow us on our Twitter/X and LinkedIn.

About Lakera

Lakera is an AI security company based in Zurich, Switzerland. Founded by former Google, Meta, and Daedalean engineers in 2021, the company is on a mission to put safety and security expertise in any AI developer’s toolkit.

For media inquiries, please contact: info@lakera.ai

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download
You might be interested
Lakera Recognized in Gartner's GenAI Security Risks Report
Lakera Recognized in Gartner's GenAI Security Risks Report
1
min read
•

Lakera Recognized in Gartner's GenAI Security Risks Report

Lakera Recognized in Gartner's GenAI Security Risks Report

Gartner's report on GenAI security risks recognizes Lakera's solutions.
Lakera Featured in a NIST Report on AI Security
Lakera Featured in a NIST Report on AI Security
1
min read
•

Lakera Featured in a NIST Report on AI Security

Lakera Featured in a NIST Report on AI Security

Lakera's technology has been recognized by NIST in their report on Adversarial Machine Learning.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.