Last week, the Lakera team attended DEFCON31, one of the most prominent cybersecurity conference held in Las Vegas from August 10th to 13th. As official sponsors of the AI Village, we also contributed to the community by creating Mosscap - an AI security game specifically tailored for DEFCON31.

In this brief article, we'd like to present our key event highlights and insights on the state of AI Security. To set the stage, let's start with a brief overview of DEFCON itself.
DEFCON 31 in Numbers - Overview
DEF CON is renowned as the world's largest and longest running underground hacking conference. Here are a few facts worth noting:
- This yearās edition attracted more than 30.000 attendees.
- There were 31 villages in total. These designated spaces focus on specific topics, akin to mini conferences within the main event. Many villages host their own talks and hands-on activities, such as CTFs, labs, or challenges (like the GRT challenge in which Lakera participated).
- Attendees could choose from over 900 talks and workshops (official & unofficial). The conference comprised more than 5 main stages, featuring a lineup of more than 130 speakers.
- The GRT Challenge within the AI Village attracted over 2200 participants, with around 1000 individuals queuing up for the challenge on Friday.
- Lakera's AI security online game, Mosscap, designed as a warm-up before the GRT challenge engaged thousands of users who tried to trick Mosscap with 130k prompt attacks. Ā Dive deeper into it here: DEFCON Welcomes Mosscap: Lakeraās AI Security Game to Tackle Top LLM Vulnerabilities.
- The AI Village enjoyed sponsorship from more than 20 organizations, including Lakera, the White House, HuggingFace, Scale, Cohere, SeedAI, Google, Meta, and more.

DEFCON 31 Highlights - AI Village
We spent most of our time hanging out in the AI Village. This consisted of two rooms: a room for talks and a room for the Generative Red Team (GRT) challenge - which was the largest ever in-person assessment for any group of AI models.
Check out Generative Red Team Challenge here.
**š”ļø Discover how Lakeraās Red Teaming solutions can safeguard your AI applications with automated security assessments, as well as identifying and addressing vulnerabilities effectively.**
Generative Red Team (GRT) Challenge
Within the AI village, participants had the opportunity to take on the GRT challenge ā a captivating competition offering 50 minutes to "hack" LLMs from a spectrum of providers: Cohere, Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability. The evaluation was facilitated through a platform developed by Scale AI.

This event garnered support from the White House Office of Science, Technology, and Policy, the National Science Foundationās Computer and Information Science and Engineering (CISE) Directorate, and the Congressional AI Caucus.
As previously mentioned, Lakera introduced Mosscap, a spin-off of Gandalf designed to equip participants with insights into prompt injection as they navigated the challenge.
At the GRT, there were several tasks you could choose from:
- Have the model make up a non-existent law
- Have the model produce output thatās biased against a minority
- Have the model produce misinformation when prompted in a non-English language
- Prompt injection
- Get the model to contradict itself
- Make the model do bad math
- And moreā¦
The submissions are graded manually, and you can get points for each (challenge, model) pair. There were about 50 laptops set up for the contest, along with a dynamic leaderboard. The results of the challenge will be announced in approximately a month's time.

Notable AI Security Trends: The State of LLM Security
Here are some of our key learnings from DEFCON and highlights regarding the state of AI security.
Prioritizing LLM Security for Enterprises
Enterprises are honing in on LLM security as a primary concern. The spotlight is on prompt injections prevention, defense strategies against data leakage, and safeguarding against misbehavior and model misuse. Companies are proactively seeking innovative solutions to fortify themselves against evolving LLM-based threats.
Illuminating AI Security Threats through Educational Challenges
DEFCON was the ideal stage for initiatives like GRT, Mosscap & Gandalf to shine a light on specific types of security risks.
Craig Martell, Chief Digital and AI Officer at the U.S. Defense Department, conveyed a resounding message by saying:
Iām here today because I need hackers everywhere to tell us how this stuff breaks. [ā¦] Because if we donāt know how it breaks, we canāt get clear on the acceptability conditions and if we canāt get clear on the acceptability conditions we canāt push industry towards building the right thing, so that we can deploy it and use it.ā
He underlined the need for in-depth research into LLM vulnerabilities, emphasizing its vital role in shaping industry standards and the deployment of secure AI systems. Martell's call for collaboration with hackers underscores the pursuit of comprehensive AI security.
Introducing AI Regulations Becomes Pivotal
DEFCON discussions resonated with policymakers in the US and EU, emphasizing the importance of regulations governing foundation models.
Nicolas Moƫs, Director of The Future Society - a non-profit committed to European AI Governance, illuminated critical concerns by saying:
āSome of the biggest risks tied to the evolution of foundation models involve inherent biases embedded within them, and the intricate nature of these models, which can give rise to unforeseen behaviors causing harm. It is imperative that we establish more effective safeguards to counteract these potential outcomes.ā
The call for enhanced measures to mitigate these risks underscores the paramount importance of responsible AI development.
Emergence of a Key Category: AISec in the Evolving AI Landscape
As the AI landscape continues to evolve, a pivotal category emergesāAISec. With organizations increasingly harnessing the power of LLMs to drive their internal systems, the implementation of robust safety measures becomes critical. These measures are crucial for safeguarding applications against a spectrum of threats, including prompt injections, hallucinations, and data leakage, among others.
Speaking of whichā¦
Lakera Guard BETA launches at DEFCON31
Lastly, DEFCON also signaled the BETA launch of our new product - Lakera Guard, a powerful API designed to safeguard LLMs.
Check out our official announcement here: An Overview of Lakera Guard ā Bringing Enterprise-Grade Security to LLMs with Just One Line of Code
The #AISec community at DEFCON responded with overwhelmingly positive feedback, and we're delighted to share that this product launch sparked numerous discussions about AI security and AI regulations with EU and US policymakers.
You can try Lakera Guard Playground and sign up here: https://platform.lakera.ai/

Finally, here are a couple of pictures of fellow white-hat hackers who decided to join forces with the Lakera team on our quest to protect LLM models š


And thatās it! We hope to see you next year at DEFCON32!
