Lakera at DEFCON31: Trends, Highlights & the State of AI Security

Here are our key takeaways from DEFCON31—one of the most prominent cybersecurity conference held in Las Vegas from August 10th to 13th 2023.

Lakera Team
November 12, 2023

Last week, the Lakera team attended DEFCON31, one of the most prominent cybersecurity conference held in Las Vegas from August 10th to 13th. As official sponsors of the AI Village, we also contributed to the community by creating Mosscap - an AI security game specifically tailored for DEFCON31.

Lakera team in the AI village

In this brief article, we'd like to present our key event highlights and insights on the state of AI Security. To set the stage, let's start with a brief overview of DEFCON itself.

DEFCON 31 in Numbers - Overview

DEF CON is renowned as the world's largest and longest running underground hacking conference. Here are a few facts worth noting:

  • This year’s edition attracted more than 30.000 attendees.
  • There were 31 villages in total. These designated spaces focus on specific topics, akin to mini conferences within the main event. Many villages host their own talks and hands-on activities, such as CTFs, labs, or challenges (like the GRT challenge in which Lakera participated).
  • Attendees could choose from over 900 talks and workshops (official & unofficial). The conference comprised more than 5 main stages, featuring a lineup of more than 130 speakers.
  • The GRT Challenge within the AI Village attracted over 2200 participants, with around 1000 individuals queuing up for the challenge on Friday.
  • Lakera's AI security online game, Mosscap, designed as a warm-up before the GRT challenge engaged thousands of users who tried to trick Mosscap with 130k prompt attacks.  Dive deeper into it here: DEFCON Welcomes Mosscap: Lakera’s AI Security Game to Tackle Top LLM Vulnerabilities.
  • The AI Village enjoyed sponsorship from more than 20 organizations, including Lakera, the White House, HuggingFace, Scale, Cohere, SeedAI, Google, Meta, and more.
Mosscap and Lakera presented on the main AI stage

DEFCON 31 Highlights - AI Village

We spent most of our time hanging out in the AI Village. This consisted of two rooms: a room for talks and a room for the Generative Red Team (GRT) challenge - which was the largest ever in-person assessment for any group of AI models.

Check out Generative Red Team Challenge here.

Generative Red Team (GRT) Challenge

Within the AI village, participants had the opportunity to take on the GRT challenge – a captivating competition offering 50 minutes to "hack" LLMs from a spectrum of providers: Cohere, Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability. The evaluation was facilitated through a platform developed by Scale AI.

The GRT challenge participants

This event garnered support from the White House Office of Science, Technology, and Policy, the National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate, and the Congressional AI Caucus.

As previously mentioned, Lakera introduced Mosscap, a spin-off of Gandalf designed to equip participants with insights into prompt injection as they navigated the challenge.

At the GRT, there were several tasks you could choose from:

  • Have the model make up a non-existent law
  • Have the model produce output that’s biased against a minority
  • Have the model produce misinformation when prompted in a non-English language
  • Prompt injection
  • Get the model to contradict itself
  • Make the model do bad math
  • And more…

The submissions are graded manually, and you can get points for each (challenge, model) pair. There were about 50 laptops set up for the contest, along with a dynamic leaderboard. The results of the challenge will be announced in approximately a month's time.

Lakera team with Dr. Rumman Chowdry - a Co-Founder of the AI Village and the GRT Challenge

Notable AI Security Trends: The State of LLM Security

Here are some of our key learnings from DEFCON and highlights regarding the state of AI security.

Prioritizing LLM Security for Enterprises

Enterprises are honing in on LLM security as a primary concern. The spotlight is on prompt injections prevention, defense strategies against data leakage, and safeguarding against misbehavior and model misuse. Companies are proactively seeking innovative solutions to fortify themselves against evolving LLM-based threats.

Illuminating AI Security Threats through Educational Challenges

DEFCON was the ideal stage for initiatives like GRT, Mosscap & Gandalf to shine a light on specific types of security risks.

Craig Martell, Chief Digital and AI Officer at the U.S. Defense Department, conveyed a resounding message by saying:

I’m here today because I need hackers everywhere to tell us how this stuff breaks. […] Because if we don’t know how it breaks, we can’t get clear on the acceptability conditions and if we can’t get clear on the acceptability conditions we can’t push industry towards building the right thing, so that we can deploy it and use it.”

He underlined the need for in-depth research into LLM vulnerabilities, emphasizing its vital role in shaping industry standards and the deployment of secure AI systems. Martell's call for collaboration with hackers underscores the pursuit of comprehensive AI security.

Introducing AI Regulations Becomes Pivotal

DEFCON discussions resonated with policymakers in the US and EU, emphasizing the importance of regulations governing foundation models.

Nicolas Moës, Director of The Future Society - a non-profit committed to European AI Governance, illuminated critical concerns by saying:

“Some of the biggest risks tied to the evolution of foundation models involve inherent biases embedded within them, and the intricate nature of these models, which can give rise to unforeseen behaviors causing harm. It is imperative that we establish more effective safeguards to counteract these potential outcomes.”

The call for enhanced measures to mitigate these risks underscores the paramount importance of responsible AI development.

Emergence of a Key Category: AISec in the Evolving AI Landscape

As the AI landscape continues to evolve, a pivotal category emerges—AISec. With organizations increasingly harnessing the power of LLMs to drive their internal systems, the implementation of robust safety measures becomes critical. These measures are crucial for safeguarding applications against a spectrum of threats, including prompt injections, hallucinations, and data leakage, among others.

Speaking of which…

Lakera Guard BETA launches at DEFCON31

Lastly, DEFCON also signaled the BETA launch of our new product - Lakera Guard, a powerful API designed to safeguard LLMs.

Check out our official announcement here: An Overview of Lakera Guard – Bringing Enterprise-Grade Security to LLMs with Just One Line of Code

The #AISec community at DEFCON responded with overwhelmingly positive feedback, and we're delighted to share that this product launch sparked numerous discussions about AI security and AI regulations with EU and US policymakers.

You can try Lakera Guard Playground and sign up here: https://platform.lakera.ai/

Lakera Guard product launch (BETA)

Finally, here are a couple of pictures of fellow white-hat hackers who decided to join forces with the Lakera team on our quest to protect LLM models 😜

Lakera white-hat hackers at SeedAI party
Gandalf the White at SeedAI party

And that’s it! We hope to see you next year at DEFCON32!

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

Read LLM Security Playbook
Learn about the most common LLM threats and how to prevent them.
Download
You might be interested
Lakera: Leading the Charge in AI Security as a Top Swiss Startup to Watch in 2024‍
Lakera: Leading the Charge in AI Security as a Top Swiss Startup to Watch in 2024‍
2
min read
Announcements

Lakera Selected as a Swiss Startup to Keep an Eye on in 2024

Lakera Selected as a Swiss Startup to Keep an Eye on in 2024

Lakera named among Switzerland’s top startups for 2024, highlighting our focus on secure AI.
We are proud to announce that we have achieved SOC 2 Type I compliance for Lakera Guard in accordance with the American Institute of Certified Public Accountants (AICPA) standards for SOC for Service Organizations, also known as SSAE 18. We have successfully completed a AICPA SOC2 Type I Audit performed by Prescient Assurance.
We are proud to announce that we have achieved SOC 2 Type I compliance for Lakera Guard in accordance with the American Institute of Certified Public Accountants (AICPA) standards for SOC for Service Organizations, also known as SSAE 18. We have successfully completed a AICPA SOC2 Type I Audit performed by Prescient Assurance.
3
min read
Announcements

Announcing Lakera's SOC 2 Compliance

Announcing Lakera's SOC 2 Compliance

We are proud to announce that we have achieved SOC 2 Type I compliance for Lakera Guard in accordance with the American Institute of Certified Public Accountants (AICPA) standards for SOC for Service Organizations, also known as SSAE 18. We have successfully completed a AICPA SOC2 Type I Audit performed by Prescient Assurance.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.