Lakera at DEFCON31: Trends, Highlights & the State of AI Security

Here are our key takeaways from DEFCON31ā€”one of the most prominent cybersecurity conference held in Las Vegas from August 10th to 13th 2023.

Lakera Team
January 22, 2024
Hide table of contents
Show table of contents

Last week, the Lakera team attended DEFCON31, one of the most prominent cybersecurity conference held in Las Vegas from August 10th to 13th. As official sponsors of the AI Village, we also contributed to the community by creating Mosscap - an AI security game specifically tailored for DEFCON31.

Lakera team in the AI village

In this brief article, we'd like to present our key event highlights and insights on the state of AI Security. To set the stage, let's start with a brief overview of DEFCON itself.

DEFCON 31 in Numbers - Overview

DEF CON is renowned as the world's largest and longest running underground hacking conference. Here are a few facts worth noting:

  • This yearā€™s edition attracted more than 30.000 attendees.
  • There were 31 villages in total. These designated spaces focus on specific topics, akin to mini conferences within the main event. Many villages host their own talks and hands-on activities, such as CTFs, labs, or challenges (like the GRT challenge in which Lakera participated).
  • Attendees could choose from over 900 talks and workshops (official & unofficial). The conference comprised more than 5 main stages, featuring a lineup of more than 130 speakers.
  • The GRT Challenge within the AI Village attracted over 2200 participants, with around 1000 individuals queuing up for the challenge on Friday.
  • Lakera's AI security online game, Mosscap, designed as a warm-up before the GRT challenge engaged thousands of users who tried to trick Mosscap with 130k prompt attacks. Ā Dive deeper into it here: DEFCON Welcomes Mosscap: Lakeraā€™s AI Security Game to Tackle Top LLM Vulnerabilities.
  • The AI Village enjoyed sponsorship from more than 20 organizations, including Lakera, the White House, HuggingFace, Scale, Cohere, SeedAI, Google, Meta, and more.
Mosscap and Lakera presented on the main AI stage

DEFCON 31 Highlights - AI Village

We spent most of our time hanging out in the AI Village. This consisted of two rooms: a room for talks and a room for the Generative Red Team (GRT) challenge - which was the largest ever in-person assessment for any group of AI models.

Check out Generative Red Team Challenge here.

**šŸ›”ļø Discover how Lakeraā€™s Red Teaming solutions can safeguard your AI applications with automated security assessments, as well as identifying and addressing vulnerabilities effectively.**

Generative Red Team (GRT) Challenge

Within the AI village, participants had the opportunity to take on the GRT challenge ā€“ a captivating competition offering 50 minutes to "hack" LLMs from a spectrum of providers: Cohere, Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability. The evaluation was facilitated through a platform developed by Scale AI.

The GRT challenge participants

This event garnered support from the White House Office of Science, Technology, and Policy, the National Science Foundationā€™s Computer and Information Science and Engineering (CISE) Directorate, and the Congressional AI Caucus.

As previously mentioned, Lakera introduced Mosscap, a spin-off of Gandalf designed to equip participants with insights into prompt injection as they navigated the challenge.

At the GRT, there were several tasks you could choose from:

  • Have the model make up a non-existent law
  • Have the model produce output thatā€™s biased against a minority
  • Have the model produce misinformation when prompted in a non-English language
  • Prompt injection
  • Get the model to contradict itself
  • Make the model do bad math
  • And moreā€¦

The submissions are graded manually, and you can get points for each (challenge, model) pair. There were about 50 laptops set up for the contest, along with a dynamic leaderboard. The results of the challenge will be announced in approximately a month's time.

Lakera team with Dr. Rumman Chowdry - a Co-Founder of the AI Village and the GRT Challenge

Notable AI Security Trends: The State of LLM Security

Here are some of our key learnings from DEFCON and highlights regarding the state of AI security.

Prioritizing LLM Security for Enterprises

Enterprises are honing in on LLM security as a primary concern. The spotlight is on prompt injections prevention, defense strategies against data leakage, and safeguarding against misbehavior and model misuse. Companies are proactively seeking innovative solutions to fortify themselves against evolving LLM-based threats.

Illuminating AI Security Threats through Educational Challenges

DEFCON was the ideal stage for initiatives like GRT, Mosscap & Gandalf to shine a light on specific types of security risks.

Craig Martell, Chief Digital and AI Officer at the U.S. Defense Department, conveyed a resounding message by saying:

Iā€™m here today because I need hackers everywhere to tell us how this stuff breaks. [ā€¦] Because if we donā€™t know how it breaks, we canā€™t get clear on the acceptability conditions and if we canā€™t get clear on the acceptability conditions we canā€™t push industry towards building the right thing, so that we can deploy it and use it.ā€

He underlined the need for in-depth research into LLM vulnerabilities, emphasizing its vital role in shaping industry standards and the deployment of secure AI systems. Martell's call for collaboration with hackers underscores the pursuit of comprehensive AI security.

Introducing AI Regulations Becomes Pivotal

DEFCON discussions resonated with policymakers in the US and EU, emphasizing the importance of regulations governing foundation models.

Nicolas Moƫs, Director of The Future Society - a non-profit committed to European AI Governance, illuminated critical concerns by saying:

ā€œSome of the biggest risks tied to the evolution of foundation models involve inherent biases embedded within them, and the intricate nature of these models, which can give rise to unforeseen behaviors causing harm. It is imperative that we establish more effective safeguards to counteract these potential outcomes.ā€

The call for enhanced measures to mitigate these risks underscores the paramount importance of responsible AI development.

Emergence of a Key Category: AISec in the Evolving AI Landscape

As the AI landscape continues to evolve, a pivotal category emergesā€”AISec. With organizations increasingly harnessing the power of LLMs to drive their internal systems, the implementation of robust safety measures becomes critical. These measures are crucial for safeguarding applications against a spectrum of threats, including prompt injections, hallucinations, and data leakage, among others.

Speaking of whichā€¦

Lakera Guard BETA launches at DEFCON31

Lastly, DEFCON also signaled the BETA launch of our new product - Lakera Guard, a powerful API designed to safeguard LLMs.

Check out our official announcement here: An Overview of Lakera Guard ā€“ Bringing Enterprise-Grade Security to LLMs with Just One Line of Code

The #AISec community at DEFCON responded with overwhelmingly positive feedback, and we're delighted to share that this product launch sparked numerous discussions about AI security and AI regulations with EU and US policymakers.

You can try Lakera Guard Playground and sign up here: https://platform.lakera.ai/

Lakera Guard product launch (BETA)

Finally, here are a couple of pictures of fellow white-hat hackers who decided to join forces with the Lakera team on our quest to protect LLM models šŸ˜œ

Lakera white-hat hackers at SeedAI party
Gandalf the White at SeedAI party

And thatā€™s it! We hope to see you next year at DEFCON32!

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download
You might be interested
Lakera Recognized in Gartner's GenAI Security Risks Report
Lakera Recognized in Gartner's GenAI Security Risks Report
1
min read
ā€¢

Lakera Recognized in Gartner's GenAI Security Risks Report

Lakera Recognized in Gartner's GenAI Security Risks Report

Gartner's report on GenAI security risks recognizes Lakera's solutions.
Lakera Featured in a NIST Report on AI Security
Lakera Featured in a NIST Report on AI Security
1
min read
ā€¢

Lakera Featured in a NIST Report on AI Security

Lakera Featured in a NIST Report on AI Security

Lakera's technology has been recognized by NIST in their report on Adversarial Machine Learning.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. ā€ØCome join us and 1000+ others in a chat thatā€™s thoroughly SFW.