Webinar
Virtual
Upcoming
On-demand
All

Lessons Learned from Crowdsourced LLM Threat Intelligence

January
25
,
2024
10:00 am
PT
January 25, 2024
Václav Volhejn
Senior Applied ML Scientist at Lakera
Sam Toyer
EECS PhD Student at UC Berkeley
Sander Schulhoff
CEO of LearnPrompting | NLP at UMD
Marc Fischer
PHD Student at ETH Zurich
Eric Allen
Developer Advocate at Lakera
January 25, 2024

2023 was a wild year for Large Language Models. It started off with Bard, Bing, and Llama, saw GPT-4 and multimodal models arrive, and ended with Mamba, Mixtral, and Phi-2. It was also a wild year for everyone’s favorite new security vulnerability: Prompt Injection.

Many teams were gathering prompt injection and LLM vulnerability data throughout the year, and in this panel we’ll dig into what we learned about prompt injections in 2023 and how we can use that information to build more secure LLM-enabled applications in 2024.

We’ll discuss insights from:

Join us - and a representative from each of these projects - for a discussion panel and interactive Q&A session.

Don't want your question to get lost in the Q&A? Submit your questions for the panel today.

Agenda
  • Get an overview of what we have learned about prompt injections over the last year
  • Understand what you can do to make your LLM-enabled applications more secure today
  • Discover tools and datasets for your own prompt injection evaluations
  • Discuss the importance of crowdsourcing attack and vulnerability data and contributing to our collective knowledge

Speakers
Václav Volhejn
Senior Applied ML Scientist at Lakera

Václav Volhejn is a Senior Applied ML Scientist at Lakera. He is one of the main initial creators of Gandalf along with Max.

Read more
Sam Toyer
EECS PhD Student at UC Berkeley

Sam is a final year PhD student in EECS at UC Berkeley, advised by Stuart Russell. He recently led a project to collect a large-scale dataset of prompt prompt attacks and defenses for instruction-following LLMs (TensorTrust.ai), and is interested in how to address emerging safety and security problems in foundation models.

Read more
Sander Schulhoff
CEO of LearnPrompting | NLP at UMD

Sander Schulhoff is a NLP/DRL researcher at UMD as well as the founder of learnprompting.org, and enterprise-grade e-learning platform teaching Generative AI skills. Learnprompting.org is used by millions of technical and non-technical users, including folks from OpenAI, ScaleAI, and Deloitte. He recently ran HackAPrompt, a global AI security competition sponsored by OpenAI and 12 other AI companies which collected 600K+ samples of prompt injections and won Best Theme Paper at EMNLP2023.

Read more
Marc Fischer
PHD Student at ETH Zurich

Marc Fischer is a final year PhD student in Computer Science at ETH Zurich, advised by Martin Vechev. His background is in certified robustness and the intersection of Machine Learning and Programming Languages. His recent work includes lmql.ai, a LLM-first programming language, and Project LVE, lve-project.org. LVE stands for Language Model Vulnerability and Exposure and is a community-focused open source project, to publicly document and track exploits and attacks on large language models (LLMs) like (Chat)GPT, Llama and Mistral models.

Read more
Eric Allen
Developer Advocate at Lakera

Eric is a Developer Advocate at Lakera and currently calls Boston home. He's spent time as a consultant, adjunct professor, VP, and individual contributor at companies ranging from a boutique design house to a Fortune 50 financial institution.

When he's not experimenting with Large Language Models, he's probably playing with his doggo, Judge; cooking up something vegetarian; enjoying a craft beer; or traveling the world with his fianceé, Priyanka, and pretending that he's a photographer.

Read more

https://lakera/event/crowdsourcing-llm-threat-intelligence

Check out similar events
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.