Masterclass in AI Threat Modeling: Addressing Prompt Injections
Join Mateo Rojas Carulla (Chief Scientist at Lakera), Nate Lee (CISO at CloudSec), and Elliot Ward (Security Researcher at Snyk) for a live discussion on the intricacies of AI threat modeling and the pressing challenges in securing AI systems.
As AI systems become more sophisticated, the threats they face grow in complexity. One of the most pressing challenges today is effectively modeling and defending against AI-specific attacks, such as prompt injections.
This webinar will explore how to effectively model AI-specific threats, address emerging vulnerabilities, and establish a proactive security strategy. The session will also place a special emphasis on prompt injections—an emerging and particularly dangerous form of attack on Generative AI systems. Attendees will gain insights into the latest defense strategies and practical ways to secure AI-driven applications against these sophisticated threats.
Join this session to:
- Understand the unique security risks posed by AI and how to model them effectively.
- Learn about the growing threat of prompt injections and how they exploit GenAI systems.
- Explore cutting-edge research and real-world examples of LLM exploits.
- Discover actionable techniques to defend your AI applications from emerging attack vectors.
untouchable mode.
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Several people are typing about AI/ML security. Come join us and 1000+ others in a chat that’s thoroughly SFW.