[Live at Lakera HQ] AI Security Is Application Security: What Every Security Leader Needs to Know



Join Steve Giguere, Principal AI Security Advocate at Lakera, for an exclusive in-person session at Lakera HQ*, designed to break down the foundational risks of modern AI systems and show you what real-world defense looks like.
As organizations adopt generative AI across teams and workflows, security leaders are facing a new class of threats. From prompt injection and agentic behavior to model supply chain risks and data leakage, these risks aren't theoretical; they're already here.
This live event is designed for security architects, CISOs, and technical stakeholders who want to move beyond the buzz and engage directly with experts and peers in a hands-on, high-trust setting. You’ll get practical insights, space to ask questions, and a firsthand look at how the Lakera team is tackling GenAI risks today.
You’ll also have the chance to connect with other security leaders and members of the Lakera team at our San Francisco office. It’s a relaxed, informal setting, perfect for meaningful conversations, getting to know our team, and seeing where the work happens.
📍 Where: Lakera HQ, 282 2nd Street, Suite 100, San Francisco, CA 94105
🕔 When: July 30, 2025, doors at 5:00 PM, session starts at 5:15 PM
*Limited to 25 seats!
Join us at Lakera HQ to:
- Debunk common myths and misconceptions holding teams back from securing their GenAI stack.
- Understand how GenAI systems introduce new risk dimensions, from prompt injection to agent tool misuse.
- Explore the expanded threat landscape: model misuse, supply chain issues, data leakage, and more.
- Learn what good looks like: from prompt red teaming to architecting for agentic systems.
- See how Lakera Guard protects real-world applications with minimal latency and maximum impact.