[Webinar] AI Security Is Application Security: What Every Security Leader Needs to Know



Join Steve Giguere, Principal AI Security Advocate at Lakera, for a practical, no-fluff session breaking down the foundational risks of modern AI systems, and what to do about them.
As organizations adopt generative AI across teams and workflows, security leaders are facing a new class of threats. From prompt injections and agentic behavior to model supply chain risks and data leakage, these risks aren’t theoretical, they’re already here.
This webinar is designed for security architects, CISOs, and technical stakeholders who want to move beyond the buzz and start securing real-world AI applications. We’ll cut through the hype, tackle common misconceptions, and walk through a clear, actionable approach to getting your AI security program off the ground.
Join this session to:
- Debunk common myths and misconceptions holding teams back from securing their GenAI stack.
- Understand how GenAI systems introduce new risk dimensions, from prompt injection to agent tool misuse.
- Explore the expanded threat landscape: model misuse, supply chain issues, data leakage, and more.
- Learn what good looks like: from prompt red teaming to architecting for agentic systems.
- See how Lakera Guard protects real-world applications with minimal latency and maximum impact.
[Webinar] AI Security Is Application Security: What Every Security Leader Needs to Know



Join Steve Giguere, Principal AI Security Advocate at Lakera, for a practical, no-fluff session breaking down the foundational risks of modern AI systems, and what to do about them.
As organizations adopt generative AI across teams and workflows, security leaders are facing a new class of threats. From prompt injections and agentic behavior to model supply chain risks and data leakage, these risks aren’t theoretical, they’re already here.
This webinar is designed for security architects, CISOs, and technical stakeholders who want to move beyond the buzz and start securing real-world AI applications. We’ll cut through the hype, tackle common misconceptions, and walk through a clear, actionable approach to getting your AI security program off the ground.
Join this session to:
- Debunk common myths and misconceptions holding teams back from securing their GenAI stack.
- Understand how GenAI systems introduce new risk dimensions, from prompt injection to agent tool misuse.
- Explore the expanded threat landscape: model misuse, supply chain issues, data leakage, and more.
- Learn what good looks like: from prompt red teaming to architecting for agentic systems.
- See how Lakera Guard protects real-world applications with minimal latency and maximum impact.
untouchable mode.
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Several people are typing about AI/ML security. Come join us and 1000+ others in a chat that’s thoroughly SFW.