Articles by Steve Giguere
Steve Giguere has spent years on the frontlines of application security, witnessing firsthand the dedication and vigilance it takes to keep modern software safe. As AI rapidly reshapes the landscape, he sees both its potential to enhance security and the new, often hidden, vulnerabilities it introduces. His mission is to bring clarity to this noisy space, helping teams embrace AI in ways that are smart, safe, and sustainable.

The Progressive Breach Model Behind the OWASP Top 10 for Agentic Applications
A deep dive into the OWASP Top 10 for Agentic Applications as a progressive breach model, showing how prompt injection and model manipulation evolve into autonomous action, propagation, and loss of containment in agentic systems.

Steve Giguere
February 20, 2026

OpenClaw, Skills, and the Lord of the Flies Problem: Why Agentic AI Is Becoming a CISO Nightmare (OpenClaw Ecosystem Briefing)
An ecosystem-level analysis of OpenClaw’s skills model and why agentic AI platforms are creating new operational and governance risks for security leaders.

Steve Giguere
February 18, 2026

What the New MCP Specification Means to You, and Your Agents
A story about a protocol growing up—and why your agents might need therapy afterwards.

Steve Giguere
November 20, 2025

OWASP Global AppSec DC 2025: Notes From the Breaker Track
Steve Giguere looks back on OWASP Global AppSec DC 2025, reflecting on the Breaker Track, the rise of AI security and the conversations that defined the week.

Steve Giguere
November 14, 2025

Why We Need OWASP’s AIVSS: Extending CVSS for the Agentic AI Era
CVSS falls short in the age of AI agents. OWASP’s new AIVSS layers autonomy, unpredictability, and real-world threat data on top—giving security teams a score that finally matches how agents behave.

Steve Giguere
September 3, 2025

From Alexa Mishearing You to AI Agents Acting on It
When AI agents act on what they mishear, the result isn’t funny anymore. It’s a security risk.

Steve Giguere
August 26, 2025
