5 min read
AI Security

The Progressive Breach Model Behind the OWASP Top 10 for Agentic Applications

A deep dive into the OWASP Top 10 for Agentic Applications as a progressive breach model, showing how prompt injection and model manipulation escalate into autonomous action, propagation, and loss of containment in agentic systems.
Steve Giguere
February 20, 2026
3 min read
AI Security

OpenClaw, Skills, and the Lord of the Flies Problem: Why Agentic AI Is Becoming a CISO Nightmare (OpenClaw Ecosystem Briefing)

An ecosystem-level analysis of OpenClaw's skills model, and why agentic AI platforms are creating new operational and governance risks for security leaders.
Steve Giguere
February 18, 2026
3 min read
AI Security

What the New MCP Specification Means to You, and Your Agents

A story about a protocol growing up—and why your agents might need therapy afterwards.
Steve Giguere
November 20, 2025
3 min read
AI Security

OWASP Global AppSec DC 2025: Notes From the Breaker Track

Steve Giguere looks back on OWASP Global AppSec DC 2025, reflecting on the Breaker Track, the rise of AI security, and the conversations that defined the week.
Steve Giguere
November 14, 2025
3 min read
Opinion

Why We Need OWASP’s AIVSS: Extending CVSS for the Agentic AI Era

CVSS falls short in the age of AI agents. OWASP’s new AIVSS layers autonomy, unpredictability, and real-world threat data on top—giving security teams a score that finally matches how agents behave.
Steve Giguere
September 3, 2025
5 min read
Opinion

From Alexa Mishearing You to AI Agents Acting on It

When AI agents act on what they mishear, the result isn’t funny anymore. It’s a security risk.
Steve Giguere
August 26, 2025