OWASP Global in Washington DC is now in the rear view, and after a week in the Breaker Track, one thing stood out clearly:
AI security has moved straight into the center of AppSec.
From talks to hallway conversations to the after hours gatherings, everyone kept circling back to the same question: How do we secure what we have already built?
Here is what stood out to me.

On the Ground at OWASP DC
We joined the event on short notice, but stepping into the keynote room on Day 1 set the tone for the entire week. The space had an energy that hit immediately. The room looked fantastic, people were engaged from the start, and there was a sense that this year’s OWASP was going to be different.

There was a buzz even before the first talk began. Conversations in the rows were already about AI security, real incidents, the latest model updates, and what agents are starting to do in production. You could feel that people had arrived with questions they were eager to explore.
And yes, Gandalf as usual, became a bit of an expected conversation starter.
Daniel Miessler’s Keynote and the Claude Code Shift
Daniel delivered one of the cleanest, most forward looking keynotes I have seen in a long time.

His take on Claude Code as a game changer for productivity through AI autonomy was impressive and thought provoking. He demystified and demonstrated its “skills” (acting like an emergent MCP) for the desktop in a way that sent me straight to the OWASP member lounge to experiment with it. Claude Code is powerful in ways that are inspiring and slightly unsettling at the same time. The security questions it opens up are exactly the kinds of questions we explore in Lakera Guard, Lakera Red and Gandalf: Agent Breaker.
That connection between capability and risk became one of the dominant themes of the event.
Breaker Track: Where the Community Was Gathering
The Breaker Track quickly became the place where the most candid and technical conversations were happening. This is where people shared what they had actually seen in the wild. Live demos, real breaches, broken workflows, agent misfires, jailbreak chains that should not have worked but somehow did.
Around three quarters of the talks focused on offensive AI work. And those rooms filled up fast. People wanted proof, not theory.
My colleague Hassan and I spent a lot of time chatting with speakers and attendees as they came in. It created a natural moment for people to open up about their own experiments. The stories ranged from “I don’t know where to start”, to “you won’t believe what our agent did last week” anecdotes.
A Moment You Cannot Manufacture
Late on Day 2, something happened that you cannot script.

In a packed room, just minutes before his talk, Jason Haddix pulled up Gandalf, demoed it, talked about Agent Breaker and pointed at me in the crowd. The whole room reacted with familiar nods and conversations around their own individual achievements. Gandalf was the common ground for discussion as for many it was and still is the educational entry into AI security.
Right after that, the next speaker referenced our Backbone Breaker Benchmark whitepaper we had just released. I happened to be sitting next to someone who leaned over and said something like:
“There is no way to stage a moment like this.”
And they were right. It was one of those rare times when the work speaks for itself and the community carries it forward.
Every Conversation Led Back to AI Security
The most revealing part of the week happened off stage.
Over dinner, coffee and hallway chats, people kept returning to the same practical challenges:
- Agentic behavior introducing completely new failure modes
- Tools with more authority than the systems surrounding them
- Teams trying to understand their real exposure
- Guardrails holding up in some cases and collapsing in others
- The need for offensive testing that reflects how these systems behave outside controlled demos
None of these were abstract musings. These were stories pulled straight from incidents unfolding in real systems.
And somewhere in those hallway conversations, I overheard a line that has been stuck in my head ever since:
“Anyone putting real agents into production right now is brave… like me!”
It was funny, but also uncomfortably accurate.
It aligned almost perfectly with what surfaced in the 2025 GenAI Security Readiness Report, where less than a third of teams said they feel prepared for the threats emerging across agentic and multimodal systems. Hearing the same themes at OWASP made the whole picture feel more immediate.
Agentic AI, Skills and a Changing Security Model
Agentic AI was the recurring character of this year’s conference. Even in sessions that were not directly about it, people kept circling back to the implications.
AI makes development fast and intuitive, but also creates new security dynamics. Capabilities and tools, either presented through MCP or “skills” blend together in ways that are not always easy to anticipate. Automation chains start to behave in ways no single component intended.
This matches what we have been outlining in our Agentic AI Threats series. Once you move beyond single prompts toward systems with tools, memory and autonomy, the risk landscape shifts with it. OWASP showed that this shift is becoming widely recognized.
People and Moments
OWASP has always had a cinematic feel to it, and this year leaned into that atmosphere even more. The mix of neon lighting, long escalators, crowded corridors and fast moving conversations gave the whole event a slightly futuristic tone.
I turned the photos I captured into a short cyberpunk style trailer because it felt true to the environment. Sometimes the best way to summarize a week like this is visually.

It was also a chance to reconnect with old colleagues, meet new ones and share what each of us has been working on. That mix of familiarity and new energy is part of what makes OWASP such an important checkpoint every year.
Looking Ahead
OWASP Global AppSec DC 2025 made something very clear.
The industry is not debating whether AI security matters. Everyone is asking how to move faster, build safer and understand the risks of the systems they are already deploying.
The next phase of AppSec will be shaped by:
- Agentic behavior becoming a central attack surface
- Continuous and automated AI red teaming
- Frameworks like the OWASP LLM Top 10 influencing day to day work
- Security models that consider skills, tools and autonomy together rather than individually
For us at Lakera, this reinforces the direction we have been building toward with Guard, Red, Agent Breaker and the Backbone Breaker Benchmark. It is energizing to see the community leaning into these topics with real curiosity and urgency.
Closing Thoughts
OWASP DC had a pace and energy that reflected where the field is right now. Every room felt charged with people trying to understand how to secure the powerful systems they are unleashing.
If the conversations in DC are any indication, the next year will be an intense and promising one for AI security. The community is ready for it.





