There's a quiet shift happening inside enterprise organizations right now, and most security teams are only just beginning to feel its full weight.
AI has crossed a line. Not the line between hype and reality (that happened a while ago), but a more consequential one: the line between use and action.
For a time, AI was advisory. It summarized, suggested, and generated. Humans remained in the loop. The security implications were real but manageable, mostly a question of data governance and acceptable use policies.
That era is over.
From Copilots to Autonomous Actors
Today's AI systems don't just answer questions. They retrieve internal data, invoke APIs, modify records, and trigger workflows. Often autonomously, often across multiple enterprise systems simultaneously, and often based on nothing more than a natural-language instruction.
Gartner estimates that by 2028, 15% of day-to-day work decisions will be made autonomously by agentic AI. Given the current pace of adoption, that estimate already feels conservative.
As of today, 14% of organizations have already deployed autonomous agents into live production environments.
This is the new reality of enterprise AI: systems that act on behalf of users, operating across knowledge bases, customer data, and operational tooling, without a single, clearly defined execution boundary.
And it's creating a security problem that traditional tools simply weren't built to handle.
The Threat That Hides in Plain Language
Here's what makes agentic AI security so different, and so difficult.
In traditional enterprise security, inputs are structured. APIs have schemas. Traffic patterns are predictable. Security controls can inspect requests, validate data, and enforce access at well-defined boundaries.
AI breaks all of that. Inputs are natural language. Intent is inferred. Context is assembled dynamically at runtime from multiple sources—user prompts, retrieved documents, system instructions, previous conversation turns. The most security-critical decisions happen inside the reasoning process, not at any network boundary you can easily inspect.
The result?
Risk accumulates between layers, not at any single point of failure. A prompt bypasses application logic. Retrieved context is subtly manipulated. An agent takes an action that looks reasonable in isolation but isn't sanctioned by intent.
Each step is technically valid. The outcome is not.
This is what Lakera and Check Point's new Enterprise Playbook calls the common failure pattern, and it's playing out across organizations that thought their existing controls were sufficient.
Where Sensitive Data Is Actually Going
Most conversations about AI data risk start with the wrong question: "Is our data being used to train the model?"
That question matters, but it distracts from a far more immediate problem.
In enterprise settings, sensitive data rarely leaves because of model training. It leaves because AI systems create entirely new paths for legitimate-looking disclosure—paths that existing controls were never designed to inspect.
The attack surface isn't where most organizations are looking.
~60% of observed attack traffic attempts to leak system prompts—targeting internal instructions and IP, not just user data.
—Lakera Q4 2025 Agent Security Trends
A New Kind of Exposure Surface
What makes this moment particularly critical is the emergence of Model Context Protocols (MCPs), the interfaces that define how AI agents discover capabilities, select tools, and translate language into action.
Think of MCPs as the USB-C of agentic AI. Just as USB-C standardized how devices connect to power and peripherals, MCPs standardize how agents connect to tools, data, and systems. This brings faster integration, but also concentrates risk at a single, critical junction.
When the path from reasoning to execution runs through a standardized interface, that interface becomes the place where control either exists or disappears entirely.
This is why AI security incidents are increasingly resembling operational failures rather than traditional data breaches. Data is being modified rather than leaked. Actions are being taken rather than suggested.
Systems behave correctly according to logic, but incorrectly according to intent.
What "Good" Looks Like
The answer isn't to slow AI adoption. It's to build a security model designed for how AI actually works.
The Lakera and Check Point framework introduces the concept of an AI Defense Plane, a unified security architecture spanning the three distinct surfaces where AI risk now lives:
-db1-
- Employees using AI tools,
- AI-powered applications,
- Autonomous agents.
-db1-
The distinction from point solutions matters. Policies defined in one layer inform enforcement in another. Telemetry is correlated across edges. Signals from workforce usage influence how agent behavior is governed. It's security designed for systems that think, not just systems that route.
Companies like Dropbox have deployed it across multiple AI applications to protect against prompt injection and jailbreak attacks without slowing development cycles. Nubank, serving over 115 million customers in a highly regulated banking environment, relies on it as a foundational requirement for enterprise AI compliance.
The Question Worth Asking
Reported AI security incidents jumped 67% year-over-year as organizations moved from pilots to production.
Only 4% of organizations rate their AI security confidence at the highest level, despite widespread adoption.
—Lakera 2025 GenAI Security Readiness Report
Those numbers tell a familiar story: adoption outpacing governance. The gap between "we're using AI" and "we understand our AI risk posture" is widening, and the consequences are no longer theoretical.
The question isn't whether your organization is using AI. It almost certainly is. The question is whether you have visibility into what it's doing, what data it's touching, and what actions it's taking on your behalf.
Ready to Go Deeper?
The full white paper, Agentic AI Security: The Enterprise Playbook, produced jointly by Lakera and Check Point, goes significantly further. It covers the architectural models for securing each exposure surface, a detailed breakdown of how MCPs introduce new risk at the execution layer, and practical deployment guidance from organizations already operating at scale in regulated industries.
If you're building AI-powered products or governing enterprise AI adoption, this is the framework worth understanding before your next deployment.




