AI is changing how software works.
Applications no longer just process requests. They reason, make decisions, and take action. AI agents now retrieve data, invoke tools, and execute workflows across systems in real time.
That shift introduces a new kind of risk.
Because in an agentic world, security is no longer just about who has access.
It’s about what AI is allowed to do.
A New Control Point for Agentic Systems in Google Cloud
Google Cloud’s Gemini Enterprise Agent Platform provides a centralized control point for agentic systems enabling identity, access, policy enforcement, and observability across how agents operate.
This is a critical step forward. Oganizations now have a consistent layer where agent interactions, including tool calls, API requests, and actions, can be inspected and governed.
Access Control Is Necessary but Not Sufficient
Even with a centralized control point, one critical question remains:
Not just what an agent can access but what it should do in context.
Google Cloud’s Gemini Enterprise Agent Platform establishes the foundation for managing identity, access, and connectivity across agent interactions.
But In an agentic system, access can be valid and the outcome can still be wrong.
That’s where the next layer of security is required: one that evaluates behavior in real time and governs outcomes, not just permissions.
Extending the Control Point with Check Point AI Defense Plane
Check Point, powered by Lakera, extends Google Cloud’s control point with a real-time decision layer for AI.
Through integration with Agent Gateway and Agent Registry, Check Point AI Defense Plane helps security teams to:
- Discover AI agents across their environment and understand risk
- Govern agent behavior through policy before deployment
- Protect agent actions at runtime with context-aware enforcement
This transforms the control point into a complete security workflow, from visibility, to policy, to real-time protection.
From Visibility to Real-Time Protection
At runtime, Check Point operates inline with agent interactions to evaluate behavior and enforce decisions in context.
This includes:
- Detecting and blocking prompt injection across inputs, tool responses, and multi-step interactions
- Preventing sensitive data exposure through agent outputs and actions
- Evaluating tool usage before execution to stop unsafe or unintended actions
Check Point adds a critical capability: the ability to assess not just whether an agent can act but whether it should.
A Practical Example
Consider a financial services organization deploying AI agents on Google Cloud to support portfolio management.
An agent receives input designed to influence its decision-making and attempts to execute a high-risk transaction.
At the control point, the request is properly authenticated, authorized, and aligned with defined access policies. The agent has permission to use the required tools, and the interaction follows expected pathways.
Check Point evaluates the full context of the interaction including prior inputs, tool usage, and intent and identifies the manipulation. The action is stopped and flagged for review.
The system doesn’t just enforce access.
It ensures the outcome is appropriate.
Secure How AI Behaves, Not Just How It Connects
AI systems don’t fail just because they lack access controls.
They fail because they make the wrong decisions.
Securing AI agents requires more than a single layer.
A complete architecture includes:
- A centralized control point to manage identity, access, and connectivity
- A governance layer to discover agents, assess risk, and enforce policy
- A runtime layer to evaluate behavior and control outcomes
Google Cloud provides the control point.
Check Point provides the governance and runtime decision layer.
Together, they enable organizations to securely scale AI adoption.
Not just controlling access but controlling outcomes.
Availability
Check Point integration with Google Cloud’s Gemini Enterprise Agent Platform will be available in late June 2026.




