OpenClaw didn’t go viral because it’s enterprise software. It went viral because it shows where everyday work is heading.
Tools like OpenClaw are being picked up first by individuals: engineers automating side projects, analysts wiring together workflows, operators speeding up routine tasks using open and community-built agent frameworks. But the moment those tools touch real systems—inboxes, shared drives, internal dashboards, developer environments—they stop being “personal experiments.” They become part of an organization’s attack surface.
This is the real shift behind securing how employees use AI at work. Not companies officially rolling out autonomous agents, but people bringing powerful, open-ended automation into real workflows—often without the visibility, controls, or guardrails we apply to traditional software.
OpenClaw’s breakout moment is more than a viral AI story. It’s a preview of how work is changing: software that used to assist people is starting to act on their behalf.
That shift sits at the core of securting the usage of AI among the employees. When employees adopt AI assistants that can browse, run tasks, install “skills,” and operate across apps, the security question changes. It’s no longer just “what did the model say?” but “what did the agent do, and under whose authority?”
“OpenClaw is a glimpse of the future: AI assistants that don’t just suggest—they act. The security challenge isn’t the AI’s output; it’s the authority we delegate to it.”
—David Haber, VP of AI Agent Security, Lakera (A Check Point Company)
Why This Matters: Blast Radius
In the last few days, researchers and news outlets have flagged security issues around OpenClaw’s rapidly growing ecosystem, including reports of one-click execution paths and malicious third-party skills.
It’s easy to treat this as another AI security headline. But OpenClaw changes the stakes. Agents are becoming a layer that can touch everything a user can touch.
That means familiar risks—links, plugins, supply chain—can lead to unfamiliar outcomes: fast execution, broad permissions, and actions that look indistinguishable from normal work.
The Real Lesson: Security Hasn’t Caught up to Delegation
The OpenClaw moment highlights a simple gap.
Organizations are delegating work to AI faster than they are building controls for what that AI can access, install, and execute.
This is why AI security can’t stop at model behavior or content safety. A system can be perfectly polite and still be dangerously exploitable, especially when it’s wired into inboxes, files, browsers, dev tools, and internal systems.
What Workforce AI Security Means in Practice
Workforce AI Security isn’t a slogan. It’s a set of controls for a world where employees are increasingly delegating real work to AI assistants—inside email, documents, browsers, developer tools, and business applications.
In practice, that means focusing on how AI operates on behalf of people at work:
Visibility
Which AI assistants are employees using across the organization—and what systems, data, and tools do those assistants have access to?
Guardrails on actions
When an assistant is about to take a sensitive action on an employee’s behalf—installing a skill, running a command, moving data—that action should be treated like a high-risk operation, not a convenience click.
Trust boundaries for third-party extensions
Skills and plugins used by workplace assistants aren’t “just add-ons.” They are code pathways into the same business systems employees rely on every day.
Protection against indirect manipulation
Workplace assistants consume large volumes of untrusted input—documents, links, spreadsheets, tickets, and datasets. In an agentic world, those inputs don’t just inform work; they can quietly steer it.
“Moltbot exposes a dangerous new reality: with AI agents, data is code. A malicious spreadsheet cell can now exfiltrate your entire inbox. We're living in this world today, and the way enterprises think about security needs to catch up.”
—Mateo Rojas-Carulla, Head of Research, Lakera (A Check Point Company)
This is exactly the class of risk that emerges in real employee workflows—not through obvious exploits, but through everyday work artifacts like documents, links, and datasets that quietly influence AI behavior.
If you want to see how this plays out in practice, you can try it yourself with Gandalf: Agent Breaker, a hands-on game where you attempt to manipulate real agent-style systems using indirect inputs and prompt attacks. It’s a simple way to experience how easily “harmless” data can turn into control.
We’ve also written more in depth about these patterns in our guides to data poisoning and indirect prompt injection, which break down how attackers embed instructions into training data, documents, and external content that AI systems trust by default.
How to make this useful on Monday morning
If you’re experimenting with tools like OpenClaw (or any workplace agent), a pragmatic posture looks like this:
-db1-
- Treat agent tools as high-trust apps: review installs, connectors, and permissions like you would browser extensions or developer tools.
- Apply least privilege where you already have control: identity, OAuth scopes, SaaS permissions.
- Tighten the plugin and skills surface on managed endpoints: restrict installs and limit who can add new connectors.
- Treat external content (docs, links, web pages) as untrusted inputs that can steer behavior, not just information employees read.
- Measure outcomes using logs you already have: SaaS audit trails, repo activity, sensitive file access. What matters is what the agent did, not what it said.
-db1-
For security leaders, this is the practical reality: employees will adopt AI automation with or without a policy. The choice is whether you build visibility and control at the identity and data layer now, or investigate it later as an incident.
The Broader Shift
The most important thing about OpenClaw isn’t whether a specific bug exists or gets patched. It’s that “work” increasingly includes autonomous tools acting on human authority.
We’re entering a world where employees, applications, and agents all interact with AI in ways that directly touch data, systems, and real operations. Securing this shift isn’t just about model safety or content filtering. It’s about building AI security into how organizations discover AI usage, control what systems AI can access, and enforce guardrails around what it’s allowed to do in practice.
Where Lakera Fits
Lakera focuses on helping organizations understand and control how AI is actually being used across real workflows—from employees experimenting with copilots, to applications embedding LLMs, to agents making autonomous decisions.
In practice, that means providing visibility into which AI systems are in use, constraining risky connections, and adding guardrails around sensitive actions like data access, tool execution, and third-party integrations.
If you’re starting to see this in your environment, we’re happy to share a practical readiness checklist and lessons learned from deploying controls in real workflows.




