One of the toughest challenges for security teams adopting GenAI isn’t building defenses. It’s working out what defenses are most needed, and what’s the appropriate risk threshold for each use case.
Policy management in GenAI is messy by nature. You’re trying to control novel behaviors, across dynamic inputs and outputs, in a landscape where best practices barely exist.
So what happens?
Teams often default to blunt, blanket controls, or take ages over engineering—evaluating every threshold, toggling every setting—and lose sight of the real goal: unblocking innovation with effective safeguards quickly.
That’s why we redesigned policy management in Lakera Guard around battle-tested, expert-recommended security policies. It’s a simpler way to help teams deploy meaningful protections right away, and fine-tune only when they’re ready.
Not sure how to secure your GenAI application? Skip the guesswork with expert-recommended policies built by Lakera’s AI security team. Apply them in seconds, fine-tune when you’re ready, and get started with real protection from day one.
The Lakera team has accelerated Dropbox’s GenAI journey.
“Dropbox uses Lakera Guard as a security solution to help safeguard our LLM-powered applications, secure and protect user data, and uphold the reliability and trustworthiness of our intelligent features.”
The core idea: security that grows with you
With our new approach to policy management, security teams can get protected in seconds using expert-curated, battle-tested, pre-built policies, and then progressively introduce more control as their use cases and maturity evolve.
Here’s how it works:
- Start fast with a pre-built policy
We’ve introduced five one-click policies covering the most common GenAI deployment scenarios:
- Default policy – maximum security for high-risk or regulated environments
- Public-facing application – for customer-facing assistants and chatbots
- Internal-facing application – for employee tools where trust is higher
- Prompt defense only – a lean profile to focus on stopping prompt injection and manipulation
- Content safety – ideal for education, moderation, or user safety use cases
Each policy includes guidance on when to use it — and all are backed by Lakera’s security expertise.

- Adjust a single sensitivity setting
Every policy can be tuned with one global flagging sensitivity. We provide clear guidance on how to choose the right level for your use case: from L1 (minimal disruption) to L4 (maximum protection).

- Unlock advanced settings when you’re ready
As your security needs grow, you can edit templates, adjust guardrails, or fully customize behavior using our advanced settings — without having to start from scratch.
What policy maturity looks like in practice
Most teams follow a natural progression when rolling out Lakera Guard:
- Start non-blocking: Run a default policy in logging mode to observe behavior
- Block leniently: Introduce flag-based enforcement with minimal false positives
- Tune and expand: Address any patterns, adjust policies, and raise the flagging threshold to the desired level
- Add complexity when needed: Layer in custom guardrails, optimise performance, remove noisy detectors based on use case
This update is about supporting that journey: offering simplicity at the start, flexibility over time, and clarity throughout.
What this means for your team
Security teams no longer need to start from a blank slate or interpret a dozen detection options. Instead, they can:
- Select a vetted policy aligned to their use case
- Adjust it easily to match their policy and compliance requirements
- Fine tune to the appropriate risk threshold, with one clear sensitivity setting
- Iterate gradually — not from scratch
And for those running self-hosted, don’t worry you get these expert curated policy templates too, so you can adopt the same approach in your own infra.
Why we made this change
This update isn’t just a UX refresh. It’s the result of everything we’ve learned from helping security teams deploy GenAI in production.
We’ve learned a lot from working closely with trailblazing GenAI-first companies, the teams pushing AI into production before most had figured out what secure even looked like. Those experiences taught us where security teams get stuck, what guardrail configurations actually work, and how to guide policy decisions that scale.
We’ve turned those lessons into clear, expert-backed starting points, so new customers can move faster, deploy confidently, and spend less time fine-tuning.
This is how we help teams operationalize AI security: with policy defaults that make sense, and flexibility that grows with you.
The bigger picture: defining policy in the age of GenAI
As AI security matures, policy management will become the heart of how companies translate their risk appetite into defensible controls. But we can’t expect every team to start with perfect answers.
That’s why the future of policy management is opinionated, contextual, and adaptable. And that’s exactly what we’ve just shipped.
If you want to see it in action, we’ll be sharing short walkthroughs soon — or get in touch for a hands-on demo.
