Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy
Back

The Security Company of the Future Will Look Like OpenAI

AI security isn’t just cybersecurity with a twist—it’s a whole new game.

Mateo Rojas-Carulla
May 20, 2025
Last updated: 
May 27, 2025
The illusion of progress—buying another legacy vendor with an LLM wrapper—is dangerous. It delays the real work. It creates false confidence. The breaches have already started—and as agents gain traction, they’ll only grow faster, stranger, and harder to contain.

You don’t stop a transformer with a firewall.

As AI systems become the core of modern infrastructure, we’re witnessing the birth of an entirely new security paradigm. The cybersecurity playbook needs to evolve. In the past, cybersecurity companies were focused on optimizing for threats they understood, rather than looking to the future and preparing for completely new threats to come.

AI security isn’t simply an extension of the cybersecurity playbook of the past. It’s a new beast altogether.

On this page
Table of Contents
Hide table of contents
Show table of contents

AI Security Is Not Cybersecurity 2.0

Most mental models for AI security are misunderstood. Traditional cybersecurity protects static systems: codebases, APIs, endpoints. The rules are known and attack surfaces are fixed.

But AI systems aren’t static or transparent. In AI, data is code—and that code is complex, high-dimensional, and endlessly variable. The behavior of a model emerges not from neatly organized functions, but from billions of parameters shaped by opaque training data and dynamic inputs.

Models operate as black boxes—vast, non-deterministic systems that even their creators struggle to fully understand. This breaks the assumptions that traditional security is built on: that systems are analyzable, predictable, and static enough to wrap rules around.

Securing AI means abandoning those assumptions and starting fresh—with tools built for uncertainty, adaptation, and scale.

Trying to secure AI with traditional tools is like using a spell-checker to understand a picture. AI taught us that tasks like image recognition or natural language understanding couldn’t be solved with old-school rules and heuristics: they needed AI.

The same is true for AI security. Companies that try to retrofit yesterday’s tools to this new paradigm are falling into the same trap all over again: applying the wrong solution to a problem that has outgrown it.

The Attacker Has Changed. The Game Has Changed.

AI doesn’t just empower defenders: it supercharges attackers.

The skills required to exploit these systems aren’t elite technical feats anymore. They're creative ones. We’ve seen 11-year-olds crafting powerful prompt attacks that take them through all of Gandalf’s defenses. And those same ideas scale instantly across the internet to attack LLM systems in less friendly settings.

The barrier to entry has dropped to zero, and the attack surface has exploded. You don’t need malware kits, just imagination. And that’s infinitely more abundant.

We're entering an era where the most creative adversaries won’t need nation-state resources—they’ll just need curiosity, access to a model, and a clever idea.

“AI for Security” ≠ “Security for AI”

A growing number of cybersecurity vendors are touting “AI-powered” features—copilots, auto-triage, LLM-enabled dashboards. These developments are valuable in enhancing traditional workflows. 

But there's a critical distinction: using AI to optimize existing cybersecurity practices is not the same as securing AI systems themselves. That’s a fundamentally different problem—and one that’s often overlooked.

Trying to solve AI security without rethinking the approach completely is the equivalent of putting satellite navigation in a car and calling it autonomous. The underlying system hasn't changed. And when agentic systems start making critical decisions, exposing private data, or pinging external APIs, the risks are no longer theoretical.

Attackers are already adapting. Most defenders haven’t even realized they’re in a new game.

The CrowdStrike Moment for AI

The last major shift in security was the cloud. And it caught a lot of legacy vendors flat-footed.

CrowdStrike didn’t just “move to the cloud”—they reimagined endpoint security from the ground up. They understood that cloud-native threats required cloud-native defenses. And they built accordingly.

AI is the next platform shift. It’s moving faster, changing faster, and breaking more things than ever before.

The winners won’t be the ones who retrofit AI into old tools. They’ll be the ones who build from the group up, with a deep understanding of how these systems work, and how they break.

Brace Yourself

This isn’t a drill. Foundation models available with open weights, agents accessing untrusted data on the web, it’s all accelerating. The combination of public access, low cost, and emergent capability is a perfect recipe for chaos.

Enterprises deploying LLMs today have no idea what’s coming.

The illusion of progress—buying another legacy vendor with an LLM wrapper—is dangerous. It delays the real work. It creates false confidence. The breaches have already started—and as agents gain traction, they’ll only grow faster, stranger, and harder to contain.

The New Paradigm

The next great security company won’t come from the old playbook. It will have AI at its core. It won’t look like an existing security company—it’ll look like OpenAI.

This is the security moonshot of our era.

And whoever gets it right will define the next decade of defense.

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

Download Free

Explore Prompt Injection Attacks.

Learn LLM security, attack strategies, and protection tools. Includes bonus datasets.

Unlock Free Guide

Learn AI Security Basics.

Join our 10-lesson course on core concepts and issues in AI security.

Enroll Now

Evaluate LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Download Free

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Download Free

The CISO's Guide to AI Security

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Download Free

Explore AI Regulations.

Compare the EU AI Act and the White House’s AI Bill of Rights.

Download Free
Mateo Rojas-Carulla

GenAI Security Preparedness
Report 2024

Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.

Free Download
Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download

Explore AI Regulations.

Compare the EU AI Act and the White House’s AI Bill of Rights.

Understand AI Security Basics.

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Optimize LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Master Prompt Injection Attacks.

Discover risks and solutions with the Lakera LLM Security Playbook.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

You might be interested
10
min read
AI Security

OWASP Top 10 for Large Language Model Applications Explained: A Practical Guide

In this practical guide, we’ll give you an overview of OWASP Top10 for LLMs, share examples, strategies, tools, and expert insights on how to address risks outlined by OWASP. You’ll learn how to securely integrate LLMs into your applications and systems while also educating your team.
Lakera Team
May 21, 2025
5
min read
AI Security

What Is Content Moderation for GenAI? A New Layer of Defense

A fresh look at content moderation in the GenAI era: why traditional filters fall short, and how real-time LLM guardrails change the game.
Lakera Team
May 22, 2025
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.