Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy

Lakera Launches the AI Model Risk Index: A New Standard for Evaluating LLM Security

Lakera releases the AI Model Risk Index, the most comprehensive, realistic, and contextually relevant measure of model security for AI systems.

Lakera Team
June 24, 2025
June 24, 2025
Hide table of contents
Show table of contents

The reality is, an LLM can never be held accountable; therefore, an LLM must never make a management decision 

SAN FRANCISCO and ZURICH - June 24, 2025 – Lakera, the world’s leading security platform for generative AI applications, today announced the release of the AI Model Risk Index, the most comprehensive, realistic, and contextually relevant measure of model security for AI systems.

Designed to assess the real-world risk exposure of large language models (LLMs) to attacks, the Lakera AI Model Risk Index measures how effectively models can maintain their intended behavior under adversarial conditions. From AI-powered customer support bots to assistants, the report tests LLMs in realistic scenarios across industries, including technology, finance, healthcare, law, education and more.

“Traditional cybersecurity frameworks fall short in the era of generative AI,” said Mateo Rojas-Carulla, co-founder and Chief Scientist at Lakera. “We built the AI Model Risk Index to educate and inform. Enterprises deploying AI systems must completely rethink their approach to securing them. Today, attackers don’t need source code, they just need to know how to communicate with AI systems in plain English.”

Most risk assessment approaches focus on surface-level issues: testing prompt responses in isolation and with context independent static prompt attacks that focus on quantity and not on context or quality. By contrast, the Index asks a more practical question for enterprises: how easily can this model be manipulated to break mission-specific rules and objectives and in which type of deployments?

The difference is critical.

Within the report, you will find:

  • Real-world attack simulation models how adversaries target AI systems through multiple attack vectors, including direct manipulation attempts through user interactions and indirect attacks that embed malicious instructions in RAG documents or other content the AI processes.
  • Applied risk assessment focuses on measuring whether AI systems can maintain their intended purpose under adversarial conditions. The evaluation tests the model's consistency in performing its designated role, which is essential for enterprise deployments where predictable behavior drives business operations and regulatory compliance.
  • Quantitative risk measurement provides clear scoring that enables relative analysis between different AI models, tracks security improvements or degradations across model versions and releases, and delivers standardized metrics for enterprise security evaluation.

Key findings 

The results also reveal that newer and more powerful versions of large language models are not always more secure than earlier ones, and that all models, to some extent, can be manipulated to act outside their intended purpose.

Availability

The Lakera AI Model Risk Index, along with detailed model cards and AI risk assessment methodology, are available now at http://lakera.ai/ai-model-risk-index.

About Lakera

Lakera is the world’s leading GenAI security company with AI at its core. The company uses AI to continuously evolve its defenses so their enterprise customers can stay ahead of emerging threats. One of the data sources for Lakera’s research team is Gandalf, their viral AI security game that has generated 50+ million data points and grows by tens of thousands of novel attacks every day. Lakera was founded by David Haber, Mateo Rojas-Carulla and Matthias Kraft in 2021, and is dual-headquartered in Zurich and San Francisco. To learn more, visit Lakera.ai, play Gandalf, and connect with us on LinkedIn.

Lacey Haines

Head of Communications

press@lakera.ai

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

GenAI Security Preparedness 

Report 2024

Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.

Free Download
Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download

Explore AI Regulations.

Compare the EU AI Act and the White House’s AI Bill of Rights.

Understand AI Security Basics.

Get Lakera's AI Security Guide for an overview of threats and protection strategies.

Uncover LLM Vulnerabilities.

Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.

Optimize LLM Security Solutions.

Use our checklist to evaluate and select the best LLM security tools for your enterprise.

Master Prompt Injection Attacks.

Discover risks and solutions with the Lakera LLM Security Playbook.

Unlock Free AI Security Guide.

Discover risks and solutions with the Lakera LLM Security Playbook.

You might be interested
Lakera AI Model Risk Index featured on Yahoo Finance — advancing LLM security standards.
Lakera AI Model Risk Index featured on Yahoo Finance — advancing LLM security standards.
3
min read
Media Coverage

Yahoo Finance Highlights Lakera’s AI Model Risk Index Launch

Yahoo Finance Highlights Lakera’s AI Model Risk Index Launch

Yahoo Finance features Lakera’s launch of the AI Model Risk Index, a new standard for evaluating the security of large language models (LLMs).
3
min read
Media Coverage

Before scaling GenAI, map your LLM usage and risk zones

Before scaling GenAI, map your LLM usage and risk zones

Help Net Security highlights how The Motley Fool uses Lakera’s AI security platform to implement guardrails and reduce LLM risks when scaling GenAI.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.