
Lakera and Cohere Set the Bar for New Enterprise LLM Security Standards

Lakera and Cohere come together with a shared goal—to define new LLM security standards and empower organizations to confidently deploy LLM-based systems at scale.

Lakera Team
July 24, 2024

The rapid adoption of Large Language Models (LLMs) has highlighted a multitude of security concerns that demand immediate attention. Prompt injection attacks, data leaks, phishing attempts, hallucinations, toxic language output, and other issues have emerged as formidable threats, putting organizations that rely on LLMs at risk.

At Lakera, we have spent recent months in extensive discussions with hundreds of developers and security engineers who are leading the way in building LLM-powered systems. Among the primary challenges they face in deploying those systems at scale, security concerns take center stage.

There is no single magical solution to the growing number of threats we encounter. We have also learned that security is not a challenge limited to LLM providers; it extends to app builders and end users alike as these models become part of our daily lives.

We believe that it is imperative for the entire AI community to unite and collaborate in tackling these evolving challenges.

In the spirit of this collective effort, in July 2023, Lakera and Cohere came together with a shared goal—to define new LLM security standards and empower organizations to confidently deploy LLM-based systems at scale. Cohere focuses on enabling generative AI in enterprise environments and is at the forefront of establishing safety and security requirements for AI technology and LLMs. 


This shared commitment to addressing the most prevalent LLM cybersecurity threats has resulted in the creation of two valuable resources: the LLM Security Playbook and the Prompt Injection Attacks Cheatsheet.

Mateo Rojas-Carulla, Co-Founder and CPO of Lakera, shared:

Collaborating with Cohere and red-teaming their model has provided us with unique insights into the intricate nature of LLMs. Exploring novel and imaginative methods to break the model was both challenging and… fun. Red teaming offers a valuable opportunity to step into the shoes of potential attackers who can now manipulate LLMs using natural language rather than coding, opening up numerous new possibilities for anyone to exploit these models, potentially leading to harmful actions.


In August, both teams also participated in DEFCON31's Generative Red Teaming AI Challenge, organized by AI Village, where participants were tasked with "hacking" Cohere's model (as well as other LLMs) that had previously undergone red-teaming by Lakera's team. DEFCON31 sparked numerous discussions about AI security and underscored the necessity for collaboration across the entire AI community to ensure the responsible use of LLMs.

Ads Dawson, Senior Security Engineer at Cohere and a founding core contributor to the OWASP Top 10 for LLM Applications project, added:

It’s essential for us to collaborate with companies like Lakera to refine our security practices continuously. Our red-teaming exercises allow us to uncover weak points in our security infrastructure and strengthen our defenses proactively. Also, our collaboration with other industry experts helps us stay informed about emerging threats and evolving security trends. By leading discussions on security challenges and solutions, we contribute to the collective effort to enhance the security posture of AI applications, making LLMs safer to use.


The technical expertise of both teams, coupled with our insights from launching Gandalf, the largest global red-teaming initiative, helped us redefine our approach to LLM security and inspired us to seek innovative ways to ensure it.

This collaboration comes at a pivotal moment when organizations are seeking to harness the vast potential of LLMs and AI technology.

About Cohere

Cohere is the leading AI platform for enterprise, providing access to advanced Large Language Models and NLP tools through one easy-to-use API.

About Lakera

Lakera is the leading AI security company building developer-first solutions that empower developers to confidently build secure AI applications and deploy them at scale.

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

