Lakera and Cohere Set the Bar for New Enterprise LLM Security Standards

Lakera and Cohere come together with a shared goal—to define new LLM security standards and empower organizations to confidently deploy LLM-based systems at scale.

Lakera Team
November 13, 2023
Hide table of contents
Show table of contents

The rapid adoption of Large Language Models (LLMs) has highlighted a multitude of security concerns that demand immediate attention. Issues such as prompt injection attacks, data leaks, phishing attempts, hallucinations, toxic language output, and more, have emerged as formidable threats, putting organizations relying on LLMs at risk.

Here, at Lakera, we have been actively engaged in extensive discussions with hundreds of developers and security engineers who are leading the way in developing LLM-powered systems in recent months. Among the primary challenges they face in deploying those systems at scale, security concerns take center stage.

There is no single magical solution to the growing number of threats we encounter and we’ve learnt that security is not a challenge limited to LLM providers; it extends its shadow over app builders and end-users alike as we continue to integrate these models into our daily lives.

We believe that it is imperative for the entire AI community to unite and collaborate in tackling these evolving challenges.

In the spirit of this collective effort, in July 2023, Lakera and Cohere came together with a shared goal—to define new LLM security standards and empower organizations to confidently deploy LLM-based systems at scale. Cohere focuses on enabling generative AI in enterprise environments and is at the forefront of establishing safety and security requirements for AI technology and LLMs. 

{{Advert}}

This shared commitment to addressing the most prevalent LLM cybersecurity threats has resulted in the creation of two valuable resources: the LLM Security Playbook and the Prompt Injection Attacks Cheatsheet.

Mateo Rojas-Carulla, Co-Founder and CPO of Lakera, shared:

Collaborating with Cohere and red-teaming their model has provided us with unique insights into the intricate nature of LLMs. Exploring novel and imaginative methods to break the model was both challenging and… fun. Red teaming offers a valuable opportunity to step into the shoes of potential attackers who can now manipulate LLMs using natural language rather than coding, opening up numerous new possibilities for anyone to exploit these models, potentially leading to harmful actions.

‍

In August, both teams also participated in DEFCON31's Generative Red Teaming AI Challenge, organized by AI Village, where participants were tasked with "hacking" Cohere's model (as well as other LLMs) that had previously undergone red-teaming by Lakera's team. DEFCON31 sparked numerous discussions about AI security and underscored the necessity for collaboration across the entire AI community to ensure the responsible use of LLMs.

Ads Dawson, Senior Security Engineer at Cohere and a founding core contributor to the OWASP Top 10 for LLM Applications project, added:

It’s essential for us to collaborate with companies like Lakera to refine our security practices continuously. Our red-teaming exercises allow us to uncover weak points in our security infrastructure and strengthen our defenses proactively. Also, our collaboration with other industry experts helps us stay informed about emerging threats and evolving security trends. By leading discussions on security challenges and solutions, we contribute to the collective effort to enhance the security posture of AI applications, making LLMs safer to use.

‍

The technical expertise of both teams, coupled with our insights from launching the largest global red teaming initiative - Gandalf, helped us redefine our approach to LLM security and has inspired us to seek innovative ways to ensure it.

This collaboration comes at a pivotal moment when organizations are seeking to harness the vast potential of LLMs and AI technology.

About Cohere

Cohere is the leading AI platform for enterprise providing access to advanced Large Language Models and NLP tools through one easy-to-use API.

About Lakera

Lakera is the leading AI security company building developer-first solutions that empower developers to confidently build secure AI applications and deploy them at scale.

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

Read LLM Security Playbook

Learn about the most common LLM threats and how to prevent them.

Download
You might be interested
Lakera Recognized in Gartner's GenAI Security Risks Report
Lakera Recognized in Gartner's GenAI Security Risks Report
1
min read
•

Lakera Recognized in Gartner's GenAI Security Risks Report

Lakera Recognized in Gartner's GenAI Security Risks Report

Gartner's report on GenAI security risks recognizes Lakera's solutions.
Lakera Featured in a NIST Report on AI Security
Lakera Featured in a NIST Report on AI Security
1
min read
•

Lakera Featured in a NIST Report on AI Security

Lakera Featured in a NIST Report on AI Security

Lakera's technology has been recognized by NIST in their report on Adversarial Machine Learning.
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.