How to Protect your Langchain Applications with Lakera Guard

In this tutorial, we'll show you how to integrate Lakera Guard into your LangChain applications to protect them from the most common AI security risks, including prompt injections, toxic content, data loss, and more!

Lakera Team
December 1, 2023

LangChain offers versatile and user-friendly solutions for those looking to develop applications with Large Language Models (LLMs). But with development comes the essential task of ensuring the model's security.

For starters, the LLM in use shouldn't be easily prone to prompt injections and should avoid producing harmful content or engaging in user interactions that might lead to unintended outputs.

Fortunately for LLM developers, chaining LangChain's LLM component with Lakera Guard — either sequentially or in parallel — provides precisely this level of protection.

{{Advert}}

Question-answering  in LangChain

A common use case in LangChain is retrieval-augmented generation, namely an LLM-powered Q&A service that answers users’ questions based on a custom database. The following code shows how this can be achieved within LangChain.

LangChain question-answering secured by Lakera Guard

Now, let's secure the Q&A service by integrating Lakera Guard into the code mentioned above.

First, we'll need to add the required imports, specify the Lakera API key, and define the function that requests AI security feedback from the Guard’s API—in this case, the detection of prompt injections.

Run Lakera Guard before question-answering

We can chain Lakera Guard and the LLM sequentially. This means that we check if Lakera Guard has identified any AI security risk in the question before the question is sent to the LLM to get an answer.

If you simply want to get the AI security feedback alongside the answer, you can extend the code with the lakera_guard function that adds the feedback to the output.

Instead of invoking QA_chain directly, let us now chain lakera_guard and QA_chain sequentially. This is where the magic happens!

By stopping the question from being answered when an AI security issue is detected, you can save unnecessary calls to the LLM and thereby save money. To achieve that, use the following lakera_guard function instead:

Lakera Guard provides rapid feedback, with a typical response time under 200ms. However, if this latency is excessive for your use case, you can run Lakera Guard's AI security checks in parallel with answering the question. While this approach eliminates any latency, it does have a drawback: the LLM might be triggered even if a question is flagged as an AI security risk and potentially shouldn't be answered in the first place.

Wrapping up: Protecting your Langchain Applications with Lakera Guard

By following the above tutorial, you can integrate Lakera Guard into your LangChain application in less than an hour. Lakera Guard acts as a security layer between the user and any LLM. Through the Guard’s API, developers get security feedback on a user’s prompt to the LLM. This includes the detection of prompt injections, toxic content, and data loss. More specifically, Lakera Guard’s backbone consists of text classifiers for prompt injection, jailbreaking, hate speech, inappropriate sexual content, personally identifiable information, and unknown links (spam, phishing).

We are constantly improving our detectors and adding new functionalities. To learn more, sign up for free to test Lakera Guard yourself and check out our documentation.

Lakera LLM Security Playbook
Learn how to protect against the most common LLM vulnerabilities

Download this guide to delve into the most common LLM security risks and ways to mitigate them.

Lakera Team
Read LLM Security Playbook
Learn about the most common LLM threats and how to prevent them.
Download
You might be interested
2
min read
Product Updates

Lakera releases robustness testing suite for radiology AI teams

Great news for all AI radiology teams—you can now take your medical machine learning testing capabilities to a new level with MLTest. You can now easily test whether your algorithms are robust to radiological artifacts and variations.
Lakera Team
December 1, 2023
3
min read
Product Updates

Lakera releases ethics assessments for computer vision – MLTest for model fairness

Whether you want to conduct an ethics assessment, want to check compliance with ISO 24027, or need an extra layer of safety, MLTest now makes it extremely easy to test if your models encode any biases. We have added state-of-the-art ethics capabilities to MLTest.
Lakera Team
December 1, 2023
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.