Download this guide to delve into the most common LLM security risks and ways to mitigate them.
In-context learning
As users increasingly rely on Large Language Models (LLMs) to accomplish their daily tasks, their concerns about the potential leakage of private data by these models have surged.
[Provide the input text here]
[Provide the input text here]
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.
Lorem ipsum dolor sit amet, Q: I had 10 cookies. I ate 2 of them, and then I gave 5 of them to my friend. My grandma gave me another 2boxes of cookies, with 2 cookies inside each box. How many cookies do I have now?
â
Title italic
A: At the beginning there was 10 cookies, then 2 of them were eaten, so 8 cookies were left. Then 5 cookieswere given toa friend, so 3 cookies were left. 3 cookies + 2 boxes of 2 cookies (4 cookies) = 7 cookies. Youhave 7 cookies.
English to French Translation:
Q: A bartender had 20 pints. One customer has broken one pint, another has broken 5 pints. A bartender boughtthree boxes, 4 pints in each. How many pints does bartender have now?
Lorem ipsum dolor sit amet, line first
line second
line third
Lorem ipsum dolor sit amet, Q: I had 10 cookies. I ate 2 of them, and then I gave 5 of them to my friend. My grandma gave me another 2boxes of cookies, with 2 cookies inside each box. How many cookies do I have now?
â
Title italic Title italicTitle italicTitle italicTitle italicTitle italicTitle italic
A: At the beginning there was 10 cookies, then 2 of them were eaten, so 8 cookies were left. Then 5 cookieswere given toa friend, so 3 cookies were left. 3 cookies + 2 boxes of 2 cookies (4 cookies) = 7 cookies. Youhave 7 cookies.
English to French Translation:
Q: A bartender had 20 pints. One customer has broken one pint, another has broken 5 pints. A bartender boughtthree boxes, 4 pints in each. How many pints does bartender have now?
LangChain has become one of the easiest ways to integrate a Large Language Model (LLM) into your applications, but guarding those LLM-powered applications against prompt injection and other risks isnât as straightforward.
ChainGuard provides a simple, reliable way to guard your LangChain agents and applications from prompt injection, jailbreaks, and more with Lakera Guard.
ChainGuard has been published under the MIT license to the Python Package Index (PyPI) as lakera-chainguard,its source code is available on GitHub, and you can install it via pip (or your package manager of choice):
{{Advert}}
Chains are a hardcoded sequence of actions that could call an LLM, a tool, or some sort of data manipulation. Agents empower a model to decide which actions to take in what order.
Guarding both of these use cases requires different approaches. You can provide your agent with a tool, but as anyone who has worked with tool-enabled LLMs will tell you, getting an agent to reliably, consistently, and accurately use a tool is more complicated than it seems.
Beyond the potential for the LLM to incorrectly implement the tool, this approach could be vulnerable to prompts that convince the agent that they no longer have access to the tool or that the tool has already provided a valid response.
ChainGuard provides a wrapper to create a guarded version of any LLM or Chat Model supported by LangChain, including your custom AgentExecutors.
By default, ChainGuard uses the prompt injection endpoint and raises an exception but allows you to choose which Lakera Guard endpoint to invoke and whether ChainGuard should raise an Exception or a Warning.
Weâve created some in-depth tutorials and quick how-to guides for integrating ChainGuard into your LangChain applications and provided some examples below to get you started.
If you want more control over how your code handles the flagged input, ChainGuardâs exceptions and warnings include the full results of the call to Lakera Guard. You can log the results or inspect the confidence scores for the risk categories that the endpoint detects, or, in the case of endpoints like the Personally Identifiable Information (PII) endpoint, you can use the payload of detected PII entities to obfuscate the PII before sending any input to the LLM.
Hereâs an example of a prompt that contains PII. Maybe the user is pasting in data from another system where they have privileged access:
Using ChainGuard to guard your LangChain LLM with Lakera Guardâs PII endpoint, you can redact the PII before the userâs input gets sent to your LLM:
After we catch the PII warning and redact the identifying information, we can pass the redacted input to the LLM without worrying about the user appropriately protecting this information from the third-party LLM.
You can also follow our guide to Automatically Redacting PII if you want to use this as part of a chain.
Retrieval Augmented Generation (RAG) is one of the most popular ways to include up-to-date and relevant context with questions from your LLM-enabled applicationâs users. Indirect prompt injection involves an attacker including their prompt injection in some external content that the LLM will interpret.
Weâve set up a demo page with a brief description of Lakera Guard and embedded an indirect prompt injection on the page. See if you can find it.
To see this attack in action, you can follow the LangChain Q&A with this quickstart tutorial and use our demo URL instead of the example blog post: http://lakeraai.github.io/chainguard/demos/indirect-prompt-injection/
When you ask the RAG chain about Lakera Guard using our injected demo URL for context, you should notice a link to learn more, but we canât trust links from unknown sources. In this case, itâs a harmless Rickroll, but a motivated attacker could easily include malicious links or misinformation.
We can protect our RAG applications against these indirect prompt injection attacks with ChainGuardâs `detect` method and LangChainâs RunnableLambda functionality.
ChainGuard makes it easy to protect your LangChain applications by being flexible enough to fit into your LangChain workflows, regardless of which implementation pattern youâre using.
Weâre looking forward to helping you protect your LangChain applications and welcome any feedback or contributions to ChainGuard.
Download this guide to delve into the most common LLM security risks and ways to mitigate them.
Get the first-of-its-kind report on how organizations are preparing for GenAI-specific threats.
Compare the EU AI Act and the White Houseâs AI Bill of Rights.
Get Lakera's AI Security Guide for an overview of threats and protection strategies.
Explore real-world LLM exploits, case studies, and mitigation strategies with Lakera.
Use our checklist to evaluate and select the best LLM security tools for your enterprise.
Discover risks and solutions with the Lakera LLM Security Playbook.
Discover risks and solutions with the Lakera LLM Security Playbook.
Subscribe to our newsletter to get the recent updates on Lakera product and other news in the AI LLM world. Be sure youâre on track!
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Several people are typing about AI/ML security. â¨Come join us and 1000+ others in a chat thatâs thoroughly SFW.