Lakera Team

15
min read
AI Security

Navigating AI Security: Risks, Strategies, and Tools

Discover strategies for AI security and learn how to establish a robust AI security framework. In this guide, we discuss various risks and propose a number of best practices to bolster the resilience of your AI systems.
Lakera Team
December 1, 2023
3
min read
Engineering

Releasing Canica: A Text Dataset Viewer

Discover Canica, Lakera's interactive text dataset viewer that elevates data analysis with visual exploration tools like t-SNE and UMAP. Now available for the machine learning community under the MIT license.
Lakera Team
December 1, 2023
10
min read
AI Security

OWASP Top 10 for Large Language Model Applications Explained: A Practical Guide

In this practical guide, we’ll give you an overview of the OWASP Top 10 for LLMs, share examples, strategies, tools, and expert insights on how to address the risks outlined by OWASP. You’ll learn how to securely integrate LLMs into your applications and systems while also educating your team.
Lakera Team
December 1, 2023
4
min read
Product Updates

Introducing Lakera Chrome Extension - Privacy Guard for Your Conversations with ChatGPT

Lakera introduces the Lakera PII Extension—a user-friendly Chrome plugin that lets you enter prompts into ChatGPT securely.
Lakera Team
December 1, 2023

The Beginner’s Guide to Hallucinations in Large Language Models

As LLMs gain traction across domains, hallucinations—distortions in LLM output—pose risks of misinformation and exposure of confidential data. Delve into the causes of hallucinations and explore best practices for their mitigation.
Lakera Team
December 1, 2023
1
min read
Product Updates

Lakera releases one-line Voxel51 integration to bring full-scale model validation to FiftyOne users.

The Voxel51 integration is here! Check out how to add full-scale model validation to your workflow with a single line of code.
Lakera Team
December 1, 2023
3
min read
Product Updates

Lakera releases ethics assessments for computer vision – MLTest for model fairness

We have added state-of-the-art ethics capabilities to MLTest. Whether you want to conduct an ethics assessment, check compliance with ISO 24027, or need an extra layer of safety, MLTest now makes it extremely easy to test whether your models encode any biases.
Lakera Team
December 1, 2023
min read
Product Updates

Lakera releases robustness testing suite for digital pathology

Lakera now makes it easy to test whether your algorithms are robust to histological artifacts and variations. Stress test your computer vision models to gain confidence in their robustness properties prior to clinical validation and deployment.
Lakera Team
December 1, 2023
7
min read
Computer Vision

Medical imaging as a serious prospect: Where are we at?

The promise these possibilities hold has put medical imaging at the front of the race to land AI in hospitals. But that is not the end of the discussion…
Lakera Team
December 1, 2023
min read
Computer Vision

The computer vision bias trilogy: Drift and monitoring.

Unforeseen data may be presented to a computer vision system during operation, despite careful curation of datasets and mitigation of shortcuts.
Lakera Team
December 1, 2023
min read
Computer Vision

The computer vision bias trilogy: Shortcut learning.

Nobel Prize-winning economist Daniel Kahneman once remarked, “By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.” This is certainly the case when we talk about “shortcut learning”.
Lakera Team
December 1, 2023
min read
Computer Vision

The computer vision bias trilogy: Data representativity.

“Data is a reflection of the inequalities that exist in the world”. While this might be true, developers have great potential to curb bias in their computer vision systems.
Lakera Team
December 1, 2023
min read
Machine Learning

Why testing should be at the core of machine learning development.

AI (artificial intelligence) is capable of helping the world scale solutions to our biggest challenges, but if you haven’t experienced or heard about AI’s mishaps, then you’ve been living under a rock. Coded bias, unreliable hospital systems, and dangerous robots have littered headlines over the past few years.
Lakera Team
December 1, 2023
min read
Machine Learning

3 Strategies for Making Your ML Testing Mission-Critical.

Testing machine learning systems is currently more of an art form than a standardized engineering practice. This is particularly problematic for machine learning in mission-critical contexts. This article summarizes three steps from our ML testing series that any development team can take when testing their ML systems.
Lakera Team
December 1, 2023
6
min read
Machine Learning

Test machine learning the right way: Fuzz testing.

In this instance of our ML testing series, we discuss fuzz testing. We discuss what it is, how it works, and how it can be used to stress test machine learning systems to gain confidence before going to production.
Lakera Team
December 1, 2023
6
min read
Machine Learning

Test machine learning the right way: Metamorphic relations.

As part of our series on machine learning testing, we are looking at metamorphic relations. We’ll discuss what they are, how they are used in traditional software testing, what role they play in ML more broadly, and, lastly, how to use them to write great tests for your machine learning application.
Lakera Team
December 1, 2023
4
min read

Test machine learning the right way: Regression testing.

In this blog series, we’ll investigate how we can better test machine learning applications. In the first post, we’ll look at what we mean by ML testing, what an ML bug is, and where they occur, as well as introduce the first technique for your ML testing repertoire: regression testing.
Lakera Team
December 1, 2023
5
min read
Machine Learning

Free of bias? We need to change how we build ML systems.

The topic of bias in ML systems has received significant attention recently. And rightly so. The core input to ML systems is data. And data is biased due to a variety of factors. Building a system free of bias is challenging. And in fact, the ML community has long struggled to define what a bias-free or fair system is.
Lakera Team
December 1, 2023
5
min read
Product Updates

How to Protect your Langchain Applications with Lakera Guard

In this tutorial, we'll show you how to integrate Lakera Guard into your LangChain applications to protect them from the most common AI security risks, including prompt injections, toxic content, data loss, and more!
Lakera Team
December 1, 2023
7
min read
AI Governance

Navigating the AI Regulatory Landscape: An Overview, Highlights, and Key Considerations for Businesses

The recent weeks have highlighted the increasing concerns over AI safety and security and showcased a collaborative effort among global entities in the EU, US, and the UK aiming to mitigate these risks. Here's a brief overview of the most recent key regulatory developments and their potential implications for businesses.
Lakera Team
December 1, 2023
2
min read
Product Updates

Lakera releases robustness testing suite for radiology AI teams

Great news for all AI radiology teams—you can now take your medical machine learning testing capabilities to a new level with MLTest, and easily test whether your algorithms are robust to radiological artifacts and variations.
Lakera Team
December 1, 2023
7
min read
AI Security

AI Security by Design: Lakera’s Alignment with MITRE ATLAS

Developed with MITRE ATLAS in mind, Lakera acts as a robust LLM gateway, addressing vulnerabilities in data, models, and on the user front, protecting your AI applications against the most prominent LLM threats.
Lakera Team
December 1, 2023
Activate untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.