AI security blog
The Rise of the Internet of Agents: A New Era of Cybersecurity
As AI-powered agents go online, securing our digital infrastructure will require a fundamental shift in cybersecurity.

Test machine learning the right way: Detecting data bugs.
In this second installment of our testing blog series, we take a deep dive into data bugs: what they look like, and how you can use specification and testing to ensure you have the right data for the job.

Test machine learning the right way: Metamorphic relations.
As part of our series on machine learning testing, we are looking at metamorphic relations. We’ll discuss what they are, how they are used in traditional software testing, what role they play in ML more broadly, and lastly, how to use them to write great tests for your machine learning application.

Fuzz Testing for Machine Learning: How to Do It Right
In this installment of our ML testing series, we cover fuzz testing: what it is, how it works, and how it can be used to stress-test machine learning systems and build confidence before going to production.

3 Strategies for Making Your ML Testing Mission-Critical Now
Testing machine learning systems is currently more of an art form than a standardized engineering practice. This is particularly problematic for machine learning in mission-critical contexts. This article summarizes three steps from our ML testing series that any development team can take when testing their ML systems.

Why testing should be at the core of machine learning development.
AI (artificial intelligence) is capable of helping the world scale solutions to our biggest challenges, but if you haven’t experienced or heard about AI’s mishaps, you’ve been living under a rock. Coded bias, unreliable hospital systems, and dangerous robots have littered headlines over the past few years.

The computer vision bias trilogy: Data representativity.
“Data is a reflection of the inequalities that exist in the world.” While this might be true, developers have great potential to curb bias in their computer vision systems.

Activate untouchable mode.
Get started for free.
Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.
Join our Slack Community.
Several people are typing about AI/ML security. Come join us and 1000+ others in a chat that’s thoroughly SFW.