The computer vision bias trilogy: Drift and monitoring.

Unforeseen data may be presented to the computer vision system during operation despite careful mitigation of datasets and shortcuts.

Lakera Team
December 1, 2023

If the past three years have taught us anything, it is that the world around us can take unexpected turns. The same can be true for your computer vision models.

Despite careful mitigation of dataset biases and shortcut learning, unforeseen data may still reach a computer vision model during operation. One such phenomenon is data drift.

A hospital may change its X-ray machine and keep using the same computer vision model for diagnosis, even though the system was never trained on this kind of input data. Similarly, an autonomous car built solely for European streets, notable for their twists and turns, may not perform as expected when deployed in an American city.

Fail, but fail gracefully.

ML models tend to fail silently: they keep making predictions regardless, albeit erroneous ones. Operational bias can be mitigated with the right strategies: the wider ML system should detect during operation whether an image looks “suspicious” or “unknown”, and fail gracefully (for example, by asking the doctor to take a closer look).
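One simple way to fail gracefully is a reject option: the system only answers on its own when the model is confident enough, and otherwise routes the case to a human reviewer. A minimal sketch, assuming softmax class probabilities and a hypothetical confidence threshold tuned on held-out data:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # assumed operating point; tune on held-out data

def predict_with_fallback(class_probs):
    """Return the predicted class index, or defer to a human reviewer
    when the model is not confident enough to answer on its own."""
    confidence = float(np.max(class_probs))
    if confidence < CONFIDENCE_THRESHOLD:
        return "defer_to_human"  # graceful failure: route to a reviewer
    return int(np.argmax(class_probs))

print(predict_with_fallback(np.array([0.05, 0.92, 0.03])))  # -> 1
print(predict_with_fallback(np.array([0.40, 0.35, 0.25])))  # -> defer_to_human
```

The threshold trades automation against safety: lowering it means fewer cases sent to humans, but more silent errors slipping through.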

Out-of-distribution detection.

The problem of finding such problematic inputs is called out-of-distribution detection. It is a challenging problem: it involves comparing distributions of high-dimensional objects. If you’re interested in learning more, the research in the area is extensive [1], [2], [3]. Note that out-of-distribution detection is a key component of many learning systems.

For example, Generative Adversarial Networks train a discriminator network whose sole task is to judge whether a generated image looks “suspicious” when compared against a reference dataset. Systems in production should be equipped with an out-of-distribution detector so that problematic samples are caught on the fly. If a problematic image is detected, the system should fail gracefully, reducing the risk of silent failures in your computer vision system.
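A much simpler baseline than a learned discriminator (and not the article’s own method, just a common starting point) is to flag inputs whose maximum softmax probability is unusually low: in-distribution images tend to produce a confident peak, while unfamiliar ones produce near-uniform scores. A sketch, with an assumed threshold that would be calibrated on in-distribution data:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def is_out_of_distribution(logits, threshold=0.5):
    """Flag an input as out-of-distribution when the maximum softmax
    probability falls below a threshold calibrated on in-distribution data."""
    max_softmax_prob = softmax(logits).max(axis=-1)
    return max_softmax_prob < threshold

in_dist_logits = np.array([6.0, 0.5, 0.2])  # one class clearly wins
ood_logits = np.array([1.1, 1.0, 0.9])      # near-uniform: model is unsure
print(is_out_of_distribution(in_dist_logits))  # -> False
print(is_out_of_distribution(ood_logits))      # -> True
```

This baseline is far from state of the art (overconfident networks can fool it), but it is cheap to add to any classifier already running in production.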

It is essential to keep data drift in mind once your system is in production. Keeping the data and the model up to date is just one part of any AI system’s lifecycle. In the meantime, ensure that mitigation strategies are in place so that suspicious outcomes are detected and reviewed by humans in the loop.
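Monitoring for drift can start with something very lightweight: track a per-image summary statistic (mean brightness, say) in production and compare its distribution against a training-time reference. The sketch below, under those assumptions and with a hypothetical alert threshold, uses the two-sample Kolmogorov–Smirnov statistic, i.e. the largest gap between the two empirical CDFs:

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples."""
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    cdf_a = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    cdf_b = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return float(np.abs(cdf_a - cdf_b).max())

def drift_alert(reference_stats, live_stats, threshold=0.05):
    """Alert when a per-image summary statistic (e.g. mean brightness)
    collected in production drifts away from the training-time reference.
    The threshold is an assumed operating point; calibrate it offline."""
    return ks_statistic(reference_stats, live_stats) > threshold

rng = np.random.default_rng(0)
reference = rng.normal(0.5, 0.1, size=2000)  # brightness stats at training time
shifted = rng.normal(0.65, 0.1, size=2000)   # e.g. a new, brighter X-ray machine
print(drift_alert(reference, reference))  # -> False (identical samples)
print(drift_alert(reference, shifted))    # -> True  (distribution has moved)
```

When the alert fires, the appropriate response is the graceful-failure path above: pause automated decisions for the affected inputs and bring humans into the loop while the data and model are brought back in sync.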

Get started with Lakera today.

Get in touch to find out more about what Lakera can do for your team, or get started right away.
