
Lakera releases ethics assessments for computer vision – MLTest for model fairness

Whether you want to conduct an ethics assessment, check compliance with ISO 24027, or add an extra layer of safety, MLTest now makes it easy to test whether your models encode any biases. We have added state-of-the-art ethics capabilities to MLTest.

Lakera Team
December 8, 2022



Evaluating a machine learning model requires going beyond standard model tests and metrics such as accuracy, precision, and recall. To deploy responsible and aligned ML models, we need to understand whether the models we have built exhibit any biases during operation.

Bias can manifest itself in numerous ways throughout a model's lifecycle, from data collection and annotation to the features that a model uses for prediction, so it is important to test for both data bias and model bias throughout development and operation.

This becomes even more important as we increasingly rely on pre-trained models, which may have been trained on biased datasets and encode discriminatory behavior that can surface during operation (yes, our favorite ImageNet is also affected).

What is a fair ML model?

We asked ChatGPT how it would describe AI fairness:

AI fairness refers to the idea that artificial intelligence (AI) systems should not discriminate against certain groups of people based on characteristics such as race, gender, age, or other factors. This means that when an AI system is used to make decisions that affect people's lives, such as deciding who gets a job or who is eligible for a loan, it should treat all individuals equally and not make decisions based on biased or prejudiced assumptions. Ensuring AI fairness is important because biased AI systems can perpetuate and even amplify existing inequalities in society.

We think this is actually a pretty good description – let’s stick with it for now.

In computer vision, fairness is most often discussed in the context of healthcare applications, face recognition systems, and the EHS sector. But even models that do not operate on data from these domains can suffer from severe biases in operation. If we are not aware of these biases, models may underperform significantly, compromising trust with end users. Given such risks, it is well worth making fairness assessments part of your ML testing processes.

Implementing model fairness.

For good reason, governments, regulators, and standardization groups have released a plethora of guidance on what fairness is and how to evaluate it. Current regulations and proposals such as the EU AI Act are not short of demands when it comes to ethics, fairness, and bias in ML models. Once these regulatory guidelines come into effect, teams around the world will have to provide detailed fairness assessments of their AI systems.

While the requirements for fairness seem relatively clear at this stage, we have found that there is still a large gap between defining what fairness is and actually implementing it for specific applications.

As a concrete example, ISO 24027, currently the leading international standard on AI bias, defines a good set of fairness metrics and shows what they look like for a generic binary classification case. But how do you take the proposed metrics from binary classification to image-based object detection? What about multi-class image segmentation?

In other words, it’s actually quite tricky to evaluate model fairness for your specific models and use cases, especially when it comes to computer vision.
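To make this concrete, the sketch below shows what two commonly cited fairness metrics look like in the plain binary classification setting that standards such as ISO 24027 describe. It is a generic illustration in NumPy rather than code from the standard or from MLTest, and the toy labels, predictions, and group attribute are made up.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in true-positive rate (recall) between groups."""
    tprs = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Toy data: binary labels, binary predictions, and a group attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_gap(y_pred, groups))          # gap in P(pred = 1 | group)
print(equal_opportunity_gap(y_true, y_pred, groups))   # gap in per-group TPR
```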

Lakera brings fairness to ML testing.

Knowing the struggles of assessing fairness for real-world computer vision systems, we accepted the challenge. We digested the most recent regulatory proposals, turned to the latest literature on ML fairness, and combined our regulatory experience with standards such as ISO 24027 to come up with a concrete implementation of model fairness in MLTest. The result is an implementation of state-of-the-art fairness metrics for computer vision models, and we are excited to release it to our users!

With this new update, users can easily add fairness tests to their MLTest pipelines with a single line of code:
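As a rough sketch of what such an integration could look like: the `mltest` module, `Pipeline` class, and `add_fairness_tests` method below are hypothetical, illustrative names rather than Lakera's documented API, so refer to the MLTest documentation for the actual call.

```python
# Hypothetical sketch: `mltest`, `Pipeline`, and `add_fairness_tests` are
# illustrative names, not the documented MLTest API.
import mltest

# `my_model` and `my_dataset` stand in for your own model and annotated dataset.
pipeline = mltest.Pipeline(model=my_model, dataset=my_dataset)

# The single added line: enable fairness tests, grouped by a protected attribute
# that is annotated in the dataset's metadata.
pipeline.add_fairness_tests(
    metrics=["demographic_parity", "equalized_odds"],
    group_by="age_group",
)

report = pipeline.run()
```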

Our documentation includes more examples and a list of available metrics. You can now evaluate your object detection and image classification models on fairness dimensions such as demographic parity, equality of opportunity, equalized odds, and predictive equality.
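To illustrate how a classification-style fairness notion can carry over to detection, one generic option is to compute recall per group over ground-truth boxes matched to predictions by IoU and compare the groups, in the spirit of equality of opportunity. The sketch below is a simplified, self-contained illustration rather than MLTest's implementation; the group names, boxes, and IoU threshold are made up.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def per_group_recall(gt_boxes, gt_groups, pred_boxes, iou_thr=0.5):
    """Fraction of ground-truth boxes detected, split by group attribute."""
    recalls = {}
    for group in set(gt_groups):
        boxes = [b for b, g in zip(gt_boxes, gt_groups) if g == group]
        hits = sum(
            any(box_iou(gt, pred) >= iou_thr for pred in pred_boxes)
            for gt in boxes
        )
        recalls[group] = hits / max(len(boxes), 1)
    return recalls

# Toy example: one ground-truth object per group and a single detection.
gt_boxes = [(0, 0, 10, 10), (20, 20, 30, 30)]
gt_groups = ["group_a", "group_b"]
pred_boxes = [(1, 1, 9, 9)]

# A large gap between per-group recalls signals an equality-of-opportunity issue.
print(per_group_recall(gt_boxes, gt_groups, pred_boxes))
```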

Together with MLTest's insights into model failure clusters, you can now easily include a fairness assessment of your computer vision algorithm as part of your existing pipelines, and you can check today whether your computer vision models comply with ISO 24027 and other regulatory requirements.

The (im)possibility of fairness.

Building fair AI systems is often discussed as an important objective. A common question is “how do we avoid bias?” or “how do we build systems that are fair?”

The truth is that it may be impossible to build a truly fair ML model. Some notions of fairness are fundamentally incompatible with each other (for example, a classifier generally cannot satisfy both calibration and equalized odds when base rates differ across groups), and we may not be able to eliminate bias in the first place.

So the focus should be on surfacing model biases. Known model biases can be taken into account during decision-making. Unknown biases cannot. If we have transparency around model biases, we can put downstream mitigation strategies in place.

Where to go from here?

You can make Lakera's fairness assessment a part of your everyday workflows and automatically check whether your models encode any biases. This reduces the risk that your models behave in undesired ways during operation and minimizes future compliance risks as well.

You can get access to MLTest here. If you have any questions or would like to continue the discussion around ML fairness, reach out to David at david@lakera.ai.
