
Lakera releases robustness testing suite for radiology AI teams

Great news for all AI radiology teams: you can now take your medical machine learning testing capabilities to a new level with MLTest and easily test whether your algorithms are robust to radiological artifacts and variations.

Lakera Team
December 1, 2023



Before medical imaging technologies are put into hospitals, it is important to evaluate their predictive performance and obtain regulatory approval. This requires testing methodologies that go far beyond what is common in other industries – an undertaking that can quickly become overwhelming.

This is why we’re proud to announce the release of our robustness testing suite for radiology imaging. In addition to all existing MLTest features, this release makes it even easier for medical imaging teams to validate their computer vision models, put together optimal datasets, and deploy their AI systems quickly and safely.

**💡 Pro tip: Lakera's computer vision safety testing is used by leading medical imaging teams. You can get started in minutes here.**

Why should I care about robustness testing?

While radiology delivers potentially life-saving information, the field is often plagued by a variety of artifacts, which can stem from anything from incorrect handling of the image and defects in the film to the patient’s movement or clothing. These artifacts can negatively impact the accuracy of a diagnostic exam and lead to incorrect treatments and patient harm.

Mitigating these artifacts is therefore vital for ensuring the quality and reliability of X-ray imaging results, allowing for an accurate diagnosis and appropriate patient care. Failure to adequately address these issues can also result in additional testing and exposure to unnecessary radiation.

Examples of artifacts in X-ray images

Our latest release enables teams to take their medical machine-learning testing capabilities to a new level. In addition to all of MLTest’s existing features, this test suite includes robustness tests specifically relevant to radiology applications, such as the following (a rough sketch of how these artifacts can be simulated appears after the list):

  • Double exposure: When the receptor is exposed twice and two images appear superimposed over one another.
  • Grid lines: Grids are placed between the patient and the X-ray detector to reduce scatter, but their lines can show up in the image.
  • Static electricity: Which can occur from flexing of the film or low humidity.
  • Image quality differences: Often caused when data comes from multiple scanning devices.
  • Dead pixels: Meaning certain regions of the image will be purely black.
  • Variations in focus: Often caused by the presence of foreign objects.
  • Variations in lighting conditions (e.g. brightness and contrast): Often caused by differences in the patients, processing, or scanning devices.
  • Geometric changes: Different patient locations and rotations.
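MLTest ships these perturbations out of the box, and its exact implementations aren’t shown here. Purely as a hypothetical sketch of what such artifact simulations can look like, the NumPy functions below approximate grid lines, dead pixels, and brightness/contrast shifts on a grayscale X-ray given as a float array in [0, 1].

```python
# Illustrative sketch only, not MLTest's implementation: rough simulations of
# three artifact classes on a grayscale X-ray stored as a float array in [0, 1].
import numpy as np

def add_grid_lines(image: np.ndarray, spacing: int = 16, intensity: float = 0.15) -> np.ndarray:
    """Overlay faint periodic lines, mimicking a visible anti-scatter grid."""
    out = image.copy()
    out[::spacing, :] = np.clip(out[::spacing, :] - intensity, 0.0, 1.0)
    out[:, ::spacing] = np.clip(out[:, ::spacing] - intensity, 0.0, 1.0)
    return out

def add_dead_pixels(image: np.ndarray, fraction: float = 0.001, seed: int = 0) -> np.ndarray:
    """Set a random fraction of pixels to pure black, mimicking dead detector regions."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    out[rng.random(out.shape) < fraction] = 0.0
    return out

def shift_brightness_contrast(image: np.ndarray, brightness: float = 0.1, contrast: float = 1.2) -> np.ndarray:
    """Apply a simple brightness/contrast change, mimicking device or processing differences."""
    return np.clip((image - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)
```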

**💡 Pro tip: Read our latest Nature journal article on testing medical imaging systems here.**

Robustness testing as a proxy for model generalization

The fundamental question when testing ML models is how to select the model with the best generalization properties. The gold standard is picking the model with the highest validation accuracy. But as we’ve written in one of our recent articles, this approach is seriously flawed. Reaching a great validation accuracy doesn’t necessarily imply that we’re any closer to having a production-ready model.

MLTest automatically synthesizes additional data to test the robustness of your model beyond the training distribution. It exercises your model on variations of the data that are likely to appear in the real world. And this is exactly what generalization means! Robustness tests allow you to go way beyond validation set accuracy and are a great predictor of model performance in the wild.
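To make the idea concrete, here is a minimal, hypothetical sketch (not MLTest’s API) of such a robustness report: it measures accuracy on the clean validation set and again under each simulated artifact, so the accuracy gap can serve as a rough proxy for how well the model generalizes beyond the training distribution. The `predict` wrapper and the perturbation functions are assumptions for illustration.

```python
# Illustrative sketch only, not MLTest's API: compare clean accuracy with
# accuracy under each named perturbation to flag brittle models.
from typing import Callable, Dict, Sequence
import numpy as np

def robustness_report(
    predict: Callable[[np.ndarray], int],                      # hypothetical wrapper: image -> predicted class
    images: Sequence[np.ndarray],
    labels: Sequence[int],
    perturbations: Dict[str, Callable[[np.ndarray], np.ndarray]],
) -> Dict[str, float]:
    """Return clean accuracy and accuracy under each named perturbation."""
    def accuracy(transform: Callable[[np.ndarray], np.ndarray]) -> float:
        correct = sum(predict(transform(img)) == label for img, label in zip(images, labels))
        return correct / len(images)

    report = {"clean": accuracy(lambda img: img)}
    for name, perturb in perturbations.items():
        report[name] = accuracy(perturb)
    return report

# Usage with the artifact simulators sketched earlier (also hypothetical):
# report = robustness_report(predict, val_images, val_labels,
#                            {"grid_lines": add_grid_lines, "dead_pixels": add_dead_pixels})
# A large gap between report["clean"] and the perturbed scores signals a brittle model.
```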

So if you want to get a better grasp on which of your models generalize, adding robustness tests with MLTest can give you much more confidence around model performance – prior to deployment.

Where to go from here?

To recap: with MLTest, you can now easily test whether your algorithms are robust to radiological artifacts and variations. It lets you stress test your computer vision models to gain confidence in their robustness properties prior to clinical validation. Does your lung infection model still work in cases of double exposure? Can something as simple as grid lines dramatically change your models’ performance?
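As a toy illustration of that first question (again, not MLTest’s actual test), a double exposure can be simulated by blending two receptor exposures and checking whether a hypothetical classifier’s prediction survives it:

```python
# Toy sketch, not MLTest's API: does a prediction survive a simulated double exposure?
import numpy as np

def double_exposure(image_a: np.ndarray, image_b: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Superimpose two exposures, as happens when the receptor is exposed twice."""
    return np.clip(alpha * image_a + (1.0 - alpha) * image_b, 0.0, 1.0)

def prediction_is_stable(predict, image: np.ndarray, ghost_image: np.ndarray) -> bool:
    """True if the (hypothetical) lung-infection classifier gives the same answer
    on the clean image and on its double-exposed counterpart."""
    return predict(image) == predict(double_exposure(image, ghost_image))
```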

If these questions are on your mind, then head on over to MLTest to learn more about how to get started within minutes, or get in touch with us at dan@lakera.ai.
