Before medical imaging technologies are deployed in hospitals, their predictive performance must be evaluated and regulatory approval obtained. This requires testing methodologies far more rigorous than those used in most other industries – an undertaking that can quickly become overwhelming.
This is why we’re proud to announce the release of our robustness testing suite for radiology imaging. In addition to all existing MLTest features, this release makes it even easier for medical imaging teams to validate their computer vision models, put together optimal datasets, and deploy their AI systems quickly and safely.
**💡 Pro tip: Lakera's computer vision safety testing is used by leading medical imaging teams. You can get started in minutes here.**
While radiology delivers potentially life-saving information, the field is plagued by a variety of artifacts, which can stem from incorrect handling of the image, defects in the film, or even a patient’s movement or clothing. These artifacts can reduce the accuracy of a diagnostic exam and lead to incorrect treatment and patient harm.
Mitigating these artifacts is therefore vital for ensuring the quality and reliability of X-ray imaging results, allowing for an accurate diagnosis and appropriate patient care. Failure to adequately address these issues can also result in additional testing and exposure to unnecessary radiation.
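To make this concrete, below is a minimal NumPy sketch of how two such artifacts – a grid-line overlay and a double-exposure blend, both discussed further below – could be simulated on an X-ray image. The function names and parameter values here are illustrative assumptions, not part of MLTest.

```python
import numpy as np

def add_grid_lines(image: np.ndarray, spacing: int = 32, intensity: float = 0.2) -> np.ndarray:
    """Overlay evenly spaced bright lines, mimicking an anti-scatter grid artifact."""
    out = image.astype(np.float32).copy()
    peak = out.max() if out.max() > 0 else 1.0
    out[::spacing, :] = np.clip(out[::spacing, :] + intensity * peak, 0.0, peak)
    out[:, ::spacing] = np.clip(out[:, ::spacing] + intensity * peak, 0.0, peak)
    return out

def double_exposure(image: np.ndarray, other: np.ndarray, alpha: float = 0.35) -> np.ndarray:
    """Blend a second exposure into the image, mimicking a double-exposure artifact."""
    return (1.0 - alpha) * image.astype(np.float32) + alpha * other.astype(np.float32)

# Example on a synthetic 512x512 grayscale "X-ray"
xray = np.random.rand(512, 512).astype(np.float32)
second_shot = np.roll(xray, shift=20, axis=1)   # slightly shifted second exposure
perturbed = add_grid_lines(double_exposure(xray, second_shot))
```

A model that is robust to these perturbations should produce essentially the same prediction for the clean and the perturbed image.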
Our latest release enables teams to take their medical machine-learning testing capabilities to a new level. In addition to all of MLTest’s existing features, this test suite includes robustness tests specifically relevant to radiology applications, such as simulated grid lines, double exposure, and other common radiological artifacts.
**💡 Pro tip: Read our latest Nature journal article on testing medical imaging systems here.**
The fundamental question when testing ML models is how to select the model with the best generalization properties. The gold standard is picking the model with the highest validation accuracy. But as we’ve written in one of our recent articles, this approach is seriously flawed: reaching a great validation accuracy doesn’t necessarily mean that we’re any closer to having a production-ready model.
MLTest automatically synthesizes additional data to test the robustness of your model beyond the training distribution. It exercises your model on variations of the data that are likely to appear in the real world. And this is exactly what generalization means! Robustness tests allow you to go far beyond validation set accuracy and are a strong predictor of model performance in the wild.
So if you want to get a better grasp on which of your models generalize, adding robustness tests with MLTest can give you much more confidence around model performance – prior to deployment.
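As a rough sketch of the idea – assuming a generic classifier with a `predict` method, not MLTest’s actual interface – such a robustness test can be expressed as a comparison between clean and perturbed validation accuracy:

```python
import numpy as np

def accuracy(model, images: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of images classified correctly (assumes predict() returns class indices)."""
    return float(np.mean(model.predict(images) == labels))

def robustness_test(model, images, labels, perturb, max_drop: float = 0.05) -> bool:
    """Pass if perturbed accuracy stays within `max_drop` of clean accuracy."""
    clean_acc = accuracy(model, images, labels)
    perturbed_images = np.stack([perturb(img) for img in images])
    perturbed_acc = accuracy(model, perturbed_images, labels)
    print(f"clean: {clean_acc:.3f}  perturbed: {perturbed_acc:.3f}")
    return clean_acc - perturbed_acc <= max_drop

# Example perturbation: additive Gaussian noise as a stand-in for a radiological artifact
add_noise = lambda img: np.clip(img + np.random.normal(0.0, 0.05, img.shape), 0.0, 1.0)
# robustness_test(model, val_images, val_labels, add_noise)   # model and data are placeholders
```

The choice of perturbations and the acceptable accuracy drop are application-specific; the point is that the test exercises the model on realistic variations rather than on the validation set alone.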
With MLTest, you can now easily test whether your algorithms are robust to radiological artifacts and variations. It lets you stress-test your computer vision models to gain confidence in their robustness properties prior to clinical validation. Does your lung infection model still work in cases of double exposure? Can something as simple as grid lines dramatically change your model’s performance?
If these questions are on your mind, then head on over to MLTest to learn more about how to get started within minutes, or get in touch with us at dan@lakera.ai.