Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy
All topics

OpenAI’s CLIP in production

We have released an implementation of OpenAI’s CLIP model that completely removes the need for PyTorch, enabling you to quickly and seamlessly install this fantastic model in production and even possibly on edge devices.
Daniel Timbrell
October 20, 2023
November 29, 2022
min read

Continuous testing and model selection with Lakera and Voxel51

We are excited to announce the release of our first integration with FiftyOne by Voxel51. This integration makes it possible to benefit from FiftyOne's powerful visualization features to dig into the insights generated by Lakera's MLTest. Read on to learn how you can benefit from this.
Santiago Arias
October 20, 2023
January 6, 2023
min read
Machine Learning

Your validation set won’t tell you if a model generalizes. Here’s what will.

As we all know from machine learning 101, you should split your dataset into three parts: the training, validation, and test set. You train your models on the training set. You choose your hyperparameters by selecting the best model from the validation set. Finally, you look at your accuracy (F1 score, ROC curve...) on the test set. And voilà, you’ve just achieved XYZ% accuracy.
Václav Volhejn
October 20, 2023
February 7, 2023
11
min read

[Updated for YOLOv8] How robust are pre-trained object detection ML models like YOLO or DETR?

Deep-dive into advanced comparison methods beyond standard performance metrics to build computer vision models that consistently perform over the long term.
Justin Deschenaux
October 20, 2023
January 26, 2023
min read
Computer Vision

Not All mAPs are Equal and How to Test Model Robustness

Model selection is a fundamental challenge for teams deploying to production: how do you choose the model that is most likely to generalize to an ever-changing world?
Mateo Rojas-Carulla
October 20, 2023
June 13, 2023
min read
AI Security

LLM Vulnerability Series: Direct Prompt Injections and Jailbreaks

of prompt injections that are currently in discussion. What are the specific ways that attackers can use prompt injection attacks to obtain access to credit card numbers, medical histories, and other forms of personally identifiable information?
Daniel Timbrell
October 20, 2023
July 18, 2023
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.