Cookie Consent
Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Read our Privacy Policy
All topics
11
min read

[Updated for YOLOv8] How robust are pre-trained object detection ML models like YOLO or DETR?

Deep-dive into advanced comparison methods beyond standard performance metrics to build computer vision models that consistently perform over the long term.
Justin Deschenaux
October 20, 2023
January 26, 2023
min read
Computer Vision

Not All mAPs are Equal and How to Test Model Robustness

Model selection is a fundamental challenge for teams deploying to production: how do you choose the model that is most likely to generalize to an ever-changing world?
Mateo Rojas-Carulla
October 20, 2023
June 13, 2023
min read
AI Security

LLM Vulnerability Series: Direct Prompt Injections and Jailbreaks

of prompt injections that are currently in discussion. What are the specific ways that attackers can use prompt injection attacks to obtain access to credit card numbers, medical histories, and other forms of personally identifiable information?
Daniel Timbrell
October 20, 2023
July 18, 2023

10 Techniques for Effective Prompt Engineering

Explore proven techniques for prompt engineering, addressing the technical foundations and practical implementations that drive successful AI interactions.
Deval Shah
October 20, 2023
January 9, 2025

The List of 11 Most Popular Open Source LLMs [2025]

Discover the top 11 open-source Large Language Models (LLMs) that are shaping the landscape of AI. Explore their features, benefits, and challenges in this comprehensive guide to stay updated on the latest developments in the world of language technology.
Armin Norouzi
October 20, 2023
August 21, 2023

The Beginner’s Guide to Hallucinations in Large Language Models

As LLMs gain traction across domains, hallucinations—distortions in LLM output—pose risks of misinformation and exposure of confidential data. Delve into the causes of hallucinations and explore best practices for their mitigation.
Deval Shah
October 20, 2023
August 23, 2023
Activate
untouchable mode.
Get started for free.

Lakera Guard protects your LLM applications from cybersecurity risks with a single line of code. Get started in minutes. Become stronger every day.

Join our Slack Community.

Several people are typing about AI/ML security. 
Come join us and 1000+ others in a chat that’s thoroughly SFW.