Adversarial Machine Learning

Adversarial machine learning is the study of attacks on machine learning algorithms, and of defenses against such attacks. Attacks typically introduce specially crafted inputs, known as adversarial examples, that cause a model to make incorrect predictions.

How Adversarial Machine Learning Works

1. Adversarial Examples:

  1. These are input samples that have been slightly modified to deceive a trained model, resulting in misclassifications. These modifications are often imperceptible or barely noticeable to humans.

2. Generating Adversarial Examples:

  1. Techniques like the Fast Gradient Sign Method (FGSM) or the Jacobian-based Saliency Map Attack (JSMA) are used. For instance, FGSM computes the gradient of the loss with respect to the input data, then perturbs each input feature by a small step in the direction of the sign of that gradient, increasing the loss.

3. Types of Attacks:

  1. White-box Attacks: Attackers have full knowledge of the model, including its architecture and parameters. They use this knowledge to craft adversarial examples.
  2. Black-box Attacks: Attackers have limited knowledge of the target model. They might know its type or training data but not its internal parameters.
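The FGSM step described above can be sketched on a toy logistic-regression "model" in NumPy. The weights, input, and epsilon below are illustrative assumptions, not values from any real system; for this model the gradient of the cross-entropy loss with respect to the input works out to `(p - y) * w`:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """One FGSM step against a logistic-regression model:
    shift x by epsilon * sign(d loss / d x) to increase the loss."""
    p = sigmoid(np.dot(w, x) + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Toy "trained" model and input (all values chosen for illustration)
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])      # clean input, true label 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.3)

clean_conf = sigmoid(np.dot(w, x) + b)
adv_conf = sigmoid(np.dot(w, x_adv) + b)
print(f"confidence on clean input:       {clean_conf:.3f}")
print(f"confidence on adversarial input: {adv_conf:.3f}")
```

The perturbation changes each feature by at most epsilon, yet the model's confidence in the true label drops, which is exactly the effect FGSM is designed to produce.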
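Black-box attacks often rely on transferability: the attacker trains a surrogate model, crafts adversarial examples against it, and those examples frequently fool the unseen target as well. A minimal sketch with two linear models (all weights and inputs here are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Class label (0 or 1) from a linear model."""
    return int(sigmoid(np.dot(w, x) + b) >= 0.5)

def fgsm(x, y, w, b, epsilon):
    """FGSM against a logistic-regression surrogate; the gradient of the
    cross-entropy loss w.r.t. x is (p - y) * w."""
    p = sigmoid(np.dot(w, x) + b)
    return x + epsilon * np.sign((p - y) * w)

# Surrogate the attacker trained; the target is unseen but, having learned a
# similar task, has a similar decision boundary (weights are illustrative).
w_surrogate, b_surrogate = np.array([1.0, -1.5, 0.8]), 0.0
w_target, b_target = np.array([0.9, -1.2, 1.0]), 0.05

x = np.array([0.5, -0.5, 0.4])   # clean input, true label 1
y = 1.0

# Craft the example using only the surrogate, then test it on the target.
x_adv = fgsm(x, y, w_surrogate, b_surrogate, epsilon=0.6)

clean_pred = predict(w_target, b_target, x)    # target is correct on clean x
adv_pred = predict(w_target, b_target, x_adv)  # adversarial example transfers
print(clean_pred, adv_pred)
```

The attacker never queries the target's gradients; because the two models learned similar decision boundaries, an example crafted against the surrogate flips the target's prediction too.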