Bagging (Bootstrap Aggregating)

Bagging, short for bootstrap aggregating, is an ensemble machine learning technique. The general idea is to create several subsets of the original dataset by sampling with replacement, and to train a separate model on each subset. The final prediction is obtained by averaging the individual predictions (in the case of a regression problem) or by majority vote (in the case of a classification problem).
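
To make the two aggregation rules concrete, here is a minimal sketch in Python using NumPy; the prediction arrays are invented for illustration.

```python
import numpy as np

# Hypothetical predictions from three models for five inputs.
reg_preds = np.array([[2.1, 3.0, 4.2, 5.1, 6.0],
                      [1.9, 3.2, 4.0, 4.9, 6.3],
                      [2.0, 2.9, 4.1, 5.0, 5.9]])
# Regression: average each column (one column per input) across the models.
print(reg_preds.mean(axis=0))  # [2.0, 3.03..., 4.1, 5.0, 6.07...]

cls_preds = np.array([[0, 1, 1, 0, 1],
                      [0, 1, 0, 0, 1],
                      [1, 1, 1, 0, 0]])
# Classification (binary labels here): majority vote per input.
print((cls_preds.mean(axis=0) >= 0.5).astype(int))  # [0 1 1 0 1]
```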

How Bagging works

Bagging works by following these steps:

  1. Multiple subsets are created from the original dataset using bootstrap sampling (random sampling with replacement).
  2. A base model is created for each of these subsets. The base model can be any machine learning algorithm, such as a decision tree, logistic regression, or a neural network.
  3. Each model is then trained independently of the others, so each one learns from a different sample of the dataset.
  4. To make a prediction, every individual model predicts the output, and the final prediction is decided either by majority vote (in the case of classification) or by averaging (in the case of regression), as in the sketch below.
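
The following is a from-scratch sketch of these four steps, assuming scikit-learn is available; the synthetic dataset and the choice of decision trees as the base model are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_models = 25
models = []
for _ in range(n_models):
    # Step 1: draw a bootstrap sample (random sampling with replacement).
    idx = rng.integers(0, len(X_train), size=len(X_train))
    # Steps 2-3: create and train an independent base model on this subset.
    models.append(DecisionTreeClassifier().fit(X_train[idx], y_train[idx]))

# Step 4: majority vote across the individual predictions.
all_preds = np.array([m.predict(X_test) for m in models])  # shape (n_models, n_test)
majority = (all_preds.mean(axis=0) >= 0.5).astype(int)     # binary majority vote
print("bagged accuracy:", (majority == y_test).mean())
```

Because each tree sees a different bootstrap sample, their individual errors are partly decorrelated, which is where the variance reduction described below comes from.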

This approach helps to reduce variance and avoid overfitting. One of the most common uses of bagging is the Random Forest algorithm, in which a collection of decision trees is bagged, with an extra layer of randomness: each split considers only a random subset of features, and the trees' outputs are combined into the final outcome.
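
In practice you rarely write the loop above yourself; scikit-learn ships both a generic bagging wrapper and Random Forest. A minimal sketch, reusing X_train, y_train, X_test, and y_test from above (the estimator parameter name applies to recent scikit-learn versions):

```python
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Generic bagging around an arbitrary base estimator (a decision tree here).
bag = BaggingClassifier(estimator=DecisionTreeClassifier(),
                        n_estimators=25, random_state=0).fit(X_train, y_train)

# Random Forest: bagged decision trees, plus a random subset of features
# considered at each split to further decorrelate the trees.
forest = RandomForestClassifier(n_estimators=25,
                                random_state=0).fit(X_train, y_train)

print(bag.score(X_test, y_test), forest.score(X_test, y_test))
```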
