Log Loss, also known as logarithmic loss, is a performance metric for evaluating how well a classifier's predicted class-membership probabilities match the true labels. It is essentially a loss function used in machine learning algorithms where the aim is to output predictions as probabilities rather than hard class labels. Log Loss is particularly favored in binary classification problems, and it is useful when predictions need to be ranked by confidence in addition to being classified.
Log Loss in practice
Log Loss quantifies the accuracy of a classifier by penalising incorrect classifications. It penalises classifiers that are confident about an incorrect classification especially heavily: the further the predicted probability is from the actual class, the larger the penalty.
In the case of binary classification, where the number of classes M equals 2, the formula to calculate Log Loss is:

Log Loss = -(1/n) * Σ [y log(p) + (1 - y) log(1 - p)]

where:

- n is the number of samples or instances,
- y is the binary target (0 or 1),
- p is the predicted probability of the instance being in class 1.
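The formula above translates directly into code. Below is a minimal sketch of it in plain Python; the function name `log_loss` and the `eps` clipping parameter are illustrative choices, not part of any particular library's API (the clipping is a common practical safeguard, since log(0) is undefined):

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary log loss: -(1/n) * sum(y*log(p) + (1-y)*log(1-p))."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        # Clip p away from exactly 0 and 1 so log() stays finite.
        p = min(max(p, eps), 1 - eps)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)
```

For example, `log_loss([1, 0], [0.9, 0.1])` works out to -log(0.9), roughly 0.105, since both predictions put 90% probability on the correct class.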
Log Loss compares the predicted probabilities against the actual target classes and measures how well they agree. In other words, it examines the disparity between the actual outcome and the predicted probability assigned to that outcome. Lower values of Log Loss indicate better predictions, and hence it is a quantity we aim to minimize.
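The asymmetry between a cautious miss and a confident one can be seen by computing the per-sample penalty directly. The helper below is a hypothetical illustration of the term inside the sum, not a library function:

```python
import math

def sample_loss(y, p, eps=1e-15):
    # Penalty one sample contributes to the average log loss.
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# True class is 1; compare a cautious miss with a confident one.
print(sample_loss(1, 0.4))   # mild penalty, about 0.92
print(sample_loss(1, 0.01))  # heavy penalty, about 4.61
```

Predicting 0.01 for a true positive costs roughly five times as much as predicting 0.4, which is why minimizing Log Loss discourages overconfident mistakes.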