Mean Absolute Error (MAE) is a metric used in statistics, and especially in machine learning and predictive modeling, to quantify the average magnitude of the errors in a set of predictions, without considering their direction. It is the average, over the test sample, of the absolute differences between predictions and actual observations, where all individual differences carry equal weight.
Simply put, it's the average amount by which the predictions are wrong.
How Mean Absolute Error works
Mean Absolute Error is calculated by taking the absolute difference between each actual and predicted value, summing all these absolute differences, and then averaging them. In mathematical terms, if 'y' is the actual value, 'a' is the predicted value, and there are 'n' data points, the formula for Mean Absolute Error is:
MAE = (1/n) * Σ|y - a|
- 'Σ' denotes the sum over all data points
- '|y - a|' denotes the absolute difference between the actual value 'y' and the predicted value 'a'
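The formula above can be sketched directly in plain Python. This is a minimal illustration, not a production implementation; the function name and the toy data are assumptions for the example.

```python
def mean_absolute_error(actual, predicted):
    """Average of the absolute differences between actual and predicted values."""
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted must have the same length")
    # Sum |y - a| over all data points, then divide by n.
    return sum(abs(y - a) for y, a in zip(actual, predicted)) / len(actual)

# Toy example: predictions miss by 2, 1, and 3 units.
actual = [10, 20, 30]
predicted = [12, 19, 27]
print(mean_absolute_error(actual, predicted))  # (2 + 1 + 3) / 3 = 2.0
```

In practice you would typically use a library routine (for example, scikit-learn provides `sklearn.metrics.mean_absolute_error` with the same behavior) rather than hand-rolling this.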
The smaller the MAE, the better the predictive accuracy of the model. One of the advantages of MAE is its simplicity and interpretability: it is measured in the same units as the data, which is not always the case with other error metrics. However, because each error contributes only in proportion to its magnitude, a few large errors and many small errors can produce the same score. If large errors are especially costly, MAE may understate how poorly a model performs on datasets with extreme values or outliers.
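The outlier caveat above can be made concrete with a small, self-contained sketch. The two error lists below are assumed toy data: one has several moderate misses, the other a single large miss, and both yield the same MAE.

```python
# Absolute errors from two hypothetical models (assumed toy data).
errors_moderate = [2, 2, 2, 2]   # four moderate misses
errors_outlier = [0, 0, 0, 8]    # three perfect predictions, one large miss

# MAE is just the mean of the absolute errors.
mae_moderate = sum(errors_moderate) / len(errors_moderate)
mae_outlier = sum(errors_outlier) / len(errors_outlier)

print(mae_moderate, mae_outlier)  # 2.0 2.0 -- MAE cannot tell them apart
```

A squared-error metric such as MSE would score the outlier-heavy model worse (16.0 vs. 4.0 here), which is why the choice between MAE and MSE depends on how much large errors should be penalized.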