Model Degradation

Model Degradation refers to the phenomenon where a previously well-performing machine learning model starts to deliver poorer performance over time. This decrease in model efficacy can be due to various factors, primarily stemming from changes in the underlying data distribution or the environment in which the model operates. Model degradation is an important aspect to monitor, especially in dynamic environments, as it necessitates model retraining or adaptation to maintain desired levels of performance.

Model Degradation in practice

  1. Data Drift: One of the main causes of model degradation. Data drift occurs when the statistical properties of the model's input data change over time, so that production data no longer resembles the data the model was trained on. For example, in a credit scoring model, an economic downturn may shift the income and spending characteristics of applicants, including those who go on to default.
  2. Concept Drift: A related form of drift in which the relationship between the input features and the target variable changes, even if the input distribution stays stable. For instance, a model predicting movie popularity based on certain actors might degrade if those actors fall out of favor or new actors emerge.
  3. Environmental Changes: Changes external to the model can also contribute to degradation. For example, a model trained on data from one location might perform poorly when deployed in a different location with different demographics.
  4. Model Overfitting: If a model has overfit its training data, it may appear to perform well at deployment time but degrade quickly when exposed to new, unseen data, because it has memorized training-set quirks rather than learning generalizable patterns.
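A common way to catch the data drift described above is a two-sample test comparing a feature's training-time distribution against recent production values. Below is a minimal sketch using the two-sample Kolmogorov-Smirnov statistic; the synthetic feature values, sample sizes, and the 0.05-level critical-value coefficient (1.358) are illustrative assumptions, not part of any particular library's API:

```python
import bisect
import math
import random

def ks_statistic(reference, current):
    """Largest absolute gap between the two empirical CDFs."""
    ref_sorted = sorted(reference)
    cur_sorted = sorted(current)
    gaps = []
    for x in ref_sorted + cur_sorted:
        cdf_ref = bisect.bisect_right(ref_sorted, x) / len(ref_sorted)
        cdf_cur = bisect.bisect_right(cur_sorted, x) / len(cur_sorted)
        gaps.append(abs(cdf_ref - cdf_cur))
    return max(gaps)

def drift_detected(reference, current, coeff=1.358):
    """Compare the KS statistic to the large-sample critical value
    at significance level 0.05 (coefficient 1.358)."""
    n, m = len(reference), len(current)
    critical = coeff * math.sqrt((n + m) / (n * m))
    return ks_statistic(reference, current) > critical

random.seed(0)
# Training-time feature distribution vs. a production sample whose mean shifted
train_feature = [random.gauss(0.0, 1.0) for _ in range(2000)]
prod_feature = [random.gauss(0.5, 1.0) for _ in range(2000)]

print(drift_detected(train_feature, prod_feature))  # the mean shift flags drift
```

In practice such a check would run per feature on a schedule, with a correction for multiple comparisons when many features are tested at once.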

In essence, model degradation is a natural occurrence in dynamic environments. Recognizing its signs and understanding its root causes can help data scientists take corrective measures, ensuring sustained model performance.
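Recognizing the signs in time usually comes down to monitoring. One minimal sketch of such a monitor tracks a rolling window of prediction outcomes and flags when accuracy falls a set margin below the validation baseline; the class name, window size, and tolerance here are illustrative assumptions to be tuned per use case:

```python
from collections import deque

class DegradationMonitor:
    """Tracks a rolling window of prediction outcomes and flags when
    rolling accuracy drops more than `tolerance` below the baseline."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def needs_retraining(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

# 80 correct and 20 incorrect recent predictions against a 0.90 baseline
monitor = DegradationMonitor(baseline_accuracy=0.90)
for _ in range(80):
    monitor.record(True)
for _ in range(20):
    monitor.record(False)

print(monitor.rolling_accuracy())  # 0.8
print(monitor.needs_retraining())  # True: 0.8 < 0.90 - 0.05
```

A real deployment would feed this from labeled feedback as it arrives and pair the accuracy signal with the drift checks above, since labels often lag behind predictions.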
