AI Glossary

Model Drift

The degradation of an AI model's performance and accuracy over time due to changing real-world data.

TL;DR

  • The degradation of an AI model's performance and accuracy over time due to changing real-world data.
  • Model Drift shapes how organizations design controls, ownership, and operating discipline around AI.
  • Use the related terms and explanation below to connect the definition to real enterprise rollout decisions.

In Depth

Model Drift is the phenomenon where a machine learning model's predictive power or accuracy degrades over time; it encompasses both data drift and concept drift. AI models are essentially mathematical representations of the world as it existed at the moment their training data was collected. The real world, however, is dynamic: language evolves, consumer preferences shift, economic conditions change, and new compliance regulations are enacted. As live data diverges from the historical training data, the model's outputs become increasingly irrelevant, biased, or factually incorrect.

In generative AI and LLMs, drift can manifest in subtle, dangerous ways. For example, an AI agent trained to analyze cybersecurity threats might experience drift if a fundamentally new type of malware architecture is invented after its training cutoff date. Because the model lacks the new conceptual framework, it may misclassify a critical zero-day exploit as benign traffic. Similarly, an AI used for financial forecasting will drift rapidly if macroeconomic conditions (like inflation rates) shift outside the bounds of its historical training set.

Governing model drift requires continuous AI Observability. Organizations cannot deploy an AI system and forget it; they must implement automated monitoring to track the statistical distribution of the inputs and the accuracy of the outputs over time. When drift crosses a defined threshold, the governance team must intervene, typically by grounding the model with updated documents (RAG) or initiating a fine-tuning cycle to realign the model with current reality.
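The monitoring step described above can be sketched as a statistical comparison between training-time inputs and live inputs. The example below uses a two-sample Kolmogorov-Smirnov test on a single numeric feature; the feature, threshold, and synthetic data are hypothetical illustrations, not a production observability stack.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> dict:
    """Flag drift when the live input distribution differs significantly
    from the training-time reference distribution (two-sample KS test)."""
    stat, p_value = ks_2samp(reference, live)
    return {
        "statistic": float(stat),
        "p_value": float(p_value),
        "drift": bool(p_value < alpha),
    }

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=5000)  # feature values at training time
shifted = rng.normal(0.8, 1.0, size=5000)    # same feature after a mean shift

result = detect_drift(reference, shifted)
print(result["drift"])  # the 0.8 mean shift is easily detected: True
```

In practice a check like this runs on a schedule for each monitored feature, and a threshold breach triggers the interventions described above: re-grounding via RAG or a fine-tuning cycle.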



Glossary FAQs

What is the difference between data drift and concept drift?
Data drift happens when the input data changes (e.g., users start asking questions in new slang). Concept drift happens when the definition of the 'correct' answer changes (e.g., what constitutes a 'good' interest rate shifts due to central bank policy).

How does RAG mitigate model drift?
<a href='/glossary/rag'>Retrieval-Augmented Generation</a> (<a href='/glossary/rag'>RAG</a>) mitigates drift by separating the model's logic from its knowledge. Instead of retraining the model when facts change, you simply update the internal document database; the model retrieves the fresh document, instantly correcting the drift.

Can model drift happen suddenly?
Yes. While drift is usually gradual, sudden external events (a global pandemic, a major regulatory change like GDPR, a stock market crash) can cause immediate, catastrophic concept drift, rendering historical models instantly obsolete.
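The RAG mechanism described in the FAQ above can be illustrated with a toy retrieval loop. The store and helper names here are hypothetical, and a real system would use embedding-based retrieval over a document index rather than a dict lookup; the point is only that refreshing the knowledge store corrects drift without retraining the model.

```python
# The "model" only formats what it retrieves, so its knowledge
# lives in the store, not in its weights.
knowledge_base = {
    "base_rate": "The central bank base rate is 1.5% (as of 2021).",
}

def answer(question_key: str) -> str:
    # Retrieval step: fetch the current document for this topic.
    doc = knowledge_base.get(question_key, "No document found.")
    # Generation step (stubbed): ground the answer in the retrieved text.
    return f"According to the latest retrieved document: {doc}"

print(answer("base_rate"))  # stale fact from the old document

# Facts changed in the world: update the store, not the model.
knowledge_base["base_rate"] = "The central bank base rate is 5.25% (as of 2024)."
print(answer("base_rate"))  # same code path now returns the fresh fact
```

This is why the "In Depth" section recommends RAG as the first response to drift: updating documents is cheap and immediate, while fine-tuning is reserved for deeper shifts in the underlying concepts.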
