Explainability Drift in AI Models Explained in 60 Seconds | When Explanations Quietly Break
Explainability drift in AI models happens when the *reasons* behind a model’s predictions change over time, even if accuracy still looks fine. In this 1-minute glossary video, you’ll learn why explanations that once made sense can slowly stop matching what the model is actually doing under the hood.
We’ll cover a plain-English definition, a simple mental model, a practical example, and why explainability drift matters for audits, regulation, and trustworthy AI.
What you’ll learn:
- What "explainability drift" means in modern AI systems
- How a model’s decision logic can shift while metrics stay stable (see the monitoring sketch below)
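To make the idea concrete, here is a minimal Python sketch of one way a team might monitor explanation stability over time. It assumes scikit-learn and SciPy are available and uses permutation importance as a stand-in for whatever explanation method is actually in use (e.g., SHAP); the reference/new windows, model, and synthetic data are hypothetical illustrations, not part of the video.

```python
# Minimal sketch: compare per-feature permutation importances between a
# reference time window and a recent window. A low rank correlation can
# flag explainability drift even when accuracy stays flat.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance


def importance_profile(model, X, y, seed=0):
    """Mean permutation importance per feature on one time window."""
    result = permutation_importance(model, X, y, n_repeats=10, random_state=seed)
    return result.importances_mean


def explanation_stability(model, X_ref, y_ref, X_new, y_new):
    """Spearman rank correlation between old and new importance profiles.
    Values near 1.0 mean the explanation is stable; low or negative values
    suggest the model now leans on different features."""
    ref = importance_profile(model, X_ref, y_ref)
    new = importance_profile(model, X_new, y_new)
    rho, _ = spearmanr(ref, new)
    return rho


# Toy usage with synthetic data standing in for two monitoring windows.
rng = np.random.default_rng(42)
X_ref, y_ref = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
X_new, y_new = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
model = RandomForestClassifier(random_state=0).fit(X_ref, y_ref)
print("explanation stability (Spearman):", explanation_stability(model, X_ref, y_ref, X_new, y_new))
```

Permutation importance is chosen here only because it is model-agnostic and ships with scikit-learn; in practice you would compare whatever attribution profiles your audit pipeline already produces, and alert when the correlation drops below a threshold you calibrate yourself.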
Chapters:
- 0:05 Intro
- 0:16 Plain-English Definition
- 0:35 Mental Model of Explainability Drift
- 1:09 Real-World Example
- Why Explainability Drift Matters