Model Monitoring & Observability: Feature Importance Tracking (SHAP Drift)

What is SHAP Drift and Why Track It?

Definition
SHAP drift is a change in how much each feature contributes to model predictions over time, even if the model itself has not changed.

WHY TRACK FEATURE IMPORTANCE

SHAP (SHapley Additive exPlanations) values quantify each feature's contribution to each prediction. Tracking SHAP values over time reveals how the model's use of features shifts as data changes.

A static model can still produce shifting SHAP values when input distributions change. If feature A historically accounted for 30% of the model's total attribution but now accounts for only 15%, something changed. This might indicate data drift, feature degradation, or concept drift.
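One way to operationalize this is to normalize mean absolute SHAP values into per-feature "contribution shares" and compare windows. A minimal sketch (the function names and synthetic data are illustrative, not from any particular library):

```python
import numpy as np

def importance_shares(shap_values):
    """Mean |SHAP| per feature, normalized to sum to 1.

    shap_values: array of shape (n_samples, n_features).
    """
    mean_abs = np.abs(shap_values).mean(axis=0)
    return mean_abs / mean_abs.sum()

def shap_share_drift(baseline_shap, current_shap):
    """Per-feature change in contribution share between two windows."""
    return importance_shares(current_shap) - importance_shares(baseline_shap)

# Toy data standing in for real SHAP matrices: feature 0 dominates the
# baseline window but shrinks in the current window.
rng = np.random.default_rng(0)
baseline = rng.normal(0, [3.0, 1.0, 1.0], size=(1000, 3))
current = rng.normal(0, [1.0, 1.0, 1.0], size=(1000, 3))

delta = shap_share_drift(baseline, current)
print(delta)  # feature 0's share drops sharply; the others rise
```

Because the shares in each window sum to 1, the deltas sum to zero; a large negative delta for one feature is exactly the "30% to 15%" pattern described above.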

WHAT SHAP DRIFT REVEALS

Feature degradation: If a feature's importance drops significantly, the feature may be corrupted or less predictive. A user-activity feature dropping from #1 to #5 in importance warrants investigation.

Model reliance shifts: If the model starts relying heavily on a feature it previously ignored, the world may have changed. This could indicate concept drift or a shift in what drives outcomes.

Fairness monitoring: If sensitive features (age, gender, location) increase in importance, the model may be developing bias. SHAP monitoring enables early bias detection.
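The three signals above can be checked with a single rank-comparison pass: rank features by mean |SHAP| in each window, then alert on large rank moves and on any sensitive feature gaining importance. A sketch, assuming hypothetical feature names and hand-built SHAP matrices:

```python
import numpy as np

def importance_ranks(shap_values, feature_names):
    """Rank features by mean |SHAP| (rank 1 = most important)."""
    mean_abs = np.abs(shap_values).mean(axis=0)
    order = np.argsort(-mean_abs)
    return {feature_names[i]: rank + 1 for rank, i in enumerate(order)}

def drift_alerts(baseline_shap, current_shap, feature_names,
                 sensitive=(), rank_jump=3):
    """Flag large rank moves and sensitive features gaining importance."""
    base = importance_ranks(baseline_shap, feature_names)
    curr = importance_ranks(current_shap, feature_names)
    alerts = []
    for name in feature_names:
        moved = base[name] - curr[name]  # positive = became more important
        if abs(moved) >= rank_jump:
            alerts.append(f"{name}: rank {base[name]} -> {curr[name]}")
        if name in sensitive and moved > 0:
            alerts.append(f"sensitive feature {name} gained importance")
    return alerts

# Toy windows: "activity" falls from rank 1 to rank 5, "age" rises to rank 1.
names = ["activity", "age", "tenure", "spend", "region"]
baseline = np.tile([5.0, 1.0, 4.0, 3.0, 2.0], (10, 1))
current = np.tile([1.0, 5.0, 4.0, 3.0, 2.0], (10, 1))

alerts = drift_alerts(baseline, current, names, sensitive=("age",))
print(alerts)
```

In production the rank-jump threshold and the sensitive-feature list would be policy decisions; the mechanics stay the same.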

SHAP VS OTHER IMPORTANCE METRICS

SHAP has theoretical guarantees (consistency, local accuracy) that permutation importance lacks. SHAP values are additive: the sum of a prediction's SHAP values equals the prediction minus the baseline. This makes them interpretable and comparable across time.
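The additivity property can be verified exactly for a linear model, where (with independent features) the SHAP value of feature i on input x is w[i] * (x[i] - mean[i]) and the baseline is the mean prediction. The weights and data below are made up for illustration:

```python
import numpy as np

# Hypothetical linear model f(x) = w.x + b and a small reference dataset.
w = np.array([2.0, -1.0, 0.5])
b = 3.0
X = np.array([[1.0, 2.0, 4.0],
              [0.0, 1.0, 2.0],
              [2.0, 0.0, 0.0]])

mean = X.mean(axis=0)       # per-feature means of the reference data
baseline = w @ mean + b     # baseline = mean prediction

x = X[0]
shap_vals = w * (x - mean)  # exact SHAP values for a linear model
prediction = w @ x + b

# Local accuracy / additivity: SHAP values sum to prediction minus baseline.
assert np.isclose(shap_vals.sum(), prediction - baseline)
print(shap_vals, prediction, baseline)
```

This identity is what makes SHAP-based importance shares comparable across time windows: the attributions always account for the full gap between each prediction and the baseline.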

Downside: SHAP is computationally expensive. Exact SHAP for tree models (TreeSHAP) is fast; for deep learning and other model classes, approximations (e.g., Kernel SHAP or Deep SHAP) are needed and can be slow.

💡 Key Insight: SHAP drift monitoring answers a question data drift monitoring cannot: is the model using features differently, not just are the features' distributions different?
💡 Key Takeaways
SHAP values quantify feature contributions to predictions; tracking over time reveals how model usage shifts
SHAP drift reveals: feature degradation, model reliance shifts, potential fairness issues from sensitive feature importance
SHAP has theoretical guarantees (consistency, additivity) but is computationally expensive for non-tree models
📌 Interview Tips
1. Explain what SHAP drift reveals that data drift does not: model behavior, not just data distribution.
2. Give a concrete example: a user-activity feature dropping from #1 to #5 in importance signals that investigation is needed.