What are SHAP and LIME for Model Interpretability?
Why Interpretability Matters
A loan application is rejected. The applicant asks why. "The model said so" is unacceptable. Regulations require explanations (GDPR Article 22, ECOA). Beyond compliance, interpretability enables debugging: if the model learned that employment at a bankrupt company predicts default, you catch this before deployment. Production ML needs explanations for users, regulators, and engineers.
How SHAP Works
SHAP borrows from cooperative game theory: each feature is a "player" contributing to the prediction "payout." A feature's Shapley value is its marginal contribution to the prediction, averaged over every possible order in which features could be added. For example, if income alone moves the prediction to 0.3 and income plus age moves it to 0.5, then age contributed 0.2 in that particular ordering; averaging this marginal contribution across all orderings yields age's Shapley value. The result is a fair attribution of the gap between the baseline prediction and the final one, since the per-feature values sum exactly to that difference.
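The averaging can be made concrete with a tiny exact computation. This is a minimal sketch, not the SHAP library's API: the coalition outputs are made-up numbers echoing the income/age example above, and with only two features we can enumerate every ordering directly.

```python
from itertools import permutations

# Hypothetical value function: the model's output when only the features
# in `coalition` are "present" (numbers chosen to match the text's example).
def model_output(coalition):
    outputs = {
        frozenset():                  0.1,  # baseline prediction
        frozenset({"income"}):        0.3,
        frozenset({"age"}):           0.2,
        frozenset({"income", "age"}): 0.5,
    }
    return outputs[frozenset(coalition)]

def shapley_values(features):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings of the features."""
    totals = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        present = set()
        for f in order:
            before = model_output(present)
            present.add(f)
            totals[f] += model_output(present) - before
    return {f: v / len(orderings) for f, v in totals.items()}

phi = shapley_values(["income", "age"])
print(phi)
```

Note the efficiency property in action: the two values sum to 0.4, exactly the gap between the baseline (0.1) and the full prediction (0.5). Real SHAP implementations approximate this average, since exact enumeration is exponential in the number of features.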
How LIME Works
LIME generates thousands of perturbed versions of the input by randomly changing feature values. It runs the black-box model on these samples, then fits a simple linear model, weighted by each sample's proximity to the original input, to approximate the decision boundary near that point. The linear model's coefficients become the feature importances. The intuition: even complex models behave roughly linearly in small local neighborhoods.