Privacy & Fairness in ML: Bias Detection & Mitigation

What is Bias in Machine Learning Systems?

Definition
Bias in ML occurs when a model systematically produces outcomes that unfairly favor or disadvantage certain groups. Unlike statistical bias, ML bias refers to discriminatory patterns from data, features, or model design.

Sources of Bias

Historical bias: Training data reflects past discrimination. If 80% of past hires were male, the model learns that being male predicts success.
Representation bias: Some groups are underrepresented in the training data. A facial recognition system trained on 90% light-skinned faces fails on darker skin tones.
Measurement bias: Features act as proxies for protected attributes. Credit scores correlate with race because of historical lending discrimination.
Aggregation bias: A single model trained on a diverse population learns majority patterns and fails on minority subgroups.
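Representation bias is often the easiest source to check before training. A minimal sketch of auditing group shares in a dataset (the skin-tone labels and the 90/10 split below are illustrative, not real data):

```python
from collections import Counter

def representation_report(groups):
    """Return each group's share of the dataset so imbalances are visible."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: counts[g] / total for g in counts}

# Hypothetical group labels for a face dataset, mirroring the 90% example above
data = ["light"] * 90 + ["dark"] * 10
shares = representation_report(data)
print(shares)  # {'light': 0.9, 'dark': 0.1}
```

A report like this is a screening step, not a fairness guarantee: a balanced dataset can still carry historical or measurement bias.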

Why Bias Matters Beyond Ethics

Biased models create business and legal risks. Loan models with racial bias face regulatory action costing hundreds of millions of dollars. Biased hiring tools have resulted in multimillion-dollar settlements. Biased recommendations lose minority users permanently. Bias also indicates model weakness: 95% accuracy on Group A but 70% on Group B is an engineering problem masquerading as ethics.
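The 95%-versus-70% gap above is straightforward to measure: slice the evaluation set by group and compute accuracy separately. A minimal sketch with toy data chosen to reproduce that gap:

```python
def group_accuracy(y_true, y_pred, groups):
    """Accuracy computed per group, exposing gaps that overall accuracy hides."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (yt == yp), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Toy labels: the model is right on 19/20 Group A cases but only 14/20 Group B
y_true = [1] * 40
y_pred = [1] * 19 + [0] + [1] * 14 + [0] * 6
groups = ["A"] * 20 + ["B"] * 20
print(group_accuracy(y_true, y_pred, groups))  # {'A': 0.95, 'B': 0.7}
```

Reporting only the pooled accuracy here (82.5%) would hide the problem entirely, which is why per-group evaluation should be part of standard model QA.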

The Accuracy-Fairness Trade-off

Optimizing for raw accuracy often amplifies bias. If Group A has more training data, the model performs better on Group A, raising overall accuracy while Group B suffers. Fairness constraints typically cost 2-5% accuracy. This trade-off is not always acceptable: in medical diagnosis, 2% loss might mean missed cancers.
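One common fairness constraint referenced in trade-offs like this is demographic parity: all groups should receive positive predictions at similar rates. A minimal sketch of measuring the parity gap (the loan-approval numbers are made up for illustration):

```python
def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate across groups.
    Zero means every group is selected at the same rate."""
    stats = {}
    for yp, g in zip(y_pred, groups):
        pos, total = stats.get(g, (0, 0))
        stats[g] = (pos + yp, total + 1)
    rates = {g: pos / total for g, (pos, total) in stats.items()}
    return max(rates.values()) - min(rates.values())

# Toy loan approvals: Group A approved 60% of the time, Group B only 30%
y_pred = [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7
groups = ["A"] * 10 + ["B"] * 10
print(round(demographic_parity_gap(y_pred, groups), 2))  # 0.3
```

Enforcing a small gap (for example by adjusting decision thresholds per group) typically lowers raw accuracy, which is the 2-5% cost the text describes; whether that cost is acceptable depends on the domain.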

💡 Key Insight: Bias is an engineering concern: different performance across groups means spurious correlations that will fail when data shifts.
💡 Key Takeaways
Four sources: historical (past discrimination), representation, measurement (proxies), aggregation
Business risks include regulatory fines, multimillion-dollar lawsuits, and permanent user loss
Performance gaps (95% vs 70% across groups) indicate an engineering problem, not just an ethics problem
Fairness constraints typically cost 2-5% accuracy; whether the trade-off is acceptable must be decided per domain
Biased models have learned spurious correlations that fail when data shifts
📌 Interview Tips
1. Name the four bias sources with a concrete example for each
2. Frame bias as engineering: group performance gaps indicate model quality issues