
What is Federated Learning?

Definition
Federated Learning is a machine learning approach where models are trained across multiple decentralized devices holding local data, without exchanging the raw data itself. Instead of collecting data centrally, the algorithm travels to the data.

The Core Problem

Traditional ML requires centralizing all training data. For keyboard prediction, this means uploading every keystroke from millions of users. For healthcare models, hospitals must share patient records. Both face insurmountable barriers: users refuse to share typing patterns, and regulations prohibit transferring patient data. The data exists, but cannot be accessed conventionally.

How It Works

Instead of bringing data to the model, federated learning brings the model to the data. A central server sends the current model to participating devices (clients). Each client trains locally for several iterations, producing updated weights. Clients send only weight updates back. The server aggregates updates from all clients into an improved model by averaging weights. This cycle repeats until convergence. A single round might involve 10,000 devices training locally for 5 epochs, with aggregation every 10-30 minutes.
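The round-based cycle above can be sketched in a few lines. This is a minimal single-machine simulation of federated averaging (FedAvg), assuming a simple linear model and synthetic client data; the function names, client count, and learning rate are illustrative, not from any particular framework.

```python
# Minimal sketch of federated averaging (FedAvg) rounds on synthetic data.
# All names and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, epochs=5, lr=0.1):
    """Client side: a few epochs of gradient descent on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def aggregate(client_weights, client_sizes):
    """Server side: average client weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Server initializes the global model.
global_w = np.zeros(3)
true_w = np.array([1.0, -2.0, 0.5])  # weights the clients' data is drawn from

# Each round: distribute the model, train locally, send weights back, aggregate.
for _ in range(10):
    updates, sizes = [], []
    for _ in range(4):  # 4 simulated clients; real deployments use thousands
        X = rng.normal(size=(50, 3))                       # raw data stays "on device"
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        updates.append(local_train(global_w, X, y))        # only weights leave the client
        sizes.append(len(y))
    global_w = aggregate(updates, sizes)

print(global_w)  # converges toward true_w; raw (X, y) never reached the server
```

Note that the server only ever touches `updates` and `sizes`, never the clients' `(X, y)` pairs, which is the entire privacy argument of the protocol.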

Why Simple Alternatives Fail

Why not anonymize the data and centralize it? Anonymization is fragile: 87% of Americans can be uniquely identified from zip code, birth date, and gender alone. Even aggregate statistics leak information. Federated learning ensures raw data never leaves the device. The server sees only model updates, which are difficult, though not impossible, to reverse-engineer into the original training examples.
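The re-identification failure mode is easy to demonstrate on a toy dataset (this is an illustration of the idea, not the original study behind the 87% figure): drop the names, and any record whose quasi-identifier combination is unique can still be linked back to a person via an outside dataset such as voter rolls.

```python
# Toy illustration of why quasi-identifiers defeat naive anonymization.
# Records and field names are made up for the example.
from collections import Counter

records = [
    {"name": "Alice", "zip": "02139", "birth": "1962-07-01", "sex": "F"},
    {"name": "Bob",   "zip": "02139", "birth": "1985-03-12", "sex": "M"},
    {"name": "Carol", "zip": "10001", "birth": "1985-03-12", "sex": "F"},
    {"name": "Dave",  "zip": "02139", "birth": "1985-03-12", "sex": "M"},
]

# "Anonymize" by dropping the name but keeping zip, birth date, and gender.
quasi = Counter((r["zip"], r["birth"], r["sex"]) for r in records)

# Anyone whose quasi-identifier combination appears exactly once is
# re-identifiable by joining against a public dataset with the same fields.
unique = [r["name"] for r in records
          if quasi[(r["zip"], r["birth"], r["sex"])] == 1]
print(unique)  # ['Alice', 'Carol'] — only Bob and Dave share a combination
```

Half of this "anonymized" dataset is still uniquely identifiable, which is the point: removing direct identifiers does not remove the signal.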

💡 Key Insight: Federated learning enables training on data that physically cannot be centralized, such as real-time sensor readings from IoT devices or behavioral data locked inside applications.
💡 Key Takeaways
- Models travel to data instead of data traveling to models, enabling training on sensitive data without exposure
- Each round involves local training, then sending only weight updates to a central server for aggregation
- Anonymization fails because 87% of people can be re-identified from minimal demographic data
- Enables training on data that cannot be physically moved, not just data that should not be moved
- Communication happens in rounds lasting 10-30 minutes with thousands of devices per round
📌 Interview Tips
1. Explain the round-based pattern: distribute model, local training, send weight updates, aggregate, repeat
2. Mention that federated learning solves both regulatory constraints and the physical impossibility of centralizing data