Partitioning & Sharding › Consistent Hashing · Easy · ⏱️ ~2 min

What is Consistent Hashing and Why Does It Matter?

Definition
Consistent hashing maps both keys and nodes onto the same circular hash space (a "ring"). Each key is owned by the first node clockwise from its position. The critical property: adding or removing one node from N nodes only moves ~1/N of keys, not all of them.
Why Simple Modulo Fails
With simple modulo hashing (partition = hash(key) % N), adding one node to a 100-node cluster remaps nearly all keys. If you have 100 TB of data, you would need to move ~99 TB during a scaling event. Networks saturate, latencies spike, and the operation takes hours or days.
How Consistent Hashing Solves This
Both nodes and keys are hashed onto the same ring (typically 0 to 2^32 − 1). To look up a key, find its position on the ring and walk clockwise to the first node; that node owns the key. When a new node joins, it takes ownership only of the keys in the arc between it and its predecessor. When a node leaves, its keys transfer to the next clockwise successor. With 100 nodes, adding one node moves only ~1% of the data (1 TB instead of 99 TB).
Production Impact
This property enables elastic scaling: clusters grow and shrink without massive reshuffling. Distributed caches, databases such as Cassandra, and storage systems rely on consistent hashing to maintain single-digit-millisecond latencies during topology changes.
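The lookup described above can be sketched with a sorted list and binary search. This is a minimal illustration (class and node names are hypothetical, not any particular system's API); truncating SHA-1 to 4 bytes gives the 0 to 2^32 − 1 ring described above:

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    """Map a key or node name into the ring's identifier space (0 to 2^32 - 1)."""
    return int.from_bytes(hashlib.sha1(value.encode()).digest()[:4], "big")

class HashRing:
    """Minimal consistent-hash ring: nodes and keys share one hash space."""

    def __init__(self, nodes):
        # Ring kept as a sorted list of (position, node) pairs.
        self._ring = sorted((ring_hash(n), n) for n in nodes)

    def add_node(self, node: str) -> None:
        bisect.insort(self._ring, (ring_hash(node), node))

    def remove_node(self, node: str) -> None:
        self._ring.remove((ring_hash(node), node))

    def owner(self, key: str) -> str:
        """Walk clockwise from the key's position to the first node, wrapping past zero."""
        idx = bisect.bisect_right(self._ring, (ring_hash(key), ""))
        return self._ring[idx % len(self._ring)][1]
```

Because placement depends only on the node list and the hash function, any client holding the same node list computes the same owner independently, with no central directory.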
✓ In Practice: Without consistent hashing, every scaling operation becomes a risky, manually coordinated migration event. With it, scaling is a routine automated action that completes in minutes, not days.
💡 Key Takeaways
Minimal disruption: Adding or removing one node from N nodes moves only approximately 1/N of keys, versus nearly all keys with modulo hashing
Amazon Dynamo pattern with 100 nodes and 100 TB: adding one node moves roughly 1 TB (1%) instead of 99 TB (99%), enabling planned off-peak migrations
Hash both keys and nodes into the same space: Use the same hash function (often SHA-1, or a non-cryptographic hash like MurmurHash) to map both into a large identifier space
Deterministic and distributed: Each client can independently compute key placement without central directory, reducing coordination overhead and single points of failure
Real latency targets: Meta memcache clusters serve millions of requests per second with sub-millisecond median and low millisecond p99, maintained even during node additions
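The ~1/N versus near-total remapping contrast in these takeaways is easy to check empirically. A sketch (hypothetical key and node names) counting how many of 10,000 keys change owner when a 100-node cluster grows by one node under each scheme:

```python
import hashlib

def h(value: str) -> int:
    """32-bit hash position, shared by keys and nodes."""
    return int.from_bytes(hashlib.sha1(value.encode()).digest()[:4], "big")

def build_ring(nodes):
    return sorted((h(n), n) for n in nodes)

def ring_owner(key, ring):
    """First node clockwise from the key's position (linear scan for clarity)."""
    pos = h(key)
    for node_pos, node in ring:
        if node_pos >= pos:
            return node
    return ring[0][1]  # wrapped past zero

keys = [f"key-{i}" for i in range(10_000)]
nodes = [f"node-{i}" for i in range(100)]

# Modulo placement: hash(key) % N, compared before and after adding one node.
moved_mod = sum(h(k) % 100 != h(k) % 101 for k in keys)

# Consistent hashing: the same comparison on the ring.
ring_100 = build_ring(nodes)
ring_101 = build_ring(nodes + ["node-100"])
moved_ring = sum(ring_owner(k, ring_100) != ring_owner(k, ring_101) for k in keys)

print(f"modulo: {moved_mod / len(keys):.1%} of keys moved")
print(f"ring:   {moved_ring / len(keys):.1%} of keys moved")
```

With modulo, roughly 99% of keys move; with the ring, the moved fraction depends on where the new node lands but stays on the order of 1/N.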
📌 Interview Tips
1. Memcached at Meta (mcrouter): a 200-node cache pool with 200M hot keys uses consistent hashing; adding one node remaps approximately 1M keys (0.5%), kept under background refill budgets to avoid cache miss storms
2. Amazon ElastiCache: Clients use Ketama-style consistent hashing to select cache shards; rolling out one node in a 50-node cluster affects 2% of keys with no central coordinator
3. Cassandra token ring: 100 nodes with replication factor 3 across availability zones; removing one failed node transfers its 1% key range to immediate successors with controlled streaming
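The Cassandra tip hinges on removal affecting only the failed node's range, which moves to its clockwise successor. A small simulation makes that visible (hypothetical names; replication is ignored, so with a single ring position per node the range goes to exactly one successor):

```python
import bisect
import hashlib

def h(value: str) -> int:
    return int.from_bytes(hashlib.sha1(value.encode()).digest()[:4], "big")

def owner(key, ring):
    """ring: sorted list of (position, node); pick the clockwise successor."""
    idx = bisect.bisect_right(ring, (h(key), ""))
    return ring[idx % len(ring)][1]

nodes = [f"node-{i}" for i in range(10)]
ring = sorted((h(n), n) for n in nodes)
keys = [f"row-{i}" for i in range(5_000)]
before = {k: owner(k, ring) for k in keys}

# Remove one "failed" node and recompute placement.
failed = "node-3"
ring_after = [(pos, n) for pos, n in ring if n != failed]
after = {k: owner(k, ring_after) for k in keys}

moved = {k for k in keys if before[k] != after[k]}
# Only keys previously owned by the failed node move...
assert all(before[k] == failed for k in moved)
# ...and they all land on a single clockwise successor.
assert len({after[k] for k in moved}) <= 1
print(f"{len(moved)} of {len(keys)} keys moved")
```

Every other node's keys stay put, which is why controlled streaming to the successor is enough to heal the ring.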