Cache Coherency Strategies: Cache Aside, Read Through, Write Through, and Write Back
Cache Aside
The most common pattern. The application controls everything: on a read miss, fetch from the database and populate the cache; on a write, write to the database first, then invalidate or update the cache. This keeps the cache simple but creates a staleness window between the database write and the cache invalidation. Combining this pattern with short TTLs (seconds to minutes) provides a safety net, automatically expiring stale entries even if invalidation fails.
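The read-miss and write-invalidate flow above can be sketched as follows. This is a minimal single-process illustration: the dicts standing in for the database and cache, and the names `cache_get`/`cache_write`, are hypothetical, and a real deployment would use a store like Redis with its own TTL support.

```python
import time

db = {"user:1": "alice"}   # stand-in for the database
cache = {}                 # stand-in for the cache (key -> (value, expiry))
TTL = 60                   # short TTL in seconds, the safety net for missed invalidations

def cache_get(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():         # hit and not yet expired
        return entry[0]
    value = db.get(key)                          # miss: fetch from database
    if value is not None:
        cache[key] = (value, time.time() + TTL)  # populate cache with TTL
    return value

def cache_write(key, value):
    db[key] = value          # write database first...
    cache.pop(key, None)     # ...then invalidate the cache entry

print(cache_get("user:1"))   # miss -> loads "alice" from db, populates cache
cache_write("user:1", "bob") # db updated, cached entry invalidated
print(cache_get("user:1"))   # miss again -> "bob"
```

Note the ordering: invalidating after the database write means a concurrent reader can still repopulate the cache with the old value in a narrow window, which is exactly the staleness the TTL bounds.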
Read Through and Write Through
Read through: the cache intercepts all reads and automatically fetches from the database on a miss, transparently loading and returning values. This enables request coalescing, where thousands of concurrent requests for the same missing key collapse into one backend query, preventing load spikes. Write through: all writes go through the cache to the database synchronously before acknowledging. The cache is always consistent with the database, reducing staleness windows but adding write latency (database 5ms + cache 1ms = 6ms unless parallelized).
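Request coalescing is the interesting part of read through, and it can be sketched with a per-key lock: the first thread to miss takes the lock and queries the backend, while concurrent misses for the same key block and then find the value already loaded. The class name `ReadThroughCache` and the `loader` callback are illustrative, not a specific library's API.

```python
import threading

class ReadThroughCache:
    """Read-through cache: callers never query the backend directly.
    A per-key lock coalesces concurrent misses into one loader call."""
    def __init__(self, loader):
        self.loader = loader              # backend fetch, e.g. a database query
        self.data = {}
        self.locks = {}
        self.guard = threading.Lock()     # protects the per-key lock table

    def get(self, key):
        if key in self.data:              # fast path: hit
            return self.data[key]
        with self.guard:
            lock = self.locks.setdefault(key, threading.Lock())
        with lock:                        # only one thread loads; the rest wait here
            if key not in self.data:      # re-check after acquiring the lock
                self.data[key] = self.loader(key)
        return self.data[key]

calls = []
def slow_db(key):
    calls.append(key)                     # count backend hits
    return key.upper()

cache = ReadThroughCache(slow_db)
threads = [threading.Thread(target=cache.get, args=("k",)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(calls))   # 1 -- eight concurrent reads collapsed into one backend query
```

Write through would wrap the same cache with a `put` that writes the backend and the cache synchronously before returning, trading the extra milliseconds of latency for consistency.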
Write Back (Write Behind)
Most aggressive pattern: writes acknowledged as soon as they reach cache; cache asynchronously drains to database in background. Minimizes write latency (<0.5ms) but risks data loss if cache node crashes before flushing write buffer. Use only for idempotent or derivable data with replicated write buffers for durability.
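A minimal sketch of the acknowledge-then-drain flow, assuming a single process with an in-memory queue as the write buffer (a production system would replicate this buffer, as noted above, since everything still queued is lost if the cache node dies):

```python
import queue
import threading

db = {}                          # stand-in for the database
cache = {}                       # stand-in for the cache
write_buffer = queue.Queue()     # pending writes awaiting flush

def write(key, value):
    cache[key] = value               # acknowledged as soon as it reaches the cache
    write_buffer.put((key, value))   # queued for asynchronous drain

def drain():
    while True:
        key, value = write_buffer.get()
        db[key] = value              # flushed to the database in the background
        write_buffer.task_done()

threading.Thread(target=drain, daemon=True).start()

write("score:1", 10)   # returns immediately; db may not have it yet
write("score:1", 11)   # later write supersedes the first in FIFO order
write_buffer.join()    # demo only: wait for the buffer to fully flush
print(db["score:1"])   # 11
```

A real write-behind drain would also batch and coalesce repeated writes to the same key before hitting the database, which is where much of the throughput win comes from.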
Lease Based Refill for Stampede Prevention
When a popular key expires, lease based refill grants one requester exclusive refresh rights via a token while others receive stale data or wait briefly. This prevents thundering herds that could spike database load by 1000x when 50,000 RPS keys expire simultaneously.
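The lease mechanism can be sketched as follows: the first requester to find an expired entry takes the lease and refreshes; anyone who arrives while the lease is held is served the stale value instead of hitting the database. The `leases` table and `load_from_db` are hypothetical stand-ins for a token stored alongside the key in the cache.

```python
import threading
import time

cache = {}      # key -> (value, expiry)
leases = {}     # key -> current lease holder (the refresh token)
guard = threading.Lock()
refreshes = []  # records each backend refresh, for the demo

def load_from_db(key):
    refreshes.append(key)
    return f"fresh-{key}"

def get(key):
    now = time.time()
    entry = cache.get(key)
    if entry and entry[1] > now:
        return entry[0]                   # fresh hit
    with guard:
        holder = key not in leases        # lease is free: this requester takes it
        if holder:
            leases[key] = threading.current_thread().name
    if holder:
        value = load_from_db(key)         # only the lease holder queries the backend
        cache[key] = (value, now + 60)
        with guard:
            leases.pop(key, None)         # release the lease after refill
        return value
    return entry[0] if entry else None    # others serve stale data (or wait briefly)

# Expired hot key while another request already holds the lease:
cache["hot"] = ("stale-value", time.time() - 1)
leases["hot"] = "other-request"
print(get("hot"))        # stale-value -- served stale, no backend hit
del leases["hot"]        # the other request finishes; lease is free again
print(get("hot"))        # fresh-hot -- lease granted, one backend query
print(len(refreshes))    # 1
```

The key property is that backend load per key stays at one query per expiry regardless of request rate; everyone else is absorbed by the stale value.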