
Operational Patterns: Caching Strategies and Global Deployments

Cache-Aside Pattern

The most common caching pattern: the application checks the cache first, falls back to the database on a miss, then populates the cache. This lazy-loading approach gives explicit control over what gets cached. On writes, there are two strategies: delete-on-write removes the cache key, forcing the next read to fetch fresh data, while write-through updates both cache and database together. Delete-on-write is simpler and avoids stale writes; write-through keeps the cache warm but adds complexity.
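A minimal cache-aside sketch, using an in-memory dict as a stand-in for Redis; the `db` dict, key names, and function names are illustrative assumptions:

```python
# Cache-aside sketch: dicts stand in for the cache and database.
cache = {}
db = {"user:1": {"name": "Ada"}}

def get_user(key):
    # 1. Check the cache first.
    if key in cache:
        return cache[key]
    # 2. On a miss, fall back to the database...
    value = db.get(key)
    # 3. ...and populate the cache for subsequent reads (lazy loading).
    if value is not None:
        cache[key] = value
    return value

def update_user(key, value):
    # Delete-on-write: update the database, then drop the cache key
    # so the next read fetches fresh data.
    db[key] = value
    cache.pop(key, None)
```

The same skeleton works with a real client by swapping the dict operations for `GET`/`SET`/`DEL` calls.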

Read-Through and Write-Behind

Read-through caches fetch from the origin automatically on miss. The cache itself handles database queries, centralizing caching logic instead of scattering it across application code. Write-behind (also called write-back) acknowledges writes immediately and persists asynchronously in batches. This significantly improves write throughput by coalescing multiple writes, but risks data loss if the cache node fails before persisting accumulated writes to the database.
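A write-behind sketch under the same in-memory assumptions: writes land in a buffer immediately and are persisted as a batch, so repeated writes to one key coalesce. A real implementation would flush on a timer or size threshold and handle the data-loss window described above; all names here are illustrative.

```python
# Write-behind (write-back) sketch.
buffer = {}  # pending writes; later writes to a key coalesce with earlier ones
db = {}      # stand-in for the backing database

def write(key, value):
    # Acknowledged immediately -- nothing touches the database yet.
    buffer[key] = value

def flush():
    # Persist all accumulated writes as one batch, then clear the buffer.
    # If the cache node dies before this runs, buffered writes are lost.
    db.update(buffer)
    buffer.clear()
```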

Cache Invalidation Challenges

Cache invalidation is notoriously difficult. Stale data persists until TTL (Time-To-Live, the duration after which cached data automatically expires) runs out or explicit invalidation occurs. Distributed systems complicate this: when source data changes, all cache nodes must be notified. Race conditions occur when cache population and invalidation happen concurrently. A common bug: read populates cache with old value just as write invalidates it, leaving stale data. Version stamps help detect and reject stale entries.
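The version-stamp defense can be sketched as follows: each invalidation raises a per-key version floor, and a populate carrying an older version is rejected. The data structures and function names are illustrative assumptions, not a specific library's API.

```python
# Version-stamp sketch: reject cache populates that are older than
# the most recent invalidation for that key.
cache = {}        # key -> (version, value)
min_version = {}  # key -> lowest version the cache will still accept

def invalidate(key, version):
    # A write at `version` drops the entry and raises the floor, so a
    # concurrent read carrying a pre-write value cannot repopulate it.
    cache.pop(key, None)
    min_version[key] = max(min_version.get(key, 0), version)

def populate(key, version, value):
    if version < min_version.get(key, 0):
        return False  # stale read from before the write: rejected
    cache[key] = (version, value)
    return True
```

This reproduces the bug scenario above: a read that started before a write finishes last, tries to populate the old value, and is rejected instead of leaving stale data behind.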

Global Deployment

Geographic distribution places key-value nodes closer to users, saving 50-150ms cross-region latency. A European user reads from European nodes instead of crossing the Atlantic. Write handling varies: single-leader routes all writes to one primary region (simpler, no conflicts). Multi-leader accepts writes in any region (lower write latency) but requires conflict resolution when concurrent writes collide.
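The single-leader routing rule reduces to a few lines; the region names and the primary-region choice are illustrative assumptions:

```python
# Single-leader routing sketch: reads go to the user's nearest region,
# all writes go to one primary region (no write conflicts to resolve).
PRIMARY = "us-east"  # assumed primary region for this example

def route(operation, user_region):
    if operation == "write":
        return PRIMARY      # single-leader: every write crosses to primary
    return user_region      # reads served locally, avoiding cross-region latency
```

Under multi-leader, the write branch would also return `user_region`, trading the cross-region write hop for conflict resolution on concurrent writes.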

Operational Considerations

Connection pooling is essential since creating connections is expensive, often 10-100ms. Monitor: hit rates (target above 90%), latency percentiles (P50 is median, P99 means 99% of requests are faster), memory usage, and eviction rates (high eviction suggests undersized cache). Cache warming preloads frequently accessed data before serving traffic, avoiding cold-start miss storms after deployments.
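The monitoring checks above can be sketched as a small health function. The 90% hit-rate target comes from the text; the eviction-rate threshold and the function shape are illustrative assumptions.

```python
# Cache health sketch: hit rate and eviction checks from raw counters.
def cache_health(hits, misses, evictions, requests):
    total = hits + misses
    hit_rate = hits / total if total else 0.0
    warnings = []
    if hit_rate < 0.90:  # target from the text: above 90%
        warnings.append("hit rate below 90% target")
    # Eviction threshold is an assumed value for illustration.
    if requests and evictions / requests > 0.05:
        warnings.append("high eviction rate: cache may be undersized")
    return hit_rate, warnings
```

With Redis, the raw counters map to `keyspace_hits`, `keyspace_misses`, and `evicted_keys` from the `INFO stats` output.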

💡 Key Takeaways
- Cache-aside gives explicit control; delete-on-write avoids stale writes, write-through keeps cache warm
- Write-behind improves throughput by coalescing writes but risks data loss on cache node failure
- Cache invalidation race conditions: populate vs invalidate timing can leave stale data
- Global deployment saves 50-150ms; single-leader simplifies writes, multi-leader reduces write latency
- Target hit rates above 90%; high eviction rates indicate undersized cache
📌 Interview Tips
1. Discuss the invalidation race condition: read populates stale value just as write invalidates
2. Mention connection pooling (10-100ms to create) and cache warming as operational necessities
3. For global systems, explain single-leader vs multi-leader trade-offs for write latency and conflict handling