Cache Patterns (Aside, Through, Back)

Write Through and Write Around: Consistency vs Cache Pollution

Two Complementary Write Patterns

Write through and write around are complementary patterns addressing different consistency and performance needs. Write through maximizes read-after-write consistency by updating the cache synchronously with the database. Write around maximizes cache efficiency by keeping written data out of the cache entirely. Understanding when to use each is essential for production caching.

Write Through: Guaranteed Fresh Reads

In write through, the application writes to the cache and synchronously persists to the database before acknowledging the caller. A user who writes data can immediately read it from cache without experiencing a miss. The cache is always consistent with the database for data written through it. The latency cost is real: a write through operation takes the sum of cache write (1-2ms) and database write (5-20ms), typically 6-22ms total. Some systems parallelize these writes, reducing latency to the slower of the two plus coordination overhead.
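A minimal sketch of this flow in Python, with in-memory dicts standing in for a real cache (e.g. Redis) and database; the class and method names are illustrative, not from the source:

```python
class WriteThroughCache:
    """Write-through sketch: cache and database are both updated
    synchronously before the write is acknowledged."""

    def __init__(self):
        self.cache = {}  # stands in for Redis/Memcached (~1-2ms writes)
        self.db = {}     # stands in for the database (~5-20ms writes)

    def write(self, key, value):
        self.cache[key] = value  # cache write
        self.db[key] = value     # synchronous database write
        return True              # ACK only after both succeed

    def read(self, key):
        if key in self.cache:    # read-your-writes: a write is always a hit
            return self.cache[key]
        value = self.db.get(key)
        if value is not None:
            self.cache[key] = value
        return value
```

In this sequential form, total write latency is the sum of both writes; issuing them in parallel (e.g. with two concurrent calls) would reduce it to the slower of the two.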

When Write Through is Appropriate

Write through is appropriate when users immediately re-read what they write: a user posts a comment and expects to see it instantly on page refresh; inventory updates where selling out-of-stock items causes order failures; session data where stale reads cause authentication errors. The pattern guarantees read-your-writes consistency: any read after a successful write sees the written value.

Write Around: Preserving Cache for Hot Data

Write around bypasses the cache entirely on writes, sending updates directly to the database without populating the cache. This avoids polluting the cache with cold write data (data that is written but rarely read) that may never be accessed, preserving cache space for truly hot read traffic. Consider a logging system writing 100,000 events per second but rarely reading recent logs. With write through, those writes constantly churn the cache, evicting hot user profile data read thousands of times per second. With write around, the cache stays populated with frequently read data while logs go straight to the database.
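A matching write-around sketch under the same in-memory assumptions; the write also invalidates any stale cached copy, a common companion step though not strictly required by the pattern:

```python
class WriteAroundCache:
    """Write-around sketch: writes bypass the cache and go straight
    to the database; the cache is populated only on reads."""

    def __init__(self):
        self.cache = {}  # stays reserved for hot read traffic
        self.db = {}     # receives all writes directly

    def write(self, key, value):
        self.db[key] = value       # bypass the cache entirely
        self.cache.pop(key, None)  # invalidate any stale cached copy

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.db.get(key)   # first read after a write misses here
        if value is not None:
            self.cache[key] = value  # populate only on read
        return value
```

With this structure, a flood of log writes touches only `db`, so frequently read entries already in `cache` are never evicted by write traffic.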

The Write Around Trade-off

Newly written data will miss on the first read attempt, incurring a cache miss penalty of 5-50ms for a database fetch. For write-heavy workloads with cold data, where the majority of writes are not followed by immediate reads, this penalty is acceptable and rarely incurred. Monitor cache write amplification (the ratio of cache writes to actual reads): a ratio above 10:1 suggests write around would be more efficient.
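The write-amplification check described above could be computed like this (the function name and workload numbers are illustrative, not from the source):

```python
def write_amplification(cache_writes, cache_reads):
    """Ratio of cache writes to reads; above ~10 suggests write-around."""
    if cache_reads == 0:
        return float("inf")  # pure write workload: never cache these keys
    return cache_writes / cache_reads

# Hypothetical logging workload: 100,000 writes/sec vs 500 reads/sec
ratio = write_amplification(100_000, 500)
needs_write_around = ratio > 10  # 200:1 is far above the 10:1 threshold
```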

Mixing Patterns by Data Type

Production systems mix patterns based on data characteristics. Hot entities where users read what they just wrote use write through. Write heavy cold data flows use write around. A social platform might use write through for comment creation (users view their comments immediately) but write around for analytics events computed asynchronously.
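One way to sketch this per-data-type routing, with hypothetical key prefixes standing in for a real classification of entities:

```python
# Hypothetical prefixes for entities users re-read immediately
WRITE_THROUGH_PREFIXES = ("comment:", "session:", "inventory:")

def write(key, value, cache, db):
    """Route writes by data type: write-through for hot user-facing
    entities, write-around for high-volume cold writes."""
    if key.startswith(WRITE_THROUGH_PREFIXES):
        cache[key] = value       # write-through: user re-reads immediately
        db[key] = value
    else:
        db[key] = value          # write-around: analytics/logs, cold data
        cache.pop(key, None)     # drop any stale cached copy
```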

Key Insight: Write through guarantees read-your-writes consistency at the cost of higher write latency. Write around preserves the cache for hot data at the cost of a cache miss on the first read after a write. Choose based on whether an immediate read after write matters for that data type.
💡 Key Takeaways
Write through updates cache and database synchronously, guaranteeing read-your-writes consistency at the cost of higher latency
Write latency is sum of cache (1-2ms) and database (5-20ms) unless parallelized; adds 50-100% overhead versus database only
Write around bypasses cache on writes, preventing pollution from write heavy data that pushes out hot read entries
Write around causes cache miss on first read after write (5-50ms penalty), acceptable when data is rarely read immediately
Use write through for user facing entities where immediate read after write matters; write around for high volume background writes
Monitor cache write amplification: ratio above 10:1 (writes to reads) suggests write around would be more efficient
📌 Interview Tips
1. Write through example: user submits comment, server writes to cache and database, ACKs user; user refreshes and reads from a cache hit immediately
2. Write around example: analytics event stream writes 100K events/sec directly to database; cache stays populated with user profiles read 1M times/sec
3. Decision pattern: if (isUserFacingEntity(key)) { writeThrough(); } else { writeAround(); }, based on the immediate read-after-write requirement