Caching › Cache Invalidation Strategies · Medium · ⏱️ ~3 min

Event-Driven Invalidation: Pushing Changes to Caches for Strong Freshness

How Event-Driven Invalidation Works

When a write commits to your source of truth (database, primary store), immediately publish invalidation events to a message queue. Cache tiers subscribe to these events and delete or refresh the affected keys, typically within milliseconds to low seconds. This tightens staleness windows dramatically compared to TTL-only strategies: correctness-sensitive systems target sub-second propagation for privacy changes and under 2 seconds cross-region at p99 (the 99th-percentile latency). The cost is distributed-systems complexity around delivery guarantees, ordering, and idempotency, plus the risk that pipeline failures amplify into outages.
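The flow above can be sketched with toy in-memory stubs standing in for the primary store and the message broker (a real system would use a database and something like Kafka or SQS; the class and method names here are illustrative, not a specific API):

```python
import json

class PrimaryStore:
    """Toy source of truth: each update bumps a monotonic per-key version."""
    def __init__(self):
        self.rows, self.versions = {}, {}

    def update(self, key, fields):
        self.rows.setdefault(key, {}).update(fields)
        self.versions[key] = self.versions.get(key, 0) + 1
        return self.versions[key]

class InvalidationQueue:
    """Toy message queue; subscribers would delete or refresh cache keys."""
    def __init__(self):
        self.events = []

    def publish(self, key, value):
        self.events.append((key, value))

def update_then_invalidate(db, queue, user_id, fields):
    # 1. Commit to the source of truth FIRST (commit-then-invalidate).
    version = db.update(f"user:{user_id}", fields)
    # 2. Only then publish the invalidation event for cache tiers to consume.
    event = {"entity": "user", "id": user_id, "version": version}
    queue.publish(key=f"user:{user_id}", value=json.dumps(event))
    return version

db, queue = PrimaryStore(), InvalidationQueue()
update_then_invalidate(db, queue, 123, {"name": "Ada"})
```

The version number carried in the event is what lets downstream consumers detect duplicates and late arrivals, as discussed in the implementation patterns below.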

Critical Implementation Patterns

Four patterns ensure reliability and correctness. First, use at-least-once delivery semantics with idempotent invalidation handlers keyed by entity identifier and version, so message redelivery and retries are safe. Second, partition your event stream by entity identifier (user_id, post_id) to preserve per-key ordering and avoid reordering anomalies where an older update invalidates after a newer one. Third, always commit to your source of truth before publishing invalidation events (commit-then-invalidate). If you invalidate first, a read between the invalidation and the commit can fetch the old value from the origin and repopulate the cache, leaving stale data cached indefinitely. Fourth, include monotonically increasing version numbers or timestamps in events so consumers can detect and discard out-of-order events that arrive late due to network delays or retries.
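The first and fourth patterns (idempotent handlers plus version-based out-of-order detection) can be combined in one consumer. A minimal sketch, assuming events carry the monotonic version produced at commit time (class and field names are illustrative):

```python
class VersionAwareCache:
    """Invalidation consumer that is safe under at-least-once delivery.

    Tracks the highest version seen per key. An event whose version is
    <= the recorded one is a duplicate or a late arrival: applying it
    again is a no-op, which makes the handler idempotent.
    """
    def __init__(self):
        self.data = {}            # key -> cached value
        self.last_version = {}    # key -> highest invalidation version seen

    def handle_invalidation(self, key, version):
        if version <= self.last_version.get(key, -1):
            return False          # duplicate or out-of-order event: ignore
        self.last_version[key] = version
        self.data.pop(key, None)  # drop the stale cache entry
        return True

cache = VersionAwareCache()
cache.data["user:123"] = {"name": "old"}
cache.handle_invalidation("user:123", version=2)   # applied, entry dropped
cache.handle_invalidation("user:123", version=1)   # late event, ignored
```

Note the version check also covers redelivery of the same event: replaying version 2 returns False and changes nothing, so retries from the broker are harmless.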

When to Use Event-Driven Invalidation

Event-driven invalidation excels for correctness-critical or privacy-sensitive mutations where even short staleness windows are unacceptable: user permissions and access control (showing private content is a security bug), inventory and pricing (selling out-of-stock items loses money), financial balances and budgets (a compliance requirement), and visibility changes on social platforms (privacy violations). For read-heavy content with acceptable staleness (blog posts, public profiles, product images), the operational complexity of event pipelines is often not worth it compared to simple TTL with longer expiry windows.

Key Insight: Event-driven invalidation adds operational burden (message brokers, partitioning, delivery monitoring) that is justified only when business or compliance requirements demand strong freshness. Always include a max TTL as a safety net: if the invalidation pipeline fails, caches serve stale data only until the TTL expires rather than indefinitely.
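The safety-net idea amounts to capping every cache write at a maximum TTL, regardless of what the caller requests. A minimal in-memory sketch (the class and parameter names are illustrative; a real deployment would set the TTL on Redis/Memcached writes):

```python
import time

class TTLCache:
    """Cache whose writes always carry a max TTL as a safety net: if the
    event-driven invalidation pipeline fails, entries go stale only until
    expiry instead of indefinitely."""
    def __init__(self, max_ttl_seconds=300):
        self.max_ttl = max_ttl_seconds
        self.store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl=None):
        # Cap any requested TTL at the safety-net maximum.
        ttl = min(ttl, self.max_ttl) if ttl is not None else self.max_ttl
        self.store[key] = (value, time.time() + ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:   # expired: treat as a miss
            del self.store[key]
            return None
        return value

cache = TTLCache(max_ttl_seconds=300)
cache.set("user:123", {"name": "Ada"}, ttl=3600)  # request capped to 300s
```

Event-driven invalidation remains the primary freshness mechanism; the TTL only bounds the damage when that pipeline is down.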
💡 Key Takeaways
Event-driven invalidation tightens staleness to milliseconds or low seconds (sub-second for privacy, under 2s cross-region at p99) vs minutes or hours with TTL only.
Four patterns: at-least-once delivery with idempotent handlers, partition by entity ID for ordering, commit before invalidate, include version numbers for out-of-order detection.
Commit-before-invalidate is critical: invalidating first lets the next read fetch old origin data and cache it indefinitely.
Best for correctness-critical data: permissions (security), inventory (revenue), balances (compliance). Overkill for blogs, images, public profiles.
📌 Interview Tips
1. Explain commit-before-invalidate: commit to the database first, then publish the invalidation. Reversing the order allows stale data to be cached forever.
2. Partitioning rationale: events for user:123 go to the same partition, preserving order. Without partitioning, retries can shuffle event ordering.
3. Know when to use it: permissions (security bug), inventory (revenue loss), balances (compliance). Skip for blogs, images, public profiles.