Database Design › Time-Series Databases (InfluxDB, TimescaleDB) · Medium · ⏱️ ~3 min

Hot Warm Cold Tiering: Balancing Query Speed and Storage Cost

Value Decay Over Time

The value of time-series data decays with age. Metrics from the last hour power real-time dashboards and alerts requiring millisecond latency. Data from last month serves historical analysis that tolerates seconds of latency. Data older than a year is rarely queried but must be retained for compliance. Hot-warm-cold tiering exploits this pattern to optimize both performance and cost.

Tier Characteristics

Hot tier: Recent data (hours to days) in memory or on fast SSD in row-oriented formats. Sub-10ms queries, rapid updates. Cost: ~$100/GB/month for memory, the most expensive tier.

Warm tier: Weeks to months in columnar format on local SSD. Query latency 10-100ms. Balances speed and cost.

Cold tier: Older data in object storage (distributed storage accessed over HTTP, such as S3-compatible systems) as compressed Parquet files (a columnar format optimized for analytics). Queries take seconds, but storage costs drop to $0.01-0.02/GB/month.
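The age-based routing behind these tiers can be sketched as a small helper. This is a minimal illustration, not any product's API; the cutoffs (one day for hot, ninety days for warm) are assumptions standing in for the "hours to days" and "weeks to months" windows above:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed cutoffs; real deployments tune these per workload.
HOT_MAX_AGE = timedelta(days=1)     # memory / fast SSD, row-oriented
WARM_MAX_AGE = timedelta(days=90)   # local SSD, columnar

def tier_for(ts: datetime, now: Optional[datetime] = None) -> str:
    """Route a timestamp to a storage tier based on its age."""
    now = now or datetime.now(timezone.utc)
    age = now - ts
    if age <= HOT_MAX_AGE:
        return "hot"
    if age <= WARM_MAX_AGE:
        return "warm"
    return "cold"
```

A point written two hours ago routes to hot; one from last month routes to warm; anything past the warm window falls through to cold object storage.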

Lifecycle Policies

Lifecycle policies automate transitions. Compression policies convert hot row-oriented data to columnar format after a configured age, achieving 10x compression while queries transparently span both formats. Retention policies delete data exceeding a maximum age or migrate it to cheaper tiers.
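A minimal model of such a policy pipeline, using hypothetical policy objects (nothing here is a real database API). The thresholds are assumptions chosen to match the prose: compress after a week, demote to the cold tier after three months, and a two-year retention maximum for the delete step:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class LifecyclePolicy:
    """Hypothetical policy: apply `action` to data older than `after`."""
    action: str          # "compress" | "move" | "delete"
    after: timedelta
    target: str = ""     # destination tier for "move"

# Assumed schedule: compress at 7 days, demote at 90, delete at 2 years.
policies = [
    LifecyclePolicy("compress", timedelta(days=7)),
    LifecyclePolicy("move", timedelta(days=90), target="cold"),
    LifecyclePolicy("delete", timedelta(days=730)),
]

def actions_for(chunk_age: timedelta) -> list[str]:
    """Return every policy action that applies to data of this age."""
    return [p.action for p in policies if chunk_age >= p.after]
```

A background job would run these checks periodically: ten-day-old data gets compressed in place, hundred-day-old data has also been moved to object storage, and data past the retention maximum is dropped.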

Query Federation

Queries span tiers transparently. A 30-day query hits hot memory for the last day, warm SSD for the last week, and cold object storage for the remainder. Query planners push down filters and partial aggregations to minimize data movement. Continuous aggregates (pre-computed rollups) optimize further: queries for hourly averages over 6 months read compact aggregated data rather than billions of raw points.
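The planner's range-splitting step can be sketched as follows, using the example's boundaries (hot covers the last day, warm the last week, cold everything older). `plan_scan` is a hypothetical helper, not a real planner interface:

```python
from datetime import datetime, timedelta, timezone

# Tier boundaries relative to "now", matching the 30-day example above.
HOT_WINDOW = timedelta(days=1)
WARM_WINDOW = timedelta(days=7)

def plan_scan(start: datetime, end: datetime, now: datetime):
    """Split [start, end) into the per-tier sub-ranges a planner would scan."""
    hot_cut = now - HOT_WINDOW
    warm_cut = now - WARM_WINDOW
    plan = []
    if start < warm_cut:
        plan.append(("cold", start, min(end, warm_cut)))
    if start < hot_cut and end > warm_cut:
        plan.append(("warm", max(start, warm_cut), min(end, hot_cut)))
    if end > hot_cut:
        plan.append(("hot", max(start, hot_cut), end))
    return plan
```

For a 30-day range ending now, this yields three sub-scans: cold for days 30 through 7, warm for days 7 through 1, and hot for the last day. Filters and partial aggregations would then be pushed into each sub-scan before the results are merged.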

Key Trade-off: Tiering achieves 10-100x cost savings versus keeping everything hot, at the cost of query layer complexity and some freshness lag in cold tiers.
💡 Key Takeaways
Hot tier (memory/SSD): sub-10ms queries, ~$100/GB/month; warm tier (SSD columnar): 10-100ms; cold tier (object storage): seconds, $0.01-0.02/GB/month
Lifecycle policies automatically compress row data to columnar format after configured age, achieving 10x space reduction with transparent query access
Query federation pushes filters down to each tier; a 30-day query reads hot (last day), warm (last week), and cold (the remainder), minimizing data movement
Continuous aggregates precompute rollups (hourly averages) so historical queries read compact materialized views not billions of raw points
Object storage uses Parquet columnar format enabling column pruning and efficient compression for 10-100x cost savings over hot storage
Tiering complexity: query planner must merge results across tiers; cold queries have seconds of latency; freshness lag exists
📌 Interview Tips
1. Design retention: 1-second granularity hot for 7 days (real-time alerting), 1-minute warm for 90 days (dashboards), 1-hour cold indefinitely (compliance) at 100x lower cost.
2. Calculate savings: 10TB in memory at $100/GB = $1M/month. The same data cold at $0.02/GB = $200/month. Tiering by access pattern achieves a 99.98% cost reduction.
3. Continuous aggregates: a dashboard querying 6 months of hourly averages reads ~4,380 rows (182.5 days x 24) instead of ~15.8 million raw 1-second points per series.
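The figures in tips 2 and 3 can be reproduced with quick arithmetic. The ~$100/GB/month memory price is the value implied by the quoted 99.98% reduction together with the $0.02/GB cold price; 6 months is taken as 182.5 days:

```python
GB = 10_000                            # 10 TB expressed in GB
hot_cost = GB * 100                    # memory at ~$100/GB/month
cold_cost = GB * 0.02                  # object storage at $0.02/GB/month
reduction = 1 - cold_cost / hot_cost   # fraction saved by going cold

hourly_rows = round(182.5 * 24)        # 6 months of hourly rollups
raw_points = round(182.5 * 86_400)     # same span at 1-second granularity
```

This gives a hot bill of $1M/month against $200/month cold (a 99.98% reduction), and 4,380 aggregate rows standing in for about 15.8 million raw points per series.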