Change Data Capture (CDC) • Trigger-Based CDC Patterns
What is Trigger-Based CDC?
Definition
Trigger-based Change Data Capture (CDC) uses database triggers to automatically record every data change (insert, update, delete) into a separate table, enabling reliable propagation of changes to downstream systems without accessing transaction logs.
When a row in the source table changes, the trigger writes a record to a change table, for example an orders_changes table containing the operation type, primary key, changed values, and a timestamp.
The key insight is that this happens in the same transaction. If the original write commits, the change record is guaranteed to exist. If the transaction rolls back, no change record appears. You get strong consistency without touching transaction logs.
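The same-transaction guarantee can be demonstrated end to end with SQLite, which supports triggers. This is a minimal sketch, not a production schema: the table names and columns mirror the article's orders / orders_changes example, and the JSON after-image is an illustrative choice.

```python
# Minimal sketch of trigger-based CDC using SQLite. Table, trigger, and
# column names follow the article's orders example but are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    order_id    TEXT PRIMARY KEY,
    customer_id INTEGER,
    total       REAL,
    status      TEXT
);
CREATE TABLE orders_changes (
    sequence_id INTEGER PRIMARY KEY AUTOINCREMENT,  -- monotonic ordering
    operation   TEXT NOT NULL,                      -- INSERT / UPDATE / DELETE
    primary_key TEXT NOT NULL,
    after_image TEXT,                               -- JSON of the new row
    commit_ts   TEXT DEFAULT CURRENT_TIMESTAMP
);
-- The trigger fires inside the same transaction as the original write.
CREATE TRIGGER orders_after_insert AFTER INSERT ON orders
BEGIN
    INSERT INTO orders_changes (operation, primary_key, after_image)
    VALUES ('INSERT', NEW.order_id,
            json_object('customer_id', NEW.customer_id,
                        'total', NEW.total,
                        'status', NEW.status));
END;
""")

# Committed write: both the order row and its change record exist.
with conn:
    conn.execute("INSERT INTO orders VALUES ('ord_1', 456, 129.99, 'pending')")

# Rolled-back write: neither the row nor its change record survives.
try:
    with conn:
        conn.execute("INSERT INTO orders VALUES ('ord_2', 789, 10.0, 'pending')")
        raise RuntimeError("force rollback")
except RuntimeError:
    pass

rows = conn.execute("SELECT operation, primary_key FROM orders_changes").fetchall()
print(rows)  # [('INSERT', 'ord_1')] -- only the committed change was captured
```

Running this shows exactly one change record: the insert that committed. The rolled-back insert leaves no trace in orders_changes, because the trigger's write was part of the same transaction.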
✓ In Practice: A separate CDC service continuously reads the change table, converts rows into events, and publishes them to message systems like Kafka. Downstream consumers then update search indexes, data warehouses, or caches based on these events.
The Trade:
You add 1 to 3 milliseconds to each write operation and increase database CPU usage by 10 to 30 percent at moderate scale. In exchange, you get reliable change capture using standard database features available on nearly every relational database, without needing special permissions or vendor lock-in.

💡 Key Takeaways
✓ Triggers execute synchronously within the same database transaction, guaranteeing that committed writes always produce change records
✓ Works on any relational database with standard trigger support; no special log access or vendor APIs required
✓ Adds 1 to 3 ms of latency per write operation and increases database CPU by 10 to 30 percent at moderate throughput
✓ Change records include the operation type, primary key, before and after values, timestamp, and a monotonic sequence number for ordering
✓ Enables downstream systems such as search indexes, warehouses, and caches to stay synchronized with source data
📌 Examples
1. An e-commerce platform writes an order to the orders table. An AFTER INSERT trigger fires in the same transaction, writing to orders_changes with fields: sequence_id=12847, table_name=orders, operation=INSERT, primary_key=ord_9283, commit_ts=2024-01-15T10:23:45Z, after_image={customer_id: 456, total: 129.99, status: pending}
2. A CDC service polls the orders_changes table every 500 milliseconds, reading batches of 5000 rows where sequence_id > last_processed_offset. It converts each row into an OrderCreated event and publishes to Kafka, achieving 0.5 to 3 seconds end-to-end lag at the 95th percentile
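The polling loop in the second example can be sketched as follows. This is a hedged sketch under stated assumptions: the `poll_changes` function, the `publish` callback (standing in for a Kafka producer), and the in-memory table used in the usage example are all illustrative, not part of any real CDC library.

```python
# Sketch of a CDC polling service: read change rows past the last processed
# offset, publish each as an event, and advance the offset. The publish()
# callback stands in for a real Kafka producer; names are illustrative.
import sqlite3

BATCH_SIZE = 5000  # rows per poll, matching the article's example

def poll_changes(conn, last_offset, publish, batch_size=BATCH_SIZE):
    """Read change rows with sequence_id > last_offset in order, publish
    them as events, and return the new offset to checkpoint."""
    rows = conn.execute(
        "SELECT sequence_id, operation, primary_key, after_image "
        "FROM orders_changes WHERE sequence_id > ? "
        "ORDER BY sequence_id LIMIT ?",
        (last_offset, batch_size),
    ).fetchall()
    for seq, op, pk, after in rows:
        publish({"sequence_id": seq, "operation": op,
                 "primary_key": pk, "after_image": after})
    return rows[-1][0] if rows else last_offset

# Usage: one polling iteration against a small in-memory change table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders_changes (sequence_id INTEGER PRIMARY KEY, "
             "operation TEXT, primary_key TEXT, after_image TEXT)")
conn.executemany("INSERT INTO orders_changes VALUES (?, ?, ?, ?)",
                 [(1, "INSERT", "ord_1", "{}"),
                  (2, "UPDATE", "ord_1", "{}")])
events = []
offset = poll_changes(conn, last_offset=0, publish=events.append)
print(offset, len(events))  # 2 2
```

In a real deployment this function would run every 500 ms, and the returned offset would be persisted so the service resumes from the last processed sequence_id after a restart; ordering by the monotonic sequence_id is what keeps downstream consumers consistent with the source.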