Transaction Isolation Levels

What Are Transaction Isolation Levels?

Definition
Transaction isolation controls how the effects of one transaction are visible to other concurrent transactions. Higher isolation prevents more anomalies but costs more in performance. The SQL standard defines four levels from weakest to strongest.

Isolation Levels vs Anomalies

Isolation Level   | Dirty Read | Non-Repeatable | Phantom    | Write Skew
Read Uncommitted  | Possible   | Possible       | Possible   | Possible
Read Committed    | Prevented  | Possible       | Possible   | Possible
Repeatable Read   | Prevented  | Prevented      | Possible*  | Possible
Serializable      | Prevented  | Prevented      | Prevented  | Prevented

*Some databases prevent phantoms at Repeatable Read using gap locking.

Read Uncommitted

The weakest level. Transactions can see uncommitted changes from other transactions (dirty reads). If Transaction B modifies a row and then rolls back, Transaction A may already have acted on a value that never logically existed (a dirty value). Rarely used in production because it provides no meaningful guarantees.
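The hazard can be sketched with a toy, in-memory "store" standing in for a table. This is not a real database engine, just an illustration of what an unversioned shared state allows: reader A observes writer B's value before B decides to roll back.

```python
# Toy sketch of a dirty read (hypothetical, not a real database):
# a single shared dict with no versioning, so B's uncommitted write
# is immediately visible to A.

store = {"balance": 100}   # committed starting state

old = store["balance"]
store["balance"] = 500           # B's uncommitted write
dirty = store["balance"]         # A reads under Read Uncommitted: sees 500
store["balance"] = old           # B rolls back; the 500 never committed
after_rollback = store["balance"]

print(dirty, after_rollback)     # A acted on 500, a value that never existed
```

Any real engine offering Read Uncommitted behaves like this dict: reads bypass write isolation entirely, which is why the level is almost never a safe choice.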

Read Committed

Guarantees that every value read was committed at the moment it was read. Successive reads of the same row within one transaction may still return different values if another transaction commits between them (non-repeatable reads). The default in most databases. Safe for OLTP (Online Transaction Processing) workloads where each query is independent. Locks are held only during statement execution, minimizing contention.
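A non-repeatable read follows directly from "each read sees the latest commit". The toy sketch below (hypothetical names, not a real engine) models that rule with a single committed-state dict: two reads inside one transaction disagree because another transaction commits in between.

```python
# Toy sketch of a non-repeatable read under Read Committed:
# every read returns the latest *committed* value, with no snapshot
# pinned at transaction start.

committed = {"balance": 100}    # committed state visible to all readers

def read(key):
    return committed[key]       # Read Committed: always the newest commit

first = read("balance")         # transaction A's first read -> 100
committed["balance"] = 250      # transaction B commits an update
second = read("balance")        # A's second read -> 250: non-repeatable

print(first, second)
```

Each read on its own is perfectly consistent; the anomaly only appears when a transaction assumes two of its own reads must agree.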

Repeatable Read

If you read a row once, subsequent reads within the same transaction return identical data, even if other transactions commit updates. Implemented using MVCC (Multi-Version Concurrency Control): the database keeps snapshots of data at transaction start time. New rows inserted by concurrent transactions may still appear (phantom reads). Some databases prevent phantoms using gap locks (locks on ranges between existing index keys).
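The MVCC idea can be sketched as copying the committed rows into a per-transaction snapshot at BEGIN (real engines keep row versions rather than full copies; this toy is an illustration only). Re-reads against the snapshot are stable even after concurrent commits, while the latest committed state diverges, which is exactly the gap where phantoms surface in implementations that read the current data under locks instead of a snapshot.

```python
# Toy MVCC sketch: transaction A reads from a snapshot taken at its
# start, so its re-reads are repeatable regardless of what B commits.

committed = {1: {"id": 1, "qty": 10}}                  # committed rows by id

snapshot = {k: dict(v) for k, v in committed.items()}  # A's snapshot at BEGIN

committed[1]["qty"] = 99                # B updates row 1 and commits
committed[2] = {"id": 2, "qty": 5}      # B inserts a new row and commits

reread = snapshot[1]["qty"]             # A re-reads row 1: still 10
a_sees = sorted(snapshot)               # A's range scan over its snapshot
current = sorted(committed)             # latest committed state has a new row

print(reread, a_sees, current)
```

Note that a pure snapshot also hides B's new row from A's range scan; phantoms show up when a Repeatable Read implementation mixes snapshot reads with locking reads of current data, which is why some engines add gap locks to close that hole.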

Serializable

The strongest level. Guarantees that concurrent transactions produce results identical to some serial execution order. Prevents all anomalies including write skew. Implemented via locking (2PL), optimistic validation, or dependency tracking. The performance cost is significant: higher abort rates under contention, longer lock waits, or commit delays. Use for critical paths where correctness is non-negotiable (financial transactions, inventory reservations).
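Write skew, the anomaly only Serializable rules out, is worth one concrete sketch. In the classic on-call example (toy code, hypothetical names), two transactions each check an invariant against their own snapshot and then write disjoint rows; each is valid alone, but together they break the rule. A serializable engine would detect the dependency and abort one of them.

```python
# Toy write-skew sketch under snapshot isolation: both transactions
# validate "at least one doctor stays on call" against stale snapshots,
# then update different rows, so neither write conflicts directly.

on_call = {"alice": True, "bob": True}

snap_a = dict(on_call)              # A's snapshot: 2 doctors on call
snap_b = dict(on_call)              # B's snapshot: 2 doctors on call

if sum(snap_a.values()) >= 2:       # A's invariant check passes
    on_call["alice"] = False        # A signs Alice off
if sum(snap_b.values()) >= 2:       # B's check also passes (stale snapshot)
    on_call["bob"] = False          # B signs Bob off

print(sum(on_call.values()))        # nobody is on call: invariant violated
```

Because the two writes touch different rows, first-committer-wins conflict detection never fires; only serializability checking (2PL or dependency tracking) catches the cycle.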

💡 Key Takeaways
Read Uncommitted allows dirty reads (seeing uncommitted data from other transactions), rarely used in production due to corruption risk
Read Committed is the most common default (SQL Server, Oracle, PostgreSQL), prevents dirty reads but allows non-repeatable reads and phantoms
Repeatable Read ensures the same query returns consistent row values but new rows can still appear (phantoms), MySQL/InnoDB default with gap locking
Serializable prevents all anomalies by making transactions appear to run sequentially, but reduces throughput significantly under contention (often 2 to 5 times lower)
Most production systems use MVCC variants (snapshot isolation) rather than pure locking, trading write skew vulnerability for better read concurrency
Enabling snapshot isolation in SQL Server on high-throughput workloads (on the order of 100,000 transactions per second) typically eliminates reader/writer blocking but can grow the tempdb version store by gigabytes, requiring careful monitoring
📌 Interview Tips
1. Google Spanner uses TrueTime for strict serializability, with typical commit waits of about 7 milliseconds within a single region and 80 to 150 milliseconds cross-continent
2. Oracle Database defaults to Read Committed with MVCC; long-running queries (10 to 30 minutes) often require dedicated replicas to avoid "snapshot too old" (ORA-01555) errors
3. Microsoft SQL Server defaults to locking-based Read Committed; enterprises enable Read Committed Snapshot Isolation (RCSI) to decouple readers from writers and eliminate read/write deadlocks