Memory Compression: DAWG, FST, and Double Array Tries
Why Basic Tries Waste Memory
A standard trie node reserves one pointer per possible child character. With 26 lowercase letters, that is 26 × 8 bytes = 208 bytes per node, even if most of those pointers are null. For 50 million nodes, you need over 10 GB just for child pointers. In practice, most nodes have only 1 to 3 children, so the average node wastes 180+ bytes on null pointers. With 256 characters for a full byte alphabet, the waste grows to roughly 2 KB per node. This is why production autocomplete systems never use naive tries.
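The arithmetic above can be checked with a small sketch (the node layout is a simplified assumption: child-pointer array only, ignoring any per-node bookkeeping):

```python
POINTER_BYTES = 8  # 64-bit pointers

def node_bytes(alphabet_size=26):
    # One pointer slot per possible child character, null or not.
    return alphabet_size * POINTER_BYTES

def total_gb(node_count, alphabet_size=26):
    return node_count * node_bytes(alphabet_size) / 1e9

print(node_bytes())             # 208 bytes per node (26-letter alphabet)
print(total_gb(50_000_000))     # 10.4 GB of child pointers for 50M nodes
print(node_bytes(256))          # 2048 bytes (2 KB) per node for full ASCII
```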
DAWG (Directed Acyclic Word Graph)
A DAWG compresses a trie by sharing common suffixes as well as prefixes. The words "walking", "talking", and "stalking" share the suffix "alking"; a DAWG merges these suffixes into a single subgraph, reducing node count by 50 to 70 percent for typical English dictionaries. A 50 million word trie with 30 million nodes compresses to 10 to 12 million DAWG nodes. The trade-off: construction requires suffix-based merging (more complex than trie insertion), and you cannot store per-word metadata at suffix-shared nodes, since multiple words pass through them.
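One way to see the suffix sharing is a rough sketch (not a production incremental-construction algorithm): build a plain trie, then merge equivalent subtrees bottom-up by registering each node under a key of (is-final, canonical children). Identical suffix subtrees collapse into one:

```python
class Node:
    __slots__ = ("children", "final")
    def __init__(self):
        self.children = {}   # char -> Node
        self.final = False

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, Node())
    node.final = True

def minimize(node, registry):
    # Bottom-up: first canonicalize children, then this node.
    for ch, child in node.children.items():
        node.children[ch] = minimize(child, registry)
    key = (node.final,
           tuple(sorted((c, id(n)) for c, n in node.children.items())))
    # Reuse an existing equivalent node if one was already registered.
    return registry.setdefault(key, node)

def count_nodes(node, seen=None):
    seen = set() if seen is None else seen
    if id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + sum(count_nodes(c, seen) for c in node.children.values())

root = Node()
for w in ["walking", "talking", "stalking"]:
    insert(root, w)
trie_nodes = count_nodes(root)        # 23 nodes in the plain trie
root = minimize(root, {})
dawg_nodes = count_nodes(root)        # 9 nodes after suffix merging
```

Note how the shared "alking" chain survives as a single path that all three words funnel into; this is also why per-word payloads cannot live on those merged nodes.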
FST (Finite State Transducer)
FSTs extend DAWGs by associating output values with transitions. As you traverse "california", the FST emits its frequency score incrementally along the path. Lucene uses FSTs for its term dictionary, achieving 10 to 20x compression over tries: a 500 MB trie compresses to 25 to 50 MB as an FST. The structure is immutable (updates require a rebuild) and read-optimized, which suits autocomplete, where the dictionary changes infrequently. FSTs pack transitions as delta offsets in contiguous arrays, improving cache locality over pointer chasing.
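A toy illustration of the output idea (real FSTs such as Lucene's pack states into byte arrays; the hand-built transition table and scores below are made-up examples): shared prefixes carry the common portion of each word's output, and lookup simply sums outputs along the path.

```python
# Two words with frequency scores: "cat" -> 5, "car" -> 3.
# The shared prefix "ca" carries min(5, 3) = 3; the rest is
# pushed onto the diverging final transitions.
FST = {
    (0, 'c'): (1, 3),   # emit the shared part of both scores
    (1, 'a'): (2, 0),
    (2, 't'): (3, 2),   # remainder of "cat"'s score: 3 + 2 = 5
    (2, 'r'): (4, 0),   # "car"'s score is already complete: 3
}
FINAL_STATES = {3, 4}

def lookup(word):
    state, total = 0, 0
    for ch in word:
        step = FST.get((state, ch))
        if step is None:
            return None          # no such transition
        state, out = step
        total += out             # accumulate output along the path
    return total if state in FINAL_STATES else None
```

Pushing outputs as early as possible toward the root is what lets equal-output suffixes merge, which is where the compression over a plain DAWG-with-payloads comes from.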
Double Array Trie
Instead of a pointer per child, store all nodes in two parallel arrays, BASE and CHECK. For a node at index i and character c, the child lives at index BASE[i] + code(c); CHECK at that index stores i, confirming the transition came from the right parent. This eliminates null pointers entirely. A 30 million node trie fits in 240 MB (two 4-byte integers per node) versus 6+ GB for a naive trie. Lookup is two array accesses per character: cache-friendly and predictably fast, with no pointer chasing.
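A minimal sketch of the scheme (greedy BASE assignment over a fixed-size array; production builders like Darts relocate BASE values and manage free lists far more carefully):

```python
from collections import deque

def char_code(ch):
    return ord(ch) - ord('a') + 1     # 1..26; 0 reserved

def build_double_array(words, size=4096):
    # First build an ordinary trie as dict-of-dicts.
    children, final = [{}], [False]
    for w in words:
        node = 0
        for ch in w:
            nxt = children[node].get(ch)
            if nxt is None:
                children.append({}); final.append(False)
                nxt = len(children) - 1
                children[node][ch] = nxt
            node = nxt
        final[node] = True
    BASE = [0] * size
    CHECK = [-1] * size               # -1 marks a free slot
    slot = [0] * len(children)        # array index assigned to each trie node
    q = deque([0])
    while q:
        node = q.popleft()
        if not children[node]:
            continue
        codes = [char_code(c) for c in children[node]]
        b = 1                         # greedily find a BASE with all child slots free
        while any(CHECK[b + c] != -1 for c in codes):
            b += 1
        BASE[slot[node]] = b
        for ch, child in children[node].items():
            j = b + char_code(ch)
            CHECK[j] = slot[node]     # record the parent for validation
            slot[child] = j
        q.extend(children[node].values())
    finals = {slot[n] for n, f in enumerate(final) if f}
    return BASE, CHECK, finals

def contains(word, BASE, CHECK, finals):
    i = 0
    for ch in word:
        j = BASE[i] + char_code(ch)
        if CHECK[j] != i:             # parent mismatch: no such transition
            return False
        i = j
    return i in finals
```

The lookup loop is the whole point: per character, one add and two array reads, with no heap-allocated node objects anywhere.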
Memory Comparison for 50M Words
Naive trie: 10 to 12 GB, 0.05 ms lookups.
DAWG: 2 to 3 GB, 0.08 ms lookups (suffix-traversal overhead).
FST: 0.5 to 1 GB, 0.15 ms lookups (output-decode overhead).
Double-array trie: 0.8 to 1.2 GB, 0.04 ms lookups (cache-efficient).
For most autocomplete use cases, double-array tries offer the best balance of memory efficiency and query speed.