
Scaling Models: Thread Pools vs Event-Driven vs Process Pools

Thread Pool Architecture

A thread pool pre-creates a fixed number of worker threads that wait for tasks on a shared queue. When work arrives, an idle thread picks it up, executes it, and returns to waiting. This avoids the 10-100 microsecond cost of creating a new thread for each request.
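As a concrete illustration, here is a minimal thread-pool sketch in Python. The names (`worker`, the squaring task, the queue) are hypothetical stand-ins for real work: workers block on a shared queue, and a `None` sentinel shuts each one down.

```python
# Minimal thread pool: fixed workers pulling tasks from a shared queue.
import queue
import threading

task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        task = task_queue.get()
        if task is None:                 # sentinel: shut this worker down
            task_queue.task_done()
            return
        with results_lock:
            results.append(task * task)  # squaring stands in for real work
        task_queue.task_done()

NUM_WORKERS = 4
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for n in range(10):                      # submit work
    task_queue.put(n)
for _ in threads:                        # one shutdown sentinel per worker
    task_queue.put(None)

task_queue.join()                        # wait until every item is processed
for t in threads:
    t.join()
```

In production Python you would normally reach for `concurrent.futures.ThreadPoolExecutor`, which implements this same pattern with a nicer API.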

Thread pools work best for CPU-bound workloads where each task needs significant computation. Here the optimal pool size equals the number of CPU cores: with 8 cores, use 8 threads. More threads than cores add context-switching overhead; fewer threads leave cores idle.

For I/O-bound workloads (tasks that spend most of their time waiting on the network or disk), larger pools make sense. While one thread waits for I/O, others can use the CPU. A common heuristic is threads = cores × (1 + wait_time / compute_time). If tasks spend 90% of their time waiting and 10% computing, that gives 8 × (1 + 90/10) = 8 × 10 = 80 threads on an 8-core machine.
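The heuristic above is easy to encode. `pool_size` is a hypothetical helper, not a standard library API:

```python
# Sizing heuristic: threads = cores * (1 + wait_time / compute_time).
def pool_size(cores: int, wait_time: float, compute_time: float) -> int:
    return int(cores * (1 + wait_time / compute_time))

# 8 cores, 90% waiting / 10% computing -> 8 * (1 + 9) = 80 threads.
size = pool_size(cores=8, wait_time=90, compute_time=10)
```

For purely compute-bound work, wait_time is 0 and the formula collapses to the core count, matching the sizing rule above.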

Event-Driven Architecture

Event-driven systems use a single thread (or one thread per core) running an event loop. The loop waits for I/O events using operating-system primitives such as epoll on Linux, a mechanism that can monitor thousands of file descriptors simultaneously. When data arrives on any connection, the loop processes it without blocking.
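A minimal sketch of one turn of such a loop, using Python's `selectors` module (which wraps epoll on Linux and kqueue on BSD/macOS). A `socketpair` stands in for a real network connection:

```python
# One turn of an event loop: register interest, wait, dispatch handlers.
import selectors
import socket

sel = selectors.DefaultSelector()        # epoll on Linux, kqueue on BSD/macOS
a, b = socket.socketpair()               # two connected in-process sockets
a.setblocking(False)
b.setblocking(False)

received = []

def on_readable(conn):
    data = conn.recv(1024)               # will not block: data is ready
    received.append(data)

# Attach the handler as the key's data so the loop can dispatch to it.
sel.register(b, selectors.EVENT_READ, on_readable)
a.sendall(b"hello")

# The loop body: wait for readiness events, then call each handler.
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)

sel.unregister(b)
a.close()
b.close()
```

A real server runs this `select`-and-dispatch body forever, with one registered socket per connection, which is why per-connection state stays so small.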

This model excels at handling many concurrent connections with minimal memory overhead. Each connection requires only a small state object (~1KB) instead of a full thread stack (1-8MB). A server can therefore handle 100,000 concurrent connections with ~100MB of state memory, versus 100GB+ for a thread-per-connection design.

The drawback: CPU-intensive operations block the event loop, stalling all connections. If one request needs 100ms of computation, every other connection waits that long. Event-driven architectures must therefore offload CPU-heavy work to thread pools or separate processes.
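One common mitigation, sketched here with Python's `asyncio` (`heavy_compute` is a hypothetical stand-in for an expensive computation): hand the CPU work to a thread pool via `run_in_executor`, so the event loop keeps serving other connections while it runs.

```python
# Offload CPU-heavy work from the event loop to a thread pool.
import asyncio
from concurrent.futures import ThreadPoolExecutor

def heavy_compute(n: int) -> int:
    # Stand-in for a computation that would otherwise stall the loop.
    return sum(i * i for i in range(n))

async def main() -> int:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=2) as pool:
        # The await yields control: the loop continues running other
        # coroutines while heavy_compute executes on a pool thread.
        return await loop.run_in_executor(pool, heavy_compute, 1000)

result = asyncio.run(main())
```

For work that holds Python's GIL or must not share memory with the server, the same call accepts a `ProcessPoolExecutor` instead.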

Process Pool Architecture

Process pools spawn multiple worker processes, each handling requests independently. A parent process or load balancer distributes incoming work. Each worker has complete isolation: a crash or memory corruption in one worker cannot affect others.

Process pools trade performance for reliability. Creating a worker process costs 1-10ms versus 10-100 microseconds for threads. Communication between workers requires IPC, adding 2-10 microseconds per message. However, one worker crashing does not bring down the server.
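A sketch with Python's `concurrent.futures.ProcessPoolExecutor` (the `parse` task is hypothetical, and using the fork start method assumes a Unix-like system): a task that blows up in one worker surfaces as an exception in the parent, while tasks in the other workers complete normally.

```python
# Process pool: failures in one worker's task do not affect the others.
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def parse(data):
    """Stand-in for parsing untrusted input; 'bad' simulates a poison file."""
    if data == "bad":
        raise ValueError("malformed input")
    return len(data)

ctx = mp.get_context("fork")   # fork start method: Unix-only assumption
with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as pool:
    futures = [pool.submit(parse, d) for d in ("ok", "bad", "fine")]
    outcomes = []
    for f in futures:
        try:
            outcomes.append(f.result())
        except ValueError:
            outcomes.append(None)  # one failed task; the rest are unaffected
```

A Python exception like this is pickled back to the parent without killing the worker; a true crash (e.g. a segfault in a native library) would kill the worker process, and the pool reports it via `BrokenProcessPool` while the server process itself survives.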

Process pools suit workloads where individual requests might be unstable: parsing untrusted input, running user provided code, or calling libraries with known memory leak issues. The isolation boundary lets you restart individual workers without affecting others.
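For the leaky-library case specifically, Python's `multiprocessing.Pool` exposes `maxtasksperchild`, which recycles each worker process after a fixed number of tasks (again assuming the Unix fork start method; `double` is a stand-in task):

```python
# Recycle worker processes to contain memory leaks in task code.
import multiprocessing as mp

def double(n):
    return n * 2

ctx = mp.get_context("fork")   # fork start method: Unix-only assumption
# maxtasksperchild=1: every worker exits after one task and is replaced,
# discarding any memory the task (or a leaky library) left behind.
with ctx.Pool(processes=2, maxtasksperchild=1) as pool:
    results = pool.map(double, range(4), chunksize=1)
```

Recycling this aggressively pays the 1-10ms process-creation cost on every task; real deployments typically set `maxtasksperchild` in the hundreds or thousands.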

🎯 When To Use: Thread pools for CPU-bound work with stable code. Event-driven for I/O-bound work with many connections. Process pools when crash isolation matters more than raw performance.
💡 Key Takeaways
Thread pools avoid per-request thread-creation overhead; the optimal size equals the CPU core count for compute-bound work
Event-driven servers handle 100,000+ connections with ~100MB of memory; thread-per-connection would need 100GB+
Event loops block entirely on CPU work; a 100ms computation stalls all connections
Process pools cost 1-10ms per worker creation but provide crash isolation between workers
I/O-bound thread pools use the heuristic threads = cores × (1 + wait_time / compute_time)
📌 Interview Tips
1. When asked about handling 10,000 concurrent WebSocket connections, explain that an event-driven design uses ~10MB of state while thread-per-connection uses ~10GB of stack memory alone
2. If designing a server that processes untrusted uploads, recommend process pools so a malformed file that crashes one worker does not affect the others
3. For CPU-intensive image processing, suggest a thread pool sized to the core count, not the connection count, to avoid context-switch overhead