1:1 Threading vs M:N User Space Scheduling
1:1 Threading Model
In 1:1 threading, each user thread maps to one kernel thread. The OS scheduler manages all threads. Context switches go through the kernel. This is what Linux pthreads and Java threads use. Simple and well integrated with OS facilities like signals and blocking I/O.
The cost is kernel involvement on every context switch. A kernel-mode context switch typically costs 1 to 10 microseconds. With thousands of threads switching frequently, that overhead becomes significant. Thread creation also requires a kernel call, typically costing 10 to 50 microseconds each.
M:N Threading and Goroutines
M:N threading maps M user space threads onto N kernel threads. A runtime scheduler multiplexes the user threads across the kernel threads. Switches happen in user space: no kernel transition, just a register save and restore. Cost drops to roughly 100 nanoseconds.
Go uses M:N scheduling. Goroutines are user space threads scheduled by the Go runtime. A program can have millions of goroutines while running Go code on roughly as many kernel threads as there are cores (controlled by GOMAXPROCS). When a goroutine blocks on I/O, the runtime parks it and runs another; the kernel gets involved only when a goroutine actually makes a system call.
Trade-offs
1:1 advantages: Full OS integration. Blocking calls work naturally. Debugging and profiling tools understand threads. Preemption is automatic and fair.
M:N advantages: Lightweight creation (a goroutine starts with a stack of a few kilobytes that grows on demand, versus a default of 1 MB or more for a kernel thread). Fast switching. Can support millions of concurrent tasks. Better for high concurrency with many short lived tasks.
M:N challenges: A blocking system call can pin its kernel thread, starving other user threads, so runtimes must integrate with I/O carefully (Go's scheduler hands the kernel thread off, or starts another, when a goroutine enters a blocking syscall). Debugging is harder because OS tools see kernel threads, not user threads.