CPU Affinity, Core Pinning, and NUMA Awareness

CPU Affinity Basics

CPU affinity restricts which cores a thread may run on. By default, the scheduler is free to migrate a thread to any core; setting affinity pins it to a specific core or set of cores. This eliminates migration overhead and keeps the thread's working set warm in cache.

Use taskset or sched_setaffinity to set affinity. Pin latency-critical threads to dedicated cores, and pin interrupt handlers to cores away from application threads so that interrupt storms cannot preempt application work.

Core Pinning Benefits

Cache locality: A pinned thread keeps finding its data in that core's L1 and L2 caches. Migrating to another core means starting with a cold cache; for memory-intensive workloads, each migration can cost on the order of hundreds of microseconds while the cache refills.

Predictable latency: A pinned thread runs whenever it is runnable and its core is available, without being bounced between cores by the load balancer. This reduces jitter on latency-sensitive paths.

NUMA alignment: Pin threads to cores on the same NUMA node as the memory they touch. Local memory access costs roughly 100 nanoseconds, while remote (cross-node) access costs 200+ nanoseconds.

Isolcpus and CPU Sets

The isolcpus kernel parameter removes cores from the scheduler's general load balancing: only threads explicitly pinned to them will run on isolated cores. This keeps most other processes and kernel work off those cores, though per-CPU kernel threads and interrupts can still fire there unless IRQ affinity is steered away as well. It provides strong isolation for latency-critical applications.
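
As a sketch, isolation is typically configured on the kernel command line; the core IDs here are illustrative, and nohz_full and rcu_nocbs are common companions that reduce timer ticks and RCU callbacks on the isolated cores:

```
# /etc/default/grub (then update the bootloader and reboot)
GRUB_CMDLINE_LINUX="isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5"
```

After reboot, nothing runs on cores 2-5 until you place work there explicitly, e.g. taskset -c 2 ./app.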

Use cgroups and cpusets for container isolation: assign each container to specific cores to prevent noisy-neighbor effects, where one container's CPU spike preempts others. Kubernetes supports CPU pinning through the static CPU Manager policy.
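
With the kubelet started with --cpu-manager-policy=static, exclusive cores go to pods in the Guaranteed QoS class with integer CPU requests. A sketch of such a pod spec (names and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app              # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest   # hypothetical image
    resources:
      requests:
        cpu: "2"                # integer CPUs, requests == limits
        memory: "1Gi"           #   => Guaranteed QoS => exclusive cores
      limits:
        cpu: "2"
        memory: "1Gi"
```

A fractional CPU request (e.g. "1500m") or mismatched requests and limits drops the pod out of the Guaranteed class, and it shares the remaining cores instead.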

✅ Best Practice: For latency-sensitive services, pin application threads to dedicated cores, use isolcpus to prevent kernel interference, and align pinning with the NUMA topology. Leave some cores for OS tasks and interrupts.
💡 Key Takeaways
CPU affinity pins threads to specific cores, preventing migration
Pinning preserves cache warmth: migration incurs a cold-cache latency hit
Isolcpus removes cores from the default scheduler; only explicitly assigned work runs there
Align thread pinning with NUMA topology for local memory access
Kubernetes' static CPU Manager enables pod-level core pinning
📌 Interview Tips
1. Explain the cache benefit of pinning: a migrated thread starts with cold L1 and L2 caches, costing hundreds of microseconds
2. For low-latency systems, recommend isolcpus plus explicit affinity to prevent kernel preemption
3. When designing container deployments, mention CPU sets to prevent noisy-neighbor preemption