
CPU Affinity Glossary

A
Affinity Drift

Gradual loss of intended CPU placement.

Affinity Fragmentation

CPU cores left unusable due to rigid pinning.

Affinity-Sensitive Workload

Application whose performance depends heavily on CPU placement.

B
Best-Effort QoS Class

Pods with no CPU guarantees or affinity.

Burstable QoS Class

Pod class with flexible CPU allocation.

C
Cache Locality

Performance benefit of accessing data already present in CPU cache.

Cache Miss

Event where requested data is not found in cache.

Cache Thrashing

Frequent cache invalidation due to poor CPU placement.

Cache Warmth

State where CPU cache contains relevant data for a workload.

CFS Quota Throttling

Throttling caused by CFS enforcing CPU limits.
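Under cgroup v2 the limit lives in the cpu.max file as "quota period" in microseconds, and the effective CPU count is their ratio. A minimal sketch of that arithmetic (the helper name is illustrative):

```python
def cfs_cpu_limit(quota_us, period_us=100000):
    """Effective CPU limit implied by a CFS bandwidth setting.

    cgroup v2 writes this as "<quota> <period>" to cpu.max: a task group
    may run for quota_us microseconds in each period_us window, i.e.
    quota/period CPUs' worth of time. Exceeding the quota within a
    period triggers throttling until the next period begins.
    """
    return quota_us / period_us
```

For example, "50000 100000" caps a container at half a CPU, so two busy threads will spend part of every period throttled.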

cgroups (CPU)

Kernel feature used to limit and isolate CPU resources.

Completely Fair Scheduler (CFS)

Linux scheduler whose load balancing behavior interacts with CPU affinity.

Context Switch

CPU switching execution between tasks.

Context Switch Overhead

Performance cost incurred during task switching.

Core Affinity

Assigning workloads to specific CPU cores instead of allowing free migration.

Core Pinning

Locking execution to specific physical CPU cores.

CPU Affinity

Binding a process or thread to specific CPU cores to improve cache locality and performance predictability.
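On Linux this is exposed through the sched_setaffinity(2) system call, which Python wraps as os.sched_setaffinity. A minimal sketch, assuming a Linux host (the helper name is illustrative; the API is absent on other platforms):

```python
import os

def pin_to_cpus(pid, cpus):
    """Bind a process (pid 0 = the caller) to the given CPU set.

    Returns the resulting affinity set, or None on platforms that
    do not expose the Linux sched_setaffinity API.
    """
    if not hasattr(os, "sched_setaffinity"):
        return None  # e.g. macOS or Windows
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)
```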

CPU Affinity Benchmarking

Measuring performance impact of CPU binding.

CPU Affinity Best Practices

Guidelines to balance predictability and flexibility.

CPU Affinity for Databases

Used to stabilize latency and throughput for databases.

CPU Affinity for HPC

Essential for predictable scaling in HPC workloads.

CPU Affinity for Inference

Reduces tail latency for inference workloads.

CPU Affinity for ML Training

Improves data preprocessing and model training performance.

CPU Affinity for Networking

Used to isolate packet processing and reduce jitter.

CPU Affinity in Containers

Restricting container CPU usage for predictability.

CPU Affinity in NUMA Systems

Critical optimization to avoid remote memory penalties.

CPU Affinity Monitoring

Observing CPU placement and utilization.

CPU Affinity Trade-off

Balancing performance determinism against scheduler freedom.

CPU Affinity Tuning

Manual optimization of CPU placement.

CPU Manager (Static Policy)

Kubernetes feature providing exclusive CPUs to pods.

CPU Mask

Bitmask defining which CPUs a process or thread may use.
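A CPU mask is simply an integer in which bit i grants access to CPU i; taskset prints it in hex. A small sketch of converting both ways (helper names are illustrative):

```python
def cpus_to_mask(cpus):
    """Encode a set of CPU indices as an affinity bitmask (bit i = CPU i)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

def mask_to_cpus(mask):
    """Decode an affinity bitmask back into the set of CPU indices."""
    return {i for i in range(mask.bit_length()) if mask >> i & 1}
```

A mask of 0x13, for example, allows CPUs 0, 1, and 4.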

CPU Contention

Multiple workloads competing for the same CPU cores.

CPU Hotspot

Overloaded CPU core while others remain idle.

CPU Isolation

Reserving CPU cores exclusively for specific workloads.

CPU Load Balancing

Automatic redistribution of tasks across CPUs, sometimes overridden by affinity.

CPU Migration

Movement of a task from one CPU core to another.

CPU Overcommitment

Allocating more virtual CPUs than physical cores.

CPU Pinning

Explicitly fixing workloads to selected CPU cores.

CPU Quota

Hard limit on CPU time for a container.

CPU Ready Time

Time a vCPU waits to be scheduled by the hypervisor.

CPU Scheduler

OS component responsible for selecting which task runs on which CPU.

CPU Set (cpuset)

Linux mechanism restricting CPU and NUMA node usage for processes.
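With cgroup v2, the restriction is applied by writing CPU and NUMA-node lists into a group's cpuset.cpus and cpuset.mems files. A sketch that only computes the writes rather than performing them (the group name and the default /sys/fs/cgroup mount point are assumptions):

```python
def cpuset_writes(group, cpus, mems):
    """(path, value) pairs that would confine a cgroup-v2 group to the
    given CPU list and NUMA node list, e.g. cpus="0-3", mems="0"."""
    base = f"/sys/fs/cgroup/{group}"
    return [
        (f"{base}/cpuset.cpus", cpus),  # allowed CPUs
        (f"{base}/cpuset.mems", mems),  # allowed NUMA nodes
    ]
```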

cpuset Controller

cgroup controller managing CPU and memory locality.

CPU Shares

Relative CPU weight assigned to containers.

CPU Shielding

Practice of combining isolated cores, cpuset, IRQ affinity, and QoS settings to keep background and interrupt workloads off dedicated cores reserved for real-time or performance-critical tasks.

CPU Starvation

Workloads unable to get CPU time due to affinity misconfiguration.

CPU Steal Time

Time a VM waits while physical CPU is used by others.

CPU Throttling

Forced reduction in CPU usage due to quotas.

CPU Topology Awareness

Ability of the OS, hypervisor, or scheduler to account for sockets, cores, SMT threads, and NUMA nodes when placing tasks, so that CPU affinity improves cache and memory locality instead of hurting it.

Cross-Socket Execution

CPU execution across sockets, often increasing latency.
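CPU shielding typically starts at the kernel command line, before cpusets and IRQ affinity take over at runtime. A fragment dedicating cores 2-5 might look like the following (the exact core range is an assumption):

```
isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5
```

Here isolcpus removes the cores from general scheduling, nohz_full suppresses the periodic scheduler tick on them, and rcu_nocbs offloads RCU callbacks to the remaining housekeeping cores.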

E
Exclusive CPU Core

Physical core reserved for a single workload (VM, pod, or process) with no time-sharing, typically configured via CPU isolation and CPU Manager static policy for low-latency or real-time tasks.

F
False Sharing

Performance issue caused by multiple CPUs modifying the same cache line.

G
Guaranteed QoS Class

Pod class enabling strict CPU affinity.
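A pod lands in the Guaranteed class when every container's requests equal its limits; with the kubelet's static CPU Manager policy and an integer CPU count, it then receives exclusive cores. A minimal sketch (pod name, image, and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app              # illustrative name
spec:
  containers:
  - name: app
    image: example/app:latest   # illustrative image
    resources:
      requests:
        cpu: "4"                # integer CPUs, equal to limits ->
        memory: 8Gi             # Guaranteed QoS, eligible for exclusive cores
      limits:
        cpu: "4"
        memory: 8Gi
```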

H
Hard Affinity

Strict binding where a workload can run only on specified CPUs.

Housekeeping Cores

Cores reserved for OS and background tasks.

Hyperthreading (SMT)

Running multiple logical threads on a single physical core.

Hypervisor Steal Cycles

CPU cycles taken from a VM because the hypervisor is running other guests.

I
Interrupt Storm

Excessive interrupts degrading CPU performance.

IRQ Affinity

Binding hardware interrupts to specific CPUs.

IRQ Balance

Service that distributes interrupts across CPUs.

Isolated Cores

CPU cores removed from general-purpose scheduling.

K
Kubernetes CPU Limits

Maximum CPU usage allowed for pods.

Kubernetes CPU Requests

Guaranteed CPU allocation for pods.

L
Live Migration CPU Impact

Loss of affinity after VM migration.

Load Imbalance

Uneven CPU utilization caused by poor affinity.

Lock Holder Preemption

Delay caused when a thread or vCPU holding a lock is preempted, leaving other CPUs spinning on that lock.

Logical CPU

OS-visible execution unit, including hyperthreads.

M
Migration Cost

Cache and scheduling penalty caused by CPU migration.

N
NAPI Polling

Network packet processing model sensitive to CPU locality.

NUMA Affinity

Aligning CPU and memory placement within the same NUMA node.

NUMA-Aware CPU Affinity

Binding CPUs and memory together to reduce remote access latency.

NUMA-Aware vCPU Placement

Aligning vCPUs with physical NUMA nodes.

NUMA Boundary Crossing

Performance penalty when VMs cross NUMA boundaries.

numactl CPU Binding

Tool-based method to bind workloads to CPUs and NUMA nodes.

NUMA Spanning

Allowing VMs to span multiple NUMA nodes.

O
Over-Pinning

Excessive pinning that reduces scheduler flexibility.

P
Physical Core Affinity

Binding workloads to physical cores instead of logical threads.

Physical CPU

Actual hardware execution core.

Pod CPU Fragmentation

Inefficient distribution of CPU cores across pods.

Process Affinity

Assigning an entire process to a defined set of CPU cores.

Process Co-Location

Running related processes on nearby CPUs.

Processor Affinity

OS-level mechanism that restricts which CPUs a workload can execute on.

R
Real-Time CPU Affinity

Binding latency-sensitive workloads to dedicated CPUs.

Real-Time Scheduling

Scheduler policy prioritizing time-critical workloads.

Receive Side Scaling (RSS)

Network traffic distribution across CPUs.

Runqueue

Per-CPU queue holding runnable tasks.

Runqueue Length

Number of tasks waiting on a CPU, indicating contention.

S
Scheduler Affinity

Influence of affinity rules on OS scheduling decisions.

Scheduler Domain

Grouping of CPUs used by the scheduler for load balancing.

Scheduler Inversion

Situation where affinity constraints force the scheduler to run a lower-priority task while a higher-priority task waits for its allowed CPUs.

Scheduler Tick

Periodic interrupt used for scheduling decisions.

Shared CPU Core

Core on which multiple workloads time-share CPU cycles under the scheduler, even if they have affinity constraints; typical for Burstable and BestEffort pods or overcommitted vCPUs.

SMT Affinity

Deciding whether workloads should share sibling hyperthreads.

SMT Sibling (Hyperthread Sibling)

One of the logical CPUs that share a single physical core’s execution resources under SMT/Hyper-Threading; critical when deciding whether to co-locate noisy workloads on the same core.
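On Linux the sibling relationship is visible in sysfs. A guarded sketch that reads it (the helper name is illustrative; it returns None where the topology files are absent):

```python
def smt_siblings(cpu):
    """Raw sibling list for `cpu`'s physical core, read from sysfs.

    Returns a CPU-list string such as "0,16" or "0-1", or None if the
    file is unavailable (non-Linux host, restricted container, ...).
    """
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None
```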

Soft Affinity

Preferred CPU placement where migration is allowed if necessary.

SoftIRQ

Deferred interrupt processing that consumes CPU time.

Spinlock Contention

Lock contention worsened by poor CPU placement.

Storage IRQ Affinity

Binding storage interrupts to CPUs for predictable I/O performance.

T
taskset

Linux command used to view or set CPU affinity.
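For example, `taskset -c 0-3,8 ./app` launches a program restricted to those CPUs, and `taskset -cp <pid>` shows or changes a running process's affinity. The CPU-list syntax it accepts can be parsed in a few lines (the function name is illustrative):

```python
def parse_cpu_list(spec):
    """Expand taskset/cpuset CPU-list syntax such as "0-3,8" into a set."""
    cpus = set()
    for part in spec.split(","):
        lo, _, hi = part.partition("-")      # "0-3" -> ("0", "-", "3")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus
```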

Thread Affinity

Binding individual threads to specific CPU cores.

Thread Pinning

Preventing threads from migrating between CPU cores.
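Because Linux affinity applies per kernel thread, a worker can pin itself before doing its work. A guarded sketch using Python threads (the worker's computation is a stand-in for real work):

```python
import os
import threading

def pinned_worker(cpu, out):
    """Pin the calling thread to one CPU (where supported), then work."""
    if hasattr(os, "sched_setaffinity") and cpu in os.sched_getaffinity(0):
        os.sched_setaffinity(0, {cpu})  # 0 = the calling thread
    out.append(sum(range(10_000)))      # stand-in for real work

def run_pinned(cpus):
    """Run one worker per requested CPU index and collect their results."""
    out = []
    workers = [threading.Thread(target=pinned_worker, args=(c, out))
               for c in cpus]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return out
```

Affinity set this way is per thread, so the workers' pinning does not leak into the main thread.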

Thread Pool Affinity

Pinning worker threads for consistent performance.

Topology Manager

Kubernetes component aligning CPU, memory, and devices.

Transmit Queue Affinity

Aligning outgoing network traffic with specific CPUs.

V
vCPU Pinning

Binding virtual CPUs to physical CPU cores.
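In libvirt, for instance, this is expressed with vcpupin elements inside cputune in the domain XML. A minimal fragment pinning two vCPUs (the physical core numbers are assumptions):

```xml
<domain type='kvm'>
  <vcpu>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>  <!-- vCPU 0 -> physical CPU 2 -->
    <vcpupin vcpu='1' cpuset='3'/>  <!-- vCPU 1 -> physical CPU 3 -->
  </cputune>
</domain>
```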

vCPU to pCPU Mapping

Relationship between virtual and physical CPUs.

vNUMA Affinity Drift

Gradual loss of NUMA locality over time.


