CPU Affinity Glossary
Gradual loss of intended CPU placement.
CPU cores left unusable due to rigid pinning.
Application whose performance depends heavily on CPU placement.
Pods with no CPU guarantees or affinity.
Pod class with flexible CPU allocation.
Performance benefit of accessing data already present in CPU cache.
Event where requested data is not found in cache.
Frequent cache invalidation due to poor CPU placement.
State where CPU cache contains relevant data for a workload.
Throttling caused by CFS enforcing CPU limits.
Kernel feature used to limit and isolate CPU resources.
Linux scheduler whose load balancing behavior interacts with CPU affinity.
CPU switching execution between tasks.
Performance cost incurred during task switching.
Assigning workloads to specific CPU cores instead of allowing free migration.
Locking execution to specific physical CPU cores.
Binding a process or thread to specific CPU cores to improve cache locality and performance predictability.
Measuring performance impact of CPU binding.
Guidelines to balance predictability and flexibility.
Used to stabilize latency and throughput for databases.
Essential for predictable scaling in HPC workloads.
Reduces tail latency for inference workloads.
Improves data preprocessing and model training performance.
Used to isolate packet processing and reduce jitter.
Restricting container CPU usage for predictability.
Critical optimization to avoid remote memory penalties.
Kubernetes feature providing exclusive CPUs to pods.
Bitmask defining which CPUs a process or thread may use.
Observing CPU placement and utilization.
Balancing performance determinism against scheduler freedom.
Manual optimization of CPU placement.
Multiple workloads competing for the same CPU cores.
Reserving CPU cores exclusively for specific workloads.
Automatic redistribution of tasks across CPUs, sometimes overridden by affinity.
Overloaded CPU core while others remain idle.
Movement of a task from one CPU core to another.
Allocating more virtual CPUs than physical cores.
Explicitly fixing workloads to selected CPU cores.
Hard limit on CPU time for a container.
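On cgroup v2 this hard limit is the cpu.max interface file, written as "quota period" in microseconds; a sketch assuming a hypothetical group named mygroup:

```shell
# Allow at most 50 ms of CPU time per 100 ms period (i.e. half a CPU)
echo "50000 100000" > /sys/fs/cgroup/mygroup/cpu.max
```

When the quota is exhausted within a period, the group's tasks are throttled until the next period begins.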
Time a vCPU waits to be scheduled by the hypervisor.
OS component responsible for selecting which task runs on which CPU.
Linux mechanism restricting CPU and NUMA node usage for processes.
Relative CPU weight assigned to containers.
Practice of combining isolated cores, cpuset, IRQ affinity, and QoS settings to keep background and interrupt workloads off dedicated cores reserved for real-time or performance-critical tasks.
Workloads unable to get CPU time due to affinity misconfiguration.
Time a VM waits while physical CPU is used by others.
CPU execution across sockets, often increasing latency.
cgroup controller managing CPU and memory locality.
Ability of the OS, hypervisor, or scheduler to account for sockets, cores, SMT threads, and NUMA nodes when placing tasks, so that CPU affinity improves cache and memory locality instead of hurting it.
Forced reduction in CPU usage due to quotas.
Physical core reserved for a single workload (VM, pod, or process) with no time-sharing, typically configured via CPU isolation and CPU Manager static policy for low-latency or real-time tasks.
Performance issue caused by multiple CPUs modifying the same cache line.
Pod class enabling strict CPU affinity.
Running multiple logical threads on a single physical core.
Cores reserved for OS and background tasks.
Strict binding where a workload can run only on specified CPUs.
CPU cycles unavailable to virtual machines.
CPU cores removed from general-purpose scheduling.
Service that distributes interrupts across CPUs.
Binding hardware interrupts to specific CPUs.
Excessive interrupts degrading CPU performance.
Guaranteed CPU allocation for pods.
Maximum CPU usage allowed for pods.
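In a pod spec, CPU requests and limits are declared per container under resources; a fragment with hypothetical names (an integer request equal to the limit also makes the pod eligible for the Guaranteed QoS class, a precondition for exclusive CPUs under the CPU Manager static policy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app          # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest   # hypothetical image
    resources:
      requests:
        cpu: "2"
      limits:
        cpu: "2"
```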
OS-visible execution unit, including hyperthreads.
Performance issue caused by scheduler decisions.
Uneven CPU utilization caused by poor affinity.
Loss of affinity after VM migration.
Cache and scheduling penalty caused by CPU migration.
Tool-based method to bind workloads to CPUs and NUMA nodes.
Aligning vCPUs with physical NUMA nodes.
Binding CPUs and memory together to reduce remote access latency.
Allowing VMs to span multiple NUMA nodes.
Performance penalty when VMs cross NUMA boundaries.
Aligning CPU and memory placement within the same NUMA node.
Network packet processing model sensitive to CPU locality.
Excessive pinning that reduces scheduler flexibility.
Binding workloads to physical cores instead of logical threads.
OS-level mechanism that restricts which CPUs a workload can execute on.
Running related processes on nearby CPUs.
Assigning an entire process to a defined set of CPU cores.
Inefficient distribution of CPU cores across pods.
Actual hardware execution core.
Number of tasks waiting on a CPU, indicating contention.
Per-CPU queue holding runnable tasks.
Network traffic distribution across CPUs.
Binding latency-sensitive workloads to dedicated CPUs.
Scheduler policy prioritizing time-critical workloads.
Influence of affinity rules on OS scheduling decisions.
Grouping of CPUs used by the scheduler for load balancing.
Scheduler priorities overridden by affinity rules.
Periodic interrupt used for scheduling decisions.
Core on which multiple workloads time-share CPU cycles under the scheduler, even if they have affinity constraints; typical for Burstable and BestEffort pods or overcommitted vCPUs.
Deciding whether workloads should share sibling hyperthreads.
One of the logical CPUs that share a single physical core’s execution resources under SMT/Hyper-Threading; critical when deciding whether to co-locate noisy workloads on the same core.
Preferred CPU placement where migration is allowed if necessary.
Deferred interrupt processing that consumes CPU time.
Lock contention worsened by poor CPU placement.
Binding storage interrupts to CPUs for predictable I/O performance.
Linux command used to view or set CPU affinity.
Binding individual threads to specific CPU cores.
Preventing threads from migrating between CPU cores.
Pinning worker threads for consistent performance.
Kubernetes component aligning CPU, memory, and devices.
Aligning outgoing network traffic with specific CPUs.
Binding virtual CPUs to physical CPU cores.
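With libvirt-managed VMs this binding is expressed as vcpupin elements in the domain XML's cputune section; for example:

```xml
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
</cputune>
```

Here vCPU 0 may only run on physical CPU 2 and vCPU 1 on physical CPU 3, giving the guest a stable vCPU-to-pCPU mapping.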
Relationship between virtual and physical CPUs.
Gradual loss of NUMA locality over time.