33.8. Setting KVM processor affinities
The first step in deciding which policy to apply is to determine the host's memory and CPU topology. The
virsh nodeinfo command reports how many sockets, cores and hyperthreads are attached to the host.
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       1000 MHz
CPU socket(s):       2
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         8179176 kB
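The CPU(s) value follows directly from the topology fields; a quick sanity check, using the values from the output above:

```python
# Values taken from the virsh nodeinfo output above.
sockets = 2
cores_per_socket = 4
threads_per_core = 1

# Total logical CPUs = sockets x cores per socket x threads per core.
total_cpus = sockets * cores_per_socket * threads_per_core
print(total_cpus)  # matches the CPU(s): 8 field
```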
Run the virsh capabilities command to get additional data on the CPU configuration.
# virsh capabilities
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'>
            <cpu id='4'/>
            <cpu id='5'/>
            <cpu id='6'/>
            <cpu id='7'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
    </secmodel>
  </host>
[ Additional XML removed ]
</capabilities>
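The topology section can also be read programmatically. A minimal sketch using Python's standard library, with a trimmed sample of the XML above pasted in (on a live host the XML would come from running virsh capabilities instead):

```python
import xml.etree.ElementTree as ET

# Trimmed sample of the <topology> section from `virsh capabilities` above.
capabilities_xml = """
<topology>
  <cells num='2'>
    <cell id='0'>
      <cpus num='4'>
        <cpu id='0'/> <cpu id='1'/> <cpu id='2'/> <cpu id='3'/>
      </cpus>
    </cell>
    <cell id='1'>
      <cpus num='4'>
        <cpu id='4'/> <cpu id='5'/> <cpu id='6'/> <cpu id='7'/>
      </cpus>
    </cell>
  </cells>
</topology>
"""

# Map each NUMA cell id to the list of physical CPU ids it contains.
topology = ET.fromstring(capabilities_xml)
cells = {
    cell.get('id'): [cpu.get('id') for cpu in cell.iter('cpu')]
    for cell in topology.iter('cell')
}
print(cells)  # {'0': ['0', '1', '2', '3'], '1': ['4', '5', '6', '7']}
```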
Locking a guest to a particular NUMA node offers no benefit if that node does not have sufficient free memory for that guest. libvirt stores information on the free memory available on each node. Use the
virsh freecell command to display the free memory on all NUMA nodes.
# virsh freecell
0: 2203620 kB
1: 3354784 kB
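Choosing the node with the most free memory from this output is easy to script. A small sketch, using the sample output above as input (on a live host the text could be captured from the virsh freecell command instead):

```python
# Sample `virsh freecell` output from above.
freecell_output = """\
0: 2203620 kB
1: 3354784 kB"""

# Parse "node: free kB" lines into {node: free_kb}.
free_per_node = {}
for line in freecell_output.splitlines():
    node, amount = line.split(':')
    free_per_node[int(node)] = int(amount.split()[0])

# Pick the node with the most free memory.
best_node = max(free_per_node, key=free_per_node.get)
print(best_node)  # node 1 has more free memory in this example
```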
Once you have determined which node to run the guest on, refer to the capabilities data (the output of the
virsh capabilities command) for the NUMA topology.
- Extract the NUMA cell topology from the virsh capabilities output:
  <topology>
    <cells num='2'>
      <cell id='0'>
        <cpus num='4'>
          <cpu id='0'/>
          <cpu id='1'/>
          <cpu id='2'/>
          <cpu id='3'/>
        </cpus>
      </cell>
      <cell id='1'>
        <cpus num='4'>
          <cpu id='4'/>
          <cpu id='5'/>
          <cpu id='6'/>
          <cpu id='7'/>
        </cpus>
      </cell>
    </cells>
  </topology>
- Observe that node 1, <cell id='1'>, has physical CPUs 4 to 7.
- The guest can be locked to a set of CPUs by appending the cpuset attribute to the configuration file.
- While the guest is offline, open the configuration file with virsh edit.
- Locate where the guest's virtual CPU count is specified, in the <vcpus> element:
  <vcpus>4</vcpus>
  The guest in this example has four virtual CPUs.
- Add a cpuset attribute with the CPU numbers for the relevant NUMA cell.
- Save the configuration file and restart the guest.
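Putting the steps together, the edited element for this example, pinning the four virtual CPUs to node 1's physical CPUs 4 to 7, would look like:

```xml
<vcpus cpuset='4-7'>4</vcpus>
```

The cpuset value accepts ranges as shown, and individual CPUs can also be listed, for example cpuset='4,5,6,7'.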
The virt-install provisioning tool provides a simple way to automatically apply a 'best fit' NUMA policy when guests are created. The cpuset option for virt-install can take a set of processors or the parameter auto. The auto parameter automatically determines the optimal CPU locking using the available NUMA data. Use the cpuset option with the virt-install command when creating new guests.
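For example, a guest could be created with automatic placement like this (the guest name, memory size and disk path are illustrative, not from the original):

```shell
# --cpuset auto lets virt-install pick an optimal CPU set from the
# host's NUMA data; a fixed set such as --cpuset 4-7 works as well.
virt-install --name guest1 --ram 2048 --vcpus 4 --cpuset auto \
    --disk path=/var/lib/libvirt/images/guest1.img,size=8
```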
There may be times when modifying CPU affinities on running guests is preferable to rebooting the guest. The
virsh vcpuinfo and
virsh vcpupin commands can perform CPU affinity changes on running guests.
The virsh vcpuinfo command gives up-to-date information about where each virtual CPU is running.
# virsh vcpuinfo guest1
VCPU:           0
CPU:            3
State:          running
CPU time:       0.5s
CPU Affinity:   yyyyyyyy
VCPU:           1
CPU:            1
State:          running
CPU Affinity:   yyyyyyyy
VCPU:           2
CPU:            1
State:          running
CPU Affinity:   yyyyyyyy
VCPU:           3
CPU:            2
State:          running
CPU Affinity:   yyyyyyyy
The CPU Affinity field in the virsh vcpuinfo output (yyyyyyyy) shows that the guest can presently run on any CPU. Pin each virtual CPU to a physical CPU in node 1 with the virsh vcpupin command.
# virsh vcpupin guest1 0 4
# virsh vcpupin guest1 1 5
# virsh vcpupin guest1 2 6
# virsh vcpupin guest1 3 7
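The four vcpupin calls above follow a simple pattern. A small sketch that builds the same command strings for any node's CPU list (the helper name is hypothetical; the commands are only generated here, not executed):

```python
def vcpupin_commands(guest, physical_cpus):
    """Build one `virsh vcpupin` command per virtual CPU,
    pinning virtual CPU n to the n-th physical CPU in the list."""
    return [
        f"virsh vcpupin {guest} {vcpu} {pcpu}"
        for vcpu, pcpu in enumerate(physical_cpus)
    ]

# Node 1's physical CPUs from the capabilities output above.
for cmd in vcpupin_commands("guest1", [4, 5, 6, 7]):
    print(cmd)
```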
The virsh vcpuinfo command confirms the change in affinity.
# virsh vcpuinfo guest1
VCPU:           0
CPU:            4
State:          running
CPU time:       32.2s
CPU Affinity:   ----y---
VCPU:           1
CPU:            5
State:          running
CPU time:       16.9s
CPU Affinity:   -----y--
VCPU:           2
CPU:            6
State:          running
CPU time:       11.9s
CPU Affinity:   ------y-
VCPU:           3
CPU:            7
State:          running
CPU time:       14.6s
CPU Affinity:   -------y
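The CPU Affinity field is one character per physical CPU: y where the virtual CPU may run and - where it may not. A small decoding helper (the function is hypothetical, not part of virsh):

```python
def decode_affinity(affinity):
    """Return the physical CPU ids a virtual CPU may run on,
    given a virsh affinity string such as '----y---'."""
    return [cpu for cpu, flag in enumerate(affinity) if flag == 'y']

print(decode_affinity('----y---'))  # VCPU 0 above: pinned to CPU 4
print(decode_affinity('yyyyyyyy'))  # unpinned: may run on CPUs 0-7
```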