23.12. CPU Models and Topology

This section covers the requirements for CPU models. Note that every hypervisor has its own policy for which CPU features a guest will see by default. The set of CPU features presented to the guest by KVM depends on the CPU model chosen in the guest virtual machine configuration. qemu32 and qemu64 are basic CPU models, but other models (with additional features) are available. Each model and its topology are specified using the following elements from the domain XML:
<cpu match='exact'>
    <model fallback='allow'>core2duo</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='2' threads='1'/>
    <feature policy='disable' name='lahf_lm'/>
</cpu>

Figure 23.14. CPU model and topology example 1
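On reasonably recent libvirt versions, the CPU models known for a given architecture can be listed with virsh; the architecture argument and the output vary by host:

```shell
# List the CPU model names libvirt knows for an architecture
# (output depends on the host's libvirt version and cpu_map.xml).
virsh cpu-models x86_64
```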

<cpu mode='host-model'>
   <model fallback='forbid'/>
   <topology sockets='1' cores='2' threads='1'/>
</cpu>

Figure 23.15. CPU model and topology example 2

<cpu mode='host-passthrough'/>

Figure 23.16. CPU model and topology example 3

In cases where no restrictions are to be put on the CPU model or its features, a simpler <cpu> element such as the following may be used:
<cpu>
   <topology sockets='1' cores='2' threads='1'/>
</cpu>

Figure 23.17. CPU model and topology example 4

<cpu mode='custom'>
   <model>POWER8</model>
</cpu>

Figure 23.18. PPC64/PSeries CPU model example

<cpu mode='host-passthrough'/>

Figure 23.19. aarch64/virt CPU model example

The components of this section of the domain XML are as follows:

Table 23.8. CPU model and topology elements

Element Description
<cpu> This is the main container for describing guest virtual machine CPU requirements.
<match> Specifies how closely the virtual CPU provided to the guest virtual machine must match these requirements. The match attribute can be omitted if topology is the only element within <cpu>. Possible values for the match attribute are:
  • minimum - the specified CPU model and features describe the minimum requested CPU.
  • exact - the virtual CPU provided to the guest virtual machine will exactly match the specification.
  • strict - the guest virtual machine will not be created unless the host physical machine CPU exactly matches the specification.
Note that the match attribute can be omitted and will default to exact.
<mode> This optional attribute may be used to make it easier to configure a guest virtual machine CPU to be as close to the host physical machine CPU as possible. Possible values for the mode attribute are:
  • custom - Describes how the CPU is presented to the guest virtual machine. This is the default setting when the mode attribute is not specified. This mode ensures that a persistent guest virtual machine sees the same hardware no matter which host physical machine it is booted on.
  • host-model - A shortcut to copying host physical machine CPU definition from the capabilities XML into the domain XML. As the CPU definition is copied just before starting a domain, the same XML can be used on different host physical machines while still providing the best guest virtual machine CPU each host physical machine supports. The match attribute and any feature elements cannot be used in this mode. For more information, see the libvirt upstream website.
  • host-passthrough - With this mode, the CPU visible to the guest virtual machine is exactly the same as the host physical machine CPU, including elements that cause errors within libvirt. The obvious downside of this mode is that the guest virtual machine environment cannot be reproduced on different hardware; therefore, this mode should be used with great caution. The model and feature elements are not allowed in this mode.
<model> Specifies the CPU model requested by the guest virtual machine. The list of available CPU models and their definitions can be found in the cpu_map.xml file installed in libvirt's data directory. If a hypervisor is unable to use the exact CPU model, libvirt automatically falls back to the closest model supported by the hypervisor while maintaining the list of CPU features. An optional fallback attribute can be used to forbid this behavior, in which case an attempt to start a domain requesting an unsupported CPU model will fail. Supported values for the fallback attribute are: allow (the default), and forbid. The optional vendor_id attribute can be used to set the vendor ID seen by the guest virtual machine. It must be exactly 12 characters long. If not set, the vendor ID of the host physical machine is used. Typical possible values are AuthenticAMD and GenuineIntel.
<vendor> Specifies the CPU vendor requested by the guest virtual machine. If this element is missing, the guest virtual machine runs on a CPU matching given features regardless of its vendor. The list of supported vendors can be found in cpu_map.xml.
<topology> Specifies the requested topology of the virtual CPU provided to the guest virtual machine. Three non-zero values must be given for sockets, cores, and threads: the total number of CPU sockets, number of cores per socket, and number of threads per core, respectively.
<feature> Can contain zero or more elements used to fine-tune features provided by the selected CPU model. The list of known feature names can be found in the cpu_map.xml file. The meaning of each feature element depends on its policy attribute, which has to be set to one of the following values:
  • force - forces the feature to be supported by the virtual CPU, regardless of whether it is actually supported by the host physical machine CPU.
  • require - dictates that guest virtual machine creation will fail unless the feature is supported by the host physical machine CPU. This is the default setting.
  • optional - the feature is supported by the virtual CPU, but only if it is supported by the host physical machine CPU.
  • disable - the feature is not supported by the virtual CPU.
  • forbid - guest virtual machine creation will fail if the feature is supported by the host physical machine CPU.
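As an illustrative sketch (not taken from the figures above), the elements described in this table can be combined in a single <cpu> element; the model and feature names follow the earlier core2duo example, and the vmx feature name and GenuineIntel vendor_id are assumptions for illustration:

```xml
<cpu match='exact'>
    <model fallback='forbid' vendor_id='GenuineIntel'>core2duo</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='2' threads='1'/>
    <feature policy='require' name='lahf_lm'/>
    <feature policy='disable' name='vmx'/>
</cpu>
```

Note that vendor_id must be exactly 12 characters long, as GenuineIntel is here.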

23.12.1. Changing the Feature Set for a Specified CPU

Although CPU models have an inherent feature set, the individual feature components can be allowed or forbidden on a feature-by-feature basis, allowing for a more individualized configuration of the CPU.

Procedure 23.1. Enabling and disabling CPU features

  1. To begin, shut down the guest virtual machine.
  2. Open the guest virtual machine's configuration file by running the virsh edit [domain] command.
  3. Change the parameters within the <feature> or <model> elements to include the attribute value 'allow' to force the feature to be allowed, or 'forbid' to deny support for the feature.
    <!-- original feature set -->
    <cpu mode='host-model'>
       <model fallback='allow'/>
       <topology sockets='1' cores='2' threads='1'/>
    </cpu>
    
    <!--changed feature set-->
    <cpu mode='host-model'>
       <model fallback='forbid'/>
       <topology sockets='1' cores='2' threads='1'/>
    </cpu>

    Figure 23.20. Example for enabling or disabling CPU features

    <!--original feature set-->
    <cpu match='exact'>
        <model fallback='allow'>core2duo</model>
        <vendor>Intel</vendor>
        <topology sockets='1' cores='2' threads='1'/>
        <feature policy='disable' name='lahf_lm'/>
    </cpu>
    
    <!--changed feature set-->
    <cpu match='exact'>
        <model fallback='allow'>core2duo</model>
        <vendor>Intel</vendor>
        <topology sockets='1' cores='2' threads='1'/>
        <feature policy='enable' name='lahf_lm'/>
    </cpu>

    Figure 23.21. Example 2 for enabling or disabling CPU features

  4. When you have completed the changes, save the configuration file and start the guest virtual machine.
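The procedure above can be sketched as a short command sequence; the domain name guest1 is a hypothetical placeholder:

```shell
virsh shutdown guest1   # 1. shut down the guest virtual machine
virsh edit guest1       # 2.-3. edit the <feature> or <model> elements
virsh start guest1      # 4. start the guest with the changed feature set
```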

23.12.2. Guest Virtual Machine NUMA Topology

Guest virtual machine NUMA topology can be specified using the <numa> element in the domain XML:

  <cpu>
    <numa>
      <cell cpus='0-3' memory='512000'/>
      <cell cpus='4-7' memory='512000'/>
    </numa>
  </cpu>
  ...

Figure 23.22. Guest virtual machine NUMA topology

Each cell element specifies a NUMA cell or a NUMA node. cpus specifies the CPU or range of CPUs that are part of the node. memory specifies the node memory in kibibytes (blocks of 1024 bytes). Each cell or node is assigned a cellid or nodeid in increasing order starting from 0.
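As a quick check of the memory figures in the example above, each cell's 512000 KiB works out to 500 MiB, for 1000 MiB across both cells (a minimal shell sketch):

```shell
# memory is given in KiB (1 KiB = 1024 bytes; 1 MiB = 1024 KiB)
echo $((512000 / 1024))       # MiB per cell: 500
echo $((2 * 512000 / 1024))   # MiB across both cells: 1000
```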