3.11. Hypervisors

Red Hat OpenStack Platform is only supported for use with the libvirt driver (using KVM as the hypervisor on Compute nodes).
With this release of Red Hat OpenStack Platform, Ironic is now fully supported. Ironic allows you to provision bare-metal machines using common technologies (such as PXE boot and IPMI) to cover a wide range of hardware while supporting pluggable drivers to allow the addition of vendor-specific functionality.
Red Hat does not provide support for other Compute virtualization drivers such as the deprecated VMware "direct-to-ESX" hypervisor, and non-KVM libvirt hypervisors.

3.11.1. Hypervisor configuration basics

The nova-compute service runs on the same node that hosts the virtual machines it manages. This node is referred to as the compute node in this guide.
By default, the selected hypervisor is KVM. To change to another hypervisor, change the virt_type option in the [libvirt] section of nova.conf and restart the nova-compute service.
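For example, the following sketch switches a compute node to the QEMU hypervisor and restarts the service. The openstack-nova-compute unit name is an assumption here and can differ depending on your release and how the service was installed:
[libvirt]
virt_type = qemu

# systemctl restart openstack-nova-compute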
The general nova.conf options used to configure the compute node's hypervisor are listed in Table 3.30, “Description of hypervisor configuration options”.
Specific options for particular hypervisors can be found in the following sections.

3.11.2. KVM

KVM is configured as the default hypervisor for Compute.
Note
This document contains several sections about hypervisor selection. If you are reading this document linearly, do not load the KVM module before you install nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules; this rule sets the correct permissions on the /dev/kvm device node.
To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm
The KVM hypervisor supports the following virtual machine image formats:
  • Raw
  • QEMU Copy-on-write (qcow2)
  • QED (QEMU Enhanced Disk)
  • VMware virtual machine disk format (vmdk)
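If you are unsure which format an existing image uses, the qemu-img utility can report it. The image path below is only a placeholder:
# qemu-img info /tmp/test.qcow2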
This section describes how to enable KVM on your system. For more information, see Installing virtualization packages on an existing Red Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide.

3.11.2.1. Enable KVM

The following sections outline how to enable KVM based hardware virtualization on different architectures and platforms. To perform these steps, you must be logged in as the root user.
3.11.2.1.1. For x86 based systems
  1. To determine whether the svm or vmx CPU extensions are present, run this command:
    # grep -E 'svm|vmx' /proc/cpuinfo
    This command generates output if the CPU is capable of hardware virtualization. Even if output is shown, you might still need to enable virtualization in the system BIOS for full support.
    If no output appears, consult your system documentation to ensure that your CPU and motherboard support hardware virtualization. Verify that any relevant hardware virtualization options are enabled in the system BIOS.
    The BIOS for each manufacturer is different. If you must enable virtualization in the BIOS, look for an option containing the words virtualization, VT, VMX, or SVM.
  2. To list the loaded kernel modules and verify that the kvm modules are loaded, run this command:
    # lsmod | grep kvm
    If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
    If the output does not show that the kvm module is loaded, run this command to load it:
    # modprobe -a kvm
    Run the command for your CPU. For Intel, run this command:
    # modprobe -a kvm-intel
    For AMD, run this command:
    # modprobe -a kvm-amd
    Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.
    If the kernel modules do not load automatically, use the procedures listed in these subsections.
If the checks indicate that required hardware virtualization support or kernel modules are disabled or unavailable, you must either enable this support on the system or find a system with this support.
Note
Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the previous command did not produce output, reboot your machine, enter the system BIOS, and enable the VT option.
If KVM acceleration is not supported, configure Compute to use a different hypervisor, such as QEMU or Xen.
These procedures help you load the kernel modules for Intel-based and AMD-based processors if they do not load automatically during KVM installation.
3.11.2.1.1.1. Intel-based processors
If your compute host is Intel-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-intel
See Persistent Module Loading in Red Hat Enterprise Linux 6, or Persistent Module Loading in Red Hat Enterprise Linux 7, as appropriate, for instructions on how to load the kvm and kvm-intel modules automatically.
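On Red Hat Enterprise Linux 7, for example, one way to make the modules persistent is a drop-in file that systemd-modules-load reads at boot; the file name used here is arbitrary:
# cat /etc/modules-load.d/kvm.conf
kvm
kvm_intel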
3.11.2.1.1.2. AMD-based processors
If your compute host is AMD-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-amd
See Persistent Module Loading in Red Hat Enterprise Linux 6, or Persistent Module Loading in Red Hat Enterprise Linux 7, as appropriate, for instructions on how to load the kvm and kvm-amd modules automatically.
3.11.2.1.2. For POWER based systems
KVM is supported as a hypervisor on the PowerNV platform of POWER systems.
  1. To determine if your POWER platform supports KVM based virtualization run the following command:
    # grep PowerNV /proc/cpuinfo
    If the previous command generates the following output, then your CPU supports KVM based virtualization:
    platform: PowerNV
    If no output is displayed, then your POWER platform does not support KVM based hardware virtualization.
  2. To list the loaded kernel modules and verify that the kvm modules are loaded, run the following command:
    # lsmod | grep kvm
    If the output includes kvm_hv, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
    If the output does not show that the kvm module is loaded, run the following command to load it:
    # modprobe -a kvm
    For the PowerNV platform, run the following command:
    # modprobe -a kvm-hv
    Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.

3.11.2.2. Specify the CPU model of KVM guests

The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include:
  • To maximize performance of virtual machines by exposing new host CPU features to the guest
  • To ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults
In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names. These models are defined in the /usr/share/libvirt/cpu_map.xml file. Check this file to determine which models are supported by your local installation.
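For example, one quick way to list the model names defined in that file is to search for the model elements; the exact XML layout can vary between libvirt versions:
# grep 'model name' /usr/share/libvirt/cpu_map.xml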
Two Compute configuration options in the [libvirt] group of nova.conf define which type of CPU model is exposed to the hypervisor when using KVM: cpu_mode and cpu_model.
The cpu_mode option can take one of the following values: none, host-passthrough, host-model, and custom.
Host model (default for KVM & QEMU)
If your nova.conf file contains cpu_mode=host-model, libvirt identifies the CPU model in the /usr/share/libvirt/cpu_map.xml file that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance and maintains good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs.
Host pass through
If your nova.conf file contains cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. The difference from host-model is that instead of only matching feature flags, every last detail of the host CPU is matched. This gives the best performance, and can be important to some applications that check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to a host with a matching CPU.
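For example, to request host pass through, your nova.conf file would contain:
[libvirt]
cpu_mode = host-passthrough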
Custom
If your nova.conf file contains cpu_mode=custom, you can explicitly specify one of the supported named models using the cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:
[libvirt]
cpu_mode = custom
cpu_model = Nehalem
None (default for all libvirt-driven hypervisors other than KVM & QEMU)
If your nova.conf file contains cpu_mode=none, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model.

3.11.2.3. Guest agent support

Use guest agents to enable optional access between compute nodes and guests through a socket, using the QMP protocol.
To enable this feature, you must set hw_qemu_guest_agent=yes as a metadata parameter on the image you want to use to create the guest-agent-capable instances from. You can explicitly disable the feature by setting hw_qemu_guest_agent=no in the image metadata.
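For example, one way to set this property on an existing image is with the OpenStack client; IMAGE_ID is a placeholder, and depending on your release the glance client provides an equivalent --property option:
# openstack image set --property hw_qemu_guest_agent=yes IMAGE_ID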

3.11.2.4. KVM performance tweaks

The VHostNet kernel module improves network performance. To load the kernel module, run the following command as root:
# modprobe vhost_net

3.11.2.5. Troubleshoot KVM

Launching a new virtual machine instance fails with an ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:
libvirtError: internal error no supported architecture for os type 'hvm'
This message indicates that the KVM kernel modules were not loaded.
If you cannot start VMs after installation without rebooting, the permissions might not be set correctly. This can happen if you load the KVM module before you install nova-compute. To check whether the group is set to kvm, run:
# ls -l /dev/kvm
If it is not set to kvm, run:
# udevadm trigger
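Alternatively, to print only the owning group of the device node, you can use stat; on a correctly configured compute node this should output kvm:
# stat -c %G /dev/kvm
kvm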

3.11.3. QEMU

From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment.
The typical use cases for QEMU are:
  • Running on older hardware that lacks virtualization support.
  • Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests.
To enable QEMU, add these settings to nova.conf:
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = qemu
For some operations you may also have to install the guestmount utility:
# yum install libguestfs-tools
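For illustration only, a typical guestmount invocation that inspects a guest disk image read-only looks like the following; the image path and mount point are placeholders:
# guestmount -a /path/to/disk.qcow2 -i --ro /mnt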
The QEMU hypervisor supports the following virtual machine image formats:
  • Raw
  • QEMU Copy-on-write (qcow2)
  • VMware virtual machine disk format (vmdk)