Chapter 4. Virtualization Restrictions

This chapter covers additional support and product restrictions of the virtualization packages in Red Hat Enterprise Linux 6.

4.1. KVM Restrictions

The following restrictions apply to the KVM hypervisor:
Maximum vCPUs per guest
The maximum number of virtual CPUs supported per guest depends on the minor version of Red Hat Enterprise Linux 6 used as the host machine. Red Hat Enterprise Linux 6.0 supported a maximum of 64 virtual CPUs per guest, and 6.3 raised this maximum to 160. As of Red Hat Enterprise Linux 6.7, a maximum of 240 virtual CPUs per guest is supported.
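On hosts with a sufficiently recent libvirt, the supported maximum for the current hypervisor can be queried directly (a sketch; the value reported depends on the installed libvirt and kernel versions):
    # virsh maxvcpus kvm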
Constant TSC bit
Systems without a Constant Time Stamp Counter require additional configuration. Refer to Chapter 14, KVM Guest Timing Management for details on determining whether you have a Constant Time Stamp Counter and configuration steps for fixing any related issues.
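As a quick check, the constant_tsc flag in /proc/cpuinfo indicates a Constant Time Stamp Counter:
    # grep constant_tsc /proc/cpuinfo
If the command prints one or more flags lines, the host CPU has a constant TSC; no output means it does not.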
Memory overcommit
KVM supports memory overcommit and can store the memory of guest virtual machines in swap. A virtual machine will run slower if it is swapped frequently. Red Hat Knowledgebase has an article on safely and efficiently determining an appropriate size for the swap partition, available here: https://access.redhat.com/site/solutions/15244. When KSM is used for memory overcommitting, make sure that the swap size follows the recommendations described in this article.
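As a rough starting point for that sizing exercise, the following shell sketch compares the total maximum memory of all libvirt-defined guests against the host's physical memory (an estimate only; it assumes every guest is defined in libvirt and is not a substitute for the Knowledgebase recommendations):
    total=0
    for dom in $(virsh list --all --name); do
        # virsh dominfo reports "Max memory" in KiB
        mem=$(virsh dominfo "$dom" | awk '/^Max memory/ {print $3}')
        total=$((total + mem))
    done
    host=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
    echo "Committed guest memory: ${total} KiB, host memory: ${host} KiB"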

Important

When device assignment is in use, all virtual machine memory must be statically pre-allocated to enable DMA with the assigned device. Memory overcommit is therefore not supported with device assignment.
CPU overcommit
It is not recommended to have more than 10 virtual CPUs per physical processor core. Customers are encouraged to use a capacity planning tool to determine the CPU overcommit ratio. Estimating an ideal ratio is difficult because it is highly dependent on the workload. For instance, a guest virtual machine may consume 100% CPU in one use case, while multiple guests may be completely idle in another.
Red Hat does not support assigning more vCPUs to a single guest than the total number of physical cores on the host system. While hyperthreads can be counted as cores, their performance varies from one scenario to the next, and they should not be expected to perform as well as physical cores.
Refer to the Red Hat Enterprise Linux Virtualization Administration Guide for tips and recommendations on overcommitting CPUs.
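As an illustration, the following shell sketch computes the current vCPU-to-core ratio (it assumes all guests are defined in libvirt, and it deliberately counts physical cores rather than hyperthreads):
    # Unique (physical id, core id) pairs correspond to physical cores;
    # hyperthreads share a pair and are therefore not double-counted.
    cores=$(awk -F: '/^physical id/ {p=$2} /^core id/ {print p","$2}' /proc/cpuinfo | sort -u | wc -l)
    vcpus=0
    for dom in $(virsh list --all --name); do
        vcpus=$((vcpus + $(virsh dominfo "$dom" | awk '/^CPU\(s\)/ {print $2}')))
    done
    echo "Defined vCPUs: ${vcpus}, physical cores: ${cores}"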
Virtualized SCSI devices
SCSI emulation is not supported with KVM in Red Hat Enterprise Linux.
Virtualized IDE devices
KVM is limited to a maximum of four virtualized (emulated) IDE devices per guest virtual machine.
PCI devices
Red Hat Enterprise Linux 6 supports 32 PCI device slots per virtual machine, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 PCI functions per guest when multi-function capabilities are enabled.
However, this theoretical maximum is subject to the following limitations:
  • Each virtual machine supports a maximum of 8 assigned device functions.
  • By default, 4 PCI device slots are configured with 5 emulated devices (two devices share slot 1). However, if the guest operating system does not require them for operation, users can explicitly remove 2 of these default emulated devices: the video adapter device in slot 2, and the memory balloon driver device in the lowest available slot (usually slot 3). This gives users a supported functional maximum of 30 PCI device slots per virtual machine; see the sketch after this list.
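For example, a minimal sketch of freeing these two slots with virsh (the guest name rhel6guest is a placeholder; verify the element names against your guest's domain XML before removing anything):
    # virsh edit rhel6guest
In the editor, replace the <memballoon> element with:
    <memballoon model='none'/>
and delete the <video> element, along with any <graphics> element that depends on it.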
The following restrictions also apply to PCI device assignment:
  • PCI device assignment (attaching PCI devices to virtual machines) requires host systems to have AMD IOMMU or Intel VT-d support to enable device assignment of PCIe devices; see the verification sketch after this list.
  • For parallel/legacy PCI, only single devices behind a PCI bridge are supported.
  • Multiple PCIe endpoints connected through a non-root PCIe switch require ACS support in the PCIe bridges of the PCIe switch. To disable this restriction, edit the /etc/libvirt/qemu.conf file and insert the line:
    relaxed_acs_check=1
    Then restart the libvirtd service for the setting to take effect.
  • Red Hat Enterprise Linux 6 has limited PCI configuration space access by guest device drivers. This limitation could cause drivers that are dependent on PCI configuration space to fail configuration.
  • Red Hat Enterprise Linux 6.2 introduced interrupt remapping as a requirement for PCI device assignment. If your platform does not provide support for interrupt remapping, circumvent the KVM check for this support with the following command as the root user at the command line prompt:
    # echo 1 > /sys/module/kvm/parameters/allow_unsafe_assigned_interrupts
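To verify the IOMMU prerequisite mentioned in the first item above, check the kernel boot messages (a sketch; the exact messages vary by platform and BIOS):
    # dmesg | grep -e DMAR -e IOMMU
Messages mentioning DMAR (Intel VT-d) or AMD-Vi/IOMMU (AMD) indicate that IOMMU support is enabled. To make the interrupt remapping override above persist across reboots, set it as a module option instead, for example in a file such as /etc/modprobe.d/kvm.conf (a placeholder file name):
    options kvm allow_unsafe_assigned_interrupts=1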
Migration restrictions
Device assignment refers to physical devices that have been exposed to a virtual machine for the exclusive use of that virtual machine. Because device assignment uses hardware on the specific host where the virtual machine runs, migration and save/restore are not supported while device assignment is in use. If the guest operating system supports hot plugging, assigned devices can be removed prior to the migration or save/restore operation to allow those operations to proceed.
Live migration is only possible between hosts with the same CPU type (that is, Intel to Intel or AMD to AMD only).
For live migration, both hosts must have the same value set for the No eXecution (NX) bit, either on or off.
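A quick way to compare this setting on both hosts is to check for the nx CPU flag, which is absent when the bit is disabled or masked:
    # grep -wo nx /proc/cpuinfo | sort -u
Output of nx means the bit is enabled; no output means it is disabled.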
For migration to work, cache=none must be specified for all block devices opened in write mode; a domain XML sketch follows the warning below.

Warning

Failing to include the cache=none option can result in disk corruption.
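For reference, a sketch of where the setting belongs in the libvirt domain XML (the disk type, image path, and target shown here are placeholders; adjust them to your configuration):
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>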
Storage restrictions
There are risks associated with giving guest virtual machines write access to entire disks or block devices (such as /dev/sdb). If a guest virtual machine has access to an entire block device, it can share any volume label or partition table with the host machine. If bugs exist in the host system's partition recognition code, this can create a security risk. Avoid this risk by configuring the host machine to ignore devices assigned to a guest virtual machine.
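One common way to do this, as a sketch (assuming the guest was given the whole of /dev/sdb and that the host uses LVM), is to add a filter to the devices section of /etc/lvm/lvm.conf so that the host's LVM tools never scan the device:
    filter = [ "r|^/dev/sdb$|", "a|.*|" ]
Similar considerations apply to any other host component that scans block devices for labels or partition tables.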

Warning

Failing to adhere to these storage restrictions can create a security risk.
SR-IOV restrictions
SR-IOV is only thoroughly tested with the following devices (other SR-IOV devices may work but have not been tested at the time of release):
  • Intel® 82576NS Gigabit Ethernet Controller (igb driver)
  • Intel® 82576EB Gigabit Ethernet Controller (igb driver)
  • Intel® 82599ES 10 Gigabit Ethernet Controller (ixgbe driver)
  • Intel® 82599EB 10 Gigabit Ethernet Controller (ixgbe driver)
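For example, on the igb-based controllers above, Virtual Functions on Red Hat Enterprise Linux 6 are typically enabled through the driver's max_vfs module option (the value 7 below is an example; reloading the module drops connectivity on any host interface the driver serves, so run this from a console):
    # modprobe -r igb
    # modprobe igb max_vfs=7
    # lspci | grep -i "Virtual Function"
The lspci output should list one Virtual Function entry per enabled VF. To persist the option across reboots, add options igb max_vfs=7 to a file in /etc/modprobe.d/.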
Core dumping restrictions
Core dumping uses the same infrastructure as migration and requires more device knowledge and control than device assignment can provide. Therefore, core dumping is not supported when device assignment is in use.