Chapter 22. Feature support and limitations in RHEL 9 virtualization

This document provides information about feature support and restrictions in Red Hat Enterprise Linux 9 (RHEL 9) virtualization.

22.1. How RHEL virtualization support works

A set of support limitations applies to virtualization in Red Hat Enterprise Linux 9 (RHEL 9). This means that when you use certain features or exceed a certain amount of allocated resources for virtual machines in RHEL 9, Red Hat does not support these guests unless you have a specific subscription plan.

Features listed in Recommended features in RHEL 9 virtualization have been tested and certified by Red Hat to work with the KVM hypervisor on a RHEL 9 system. Therefore, they are fully supported and recommended for use in virtualization in RHEL 9.

Features listed in Unsupported features in RHEL 9 virtualization may work, but are not supported and not intended for use in RHEL 9. Therefore, Red Hat strongly recommends not using these features in RHEL 9 with KVM.

Resource allocation limits in RHEL 9 virtualization lists the maximum amount of specific resources supported on a KVM guest in RHEL 9. Guests that exceed these limits are not supported by Red Hat.

In addition, unless stated otherwise, all features and solutions used in the documentation for RHEL 9 virtualization are supported. However, some of these have not been completely tested and therefore may not be fully optimized.

Important

Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).

22.3. Unsupported features in RHEL 9 virtualization

The following features are not supported by the KVM hypervisor included with Red Hat Enterprise Linux 9 (RHEL 9):

Important

Many of these limitations may not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).

Features supported by other virtualization solutions are described as such in the following paragraphs.

Host system architectures

RHEL 9 with KVM is not supported on any host architectures that are not listed in Recommended features in RHEL 9 virtualization.

Notably, the 64-bit ARM architecture (ARM 64) is provided only as a Technology Preview for KVM virtualization on RHEL 9, and Red Hat therefore discourages its use in production environments.

Guest operating systems

KVM virtual machines (VMs) that use the following guest operating systems (OSs) are not supported on a RHEL 9 host:

  • Microsoft Windows 8.1 and earlier
  • Microsoft Windows Server 2008 R2 and earlier
  • macOS
  • Solaris for x86 systems
  • Any OS released before 2009

For a list of guest OSs supported on RHEL hosts and other virtualization solutions, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM.

Creating VMs in containers

Red Hat does not support creating KVM virtual machines in any type of container that includes the elements of the RHEL 9 hypervisor (such as the QEMU emulator or the libvirt package).

To create VMs in containers, Red Hat recommends using the OpenShift Virtualization offering.

Specific virsh commands and options

Not every parameter that you can use with the virsh utility has been tested and certified as production-ready by Red Hat. Therefore, any virsh commands and options that are not explicitly recommended by Red Hat documentation may not work correctly, and Red Hat recommends not using them in your production environment.

Notably, unsupported virsh commands include the following:

  • virsh iface-* commands, such as virsh iface-start and virsh iface-destroy
  • virsh blkdeviotune
  • virsh snapshot-* commands, such as virsh snapshot-create and virsh snapshot-revert

The QEMU command line

QEMU is an essential component of the virtualization architecture in RHEL 9, but it is difficult to manage manually, and improper QEMU configurations might cause security vulnerabilities. Therefore, using qemu-* command-line utilities, such as qemu-kvm, is not supported by Red Hat. Instead, use libvirt utilities, such as virt-install, virt-xml, and supported virsh commands, because these orchestrate QEMU according to best practices. However, the qemu-img utility is supported for the management of virtual disk images.
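
For example, instead of invoking qemu-kvm directly, you can create a VM with virt-install and inspect its disk image with qemu-img. The following commands are an illustrative sketch only; the VM name, resource sizes, and image paths are assumptions:

# virt-install --name demo-vm --memory 2048 --vcpus 2 \
    --disk size=20 --cdrom /var/lib/libvirt/images/rhel9.iso \
    --os-variant rhel9.0

# qemu-img info /var/lib/libvirt/images/demo-vm.qcow2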

vCPU hot unplug

Removing a virtual CPU (vCPU) from a running VM, also referred to as a vCPU hot unplug, is not supported in RHEL 9.

Memory hot unplug

Removing a memory device attached to a running VM, also referred to as a memory hot unplug, is unsupported in RHEL 9.

QEMU-side I/O throttling

Using the virsh blkdeviotune utility to configure maximum input and output levels for operations on virtual disks, also known as QEMU-side I/O throttling, is not supported in RHEL 9.

To set up I/O throttling in RHEL 9, use virsh blkiotune. This is also known as libvirt-side I/O throttling. For instructions, see Disk I/O throttling in virtual machines.
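
For example, the following sketch assumes a VM named testguest1 and limits read I/O operations on one of its disks to 1000 per second; the device path and value are illustrative and should be adjusted for your environment:

# virsh blkiotune testguest1 --device-read-iops-sec /dev/sda,1000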

Storage live migration

Migrating a disk image of a running VM between hosts is not supported in RHEL 9.

Other solutions:

  • Storage live migration is supported in RHOSP, but with some limitations. For details, see Migrate a Volume.

Live snapshots

Creating or loading a snapshot of a running VM, also referred to as a live snapshot, is not supported in RHEL 9.

In addition, note that non-live VM snapshots are deprecated in RHEL 9. Therefore, creating or loading a snapshot of a shut-down VM is supported, but Red Hat recommends against using it.

vHost Data Path Acceleration

On RHEL 9 hosts, it is possible to configure vHost Data Path Acceleration (vDPA) for virtio devices, but Red Hat currently does not support this feature, and strongly discourages its use in production environments.

vhost-user

RHEL 9 does not support the implementation of a user-space vHost interface.

S3 and S4 system power states

Suspending a VM to the Suspend to RAM (S3) or Suspend to disk (S4) system power states is not supported. Note that these features are disabled by default, and enabling them makes your VM unsupported by Red Hat.

Note that the S3 and S4 states are also currently not supported in any other virtualization solution provided by Red Hat.
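
For reference, the S3 and S4 states are controlled by the <pm> element in the VM's XML configuration. A minimal sketch of how the default, disabled state typically appears:

<pm>
  <suspend-to-mem enabled='no'/>
  <suspend-to-disk enabled='no'/>
</pm>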

S3-PR on a multipathed vDisk

SCSI3 persistent reservation (S3-PR) on a multipathed vDisk is not supported in RHEL 9. As a consequence, Windows Cluster is not supported in RHEL 9.

virtio-crypto

Using the virtio-crypto device in RHEL 9 is not supported and its use is therefore highly discouraged.

Note that virtio-crypto devices are also not supported in any other virtualization solution provided by Red Hat.

Incremental live backup

Configuring a VM backup that only saves VM changes since the last backup, also known as incremental live backup, is not supported in RHEL 9, and Red Hat highly discourages its use.

net_failover

Using the net_failover driver to set up an automated network device failover mechanism is not supported in RHEL 9.

Note that net_failover is also currently not supported in any other virtualization solution provided by Red Hat.

Multi-FD migration

Migrating VMs using multiple file descriptors (FDs), also known as multi-FD migration, is not supported in RHEL 9.

TCG

QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This mode does not require hardware virtualization support. However, TCG is not supported by Red Hat.

TCG-based guests can be recognized by examining their XML configuration, for example by using the virsh dumpxml command (see the example after the following list).

  • The configuration file of a TCG guest contains the following line:

    <domain type='qemu'>
  • The configuration file of a KVM guest contains the following line:

    <domain type='kvm'>
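
For example, assuming a VM named testguest, the domain type can be checked with a command such as:

# virsh dumpxml testguest | grep "<domain type"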

SR-IOV InfiniBand networking devices

Attaching InfiniBand networking devices to VMs using Single-root I/O virtualization (SR-IOV) is not supported.

22.4. Resource allocation limits in RHEL 9 virtualization

The following limits apply to virtualized resources that can be allocated to a single KVM virtual machine (VM) on a Red Hat Enterprise Linux 9 (RHEL 9) host.

Important

Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).

Maximum vCPUs per VM

For the maximum number of vCPUs and the maximum amount of memory supported on a single VM running on a RHEL 9 host, see: Virtualization limits for Red Hat Enterprise Linux with KVM.

PCI devices per VM

RHEL 9 supports 32 PCI device slots per VM bus, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 PCI functions per bus when multi-function capabilities are enabled in the VM, and no PCI bridges are used.

Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some buses do not make all 256 device addresses available for the user; for example, the root bus has several built-in devices occupying slots.
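
For illustration, multi-function use of a slot is enabled on function 0x0 of that slot in the VM's XML configuration; the slot number in the following sketch is an arbitrary example:

<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>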

Virtualized IDE devices

KVM is limited to a maximum of 4 virtualized IDE devices per VM.

22.5. How virtualization on IBM Z differs from AMD64 and Intel 64

KVM virtualization in RHEL 9 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following ways:

PCI and USB devices

Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci.

Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.

Supported guest operating system
Red Hat only supports VMs hosted on IBM Z if they use RHEL 7, 8, or 9 as their guest operating system.
Device boot order

IBM Z does not support the <boot dev='device'> XML configuration element. To define device boot order, use the <boot order='number'> element in the <devices> section of the XML.

In addition, you can select the required boot entry by using the architecture-specific loadparm attribute in the <boot> element. For example, the following configuration determines that the disk is used first in the boot sequence and, if a Linux distribution is available on that disk, selects the second boot entry:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
  <boot order='1' loadparm='2'/>
</disk>
Note

Using <boot order='number'> for boot order management is also preferred on AMD64 and Intel 64 hosts.

Memory hot plug
Adding memory to a running VM is not possible on IBM Z. Note that removing memory from a running VM (memory hot unplug) is also not possible on IBM Z, as well as on AMD64 and Intel 64.
NUMA topology
Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance by using NUMA is not possible on these systems.
vfio-ap
VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture.
vfio-ccw
VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any other architecture.
SMBIOS
SMBIOS configuration is not available on IBM Z.
Watchdog devices

If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:

<devices>
  <watchdog model='diag288' action='poweroff'/>
</devices>
kvm-clock
The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z.
v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architecture, and are not provided on IBM Z.
Migrations

To successfully migrate to a later host model (for example from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe.

If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:

  • Do not use CPU models that end with -base.
  • Do not use the qemu, max, or host CPU models.

To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end.
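
For example, the host-model mode is set in the <cpu> element of the VM's XML configuration:

<cpu mode='host-model'/>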

PXE installation and booting

When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file. For example:

# pxelinux
default linux
label linux
kernel kernel.img
initrd initrd.img
append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/
Secure Execution
You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-pv"/> in the XML configuration of the VM. This encrypts the VM’s memory to protect it from unwanted access by the hypervisor.
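
For example, the element is placed directly under the root <domain> element of the VM's configuration; a minimal sketch:

<domain type='kvm'>
  ...
  <launchSecurity type='s390-pv'/>
  ...
</domain>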

Note that the following features are not supported when running a VM in secure execution mode:

  • Device passthrough by using vfio
  • Obtaining memory information by using virsh domstats and virsh memstat
  • The memballoon and virtio-rng virtual devices
  • Memory backing by using huge pages
  • Live and non-live VM migrations
  • Saving and restoring VMs
  • VM snapshots, including memory snapshots (using the --memspec option)
  • Full memory dumps. Instead, specify the --memory-only option for the virsh dump command, as shown in the example after this list.
  • 248 or more vCPUs. The vCPU limit for secure guests is 247.
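
A hedged example of creating a memory-only dump, assuming a secure VM named secure-guest and an arbitrary output path:

# virsh dump secure-guest /var/tmp/secure-guest.dump --memory-only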

22.6. How virtualization on ARM 64 differs from AMD64 and Intel 64

KVM virtualization in RHEL 9 on ARM 64 systems is different from KVM on AMD64 and Intel 64 systems in a number of aspects. These include, but are not limited to, the following:

Support
Virtualization on ARM 64 is only provided as a Technology Preview on RHEL 9, and is therefore unsupported.
Guest operating systems
The only guest operating system currently working on ARM 64 virtual machines (VMs) is RHEL 9.
Web console management
Some features of VM management in the RHEL 9 web console may not work correctly on ARM 64 hardware.
vCPU hot plug and hot unplug
Attaching a virtual CPU (vCPU) to a running VM, also referred to as a vCPU hot plug, is not supported on ARM 64 hosts. In addition, like on AMD64 and Intel 64 hosts, removing a vCPU from a running VM (vCPU hot unplug), is not supported on ARM 64.
SecureBoot
The SecureBoot feature is not available on ARM 64 systems.
PXE
Booting in the Preboot Execution Environment (PXE) is only possible with the virtio-net-pci network interface controller (NIC). In addition, the built-in VirtioNetDxe driver of the virtual machine UEFI platform firmware (installed with the edk2-aarch64 package) needs to be used for PXE booting. Note that iPXE option ROMs are not supported.
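
For example, a minimal sketch of a PXE-bootable network interface definition that uses the virtio-net-pci NIC; the network name is an assumption:

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <boot order='1'/>
</interface>
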
Device memory
Device memory features, such as the dual in-line memory module (DIMM) and non-volatile DIMM (NVDIMM), do not work on ARM 64.
pvpanic
The pvpanic device is currently not functional on ARM 64. Make sure to remove the <panic> element from the <devices> section of the guest XML configuration on ARM 64, as its presence can lead to the VM failing to boot.
OVMF

VMs on an ARM 64 host cannot use the OVMF UEFI firmware used on AMD64 and Intel 64, included in the edk2-ovmf package. Instead, these VMs use UEFI firmware included in the edk2-aarch64 package, which provides a similar interface and implements a similar set of features.

Specifically, edk2-aarch64 provides a built-in UEFI shell, but does not support the following functionality:

  • SecureBoot
  • Management Mode
  • TPM-1.2 support
kvm-clock
The kvm-clock service does not have to be configured for time management in VMs on ARM 64.
Peripheral devices
ARM 64 systems do not support all the peripheral devices that are supported on AMD64 and Intel 64 systems. In some cases, the device functionality is not supported at all, and in other cases, a different device is supported for the same functionality.
Serial console configuration
When setting up a serial console on a VM, use the console=ttyAMA0 kernel option instead of console=ttyS0 with the grubby utility.
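
For example, the kernel option can be set with grubby as follows; applying it to all installed kernels is an illustrative choice:

# grubby --update-kernel=ALL --args="console=ttyAMA0"
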
Non-maskable interrupts
Sending non-maskable interrupts (NMIs) to an ARM 64 VM is currently not possible.
Nested virtualization
Creating nested VMs is currently not possible on ARM 64 hosts.
v2v and p2v
The virt-v2v and virt-p2v utilities are only supported on the AMD64 and Intel 64 architecture and are, therefore, not provided on ARM 64.

22.7. An overview of virtualization features support in RHEL 9

The following tables provide comparative information about the support state of selected virtualization features in RHEL 9 across the available system architectures.

Table 22.1. General support

Intel 64 and AMD64    IBM Z        ARM 64
Supported             Supported    UNSUPPORTED (Technology Preview)

Table 22.2. Device hot plug and hot unplug

                               Intel 64 and AMD64    IBM Z            ARM 64
CPU hot plug                   Supported             Supported        UNAVAILABLE
CPU hot unplug                 UNSUPPORTED           UNSUPPORTED      UNAVAILABLE
Memory hot plug                Supported             UNSUPPORTED      UNAVAILABLE
Memory hot unplug              UNSUPPORTED           UNSUPPORTED      UNAVAILABLE
Peripheral device hot plug     Supported             Supported [a]    Available but UNSUPPORTED
Peripheral device hot unplug   Supported             Supported [a]    Available but UNSUPPORTED

[a] Requires using virtio-*-ccw devices instead of virtio-*-pci

Table 22.3. Other selected features

                     Intel 64 and AMD64    IBM Z          ARM 64
NUMA tuning          Supported             UNSUPPORTED    UNAVAILABLE
SR-IOV devices       Supported             UNSUPPORTED    UNAVAILABLE
virt-v2v and p2v     Supported             UNSUPPORTED    UNAVAILABLE

Note that some of the unsupported features are supported on other Red Hat products, such as Red Hat Virtualization and Red Hat OpenStack Platform. For more information, see Unsupported features in RHEL 9 virtualization.