Chapter 22. Feature support and limitations in RHEL 9 virtualization
This document provides information about feature support and restrictions in Red Hat Enterprise Linux 9 (RHEL 9) virtualization.
22.1. How RHEL virtualization support works
A set of support limitations applies to virtualization in Red Hat Enterprise Linux 9 (RHEL 9). This means that when you use certain features or exceed certain resource allocation limits for virtual machines in RHEL 9, Red Hat does not support these guests unless you have a specific subscription plan.
Features listed in Recommended features in RHEL 9 virtualization have been tested and certified by Red Hat to work with the KVM hypervisor on a RHEL 9 system. Therefore, they are fully supported and recommended for use in virtualization in RHEL 9.
Features listed in Unsupported features in RHEL 9 virtualization may work, but are not supported and not intended for use in RHEL 9. Therefore, Red Hat strongly recommends not using these features in RHEL 9 with KVM.
Resource allocation limits in RHEL 9 virtualization lists the maximum amount of specific resources supported on a KVM guest in RHEL 9. Guests that exceed these limits are not supported by Red Hat.
In addition, unless stated otherwise, all features and solutions described in the documentation for RHEL 9 virtualization are supported. However, some of these have not been completely tested and therefore may not be fully optimized.
Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
22.2. Recommended features in RHEL 9 virtualization
The following features are recommended for use with the KVM hypervisor included with Red Hat Enterprise Linux 9 (RHEL 9):
Host system architectures
RHEL 9 with KVM is only supported on the following host architectures:
- AMD64 and Intel 64
- IBM Z - IBM z13 systems and later
Any other hardware architectures are not supported for using RHEL 9 as a KVM virtualization host, and Red Hat highly discourages doing so. Notably, this includes the 64-bit ARM architecture (ARM 64), which is only provided as a Technology Preview.
Guest operating systems
Red Hat provides support for KVM virtual machines that use specific guest operating systems (OSs). For a detailed list of supported guest OSs, see Certified Guest Operating Systems in the Red Hat KnowledgeBase.
Note, however, that by default, your guest OS does not use the same subscription as your host. Therefore, you must activate a separate license or subscription for the guest OS to work properly.
In addition, the pass-through devices that you attach to the VM must be supported by both the host OS and the guest OS.
Similarly, for optimal function of your deployment, Red Hat recommends that the CPU model and features that you define in the XML configuration of a VM are supported by both the host OS and the guest OS.
To view the certified CPUs and other hardware for various versions of RHEL, see the Red Hat Ecosystem Catalog.
Machine types
To ensure that your VM is compatible with your host architecture and that the guest OS runs optimally, the VM must use an appropriate machine type.
In RHEL 9, the pc-i440fx-rhel7.5.0 and earlier machine types, which were the default in earlier major versions of RHEL, are no longer supported. As a consequence, attempting to start a VM with such machine types on a RHEL 9 host fails with an unsupported configuration error. If you encounter this problem after upgrading your host to RHEL 9, see the Red Hat KnowledgeBase.
When creating a VM using the command line, the virt-install utility provides multiple methods of setting the machine type.
- When you use the --os-variant option, virt-install automatically selects the machine type recommended for your host CPU and supported by the guest OS.
- If you do not use --os-variant or require a different machine type, use the --machine option to specify the machine type explicitly (see the example after the following list of machine types).
- If you specify a --machine value that is unsupported or not compatible with your host, virt-install fails and displays an error message.
The recommended machine types for KVM virtual machines on supported architectures, and the corresponding values for the --machine option, are as follows. Y stands for the latest minor version of RHEL 9.
- On Intel 64 and AMD64 (x86_64): pc-q35-rhel9.Y.0 → --machine=q35
- On IBM Z (s390x): s390-ccw-virtio-rhel9.Y.0 → --machine=s390-ccw-virtio
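For illustration, the following alternative invocations sketch both approaches on an Intel 64 or AMD64 host. The VM name, resource sizes, and installation media path are hypothetical placeholders:
# virt-install --name example-vm --memory 2048 --vcpus 2 \
    --disk size=20 --os-variant rhel9.0 \
    --cdrom /path/to/rhel9.iso
# virt-install --name example-vm --memory 2048 --vcpus 2 \
    --disk size=20 --machine=q35 \
    --cdrom /path/to/rhel9.iso
In the first command, virt-install derives the recommended machine type from the OS variant; in the second, --machine=q35 requests it explicitly.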
To obtain the machine type of an existing VM:
# virsh dumpxml VM-name | grep machine=
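The output resembles the following line. The exact machine version depends on your host; this example is illustrative:
<type arch='x86_64' machine='pc-q35-rhel9.4.0'>hvm</type>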
To view the full list of machine types supported on your host:
# /usr/libexec/qemu-kvm -M help
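On an Intel 64 or AMD64 host, the output resembles the following, abbreviated here for illustration and with the machine versions depending on your RHEL minor release:
Supported machines are:
pc-q35-rhel9.2.0     RHEL-9.2.0 PC (Q35 + ICH9, 2009)
pc-q35-rhel9.4.0     RHEL-9.4.0 PC (Q35 + ICH9, 2009) (default)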
22.3. Unsupported features in RHEL 9 virtualization
The following features are not supported by the KVM hypervisor included with Red Hat Enterprise Linux 9 (RHEL 9):
Many of these limitations may not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
Features supported by other virtualization solutions are described as such in the following paragraphs.
Host system architectures
RHEL 9 with KVM is not supported on any host architectures that are not listed in Recommended features in RHEL 9 virtualization.
Notably, the 64-bit ARM architecture (ARM 64) is provided only as a Technology Preview for KVM virtualization on RHEL 9, and Red Hat therefore discourages its use in production environments.
Guest operating systems
KVM virtual machines (VMs) using the following guest operating systems (OSs) on a RHEL 9 host are not supported:
- Microsoft Windows 8.1 and earlier
- Microsoft Windows Server 2008 R2 and earlier
- macOS
- Solaris for x86 systems
- Any OS released prior to 2009
For a list of guest OSs supported on RHEL hosts and other virtualization solutions, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM.
Creating VMs in containers
Red Hat does not support creating KVM virtual machines in any type of container that includes the elements of the RHEL 9 hypervisor (such as the QEMU emulator or the libvirt package).
Other solutions:
- To create VMs in containers, Red Hat recommends using the OpenShift Virtualization offering.
Specific virsh commands and options
Not every parameter that you can use with the virsh utility has been tested and certified as production-ready by Red Hat. Therefore, any virsh commands and options that are not explicitly recommended in Red Hat documentation may not work correctly, and Red Hat recommends not using them in your production environment.
Notably, unsupported virsh commands include the following:
- virsh iface-* commands, such as virsh iface-start and virsh iface-destroy
- virsh blkdeviotune
- virsh snapshot-* commands, such as virsh snapshot-create and virsh snapshot-revert
The QEMU command line
QEMU is an essential component of the virtualization architecture in RHEL 9, but it is difficult to manage manually, and improper QEMU configurations can cause security vulnerabilities. Therefore, using qemu-* command-line utilities, such as qemu-kvm, is not supported by Red Hat.
Instead, use libvirt utilities, such as virt-install, virt-xml, and supported virsh commands, because these orchestrate QEMU according to best practices.
vCPU hot unplug
Removing a virtual CPU (vCPU) from a running VM, also referred to as a vCPU hot unplug, is not supported in RHEL 9.
Memory hot unplug
Removing a memory device attached to a running VM, also referred to as a memory hot unplug, is unsupported in RHEL 9.
QEMU-side I/O throttling
Using the virsh blkdeviotune utility to configure maximum input and output levels for operations on virtual disks, also known as QEMU-side I/O throttling, is not supported in RHEL 9.
To set up I/O throttling in RHEL 9, use virsh blkiotune. This is also known as libvirt-side I/O throttling. For instructions, see Disk I/O throttling in virtual machines.
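For illustration, a minimal sketch that assigns a relative I/O weight to one host block device for a hypothetical VM. The VM name, device path, and weight value are placeholders:
# virsh blkiotune example-vm --device-weights /dev/sda,500 --live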
Other solutions:
- QEMU-side I/O throttling is also supported in RHOSP. For details, see Setting Resource Limitation on Disk and the Use Quality-of-Service Specifications section in the RHOSP Storage Guide.
- In addition, OpenShift Virtualization supports QEMU-side I/O throttling as well.
Storage live migration
Migrating a disk image of a running VM between hosts is not supported in RHEL 9.
Other solutions:
- Storage live migration is also supported in RHOSP, but with some limitations. For details, see Migrate a Volume.
- It is also possible to live-migrate VM storage when using OpenShift Virtualization. For more information, see Virtual machine live migration.
Live snapshots
Creating or loading a snapshot of a running VM, also referred to as a live snapshot, is not supported in RHEL 9.
In addition, note that non-live VM snapshots are deprecated in RHEL 9. Therefore, creating or loading a snapshot of a shut-down VM is supported, but Red Hat recommends not using it.
Other solutions:
- RHOSP also supports live snapshots. For details, see Importing virtual machines into the overcloud.
vHost Data Path Acceleration
On RHEL 9 hosts, it is possible to configure vHost Data Path Acceleration (vDPA) for virtio devices, but Red Hat currently does not support this feature, and strongly discourages its use in production environments.
vhost-user
RHEL 9 does not support the implementation of a user-space vHost interface.
Other solutions:
- vhost-user is supported in RHOSP, but only for virtio-net interfaces. For details, see virtio-net implementation and vhost user ports.
- OpenShift Virtualization supports vhost-user as well.
S3 and S4 system power states
Suspending a VM to the Suspend to RAM (S3) or Suspend to disk (S4) system power states is not supported. Note that these features are disabled by default, and enabling them makes your VM not supportable by Red Hat.
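For reference, the libvirt XML elements that control these states look as follows when both remain disabled, which is the default and supportable configuration:
<pm>
  <suspend-to-mem enabled='no'/>
  <suspend-to-disk enabled='no'/>
</pm>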
Note that the S3 and S4 states are also currently not supported in any other virtualization solution provided by Red Hat.
S3-PR on a multipathed vDisk
SCSI-3 persistent reservation (S3-PR) on a multipathed vDisk is not supported in RHEL 9. As a consequence, Windows Cluster is not supported in RHEL 9.
virtio-crypto
Using the virtio-crypto device in RHEL 9 is not supported and its use is therefore highly discouraged.
Note that virtio-crypto devices are also not supported in any other virtualization solution provided by Red Hat.
Incremental live backup
Configuring a VM backup that only saves VM changes since the last backup, also known as incremental live backup, is not supported in RHEL 9, and Red Hat highly discourages its use.
net_failover
Using the net_failover driver to set up an automated network device failover mechanism is not supported in RHEL 9.
Note that net_failover is also currently not supported in any other virtualization solution provided by Red Hat.
Multi-FD migration
Migrating VMs using multiple file descriptors (FDs), also known as multi-FD migration, is not supported in RHEL 9.
NVMe devices
Attaching Non-volatile Memory Express (NVMe) devices to VMs as PCIe devices by using PCI passthrough is not supported.
Note that attaching NVMe devices to VMs is also currently not supported in any other virtualization solution provided by Red Hat.
TCG
QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This mode does not require hardware virtualization support. However, TCG is not supported by Red Hat.
TCG-based guests can be recognized by examining their XML configuration, for example by using the virsh dumpxml command.
The configuration file of a TCG guest contains the following line:
<domain type='qemu'>
The configuration file of a KVM guest contains the following line:
<domain type='kvm'>
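For example, to check which mode a hypothetical VM named example-vm uses:
# virsh dumpxml example-vm | grep '<domain'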
SR-IOV InfiniBand networking devices
Attaching InfiniBand networking devices to VMs using Single-root I/O virtualization (SR-IOV) is not supported.
22.4. Resource allocation limits in RHEL 9 virtualization
The following limits apply to virtualized resources that can be allocated to a single KVM virtual machine (VM) on a Red Hat Enterprise Linux 9 (RHEL 9) host.
Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
Maximum vCPUs per VM
For the maximum number of vCPUs and the maximum amount of memory supported on a single VM running on a RHEL 9 host, see: Virtualization limits for Red Hat Enterprise Linux with KVM
PCI devices per VM
RHEL 9 supports 32 PCI device slots per VM bus, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 PCI functions per bus when multi-function capabilities are enabled in the VM, and no PCI bridges are used.
Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some buses do not make all 256 device addresses available to the user; for example, the root bus has several built-in devices occupying slots.
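For illustration, a PCI device address in the XML configuration of a VM makes the bus, slot, and function explicit; setting multifunction='on' on function 0 of a slot allows the other seven functions of that slot to be used. The values below are hypothetical:
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>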
Virtualized IDE devices
KVM is limited to a maximum of 4 virtualized IDE devices per VM.
22.5. How virtualization on IBM Z differs from AMD64 and Intel 64
KVM virtualization in RHEL 9 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following ways:
- PCI and USB devices
Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci.
Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.
- Supported guest operating system
- Red Hat only supports VMs hosted on IBM Z if they use RHEL 7, 8, or 9 as their guest operating system.
- Device boot order
IBM Z does not support the <boot dev='device'> XML configuration element. To define device boot order, use the <boot order='number'> element in the <devices> section of the XML.
In addition, you can select the required boot entry using the architecture-specific loadparm attribute in the <boot> element. For example, the following determines that the disk should be used first in the boot sequence and, if a Linux distribution is available on that disk, selects the second boot entry:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
  <boot order='1' loadparm='2'/>
</disk>
Note: Using <boot order='number'> for boot order management is also preferred on AMD64 and Intel 64 hosts.
- Memory hot plug
- Adding memory to a running VM is not possible on IBM Z. Note that removing memory from a running VM (memory hot unplug) is also not possible on IBM Z, as well as on AMD64 and Intel 64.
- NUMA topology
- Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance by using NUMA is not possible on these systems.
- vfio-ap
- VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture.
- vfio-ccw
- VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any other architecture.
- SMBIOS
- SMBIOS configuration is not available on IBM Z.
- Watchdog devices
If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:
<devices>
  <watchdog model='diag288' action='poweroff'/>
</devices>
- kvm-clock
- The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z.
- v2v and p2v
- The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architectures, and are not provided on IBM Z.
- Migrations
To successfully migrate to a later host model (for example, from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode, as shown in the snippet after this list. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe.
If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:
- Do not use CPU models that end with -base.
- Do not use the qemu, max, or host CPU models.
To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end.
- If you have both the source host and the destination host running, you can instead use the virsh hypervisor-cpu-baseline command on the destination host to obtain a suitable CPU model. For details, see Verifying host CPU compatibility for virtual machine migration.
- For more information about supported machine types in RHEL 9, see Recommended features in RHEL 9 virtualization.
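For illustration, the host-model CPU mode is defined in the XML configuration of the VM as follows:
<cpu mode='host-model'/>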
- PXE installation and booting
When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file. For example:
# pxelinux
default linux
label linux
  kernel kernel.img
  initrd initrd.img
  append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/
- Secure Execution
- You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-pv"/> in the XML configuration of the VM. This encrypts the VM's memory to protect it from unwanted access by the hypervisor.
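For illustration, the element is placed at the top level of the domain definition; the rest of the VM configuration is omitted here:
<domain type='kvm'>
  <!-- other VM configuration -->
  <launchSecurity type='s390-pv'/>
</domain>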
Note that the following features are not supported when running a VM in secure execution mode:
- Device passthrough using vfio
- Obtaining memory information using virsh domstats and virsh memstat
- The memballoon and virtio-rng virtual devices
- Memory backing using huge pages
- Live and non-live VM migrations
- Saving and restoring VMs
- VM snapshots, including memory snapshots (using the --memspec option)
- Full memory dumps. Instead, specify the --memory-only option for the virsh dump command, as shown in the example after this list.
- 248 or more vCPUs. The vCPU limit for secure guests is 247.
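For example, a memory-only dump of a hypothetical secure VM; the VM name and output path are placeholders:
# virsh dump example-vm /tmp/example-vm.core --memory-only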
22.6. How virtualization on ARM 64 differs from AMD64 and Intel 64
KVM virtualization in RHEL 9 on ARM 64 systems is different from KVM on AMD64 and Intel 64 systems in a number of aspects. These include, but are not limited to, the following:
- Support
- Virtualization on ARM 64 is only provided as a Technology Preview on RHEL 9, and is therefore unsupported.
- Guest operating systems
- The only guest operating system currently working on ARM 64 virtual machines (VMs) is RHEL 9.
- Web console management
- Some features of VM management in the RHEL 9 web console may not work correctly on ARM 64 hardware.
- vCPU hot plug and hot unplug
- Attaching a virtual CPU (vCPU) to a running VM, also referred to as a vCPU hot plug, is not supported on ARM 64 hosts. In addition, as on AMD64 and Intel 64 hosts, removing a vCPU from a running VM (vCPU hot unplug) is not supported on ARM 64.
- SecureBoot
- The SecureBoot feature is not available on ARM 64 systems.
- PXE
- Booting in the Preboot Execution Environment (PXE) is only possible with the virtio-net-pci network interface controller (NIC). In addition, the built-in VirtioNetDxe driver of the virtual machine UEFI platform firmware (installed with the edk2-aarch64 package) must be used for PXE booting. Note that iPXE option ROMs are not supported.
- Device memory
- Device memory features, such as the dual in-line memory module (DIMM) and non-volatile DIMM (NVDIMM), do not work on ARM 64.
- pvpanic
- The pvpanic device is currently not functional on ARM 64. Make sure to remove the <panic> element from the <devices> section of the guest XML configuration on ARM 64, because its presence can lead to the VM failing to boot. You can edit the configuration directly, as shown below.
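For example, to open the configuration of a hypothetical VM named example-vm for editing and delete any <panic> line manually:
# virsh edit example-vm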
- OVMF
VMs on an ARM 64 host cannot use the OVMF UEFI firmware used on AMD64 and Intel 64, which is included in the edk2-ovmf package. Instead, these VMs use UEFI firmware included in the edk2-aarch64 package, which provides a similar interface and implements a similar set of features.
Specifically, edk2-aarch64 provides a built-in UEFI shell, but does not support the following functionality:
- SecureBoot
- Management Mode
- TPM-1.2 support
- kvm-clock
- The kvm-clock service does not have to be configured for time management in VMs on ARM 64.
- Peripheral devices
- ARM 64 systems do not support all the peripheral devices that are supported on AMD64 and Intel 64 systems. In some cases, the device functionality is not supported at all, and in other cases, a different device is supported for the same functionality.
- Serial console configuration
- When setting up a serial console on a VM, use the console=ttyAMA0 kernel option instead of console=ttyS0 with the grubby utility, for example as shown below.
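A minimal sketch, run inside the guest, that applies the option to all installed kernels:
# grubby --update-kernel=ALL --args="console=ttyAMA0"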
- Non-maskable interrupts
- Sending non-maskable interrupts (NMIs) to an ARM 64 VM is currently not possible.
- Nested virtualization
- Creating nested VMs is currently not possible on ARM 64 hosts.
- v2v and p2v
- The virt-v2v and virt-p2v utilities are only supported on the AMD64 and Intel 64 architectures and are therefore not provided on ARM 64.
22.7. An overview of virtualization features support in RHEL 9
The following tables provide comparative information about the support state of selected virtualization features in RHEL 9 across the available system architectures.
Table 22.1. General support
| Intel 64 and AMD64 | IBM Z | ARM 64 |
|---|---|---|
| Supported | Supported | UNSUPPORTED |
Table 22.2. Device hot plug and hot unplug
| Feature | Intel 64 and AMD64 | IBM Z | ARM 64 |
|---|---|---|---|
| CPU hot plug | Supported | Supported | UNAVAILABLE |
| CPU hot unplug | UNSUPPORTED | UNSUPPORTED | UNAVAILABLE |
| Memory hot plug | Supported | UNSUPPORTED | UNAVAILABLE |
| Memory hot unplug | UNSUPPORTED | UNSUPPORTED | UNAVAILABLE |
| Peripheral device hot plug | Supported | Supported [a] | Available but UNSUPPORTED |
| Peripheral device hot unplug | Supported | Supported [b] | Available but UNSUPPORTED |
Table 22.3. Other selected features
| Feature | Intel 64 and AMD64 | IBM Z | ARM 64 |
|---|---|---|---|
| NUMA tuning | Supported | UNSUPPORTED | UNAVAILABLE |
| SR-IOV devices | Supported | UNSUPPORTED | UNAVAILABLE |
| virt-v2v and p2v | Supported | UNSUPPORTED | UNAVAILABLE |
Note that some of the unsupported features are supported on other Red Hat products, such as Red Hat Virtualization and Red Hat OpenStack Platform. For more information, see Unsupported features in RHEL 9 virtualization.