
Chapter 4. Getting started with virtualization on IBM Z

You can use KVM virtualization when using RHEL 8 on IBM Z hardware. However, enabling the KVM hypervisor on your system requires extra steps compared to virtualization on AMD64 and Intel 64 architectures. Certain RHEL 8 virtualization features also have different or restricted functionality on IBM Z.

Apart from the information in the following sections, using virtualization on IBM Z works the same as on AMD64 and Intel 64. Therefore, you can refer to other RHEL 8 virtualization documentation for more information when using virtualization on IBM Z.


Important: Running KVM on the z/VM OS is not supported.

4.1. Enabling virtualization on IBM Z

To set up a KVM hypervisor and create virtual machines (VMs) on an IBM Z system running RHEL 8, follow the instructions below.


Prerequisites

  • RHEL 8.6 or later is installed and registered on your host machine.


    If you already enabled virtualization on an IBM Z machine using RHEL 8.5 or earlier, you should instead reconfigure your virtualization module and update your system. For instructions, see Updating virtualization on IBM Z from RHEL 8.5 to RHEL 8.6 or later.

  • The following minimum system resources are available:

    • 6 GB free disk space for the host, plus another 6 GB for each intended VM.
    • 2 GB of RAM for the host, plus another 2 GB for each intended VM.
    • 4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive during high load.
  • Your IBM Z host system is using a z13 CPU or later. To confirm this, see the machine type check after this list.
  • RHEL 8 is installed on a logical partition (LPAR). In addition, the LPAR supports the start-interpretive execution (SIE) virtualization functions.

    To verify this, search for sie in your /proc/cpuinfo file.

    # grep sie /proc/cpuinfo
    features        : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te sie
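
To confirm the CPU generation requirement, you can check the machine type that the kernel reports. A minimal sketch with illustrative output; the mapping of machine types to models (for example, 2964 for z13) is an assumption to verify against IBM documentation:

    # grep machine /proc/cpuinfo
    processor 0: version = FF, identification = 0133E8, machine = 2964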


Procedure

  1. Load the KVM kernel module:

    # modprobe kvm
  2. Verify that the KVM kernel module is loaded:

    # lsmod | grep kvm

    If KVM loaded successfully, the output of this command includes kvm.

  3. Install the packages in the virt:rhel/common module:

    # yum module install virt:rhel/common
  4. Start the virtualization services:

    # for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
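
The brace expansion in this loop starts three socket units for each virtualization driver, for example virtqemud.socket, virtqemud-ro.socket, and virtqemud-admin.socket. To make the setup persist across reboots, you can also load the kvm module and enable the sockets at boot time; a minimal sketch, assuming a standard systemd-based RHEL 8 host:

    # echo kvm > /etc/modules-load.d/kvm.conf
    # for drv in qemu network nodedev nwfilter secret storage interface; do systemctl enable virt${drv}d{,-ro,-admin}.socket; done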


Verification

  1. Verify that your system is prepared to be a virtualization host:

    # virt-host-validate
    QEMU: Checking if device /dev/kvm is accessible             : PASS
    QEMU: Checking if device /dev/vhost-net exists              : PASS
    QEMU: Checking if device /dev/net/tun exists                : PASS
    QEMU: Checking for cgroup 'memory' controller support       : PASS
    QEMU: Checking for cgroup 'memory' controller mount-point   : PASS
  2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.

    If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.

    If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.


Troubleshooting

  • If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output:

    QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)

    However, VMs on such a host system will fail to boot entirely, rather than merely having performance problems.

    To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments.
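
    A minimal sketch of this unsupported workaround, assuming a hypothetical VM named testvm whose configuration currently uses the kvm domain type:

    # virsh dumpxml testvm > testvm.xml
    # sed -i "s/<domain type='kvm'/<domain type='qemu'/" testvm.xml
    # virsh define testvm.xml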

4.2. Updating virtualization on IBM Z from RHEL 8.5 to RHEL 8.6 or later

If you installed RHEL 8 on IBM Z hardware prior to RHEL 8.6, you had to obtain virtualization RPMs from the AV stream, separate from the base RPM stream of RHEL 8. Starting with RHEL 8.6, virtualization RPMs previously available only from the AV stream are available on the base RHEL stream. In addition, the AV stream will be discontinued in a future minor release of RHEL 8. Therefore, using the AV stream is no longer recommended.

By following the instructions below, you deactivate the AV stream and enable access to the virtualization RPMs available in RHEL 8.6 and later versions.


Prerequisites

  • You are using RHEL 8.5 on IBM Z, with the virt:av module installed. To confirm that this is the case:

    # hostnamectl | grep "Operating System"
    Operating System: Red Hat Enterprise Linux 8.5 (Ootpa)
    # yum module list --installed
    Advanced Virtualization for RHEL 8 IBM Z Systems (RPMs)
    Name                Stream                  Profiles                  Summary
    virt                av [e]                common [i]                Virtualization module


Procedure

  1. Disable the virt:av module:

    # yum module disable virt:av
  2. Reset the pre-existing virtualization module configuration on your system:

    # yum module reset virt -y
  3. Upgrade your packages to their latest RHEL versions.

    # yum update

    This also automatically enables the virt:rhel module on your system.


Verification

  • Verify that the virt module on your system is provided by the rhel stream:

    # yum module info virt
    Name             : virt
    Stream           : rhel [d][e][a]
    Version          : 8050020211203195115

4.3. How virtualization on IBM Z differs from AMD64 and Intel 64

KVM virtualization in RHEL 8 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following ways:

PCI and USB devices

Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci.

Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.

Supported guest OS
Red Hat only supports VMs hosted on IBM Z if they use RHEL 7, 8, or 9 as their guest operating system.
Device boot order

IBM Z does not support the <boot dev='device'> XML configuration element. To define device boot order, use the <boot order='number'> element in the <devices> section of the XML. For example:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
  <boot order='2'/>
</disk>

Using <boot order='number'> for boot order management is also preferred on AMD64 and Intel 64 hosts.

Memory hot plug
Adding memory to a running VM (memory hot plug) is not possible on IBM Z. Note that removing memory from a running VM (memory hot unplug) is also not possible on IBM Z, as well as on AMD64 and Intel 64.
NUMA topology
Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance using NUMA is not possible on these systems.
vfio-ap device assignment
VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture.
SMBIOS
SMBIOS configuration is not available on IBM Z.
Watchdog devices

If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:

  <watchdog model='diag288' action='poweroff'/>
kvm-clock
The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z.
v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architectures, and are not provided on IBM Z.
Nested virtualization
Creating nested VMs requires different settings on IBM Z than on AMD64 and Intel 64. For details, see Creating nested virtual machines.
No graphical output in earlier releases
When using RHEL 8.3 or an earlier minor version on your host, displaying the VM graphical output is not possible when connecting to the VM using the VNC protocol. This is because the gnome-desktop utility was not supported in earlier RHEL versions on IBM Z. In addition, the SPICE display protocol does not work on IBM Z.
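
On hosts affected by this limitation, you can still interact with a VM through its text console; for example, assuming a VM named testvm that has a serial console configured:

    # virsh console testvm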

CPU model migration

To successfully migrate to a later host model (for example, from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe.

If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:

  • Do not use CPU models that end with -base.
  • Do not use the qemu, max or host CPU model.

To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end.

  • If you have both the source host and the destination host running, you can instead use the virsh hypervisor-cpu-baseline command on the destination host to obtain a suitable CPU model.

Procedure

  1. Obtain the domain capabilities of the source host, and save them to an XML file:

    # virsh domcapabilities > capabilities1.xml
  2. Obtain the domain capabilities of the destination host, and save them to an XML file.

    # virsh domcapabilities > capabilities2.xml
  3. Remove all content except the <cpu> blocks in both XML files. For a scripted alternative to steps 3 to 5, see the sketch after this procedure.

          <feature name="aen"/>
          <feature name="aefsi"/>
          <feature name="diag318"/>
          <feature name="msa5"/>
          <feature name="msa4"/>
          <feature name="msa3"/>
          <feature name="bpb"/>
          <feature name="ppa15"/>
          <feature name="zpci"/>
          <feature name="sea_esop2"/>
          <feature name="te"/>
  4. Combine the CPU data in the two files into a single XML file.

    # cat capabilities1.xml capabilities2.xml >> combined_capabilities.xml
  5. Optional: Verify that the new XML file contains the CPU capabilities of both hosts.

    # cat combined_capabilities.xml
          <model fallback='forbid'>gen15a-base</model>
          <feature policy='require' name='aen'/>
          <feature policy='require' name='vxpdeh'/>
          <feature policy='require' name='aefsi'/>
          <feature policy='require' name='diag318'/>
          <feature policy='require' name='zpci'/>
          <feature policy='require' name='sea_esop2'/>
          <feature policy='require' name='te'/>
          <model fallback='forbid'>z13.2-base</model>
          <feature policy='require' name='aen'/>
          <feature policy='require' name='aefsi'/>
          <feature policy='require' name='diag318'/>
          <feature policy='require' name='sea_esop2'/>
          <feature policy='require' name='te'/>
          <feature policy='require' name='cmm'/>
  6. Use the virsh hypervisor-cpu-baseline command to obtain the CPU features and models that can be used for the VM.

    # virsh hypervisor-cpu-baseline combined_capabilities.xml
        <feature name="aen"/>
        <feature name="aefsi"/>
        <feature name="diag318"/>
        <feature name="msa5"/>
        <feature name="msa4"/>
        <feature name="msa3"/>
        <feature name="ais"/>
        <feature name="bpb"/>
        <feature name="ppa15"/>
        <feature name="zpci"/>
        <feature name="sea_esop2"/>
        <feature name="te"/>
PXE installation and booting

When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file. For example:

# pxelinux
default linux
label linux
kernel kernel.img
initrd initrd.img
append ip=dhcp

4.4. Next steps

  • When setting up a VM on an IBM Z system, Red Hat recommends protecting the guest OS from the Spectre vulnerability. To do so, use the virsh edit command to modify the VM’s XML configuration and configure its CPU in one of the following ways:

    • Use the host CPU model:

      <cpu mode='host-model' check='partial'>
        <model fallback='allow'/>
      </cpu>

      This makes the ppa15 and bpb features available to the guest if the host supports them.

    • If using a specific host model, add the ppa15 and bpb features. The following example uses the zEC12 CPU model:

      <cpu mode='custom' match='exact' check='partial'>
          <model fallback='allow'>zEC12</model>
          <feature policy='force' name='ppa15'/>
          <feature policy='force' name='bpb'/>
      </cpu>

      Note that when using the ppa15 feature with the z114 and z196 CPU models on a host machine that uses a z12 CPU, you also need to use the latest microcode level (bundle 95 or later).
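
To check whether the host already exposes these features before editing the VM configuration, you can inspect the domain capabilities reported by libvirt; a minimal sketch:

    # virsh domcapabilities | grep -E "ppa15|bpb"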