Configuring and managing virtualization

Red Hat Enterprise Linux 8

Setting up your host, creating and administering virtual machines, and understanding virtualization features in Red Hat Enterprise Linux 8

Red Hat Customer Content Services

Abstract

To use a Red Hat Enterprise Linux (RHEL) system as a virtualization host, follow the instructions in this document.
The information provided includes:
  • What the capabilities and use cases of virtualization are
  • How to manage your host and your virtual machines by using command-line utilities, as well as by using the web console
  • What the support limitations of virtualization are on various system architectures, such as Intel 64, AMD64, IBM POWER, and IBM Z

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation. Let us know how we can improve it.

Submitting feedback through Jira (account required)

  1. Log in to the Jira website.
  2. Click Create in the top navigation bar.
  3. Enter a descriptive title in the Summary field.
  4. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
  5. Click Create at the bottom of the dialog.

Chapter 1. Introducing virtualization in RHEL

If you are unfamiliar with the concept of virtualization or its implementation in Linux, the following sections provide a general overview of virtualization in RHEL 8: its basics, advantages, components, and other possible virtualization solutions provided by Red Hat.

1.1. What is virtualization?

RHEL 8 provides virtualization functionality, which enables a machine running RHEL 8 to host multiple virtual machines (VMs), also referred to as guests. VMs use the host’s physical hardware and computing resources to run a separate, virtualized operating system (guest OS) as a user-space process on the host’s operating system.

In other words, virtualization makes it possible to have operating systems within operating systems.

VMs enable you to safely test software configurations and features, run legacy software, or optimize the workload efficiency of your hardware. For more information about the benefits, see Advantages of virtualization.

For more information about what virtualization is, see the Virtualization topic page.

Next steps

  • To start using virtualization in RHEL 8, see Enabling virtualization.

1.2. Advantages of virtualization

Using virtual machines (VMs) has the following benefits in comparison to using physical machines:

  • Flexible and fine-grained allocation of resources

    A VM runs on a host machine, which is usually physical, and physical hardware can also be assigned for the guest OS to use. However, the allocation of physical resources to the VM is done on the software level, and is therefore very flexible. A VM uses a configurable fraction of the host memory, CPUs, or storage space, and that configuration can specify very fine-grained resource requests.

    For example, what the guest OS sees as its disk can be represented as a file on the host file system, and the size of that disk is less constrained than the available sizes for physical disks.

  • Software-controlled configurations

    The entire configuration of a VM is saved as data on the host, and is under software control. Therefore, a VM can easily be created, removed, cloned, migrated, operated remotely, or connected to remote storage.

  • Separation from the host

    A guest OS runs on a virtualized kernel, separate from the host OS. This means that any OS can be installed on a VM, and even if the guest OS becomes unstable or is compromised, the host is not affected in any way.

  • Space and cost efficiency

    A single physical machine can host a large number of VMs. Therefore, it avoids the need for multiple physical machines to do the same tasks, and thus lowers the space, power, and maintenance requirements associated with physical hardware.

  • Software compatibility

    Because a VM can use a different OS than its host, virtualization makes it possible to run applications that were not originally released for your host OS. For example, using a RHEL 7 guest OS, you can run applications released for RHEL 7 on a RHEL 8 host system.

    Note

    Not all operating systems are supported as a guest OS in a RHEL 8 host. For details, see Recommended features in RHEL 8 virtualization.

1.3. Virtual machine components and their interaction

Virtualization in RHEL 8 consists of the following principal software components:

Hypervisor

The basis of creating virtual machines (VMs) in RHEL 8 is the hypervisor, a software layer that controls hardware and enables running multiple operating systems on a host machine.

The hypervisor includes the Kernel-based Virtual Machine (KVM) module and virtualization kernel drivers. These components ensure that the Linux kernel on the host machine provides resources for virtualization to user-space software.

At the user-space level, the QEMU emulator simulates a complete virtualized hardware platform that the guest operating system can run in, and manages how resources are allocated on the host and presented to the guest.

In addition, the libvirt software suite serves as a management and communication layer, making QEMU easier to interact with, enforcing security rules, and providing a number of additional tools for configuring and running VMs.

XML configuration

A host-based XML configuration file (also known as a domain XML file) determines all settings and devices in a specific VM. The configuration includes:

  • Metadata such as the name of the VM, time zone, and other information about the VM.
  • A description of the devices in the VM, including virtual CPUs (vCPUs), storage devices, input/output devices, network interface cards, and other hardware, real and virtual.
  • VM settings such as the maximum amount of memory it can use, restart settings, and other settings about the behavior of the VM.

For more information about the contents of an XML configuration, see Sample virtual machine XML configuration.
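
For example, to review the current configuration of an existing VM, you can use the virsh dumpxml command. The following is a hedged sketch; the VM name testguest1 and the abbreviated output are only illustrative:

    # virsh dumpxml testguest1
    <domain type='kvm'>
      <name>testguest1</name>
      <memory unit='KiB'>2097152</memory>
      <vcpu placement='static'>2</vcpu>
      [...]
    </domain>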

Component interaction

When a VM is started, the hypervisor uses the XML configuration to create an instance of the VM as a user-space process on the host. The hypervisor also makes the VM process accessible to the host-based interfaces, such as the virsh, virt-install, and guestfish utilities, or the web console GUI.

When these virtualization tools are used, libvirt translates their input into instructions for QEMU. QEMU communicates the instructions to KVM, which ensures that the kernel appropriately assigns the resources necessary to carry out the instructions. As a result, QEMU can execute the corresponding user-space changes, such as creating or modifying a VM, or performing an action in the VM’s guest operating system.

Note

While QEMU is an essential component of the architecture, it is not intended to be used directly on RHEL 8 systems, due to security concerns. Therefore, using qemu-* commands is not supported by Red Hat, and it is highly recommended to interact with QEMU using libvirt.

For more information about the host-based interfaces, see Tools and interfaces for virtualization management.

Figure 1.1. RHEL 8 virtualization architecture


1.4. Tools and interfaces for virtualization management

You can manage virtualization in RHEL 8 using the command-line interface (CLI) or several graphical user interfaces (GUIs).

Command-line interface

The CLI is the most powerful method of managing virtualization in RHEL 8. Prominent CLI commands for virtual machine (VM) management include:

  • virsh - A versatile virtualization command-line utility and shell with a great variety of purposes, depending on the provided arguments. For example:

    • Starting and shutting down a VM - virsh start and virsh shutdown
    • Listing available VMs - virsh list
    • Creating a VM from a configuration file - virsh create
    • Entering a virtualization shell - virsh

    For more information, see the virsh(1) man page.

  • virt-install - A CLI utility for creating new VMs. For more information, see the virt-install(1) man page.
  • virt-xml - A utility for editing the configuration of a VM.
  • guestfish - A utility for examining and modifying VM disk images. For more information, see the guestfish(1) man page.
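
    For example, the following is a minimal sketch of inspecting a file inside a VM disk image with guestfish; the image path and the output shown are only illustrative:

      # guestfish --ro -a /var/lib/libvirt/images/testguest1.qcow2 -i cat /etc/redhat-release
      Red Hat Enterprise Linux release 8.4 (Ootpa)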

Graphical interfaces

You can use the following GUIs to manage virtualization in RHEL 8:

  • The RHEL 8 web console, also known as Cockpit, provides a remotely accessible and easy to use graphical user interface for managing VMs and virtualization hosts.

    For instructions on basic virtualization management with the web console, see Managing virtual machines in the web console.

  • The Virtual Machine Manager (virt-manager) application provides a specialized GUI for managing VMs and virtualization hosts.

    Important

    Although still supported in RHEL 8, virt-manager has been deprecated. The web console is intended to become its replacement in a subsequent release. It is, therefore, recommended that you get familiar with the web console for managing virtualization in a GUI.

    However, in RHEL 8, some features may only be accessible from either virt-manager or the command line. For details, see Differences between virtualization features in Virtual Machine Manager and the web console.

  • The Gnome Boxes application is a lightweight graphical interface to view and access VMs and remote systems. Gnome Boxes is primarily designed for use on desktop systems.

    Important

    Gnome Boxes is provided as a part of the GNOME desktop environment and is supported on RHEL 8, but Red Hat recommends that you use the web console for managing virtualization in a GUI.

1.5. Red Hat virtualization solutions

The following Red Hat products are built on top of RHEL 8 virtualization features and expand the KVM virtualization capabilities available in RHEL 8. In addition, many limitations of RHEL 8 virtualization do not apply to these products:

OpenShift Virtualization

Based on the KubeVirt technology, OpenShift Virtualization is a part of the Red Hat OpenShift Container Platform, and makes it possible to run virtual machines in containers.

For more information about OpenShift Virtualization see the Red Hat Hybrid Cloud pages.

Red Hat OpenStack Platform (RHOSP)

Red Hat OpenStack Platform offers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud.

For more information about Red Hat OpenStack Platform, see the Red Hat Customer Portal or the Red Hat OpenStack Platform documentation suite.

Note

For details on virtualization features not supported in RHEL but supported in other Red Hat virtualization solutions, see Unsupported features in RHEL 8 virtualization.

Chapter 2. Getting started with virtualization

To start using virtualization in RHEL 8, follow the steps below. The default method for this is using the command-line interface (CLI), but for user convenience, some of the steps can be completed in the web console GUI.

Note

The web console currently provides only a subset of VM management functions, so using the command line is recommended for advanced use of virtualization in RHEL 8.

2.1. Enabling virtualization

To use virtualization in RHEL 8, you must enable the virtualization module, install virtualization packages, and ensure your system is configured to host virtual machines (VMs).

Prerequisites

  • RHEL 8 is installed and registered on your host machine.
  • Your system meets the following hardware requirements to work as a virtualization host:

    • The following minimum system resources are available:

      • 6 GB free disk space for the host, plus another 6 GB for each intended VM.
      • 2 GB of RAM for the host, plus another 2 GB for each intended VM.
      • 4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive during high load.
    • The architecture of your host machine supports KVM virtualization.

      • Notably, RHEL 8 does not support virtualization on the 64-bit ARM architecture (ARM 64).
      • The procedure below applies to the AMD64 and Intel 64 architecture (x86_64). To enable virtualization on a host with a different supported architecture, see the corresponding section, such as Getting started with virtualization on IBM POWER.

Procedure

  1. Install the packages in the RHEL 8 virtualization module:

    # yum module install virt
  2. Install the virt-install and virt-viewer packages:

    # yum install virt-install virt-viewer
  3. Start the libvirtd service:

    # systemctl start libvirtd
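
Optionally, if you also want the libvirtd service to start automatically after a host reboot, you can enable it. This is not part of the procedure above, only a common follow-up step:

    # systemctl enable libvirtd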

Verification

  1. Verify that your system is prepared to be a virtualization host:

    # virt-host-validate
    [...]
    QEMU: Checking for device assignment IOMMU support       : PASS
    QEMU: Checking if IOMMU is enabled by kernel             : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
    LXC: Checking for Linux >= 2.6.26                        : PASS
    [...]
    LXC: Checking for cgroup 'blkio' controller mount-point  : PASS
    LXC: Checking if device /sys/fs/fuse/connections exists  : FAIL (Load the 'fuse' module to enable /proc/ overrides)
  2. Review the return values of virt-host-validate checks and take appropriate actions:

    1. If all virt-host-validate checks return the PASS value, your system is prepared for creating VMs.
    2. If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
    3. If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.

Troubleshooting

  • If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output:

    QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)

    However, VMs on such a host system will fail to boot, rather than have performance problems.

    To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments.
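
    The following is a minimal sketch of this workaround, assuming a VM named testguest1. Open the configuration with virsh edit and change the type attribute of the <domain> element:

    # virsh edit testguest1

    <domain type='qemu'>
    [...]
    </domain>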

2.2. Creating virtual machines

To create a virtual machine (VM) in RHEL 8, use the command-line interface or the RHEL 8 web console.

2.2.1. Creating virtual machines using the command-line interface

To create a virtual machine (VM) on your RHEL 8 host using the virt-install utility, follow the instructions below.

Prerequisites

  • Virtualization is enabled on your host system.
  • You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values may vary significantly depending on the intended tasks and workload of the VMs.
  • An operating system (OS) installation source is available locally or on a network. This can be one of the following:

    • An ISO image of an installation medium
    • A disk image of an existing VM installation

      Warning

      Installing from a host CD-ROM or DVD-ROM device is not possible in RHEL 8. If you select a CD-ROM or DVD-ROM as the installation source when using any VM installation method available in RHEL 8, the installation will fail. For more information, see the Red Hat Knowledgebase.

      Also note that Red Hat provides support only for a limited set of guest operating systems.

  • Optional: A Kickstart file can be provided for faster and easier configuration of the installation.

Procedure

To create a VM and start its OS installation, use the virt-install command, along with the following mandatory arguments:

  • --name: the name of the new machine
  • --memory: the amount of allocated memory
  • --vcpus: the number of allocated virtual CPUs
  • --disk: the type and size of the allocated storage
  • --cdrom or --location: the type and location of the OS installation source

Based on the chosen installation method, the necessary options and values can vary. See below for examples:

  • The following creates a VM named demo-guest1 that installs the Windows 10 OS from an ISO image locally stored in the /home/username/Downloads/Win10install.iso file. This VM is also allocated with 2048 MiB of RAM and 2 vCPUs, and an 80 GiB qcow2 virtual disk is automatically configured for the VM.

    # virt-install \
        --name demo-guest1 --memory 2048 \
        --vcpus 2 --disk size=80 --os-variant win10 \
        --cdrom /home/username/Downloads/Win10install.iso
  • The following creates a VM named demo-guest2 that uses the /home/username/Downloads/rhel8.iso image to run a RHEL 8 OS from a live CD. No disk space is assigned to this VM, so changes made during the session will not be preserved. In addition, the VM is allocated with 4096 MiB of RAM and 4 vCPUs.

    # virt-install \
        --name demo-guest2 --memory 4096 --vcpus 4 \
        --disk none --livecd --os-variant rhel8.0 \
        --cdrom /home/username/Downloads/rhel8.iso
  • The following creates a RHEL 8 VM named demo-guest3 that connects to an existing disk image, /home/username/backup/disk.qcow2. This is similar to physically moving a hard drive between machines, so the OS and data available to demo-guest3 are determined by how the image was handled previously. In addition, this VM is allocated with 2048 MiB of RAM and 2 vCPUs.

    # virt-install \
        --name demo-guest3 --memory 2048 --vcpus 2 \
        --os-variant rhel8.0 --import \
        --disk /home/username/backup/disk.qcow2

    Note that the --os-variant option is highly recommended when importing a disk image. If it is not provided, the performance of the created VM will be negatively affected.

  • The following creates a VM named demo-guest4 that installs from the http://example.com/OS-install URL. For the installation to start successfully, the URL must contain a working OS installation tree. In addition, the OS is automatically configured using the /home/username/ks.cfg kickstart file. This VM is also allocated with 2048 MiB of RAM, 2 vCPUs, and a 160 GiB qcow2 virtual disk.

    # virt-install \
        --name demo-guest4 --memory 2048 --vcpus 2 --disk size=160 \
        --os-variant rhel8.0 --location http://example.com/OS-install \
        --initrd-inject /home/username/ks.cfg --extra-args="inst.ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8"
  • The following creates a VM named demo-guest5 that installs from a RHEL8.iso image file in text-only mode, without graphics. It connects the guest console to the serial console. The VM has 16384 MiB of memory, 16 vCPUs, and a 280 GiB disk. This kind of installation is useful when connecting to a host over a slow network link.

    # virt-install \
        --name demo-guest5 --memory 16384 --vcpus 16 --disk size=280 \
        --os-variant rhel8.0 --location RHEL8.iso \
        --graphics none --extra-args='console=ttyS0'
  • The following creates a VM named demo-guest6, which has the same configuration as demo-guest5, but resides on the 10.0.0.1 remote host.

    # virt-install \
        --connect qemu+ssh://root@10.0.0.1/system --name demo-guest6 --memory 16384 \
        --vcpus 16 --disk size=280 --os-variant rhel8.0 --location RHEL8.iso \
        --graphics none --extra-args='console=ttyS0'

Verification

  • If the VM is created successfully, a virt-viewer window opens with a graphical console of the VM and starts the guest OS installation.

Troubleshooting

  • If virt-install fails with a cannot find default network error:

    1. Ensure that the libvirt-daemon-config-network package is installed:

      # yum info libvirt-daemon-config-network
      Installed Packages
      Name         : libvirt-daemon-config-network
      [...]
    2. Verify that the libvirt default network is active and configured to start automatically:

      # virsh net-list --all
       Name      State    Autostart   Persistent
      --------------------------------------------
       default   active   yes         yes
    3. If it is not, activate the default network and set it to auto-start:

      # virsh net-autostart default
      Network default marked as autostarted
      
      # virsh net-start default
      Network default started
      1. If activating the default network fails with the following error, the libvirt-daemon-config-network package has not been installed correctly.

        error: failed to get network 'default'
        error: Network not found: no network with matching name 'default'

        To fix this, re-install libvirt-daemon-config-network.

        # yum reinstall libvirt-daemon-config-network
      2. If activating the default network fails with an error similar to the following, a conflict has occurred between the default network’s subnet and an existing interface on the host.

        error: Failed to start network default
        error: internal error: Network is already in use by interface ens2

        To fix this, use the virsh net-edit default command and change the 192.168.122.* values in the configuration to a subnet not already in use on the host.
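
        The following is a sketch of such a change, assuming that the 192.168.124.0/24 subnet is not in use on the host; the exact addresses to use depend on your environment:

        # virsh net-edit default

        <ip address='192.168.124.1' netmask='255.255.255.0'>
          <dhcp>
            <range start='192.168.124.2' end='192.168.124.254'/>
          </dhcp>
        </ip>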

Additional resources

2.2.2. Creating virtual machines and installing guest operating systems using the web console

To manage virtual machines (VMs) in a GUI on a RHEL 8 host, use the web console. The following sections provide information about how to use the RHEL 8 web console to create VMs and install guest operating systems on them.

2.2.2.1. Creating virtual machines using the web console

To create a virtual machine (VM) on a host machine to which your RHEL 8 web console is connected, use the instructions below.

Prerequisites

Procedure

  1. In the Virtual Machines interface of the web console, click Create VM.

    The Create new virtual machine dialog appears.

    Image displaying the Create new virtual machine dialog box.
  2. Enter the basic configuration of the VM you want to create.

    • Name - The name of the VM.
    • Connection - The level of privileges granted to the session. For more details, expand the associated dialog box in the web console.
    • Installation type - The installation can use a local installation medium, a URL, a PXE network boot, a cloud base image, or download an operating system from a limited set of operating systems.
    • Operating system - The guest operating system running on the VM. Note that Red Hat provides support only for a limited set of guest operating systems.

      Note

      To download and install Red Hat Enterprise Linux directly from the web console, you must add an offline token in the Offline token field.

    • Storage - The type of storage.
    • Storage Limit - The amount of storage space.
    • Memory - The amount of memory.
  3. Create the VM:

    • If you want the VM to automatically install the operating system, click Create and run.
    • If you want to edit the VM before the operating system is installed, click Create and edit.

2.2.2.2. Creating virtual machines by importing disk images using the web console

You can create a virtual machine (VM) by importing a disk image of an existing VM installation in the RHEL 8 web console.

Prerequisites

  • The web console VM plug-in is installed on your system.
  • You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values can vary significantly depending on the intended tasks and workload of the VMs.
  • You have downloaded a disk image of an existing VM installation.

Procedure

  1. In the Virtual Machines interface of the web console, click Import VM.

    The Import a virtual machine dialog appears.

    Image displaying the Import a virtual machine dialog box.
  2. Enter the basic configuration of the VM you want to create:

    • Name - The name of the VM.
    • Disk image - The path to the existing disk image of a VM on the host system.
    • Operating system - The operating system running on a VM disk. Note that Red Hat provides support only for a limited set of guest operating systems.
    • Memory - The amount of memory to allocate for use by the VM.
  3. Import the VM:

    • To install the operating system on the VM without additional edits to the VM settings, click Import and run.
    • To edit the VM settings before the installation of the operating system, click Import and edit.

2.2.2.3. Installing guest operating systems using the web console

When a virtual machine (VM) boots for the first time, you must install an operating system on the VM.

Note

If you click Create and run or Import and run while creating a new VM, the installation routine for the operating system starts automatically when the VM is created.

Procedure

  1. In the Virtual Machines interface, click the VM on which you want to install a guest OS.

    A new page opens with basic information about the selected VM and controls for managing various aspects of the VM.

    Page displaying detailed information about the virtual machine.
  2. Optional: Change the firmware.

    Note

    You can change the firmware only if you selected Create and edit or Import and edit while creating a new VM and if the OS is not already installed on the VM.

    Image displaying the Change Firmware dialog box.
    1. Click the firmware.
    2. In the Change Firmware window, select the required firmware.
    3. Click Save.
  3. Click Install.

    The installation routine of the operating system runs in the VM console.

Troubleshooting

  • If the installation routine fails, delete and recreate the VM before starting the installation again.

2.2.3. Creating virtual machines with cloud image authentication using the web console

By default, distro cloud images have no login accounts. However, using the RHEL web console, you can now create a virtual machine (VM) and specify the root and user account login credentials, which are then passed to cloud-init.

Prerequisites

  • The web console VM plug-in is installed on your system.
  • Virtualization is enabled on your host system.
  • You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values may vary significantly depending on the intended tasks and workload of the VMs.

Procedure

  1. In the Virtual Machines interface of the web console, click Create VM.

    The Create new virtual machine dialog appears.

    Image displaying the Create new virtual machine dialog box.
  2. In the Name field, enter a name for the VM.
  3. On the Details tab, in the Installation type field, select Cloud base image.

    Image displaying the Create new virtual machine using cloud-init dialog box.
  4. In the Installation source field, set the path to the image file on your host system.
  5. Enter the configuration for the VM that you want to create.

    • Operating system - The VM’s operating system. Note that Red Hat provides support only for a limited set of guest operating systems.
    • Storage - The type of storage with which to configure the VM.
    • Storage Limit - The amount of storage space with which to configure the VM.
    • Memory - The amount of memory with which to configure the VM.
  6. Click on the Automation tab.

    Set your cloud authentication credentials.

    • Root password - Enter a root password for your VM. Leave the field blank if you do not wish to set a root password.
    • User login - Enter a cloud-init user login. Leave this field blank if you do not wish to create a user account.
    • User password - Enter a password. Leave this field blank if you do not wish to create a user account.

      Image displaying the Automation tab of the Create new virtual machine dialog box.
  7. Click Create and run.

    The VM is created.

2.3. Starting virtual machines

To start a virtual machine (VM) in RHEL 8, you can use the command line interface or the web console GUI.

Prerequisites

  • Before a VM can be started, it must be created and, ideally, also installed with an OS. For instructions on how to do so, see Creating virtual machines.

2.3.1. Starting a virtual machine using the command-line interface

You can use the command line interface (CLI) to start a shut-down virtual machine (VM) or restore a saved VM. Using the CLI, you can start both local and remote VMs.

Prerequisites

  • An inactive VM that is already defined.
  • The name of the VM.
  • For remote VMs:

    • The IP address of the host where the VM is located.
    • Root access privileges to the host.

Procedure

  • For a local VM, use the virsh start utility.

    For example, the following command starts the demo-guest1 VM.

    # virsh start demo-guest1
    Domain 'demo-guest1' started
  • For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH connection to the host.

    For example, the following command starts the demo-guest1 VM on the 192.168.123.123 host.

    # virsh -c qemu+ssh://root@192.168.123.123/system start demo-guest1
    
    root@192.168.123.123's password:
    
    Domain 'demo-guest1' started

2.3.2. Starting virtual machines using the web console

If a virtual machine (VM) is in the shut off state, you can start it using the RHEL 8 web console. You can also configure the VM to be started automatically when the host starts.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM you want to start.

    A new page opens with detailed information about the selected VM and controls for shutting down and deleting the VM.

  2. Click Run.

    The VM starts, and you can connect to its console or graphical output.

  3. Optional: To configure the VM to start automatically when the host starts, toggle the Autostart checkbox in the Overview section.

    If you use network interfaces that are not managed by libvirt, you must also make additional changes to the systemd configuration. Otherwise, the affected VMs might fail to start. For more information, see Starting virtual machines automatically when the host starts.

2.3.3. Starting virtual machines automatically when the host starts

When a host with a running virtual machine (VM) restarts, the VM is shut down, and must be started again manually by default. To ensure a VM is active whenever its host is running, you can configure the VM to be started automatically.

Procedure

  1. Use the virsh autostart utility to configure the VM to start automatically when the host starts.

    For example, the following command configures the demo-guest1 VM to start automatically.

    # virsh autostart demo-guest1
    Domain 'demo-guest1' marked as autostarted
  2. If you use network interfaces that are not managed by libvirt, you must also make additional changes to the systemd configuration. Otherwise, the affected VMs might fail to start.

    Note

    These interfaces include, for example:

    • Bridge devices created by NetworkManager
    • Networks configured to use <forward mode='bridge'/>
    1. In the systemd configuration directory tree, create a libvirtd.service.d directory if it does not exist yet.

      # mkdir -p /etc/systemd/system/libvirtd.service.d/
    2. Create a 10-network-online.conf systemd unit override file in the previously created directory. The content of this file overrides the default systemd configuration for the libvirtd service.

      # touch /etc/systemd/system/libvirtd.service.d/10-network-online.conf
    3. Add the following lines to the 10-network-online.conf file. This configuration change ensures systemd starts the libvirtd service only after the network on the host is ready.

      [Unit]
      After=network-online.target
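
    Optionally, to make systemd pick up the new override file without a host reboot, reload the systemd configuration. This additional step is an assumption based on general systemd behavior and is not part of the documented procedure:

      # systemctl daemon-reload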

Verification

  1. View the VM configuration, and check that the autostart option is enabled.

    For example, the following command displays basic information about the demo-guest1 VM, including the autostart option.

    # virsh dominfo demo-guest1
    Id:             2
    Name:           demo-guest1
    UUID:           e46bc81c-74e2-406e-bd7a-67042bae80d1
    OS Type:        hvm
    State:          running
    CPU(s):         2
    CPU time:       385.9s
    Max memory:     4194304 KiB
    Used memory:    4194304 KiB
    Persistent:     yes
    Autostart:      enable
    Managed save:   no
    Security model: selinux
    Security DOI:   0
    Security label: system_u:system_r:svirt_t:s0:c873,c919 (enforcing)
  2. If you use network interfaces that are not managed by libvirt, check if the content of the 10-network-online.conf file matches the following output.

    $ cat /etc/systemd/system/libvirtd.service.d/10-network-online.conf
    [Unit]
    After=network-online.target

Additional resources

2.4. Connecting to virtual machines

To interact with a virtual machine (VM) in RHEL 8, you need to connect to it by doing one of the following:

If the VMs to which you are connecting are on a remote host rather than a local one, you can optionally configure your system for more convenient access to remote hosts.

Prerequisites

2.4.1. Interacting with virtual machines using the web console

To interact with a virtual machine (VM) in the RHEL 8 web console, you need to connect to the VM’s console. VMs provide both graphical and serial consoles.

2.4.1.1. Viewing the virtual machine graphical console in the web console

Using the virtual machine (VM) console interface, you can view the graphical output of a selected VM in the RHEL 8 web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM whose graphical console you want to view.

    A new page opens with an Overview and a Console section for the VM.

  2. Select VNC console in the console drop down menu.

    The VNC console appears below the menu in the web interface.

    The graphical console appears in the web interface.

    Image displaying the interface of the selected virtual machine.
  3. Click Expand.

    You can now interact with the VM console using the mouse and keyboard in the same manner you interact with a real machine. The display in the VM console reflects the activities being performed on the VM.

Note

The host on which the web console is running may intercept specific key combinations, such as Ctrl+Alt+Del, preventing them from being sent to the VM.

To send such key combinations, click the Send key menu and select the key sequence to send.

For example, to send the Ctrl+Alt+Del combination to the VM, click the Send key and select the Ctrl+Alt+Del menu entry.

Troubleshooting

  • If clicking in the graphical console does not have any effect, expand the console to full screen. This is a known issue with the mouse cursor offset.

2.4.1.2. Viewing the graphical console in a remote viewer using the web console

Using the web console interface, you can display the graphical console of a selected virtual machine (VM) in a remote viewer, such as Virt Viewer.

Note

You can launch Virt Viewer from within the web console. Other VNC and SPICE remote viewers can be launched manually.

Prerequisites

  • The web console VM plug-in is installed on your system.
  • Ensure that both the host and the VM support a graphical interface.
  • Before you can view the graphical console in Virt Viewer, you must install Virt Viewer on the machine to which the web console is connected.

    1. Click Launch remote viewer.

      The Virt Viewer (.vv) file downloads.

    2. Open the file to launch Virt Viewer.
Note

Remote Viewer is available on most operating systems. However, some browser extensions and plug-ins do not allow the web console to open Virt Viewer.

Procedure

  1. In the Virtual Machines interface, click the VM whose graphical console you want to view.

    A new page opens with an Overview and a Console section for the VM.

  2. Select Desktop Viewer in the console drop down menu.

    Page displaying the Console section of the virtual machine interface along with other VM details.
  3. Click Launch Remote Viewer.

    The graphical console opens in Virt Viewer.

    You can interact with the VM console using the mouse and keyboard in the same manner in which you interact with a real machine. The display in the VM console reflects the activities being performed on the VM.

Note

The server on which the web console is running can intercept specific key combinations, such as Ctrl+Alt+Del, preventing them from being sent to the VM.

To send such key combinations, click the Send key menu and select the key sequence to send.

For example, to send the Ctrl+Alt+Del combination to the VM, click the Send key menu and select the Ctrl+Alt+Del menu entry.

Troubleshooting

  • If clicking in the graphical console does not have any effect, expand the console to full screen. This is a known issue with the mouse cursor offset.
  • If launching the Remote Viewer in the web console does not work or is not optimal, you can connect manually with any viewer application by using the following connection details:

    • Address - The default address is 127.0.0.1. You can modify the vnc_listen or the spice_listen parameter in /etc/libvirt/qemu.conf to change it to the host’s IP address.
    • SPICE port - 5900
    • VNC port - 5901
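
    For example, the following is a sketch of connecting manually with the remote-viewer application; the host IP address 192.0.2.1 is only illustrative:

      # remote-viewer spice://192.0.2.1:5900
      # remote-viewer vnc://192.0.2.1:5901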

2.4.1.3. Viewing the virtual machine serial console in the web console

You can view the serial console of a selected virtual machine (VM) in the RHEL 8 web console. This is useful when the host machine or the VM is not configured with a graphical interface.

For more information about the serial console, see Opening a virtual machine serial console.

Prerequisites

Procedure

  1. In the Virtual Machines pane, click the VM whose serial console you want to view.

    A new page opens with an Overview and a Console section for the VM.

  2. Select Serial console in the console drop down menu.

    The graphical console appears in the web interface.

    Page displaying the virtual machine serial console along with other VM details.

You can disconnect and reconnect the serial console from the VM.

  • To disconnect the serial console from the VM, click Disconnect.
  • To reconnect the serial console to the VM, click Reconnect.

2.4.2. Opening a virtual machine graphical console using Virt Viewer

To connect to a graphical console of a KVM virtual machine (VM) and open it in the Virt Viewer desktop application, follow the procedure below.

Prerequisites

  • Your system, as well as the VM you are connecting to, must support graphical displays.
  • If the target VM is located on a remote host, connection and root access privileges to the host are needed.
  • Optional: If the target VM is located on a remote host, set up your libvirt and SSH for more convenient access to remote hosts.

Procedure

  • To connect to a local VM, use the following command and replace guest-name with the name of the VM you want to connect to:

    # virt-viewer guest-name
  • To connect to a remote VM, use the virt-viewer command with the SSH protocol. For example, the following command connects as root to a VM called guest-name, located on remote system 10.0.0.1. The connection also requires root authentication for 10.0.0.1.

    # virt-viewer --direct --connect qemu+ssh://root@10.0.0.1/system guest-name
    root@10.0.0.1's password:

Verification

If the connection works correctly, the VM display is shown in the Virt Viewer window.

You can interact with the VM console using the mouse and keyboard in the same manner you interact with a real machine. The display in the VM console reflects the activities being performed on the VM.

Troubleshooting

  • If clicking in the graphical console does not have any effect, expand the console to full screen. This is a known issue with the mouse cursor offset.

2.4.3. Connecting to a virtual machine using SSH

To interact with the terminal of a virtual machine (VM) using the SSH connection protocol, follow the procedure below.

Prerequisites

  • You have network connection and root access privileges to the target VM.
  • If the target VM is located on a remote host, you also have connection and root access privileges to that host.
  • Your VM network assigns IP addresses by using the dnsmasq service generated by libvirt. This is the case, for example, in libvirt NAT networks.

    Notably, if your VM is using one of the following network configurations, you cannot connect to the VM using SSH:

    • hostdev interfaces
    • direct interfaces
    • bridge interfaces
  • The libvirt-nss component is installed and enabled on the VM’s host. If it is not, do the following:

    1. Install the libvirt-nss package:

      # yum install libvirt-nss
    2. Edit the /etc/nsswitch.conf file and add libvirt_guest to the hosts line:

      [...]
      passwd:      compat
      shadow:      compat
      group:       compat
      hosts:       files libvirt_guest dns
      [...]

Procedure

  1. When connecting to a remote VM, SSH into its physical host first. The following example demonstrates connecting to a host machine 10.0.0.1 using its root credentials:

    # ssh root@10.0.0.1
    root@10.0.0.1's password:
    Last login: Mon Sep 24 12:05:36 2021
    root~#
  2. Use the VM’s name and user access credentials to connect to it. For example, the following connects to the testguest1 VM using its root credentials:

    # ssh root@testguest1
    root@testguest1's password:
    Last login: Wed Sep 12 12:05:36 2018
    root~]#

Troubleshooting

  • If you do not know the VM’s name, you can list all VMs available on the host using the virsh list --all command:

    # virsh list --all
    Id    Name                           State
    ----------------------------------------------------
    2     testguest1                    running
    -     testguest2                    shut off

Additional resources

2.4.4. Opening a virtual machine serial console

Using the virsh console command, it is possible to connect to the serial console of a virtual machine (VM).

This is useful when the VM:

  • Does not provide VNC or SPICE protocols, and thus does not offer video display for GUI tools.
  • Does not have a network connection, and thus cannot be interacted with using SSH.

Prerequisites

  • The VM must have a serial console device configured, such as console type='pty'. To verify, do the following:

    # virsh dumpxml vm-name | grep console
    
    <console type='pty' tty='/dev/pts/2'>
    </console>
  • The VM must have the serial console configured in its kernel command line. To verify this, the cat /proc/cmdline command output on the VM should include console=ttyS0. For example:

    # cat /proc/cmdline
    BOOT_IMAGE=/vmlinuz-3.10.0-948.el7.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,9600n8 rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb

    If the serial console is not set up properly on a VM, using virsh console to connect to the VM connects you to an unresponsive guest console. However, you can still exit the unresponsive console by using the Ctrl+] shortcut.

    • To set up serial console on the VM, do the following:

      1. On the VM, enable the console=ttyS0 kernel option:

        # grubby --update-kernel=ALL --args="console=ttyS0"
      2. Clear the kernel options that might prevent your changes from taking effect.

        # grub2-editenv - unset kernelopts
      3. Reboot the VM.

Procedure

  1. On your host system, use the virsh console command. The following example connects to the guest1 VM, if the libvirt driver supports safe console handling:

    # virsh console guest1 --safe
    Connected to domain 'guest1'
    Escape character is ^]
    
    Subscription-name
    Kernel 3.10.0-948.el7.x86_64 on an x86_64
    
    localhost login:
  2. You can interact with the virsh console in the same way as with a standard command-line interface.

Additional resources

  • The virsh man page

2.4.5. Setting up easy access to remote virtualization hosts

When managing VMs on a remote host system using libvirt utilities, it is recommended to use the -c qemu+ssh://root@hostname/system syntax. For example, to use the virsh list command as root on the 10.0.0.1 host:

# virsh -c qemu+ssh://root@10.0.0.1/system list

root@10.0.0.1's password:

Id   Name              State
---------------------------------
1    remote-guest      running

However, for convenience, you can remove the need to specify the connection details in full by modifying your SSH and libvirt configuration. For example, you can then use:

# virsh -c remote-host list

root@10.0.0.1's password:

Id   Name              State
---------------------------------
1    remote-guest      running

To enable this improvement, follow the instructions below.

Procedure

  1. Edit or create the ~/.ssh/config file, and add the following to it, where host-alias is a shortened name associated with a specific remote host, and hosturl is the URL address of the host.

    Host host-alias
            User                    root
            Hostname                hosturl

    For example, the following sets up the tyrannosaurus alias for root@10.0.0.1:

    Host tyrannosaurus
            User                    root
            Hostname                10.0.0.1
  2. Edit or create the /etc/libvirt/libvirt.conf file, and add the following, where qemu-host-alias is a host alias that QEMU and libvirt utilities will associate with the intended host:

    uri_aliases = [
      "qemu-host-alias=qemu+ssh://host-alias/system",
    ]

    For example, the following uses the tyrannosaurus alias configured in the previous step to set up the t-rex alias, which stands for qemu+ssh://10.0.0.1/system:

    uri_aliases = [
      "t-rex=qemu+ssh://tyrannosaurus/system",
    ]

Verification

  1. Confirm that you can manage remote VMs by using libvirt-based utilities on the local system with an added -c qemu-host-alias parameter. This automatically performs the commands over SSH on the remote host.

    For example, verify that the following lists VMs on the 10.0.0.1 remote host, the connection to which was set up as t-rex in the previous steps:

    $ virsh -c t-rex list
    
    root@10.0.0.1's password:
    
    Id   Name              State
    ---------------------------------
    1    velociraptor      running
    Note

    In addition to virsh, the -c (or --connect) option and the remote host access configuration described above can also be used by other libvirt utilities, such as virt-install and virt-viewer.

Next steps

  • If you want to use libvirt utilities exclusively on a single remote host, you can also set a specific connection as the default target for libvirt-based utilities. To do so, edit the /etc/libvirt/libvirt.conf file and set the value of the uri_default parameter to qemu-host-alias. For example, the following uses the t-rex host alias set up in the previous steps as a default libvirt target.

    # These can be used in cases when no URI is supplied by the application
    # (@uri_default also prevents probing of the hypervisor driver).
    #
    uri_default = "t-rex"

    As a result, all libvirt-based commands will automatically be performed on the specified remote host.

    $ virsh list
    root@10.0.0.1's password:
    
    Id   Name              State
    ---------------------------------
    1    velociraptor      running

    However, this is not recommended if you also want to manage VMs on your local host or on different remote hosts.

  • When connecting to a remote host, you can avoid having to provide the root password to the remote system. To do so, use one of the available methods, such as key-based SSH authentication (see the sketch after this list).

  • The -c (or --connect) option can be used to run the virt-install, virt-viewer, virsh and virt-manager commands on a remote host.
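
The following is a minimal sketch of one such method, key-based SSH authentication. The host address 10.0.0.1 follows the earlier examples, and the key type shown is only one possible choice. After the public key is copied, libvirt commands over SSH no longer prompt for the root password:

    # ssh-keygen -t ed25519
    # ssh-copy-id root@10.0.0.1
    root@10.0.0.1's password:

    # virsh -c qemu+ssh://root@10.0.0.1/system list
    Id   Name              State
    ---------------------------------
    1    remote-guest      running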

2.5. Shutting down virtual machines

To shut down a running virtual machine hosted on RHEL 8, use the command line interface or the web console GUI.

2.5.1. Shutting down a virtual machine using the command-line interface

To shut down a responsive virtual machine (VM), do one of the following:

  • Use a shutdown command appropriate to the guest OS while connected to the guest.
  • Use the virsh shutdown command on the host:

    • If the VM is on a local host:

      # virsh shutdown demo-guest1
      Domain 'demo-guest1' is being shutdown
    • If the VM is on a remote host, in this example 10.0.0.1:

      # virsh -c qemu+ssh://root@10.0.0.1/system shutdown demo-guest1
      
      root@10.0.0.1's password:
      Domain 'demo-guest1' is being shutdown

To force a VM to shut down, for example if it has become unresponsive, use the virsh destroy command on the host:

# virsh destroy demo-guest1
Domain 'demo-guest1' destroyed
Note

The virsh destroy command does not actually delete or remove the VM configuration or disk images. It only terminates the running VM instance, similarly to pulling the power cord from a physical machine. As such, in rare cases, virsh destroy may cause corruption of the VM’s file system, so using this command is only recommended if all other shutdown methods have failed.

2.5.2. Shutting down and restarting virtual machines using the web console

Using the RHEL 8 web console, you can shut down or restart running virtual machines. You can also send a non-maskable interrupt to an unresponsive virtual machine.

2.5.2.1. Shutting down virtual machines in the web console

If a virtual machine (VM) is in the running state, you can shut it down using the RHEL 8 web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, find the row of the VM you want to shut down.
  2. On the right side of the row, click Shut Down.

    The VM shuts down.

Troubleshooting

  • If the VM does not shut down, click the Menu button next to the Shut Down button and select Force Shut Down.
  • To shut down an unresponsive VM, you can also send a non-maskable interrupt.

2.5.2.2. Restarting virtual machines using the web console

If a virtual machine (VM) is in the running state, you can restart it using the RHEL 8 web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, find the row of the VM you want to restart.
  2. On the right side of the row, click the Menu button .

    A drop-down menu of actions appears.

  3. In the drop-down menu, click Reboot.

    The VM shuts down and restarts.

Troubleshooting

  • If the VM does not restart, click the Menu button next to the Reboot button and select Force Reboot.
  • To shut down an unresponsive VM, you can also send a non-maskable interrupt.

2.5.2.3. Sending non-maskable interrupts to VMs using the web console

Sending a non-maskable interrupt (NMI) may cause an unresponsive running virtual machine (VM) to respond or shut down. For example, you can send the Ctrl+Alt+Del NMI to a VM that is not responding to standard input.

Prerequisites

Procedure

  1. In the Virtual Machines interface, find the row of the VM to which you want to send an NMI.
  2. On the right side of the row, click the Menu button .

    A drop-down menu of actions appears.

  3. In the drop-down menu, click Send non-maskable interrupt.

    An NMI is sent to the VM.

2.6. Deleting virtual machines

To delete virtual machines in RHEL 8, use the command line interface or the web console GUI.

2.6.1. Deleting virtual machines using the command line interface

To delete a virtual machine (VM), you can remove its XML configuration and associated storage files from the host using the command line. Follow the procedure below:

Prerequisites

  • Back up important data from the VM.
  • Shut down the VM.
  • Make sure no other VMs use the same associated storage.

Procedure

  • Use the virsh undefine utility.

    For example, the following command removes the guest1 VM, its associated storage volumes, and non-volatile RAM, if any.

    # virsh undefine guest1 --remove-all-storage --nvram
    Domain 'guest1' has been undefined
    Volume 'vda'(/home/images/guest1.qcow2) removed.

Additional resources

  • The virsh undefine --help command
  • The virsh man page

2.6.2. Deleting virtual machines using the web console

To delete a virtual machine (VM) and its associated storage files from the host to which the RHEL 8 web console is connected, follow the procedure below:

Prerequisites

  • The web console VM plug-in is installed on your system.
  • Back up important data from the VM.
  • Make sure no other VM uses the same associated storage.
  • Optional: Shut down the VM.

Procedure

  1. In the Virtual Machines interface, click the Menu button of the VM that you want to delete.

    A drop down menu appears with controls for various VM operations.

  2. Click Delete.

    A confirmation dialog appears.

    Image displaying the Confirm deletion of VM dialog box.
  3. Optional: To delete all or some of the storage files associated with the VM, select the checkboxes next to the storage files you want to delete.
  4. Click Delete.

    The VM and any selected storage files are deleted.

Chapter 3. Getting started with virtualization on IBM POWER

You can use KVM virtualization when using RHEL 8 on IBM POWER8 or POWER9 hardware. However, enabling the KVM hypervisor on your system requires extra steps compared to virtualization on AMD64 and Intel 64 architectures. Certain RHEL 8 virtualization features also have different or restricted functionality on IBM POWER.

Apart from the information in the following sections, using virtualization on IBM POWER works the same as on AMD64 and Intel 64. Therefore, when using virtualization on IBM POWER, you can refer to other RHEL 8 virtualization documentation for more information.

3.1. Enabling virtualization on IBM POWER

To set up a KVM hypervisor and create virtual machines (VMs) on an IBM POWER8 or IBM POWER9 system running RHEL 8, follow the instructions below.

Prerequisites

  • RHEL 8 is installed and registered on your host machine.
  • The following minimum system resources are available:

    • 6 GB free disk space for the host, plus another 6 GB for each intended VM.
    • 2 GB of RAM for the host, plus another 2 GB for each intended VM.
    • 4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive during high load.
  • Your CPU machine type must support IBM POWER virtualization.

    To verify this, query the platform information in your /proc/cpuinfo file.

    # grep ^platform /proc/cpuinfo
    platform        : PowerNV

    If the output of this command includes the PowerNV entry, you are running a PowerNV machine type and can use virtualization on IBM POWER.

Procedure

  1. Load the KVM-HV kernel module

    # modprobe kvm_hv
  2. Verify that the KVM kernel module is loaded

    # lsmod | grep kvm

    If KVM loaded successfully, the output of this command includes kvm_hv.

  3. Install the packages in the virtualization module:

    # yum module install virt
  4. Install the virt-install package:

    # yum install virt-install
  5. Start the libvirtd service.

    # systemctl start libvirtd

Verification

  1. Verify that your system is prepared to be a virtualization host:

    # virt-host-validate
    [...]
    QEMU: Checking if device /dev/vhost-net exists                : PASS
    QEMU: Checking if device /dev/net/tun exists                  : PASS
    QEMU: Checking for cgroup 'memory' controller support         : PASS
    QEMU: Checking for cgroup 'memory' controller mount-point     : PASS
    [...]
    QEMU: Checking for cgroup 'blkio' controller support          : PASS
    QEMU: Checking for cgroup 'blkio' controller mount-point      : PASS
    QEMU: Checking if IOMMU is enabled by kernel                  : PASS
  2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.

    If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.

    If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.

Troubleshooting

  • If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output:

    QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)

    However, VMs on such a host system will fail to boot, rather than have performance problems.

    To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments.

3.2. How virtualization on IBM POWER differs from AMD64 and Intel 64

KVM virtualization in RHEL 8 on IBM POWER systems is different from KVM on AMD64 and Intel 64 systems in a number of aspects, notably:

Memory requirements
VMs on IBM POWER consume more memory. Therefore, the recommended minimum memory allocation for a virtual machine (VM) on an IBM POWER host is 2 GB of RAM.
Display protocols

The SPICE protocol is not supported on IBM POWER systems. To display the graphical output of a VM, use the VNC protocol. In addition, only the following virtual graphics card devices are supported:

  • vga - only supported in -vga std mode and not in -vga cirrus mode.
  • virtio-vga
  • virtio-gpu
SMBIOS
SMBIOS configuration is not available.
Memory allocation errors

POWER8 VMs, including compatibility mode VMs, may fail with an error similar to:

qemu-kvm: Failed to allocate KVM HPT of order 33 (try smaller maxmem?): Cannot allocate memory

This is significantly more likely to occur on VMs that use RHEL 7.3 and prior as the guest OS.

To fix the problem, increase the CMA memory pool available for the guest’s hashed page table (HPT) by adding kvm_cma_resv_ratio=memory to the host’s kernel command line, where memory is the percentage of the host memory that should be reserved for the CMA pool (defaults to 5).
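
For example, the following command is one way to do this (a sketch assuming the grubby utility and a reservation of 7 percent of host memory):

# grubby --update-kernel=ALL --args="kvm_cma_resv_ratio=7"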

Huge pages

Transparent huge pages (THPs) do not provide any notable performance benefits on IBM POWER8 VMs. However, IBM POWER9 VMs can benefit from THPs as expected.

In addition, the sizes of static huge pages on IBM POWER8 systems are 16 MiB and 16 GiB, as opposed to 2 MiB and 1 GiB on AMD64, Intel 64, and IBM POWER9. As a consequence, to migrate a VM configured with static huge pages from an IBM POWER8 host to an IBM POWER9 host, you must first set up 1 GiB huge pages on the VM.

kvm-clock
The kvm-clock service does not have to be configured for time management in VMs on IBM POWER9.
pvpanic

IBM POWER9 systems do not support the pvpanic device. However, an equivalent functionality is available and activated by default on this architecture. To enable it in a VM, use the <on_crash> XML configuration element with the preserve value.

In addition, make sure to remove the <panic> element from the <devices> section, as its presence can lead to the VM failing to boot on IBM POWER systems.
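
For example, the relevant fragment of the VM's XML configuration might look as follows (a minimal sketch):

<on_crash>preserve</on_crash>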

Single-threaded host
On IBM POWER8 systems, the host machine must run in single-threaded mode to support VMs. This is automatically configured if the qemu-kvm packages are installed. However, VMs running on single-threaded hosts can still use multiple threads.
Peripheral devices

A number of peripheral devices supported on AMD64 and Intel 64 systems are not supported on IBM POWER systems, or a different device is supported as a replacement.

  • Devices used for PCI-E hierarchy, including ioh3420 and xio3130-downstream, are not supported. This functionality is replaced by multiple independent PCI root bridges provided by the spapr-pci-host-bridge device.
  • UHCI and EHCI PCI controllers are not supported. Use OHCI and XHCI controllers instead.
  • IDE devices, including the virtual IDE CD-ROM (ide-cd) and the virtual IDE disk (ide-hd), are not supported. Use the virtio-scsi and virtio-blk devices instead.
  • Emulated PCI NICs (rtl8139) are not supported. Use the virtio-net device instead.
  • Sound devices, including intel-hda, hda-output, and AC97, are not supported.
  • USB redirection devices, including usb-redir and usb-tablet, are not supported.
v2v and p2v
The virt-v2v and virt-p2v utilities are only supported on the AMD64 and Intel 64 architecture, and are not provided on IBM POWER.

Additional resources

Chapter 4. Getting started with virtualization on IBM Z

You can use KVM virtualization when using RHEL 8 on IBM Z hardware. However, enabling the KVM hypervisor on your system requires extra steps compared to virtualization on AMD64 and Intel 64 architectures. Certain RHEL 8 virtualization features also have different or restricted functionality on IBM Z.

Apart from the information in the following sections, using virtualization on IBM Z works the same as on AMD64 and Intel 64. Therefore, for further information, see the other RHEL 8 virtualization documentation when working with virtualization on IBM Z.

Note

Running KVM on the z/VM OS is not supported.

4.1. Enabling virtualization on IBM Z

To set up a KVM hypervisor and create virtual machines (VMs) on an IBM Z system running RHEL 8, follow the instructions below.

Prerequisites

  • RHEL 8.6 or later is installed and registered on your host machine.

    Important

    If you already enabled virtualization on an IBM Z machine using RHEL 8.5 or earlier, you should instead reconfigure your virtualization module and update your system. For instructions, see How virtualization on IBM Z differs from AMD64 and Intel 64.

  • The following minimum system resources are available:

    • 6 GB free disk space for the host, plus another 6 GB for each intended VM.
    • 2 GB of RAM for the host, plus another 2 GB for each intended VM.
    • 4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive during high load.
  • Your IBM Z host system is using a z13 CPU or later.
  • RHEL 8 is installed on a logical partition (LPAR). In addition, the LPAR supports the start-interpretive execution (SIE) virtualization functions.

    To verify this, search for sie in your /proc/cpuinfo file.

    # grep sie /proc/cpuinfo
    features        : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te sie

Procedure

  1. Load the KVM kernel module:

    # modprobe kvm
  2. Verify that the KVM kernel module is loaded:

    # lsmod | grep kvm

    If KVM loaded successfully, the output of this command includes kvm.

  3. Install the packages in the virt:rhel/common module:

    # yum module install virt:rhel/common
  4. Start the virtualization services:

    # for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done

Verification

  1. Verify that your system is prepared to be a virtualization host.

    # virt-host-validate
    [...]
    QEMU: Checking if device /dev/kvm is accessible             : PASS
    QEMU: Checking if device /dev/vhost-net exists              : PASS
    QEMU: Checking if device /dev/net/tun exists                : PASS
    QEMU: Checking for cgroup 'memory' controller support       : PASS
    QEMU: Checking for cgroup 'memory' controller mount-point   : PASS
    [...]
  2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.

    If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.

    If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.

Troubleshooting

  • If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output:

    QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)

    However, VMs on such a host system will fail to boot, rather than have performance problems.

    To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments.

4.2. Updating virtualization on IBM Z from RHEL 8.5 to RHEL 8.6 or later

If you installed RHEL 8 on IBM Z hardware prior to RHEL 8.6, you had to obtain virtualization RPMs from the AV stream, separate from the base RPM stream of RHEL 8. Starting with RHEL 8.6, virtualization RPMs previously available only from the AV stream are available on the base RHEL stream. In addition, the AV stream will be discontinued in a future minor release of RHEL 8. Therefore, using the AV stream is no longer recommended.

By following the instructions below, you will deactivate your AV stream and enable your access to virtualization RPMs available in RHEL 8.6 and later versions.

Prerequisites

  • You are using RHEL 8.5 on IBM Z, with the virt:av module installed. To confirm that this is the case:

    # hostnamectl | grep "Operating System"
    Operating System: Red Hat Enterprise Linux 8.5 (Ootpa)
    # yum module list --installed
    [...]
    Advanced Virtualization for RHEL 8 IBM Z Systems (RPMs)
    Name                Stream                  Profiles                  Summary
    virt                av [e]                common [i]                Virtualization module

Procedure

  1. Disable the virt:av module.

    # yum module disable virt:av
  2. Remove the pre-existing virtualization packages and modules from your system.

    # yum module reset virt -y
  3. Upgrade your packages to their latest RHEL versions.

    # yum update

    This also automatically enables the virt:rhel module on your system.

Verification

  • Ensure the virt module on your system is provided by the rhel stream.

    # yum module info virt
    
    Name             : virt
    Stream           : rhel [d][e][a]
    Version          : 8050020211203195115
    [...]

4.3. How virtualization on IBM Z differs from AMD64 and Intel 64

KVM virtualization in RHEL 8 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following:

PCI and USB devices

Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci.

Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.
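
As an illustration, the following interface definition is a minimal sketch that uses a hypothetical CCW device number; the virtio model combined with a ccw address results in a virtio-net-ccw device:

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x1111'/>
</interface>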

Supported guest operating system
Red Hat only supports VMs hosted on IBM Z if they use RHEL 7, 8, or 9 as their guest operating system.
Device boot order

IBM Z does not support the <boot dev='device'> XML configuration element. To define device boot order, use the <boot order='number'> element in the <devices> section of the XML.

In addition, you can select the required boot entry by using the architecture-specific loadparm attribute in the <boot> element. For example, the following configuration specifies that the disk is used first in the boot sequence, and if a Linux distribution is available on that disk, the second boot entry is selected:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
  <boot order='1' loadparm='2'/>
</disk>
Note

Using <boot order='number'> for boot order management is also preferred on AMD64 and Intel 64 hosts.

Memory hot plug
Adding memory to a running VM is not possible on IBM Z. Note that removing memory from a running VM (memory hot unplug) is also not possible on IBM Z, as well as on AMD64 and Intel 64.
NUMA topology
Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance using NUMA is not possible on these systems.
vfio-ap
VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture.
vfio-ccw
VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any other architecture.
SMBIOS
SMBIOS configuration is not available on IBM Z.
Watchdog devices

If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:

<devices>
  <watchdog model='diag288' action='poweroff'/>
</devices>
kvm-clock
The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z.
v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architecture, and are not provided on IBM Z.
Nested virtualization
Creating nested VMs requires different settings on IBM Z than on AMD64 and Intel 64. For details, see Creating nested virtual machines.
No graphical output in earlier releases
When using RHEL 8.3 or an earlier minor version on your host, displaying the VM graphical output is not possible when connecting to the VM using the VNC protocol. This is because the gnome-desktop utility was not supported in earlier RHEL versions on IBM Z. In addition, the SPICE display protocol does not work on IBM Z.
Migrations

To successfully migrate to a later host model (for example from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe.

If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:

  • Do not use CPU models that end with -base.
  • Do not use the qemu, max or host CPU model.

To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end.
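
For example, a migration-safe CPU definition in the VM's XML configuration can be as simple as:

<cpu mode='host-model' check='partial'/>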

PXE installation and booting

When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file. For example:

# pxelinux
default linux
label linux
kernel kernel.img
initrd initrd.img
append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/
Secure Execution
You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-pv"/> in the XML configuration of the VM. This encrypts the VM’s memory to protect it from unwanted access by the hypervisor.
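
For example, the relevant part of the VM's XML configuration is a single element (shown here as a minimal sketch):

<launchSecurity type="s390-pv"/>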

Note that the following features are not supported when running a VM in secure execution mode:

  • Device passthrough using vfio
  • Obtaining memory information using virsh domstats and virsh memstat
  • The memballoon and virtio-rng virtual devices
  • Memory backing using huge pages
  • Live and non-live VM migrations
  • Saving and restoring VMs
  • VM snapshots, including memory snapshots (using the --memspec option)
  • Full memory dumps. Instead, specify the --memory-only option for the virsh dump command.
  • 248 or more vCPUs. The vCPU limit for secure guests is 247.
  • Nested virtualization

4.4. Next steps

  • When setting up a VM on an IBM Z system, it is recommended to protect the guest OS from the "Spectre" vulnerability. To do so, use the virsh edit command to modify the VM’s XML configuration and configure its CPU in one of the following ways:

    • Use the host CPU model:

      <cpu mode='host-model' check='partial'>
        <model fallback='allow'/>
      </cpu>

      This makes the ppa15 and bpb features available to the guest if the host supports them.

    • If using a specific host model, add the ppa15 and bpb features. The following example uses the zEC12 CPU model:

      <cpu mode='custom' match='exact' check='partial'>
          <model fallback='allow'>zEC12</model>
          <feature policy='force' name='ppa15'/>
          <feature policy='force' name='bpb'/>
      </cpu>

      Note that when using the ppa15 feature with the z114 and z196 CPU models on a host machine that uses a z12 CPU, you also need to use the latest microcode level (bundle 95 or later).

Chapter 5. Managing virtual machines in the web console

To manage virtual machines in a graphical interface on a RHEL 8 host, you can use the Virtual Machines pane in the RHEL 8 web console.

Image displaying the virtual machine tab of the web console.

5.1. Overview of virtual machine management using the web console

The RHEL 8 web console is a web-based interface for system administration. As one of its features, the web console provides a graphical view of virtual machines (VMs) on the host system, and makes it possible to create, access, and configure these VMs.

Note that to use the web console to manage your VMs on RHEL 8, you must first install a web console plug-in for virtualization.

Next steps

5.2. Setting up the web console to manage virtual machines

Before using the RHEL 8 web console to manage virtual machines (VMs), you must install the web console virtual machine plug-in on the host.

Prerequisites

  • Ensure that the web console is installed and enabled on your machine.

    # systemctl status cockpit.socket
    cockpit.socket - Cockpit Web Service Socket
    Loaded: loaded (/usr/lib/systemd/system/cockpit.socket
    [...]

    If this command returns Unit cockpit.socket could not be found, follow the Installing the web console document to enable the web console.
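
    In that case, a typical way to install and enable the web console looks like this (a sketch; package and unit names as used in RHEL 8):

    # yum install cockpit
    # systemctl enable --now cockpit.socket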

Procedure

  • Install the cockpit-machines plug-in.

    # yum install cockpit-machines

Verification

  1. Access the web console, for example by entering the https://localhost:9090 address in your browser.
  2. Log in.
  3. If the installation was successful, Virtual Machines appears in the web console side menu.

    Image displaying the virtual machine tab of the web console.

5.3. Renaming virtual machines using the web console

After creating a virtual machine (VM), you might want to rename it to avoid conflicts or to assign a new unique name based on your use case. You can use the RHEL web console to rename the VM.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the Menu button of the VM that you want to rename.

    A drop-down menu appears with controls for various VM operations.

  2. Click Rename.

    The Rename a VM dialog appears.

    Image displaying the rename a VM dialog box.
  3. In the New name field, enter a name for the VM.
  4. Click Rename.

Verification

  • The new VM name should appear in the Virtual Machines interface.
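
Alternatively, if you prefer the command line, the virsh domrename utility provides equivalent functionality; note that the VM must be shut off first. For example, assuming a hypothetical VM named testguest1:

# virsh domrename testguest1 testguest1-new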

5.4. Virtual machine management features available in the web console

Using the RHEL 8 web console, you can perform the following actions to manage the virtual machines (VMs) on your system.

Table 5.1. VM management tasks that you can perform in the RHEL 8 web console

Task                                                       For details, see

Create a VM and install it with a guest operating system

Creating virtual machines and installing guest operating systems using the web console

Delete a VM

Deleting virtual machines using the web console

Start, shut down, and restart the VM

Starting virtual machines using the web console and Shutting down and restarting virtual machines using the web console

Connect to and interact with a VM using a variety of consoles

Interacting with virtual machines using the web console

View a variety of information about the VM

Viewing virtual machine information using the web console

Adjust the host memory allocated to a VM

Adding and removing virtual machine memory using the web console

Manage network connections for the VM

Using the web console for managing virtual machine network interfaces

Manage the VM storage available on the host and attach virtual disks to the VM

Managing storage for virtual machines using the web console

Configure the virtual CPU settings of the VM

Managing virtual CPUs using the web console

Live migrate a VM

Live migrating a virtual machine using the web console

Manage host devices

Managing host devices using the web console

Manage virtual optical drives

Managing virtual optical drives

Attach watchdog device

Attaching a watchdog device to a virtual machine using the web console

5.5. Differences between virtualization features in Virtual Machine Manager and the web console

The Virtual Machine Manager (virt-manager) application is supported in RHEL 8, but has been deprecated. The web console is intended to become its replacement in a subsequent major release. It is, therefore, recommended that you become familiar with the web console for managing virtualization in a GUI.

However, in RHEL 8, some VM management tasks can only be performed in virt-manager or the command line. The following table highlights the features that are available in virt-manager but not available in the RHEL 8.0 web console.

If a feature is available in a later minor version of RHEL 8, the minimum RHEL 8 version appears in the Support in web console introduced column.

Table 5.2. VM management tasks that cannot be performed using the web console in RHEL 8.0

Task                                   Support in web console introduced                  Alternative method using CLI

Setting a virtual machine to start when the host boots

RHEL 8.1

virsh autostart

Suspending a virtual machine

RHEL 8.1

virsh suspend

Resuming a suspended virtual machine

RHEL 8.1

virsh resume

Creating file-system directory storage pools

RHEL 8.1

virsh pool-define-as

Creating NFS storage pools

RHEL 8.1

virsh pool-define-as

Creating physical disk device storage pools

RHEL 8.1

virsh pool-define-as

Creating LVM volume group storage pools

RHEL 8.1

virsh pool-define-as

Creating partition-based storage pools

CURRENTLY UNAVAILABLE

virsh pool-define-as

Creating GlusterFS-based storage pools

CURRENTLY UNAVAILABLE

virsh pool-define-as

Creating vHBA-based storage pools with SCSI devices

CURRENTLY UNAVAILABLE

virsh pool-define-as

Creating Multipath-based storage pools

CURRENTLY UNAVAILABLE

virsh pool-define-as

Creating RBD-based storage pools

CURRENTLY UNAVAILABLE

virsh pool-define-as

Creating a new storage volume

RHEL 8.1

virsh vol-create

Adding a new virtual network

RHEL 8.1

virsh net-create or virsh net-define

Deleting a virtual network

RHEL 8.1

virsh net-undefine

Creating a bridge from a host machine’s interface to a virtual machine

CURRENTLY UNAVAILABLE

virsh iface-bridge

Creating a snapshot

CURRENTLY UNAVAILABLE

virsh snapshot-create-as

Reverting to a snapshot

CURRENTLY UNAVAILABLE

virsh snapshot-revert

Deleting a snapshot

CURRENTLY UNAVAILABLE

virsh snapshot-delete

Cloning a virtual machine

RHEL 8.4

virt-clone

Migrating a virtual machine to another host machine

RHEL 8.5

virsh migrate

Attaching a host device to a VM

RHEL 8.5

virt-xml --add-device

Removing a host device from a VM

RHEL 8.5

virt-xml --remove-device
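
For illustration, the following commands show how some of these CLI alternatives are typically invoked, assuming a hypothetical VM named testguest1:

# virsh autostart testguest1
# virsh suspend testguest1
# virsh resume testguest1
# virsh snapshot-create-as testguest1 snapshot1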

Chapter 6. Viewing information about virtual machines

When you need to adjust or troubleshoot any aspect of your virtualization deployment on RHEL 8, the first step is usually to view information about the current state and configuration of your virtual machines. To do so, you can use the command-line interface or the web console. You can also view the information in the VM’s XML configuration.

6.1. Viewing virtual machine information using the command-line interface

To retrieve information about virtual machines (VMs) on your host and their configurations, use one or more of the following commands.

Procedure

  • To obtain a list of VMs on your host:

    # virsh list --all
    Id   Name              State
    ----------------------------------
    1    testguest1             running
    -    testguest2             shut off
    -    testguest3             shut off
    -    testguest4             shut off
  • To obtain basic information about a specific VM:

    # virsh dominfo testguest1
    Id:             1
    Name:           testguest1
    UUID:           a973666f-2f6e-415a-8949-75a7a98569e1
    OS Type:        hvm
    State:          running
    CPU(s):         2
    CPU time:       188.3s
    Max memory:     4194304 KiB
    Used memory:    4194304 KiB
    Persistent:     yes
    Autostart:      disable
    Managed save:   no
    Security model: selinux
    Security DOI:   0
    Security label: system_u:system_r:svirt_t:s0:c486,c538 (enforcing)
  • To obtain the complete XML configuration of a specific VM:

    # virsh dumpxml testguest2
    
    <domain type='kvm' id='1'>
      <name>testguest2</name>
      <uuid>a973434f-2f6e-415a-8949-76a7a98569e1</uuid>
      <metadata>
    [...]
  • For information about a VM’s disks and other block devices:

    # virsh domblklist testguest3
     Target   Source
    ---------------------------------------------------------------
     vda      /var/lib/libvirt/images/testguest3.qcow2
     sda      -
     sdb      /home/username/Downloads/virt-p2v-1.36.10-1.el7.iso

    For instructions on managing a VM’s storage, see Managing storage for virtual machines.

  • To obtain information about a VM’s file systems and their mountpoints:

    # virsh domfsinfo testguest3
    Mountpoint   Name   Type   Target
    ------------------------------------
     /            dm-0   xfs
     /boot        vda1   xfs
  • To obtain more details about the vCPUs of a specific VM:

    # virsh vcpuinfo testguest4
    VCPU:           0
    CPU:            3
    State:          running
    CPU time:       103.1s
    CPU Affinity:   yyyy
    
    VCPU:           1
    CPU:            0
    State:          running
    CPU time:       88.6s
    CPU Affinity:   yyyy

    To configure and optimize the vCPUs in your VM, see Optimizing virtual machine CPU performance.

  • To list all virtual network interfaces on your host:

    # virsh net-list --all
     Name       State    Autostart   Persistent
    ---------------------------------------------
     default    active   yes         yes
     labnet     active   yes         yes

    For information about a specific interface:

    # virsh net-info default
    Name:           default
    UUID:           c699f9f6-9202-4ca8-91d0-6b8cb9024116
    Active:         yes
    Persistent:     yes
    Autostart:      yes
    Bridge:         virbr0

    For details about network interfaces, VM networks, and instructions for configuring them, see Configuring virtual machine network connections.

6.2. Viewing virtual machine information using the web console

Using the RHEL 8 web console, you can view information about all VMs and storage pools the web console session can access.

You can view information about a selected VM to which the web console session is connected. This includes information about its disks, virtual network interfaces, and resource usage.

6.2.1. Viewing a virtualization overview in the web console

Using the web console, you can access a virtualization overview that contains summarized information about available virtual machines (VMs), storage pools, and networks.

Prerequisites

Procedure

  • Click Virtual Machines in the web console’s side menu.

    A dialog box appears with information about the available storage pools, available networks, and the VMs to which the web console is connected.

    Image displaying the virtual machine tab of the web console.

The information includes the following:

  • Storage Pools - The number of storage pools, active or inactive, that can be accessed by the web console and their state.
  • Networks - The number of networks, active or inactive, that can be accessed by the web console and their state.
  • Name - The name of the VM.
  • Connection - The type of libvirt connection, system or session.
  • State - The state of the VM.

6.2.2. Viewing storage pool information using the web console

Using the web console, you can view detailed information about storage pools available on your system. Storage pools can be used to create disk images for your virtual machines.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines interface.

    The Storage pools window appears, showing a list of configured storage pools.

    Image displaying the storage pool tab of the web console with information about existing storage pools.

    The information includes the following:

    • Name - The name of the storage pool.
    • Size - The current allocation and the total capacity of the storage pool.
    • Connection - The connection used to access the storage pool.
    • State - The state of the storage pool.
  2. Click the arrow next to the storage pool whose information you want to see.

    The row expands to reveal the Overview pane with detailed information about the selected storage pool.

    Image displaying the detailed information about the selected storage pool.

    The information includes:

    • Target path - The source for the types of storage pools backed by directories, such as dir or netfs.
    • Persistent - Indicates whether or not the storage pool has a persistent configuration.
    • Autostart - Indicates whether or not the storage pool starts automatically when the system boots up.
    • Type - The type of the storage pool.
  3. To view a list of storage volumes associated with the storage pool, click Storage Volumes.

    The Storage Volumes pane appears, showing a list of configured storage volumes.

    Image displaying the list of storage volumes associated with the selected storage pool.

    The information includes:

    • Name - The name of the storage volume.
    • Used by - The VM that is currently using the storage volume.
    • Size - The size of the volume.

6.2.3. Viewing basic virtual machine information in the web console

Using the web console, you can view basic information, such as assigned resources or hypervisor details, about a selected virtual machine (VM).

Prerequisites

Procedure

  1. Click Virtual Machines in the web console side menu.
  2. Click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

    Image displaying the interface of the selected virtual machine.

    The Overview section includes the following general VM details:

    • State - The VM state, Running or Shut off.
    • Memory - The amount of memory assigned to the VM.
    • vCPUs - The number of virtual CPUs configured for the VM.
    • CPU Type - The architecture of the virtual CPUs configured for the VM.
    • Boot Order - The boot order configured for the VM.
    • Autostart - Whether or not autostart is enabled for the VM.

    The information also includes the following hypervisor details:

    • Emulated Machine - The machine type emulated by the VM.
    • Firmware - The firmware of the VM.

6.2.4. Viewing virtual machine resource usage in the web console

Using the web console, you can view memory and virtual CPU usage of a selected virtual machine (VM).

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Usage.

    The Usage section displays information about the memory and virtual CPU usage of the VM.

    Image displaying the memory and CPU usage of the selected VM.

6.2.5. Viewing virtual machine disk information in the web console

Using the web console, you can view detailed information about disks assigned to a selected virtual machine (VM).

Prerequisites

Procedure

  1. Click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM as well as options to Add, Remove, or Edit disks.

    Image displaying the disk usage of the selected VM.

The information includes the following:

  • Device - The device type of the disk.
  • Used - The amount of disk currently allocated.
  • Capacity - The maximum size of the storage volume.
  • Bus - The type of disk device that is emulated.
  • Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the access to Writeable and shared.
  • Source - The disk device or file.

6.2.6. Viewing and editing virtual network interface information in the web console

Using the RHEL 8 web console, you can view and modify the virtual network interfaces on a selected virtual machine (VM):

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interfaces configured for the VM as well as options to Add, Delete, Edit, or Unplug network interfaces.

    Image displaying the network interface details of the selected virtual machine.

    The information includes the following:

    • Type - The type of network interface for the VM. The types include virtual network, bridge to LAN, and direct attachment.

      Note

      Generic Ethernet connection is not supported in RHEL 8 and later.

    • Model type - The model of the virtual network interface.
    • MAC Address - The MAC address of the virtual network interface.
    • IP Address - The IP address of the virtual network interface.
    • Source - The source of the network interface. This is dependent on the network type.
    • State - The state of the virtual network interface.
  3. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings dialog opens.

    Image displaying the various options that can be edited for the selected network interface.
  4. Change the interface type, source, model, or MAC address.
  5. Click Save. The network interface is modified.

    Note

    Changes to the virtual network interface settings take effect only after restarting the VM.

    Additionally, the MAC address can be modified only when the VM is shut off.

6.3. Sample virtual machine XML configuration

The XML configuration of a VM, also referred to as a domain XML, determines the VM’s settings and components. The following table shows sections of a sample XML configuration of a virtual machine (VM) and explains the contents.

To obtain the XML configuration of a VM, you can use the virsh dumpxml command followed by the VM’s name.

# virsh dumpxml testguest1

Table 6.1. Sample XML configuration

Domain XML Section                                   Description
<domain type='kvm'>
 <name>Testguest1</name>
 <uuid>ec6fbaa1-3eb4-49da-bf61-bb02fbec4967</uuid>
 <memory unit='KiB'>1048576</memory>
 <currentMemory unit='KiB'>1048576</currentMemory>

This is a KVM virtual machine called Testguest1, with 1024 MiB allocated RAM.

 <vcpu placement='static'>1</vcpu>

The VM is allocated with a single virtual CPU (vCPU).

For information about configuring vCPUs, see Optimizing virtual machine CPU performance.

 <os>
  <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
  <boot dev='hd'/>
 </os>

The machine architecture is set to the AMD64 and Intel 64 architecture, and uses the Intel Q35 machine type to determine feature compatibility. The OS is set to be booted from the hard disk drive.

For information about creating a VM with an installed OS, see Creating virtual machines and installing guest operating systems using the web console.

 <features>
  <acpi/>
  <apic/>
 </features>

The acpi and apic hypervisor features are enabled.

 <cpu mode='host-model' check='partial'/>

The host CPU definitions from capabilities XML (obtainable with virsh capabilities) are automatically copied into the VM’s XML configuration. Therefore, when the VM is booted, libvirt picks a CPU model that is similar to the host CPU, and then adds extra features to approximate the host model as closely as possible.

 <clock offset='utc'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='no'/>
 </clock>

The VM’s virtual hardware clock uses the UTC time zone. In addition, three different timers are set up for synchronization with the QEMU hypervisor.

 <on_poweroff>destroy</on_poweroff>
 <on_reboot>restart</on_reboot>
 <on_crash>destroy</on_crash>

When the VM powers off, or its OS terminates unexpectedly, libvirt terminates the VM and releases all its allocated resources. When the VM is rebooted, libvirt restarts it with the same configuration.

 <pm>
  <suspend-to-mem enabled='no'/>
  <suspend-to-disk enabled='no'/>
 </pm>

The S3 and S4 ACPI sleep states are disabled for this VM.

 <devices>
  <emulator>/usr/bin/qemu-kvm</emulator>
  <disk type='file' device='disk'>
   <driver name='qemu' type='qcow2'/>
   <source file='/var/lib/libvirt/images/Testguest.qcow2'/>
   <target dev='hda' bus='ide'/>
  </disk>
  <disk type='file' device='cdrom'>
   <driver name='qemu' type='raw'/>
   <target dev='hdb' bus='ide'/>
   <readonly/>
  </disk>

The VM uses the /usr/bin/qemu-kvm binary file for emulation and it has two disk devices attached.

The first disk is a virtualized hard-drive based on the /var/lib/libvirt/images/Testguest.qcow2 stored on the host, and its logical device name is set to hda.

The second disk is a virtualized CD-ROM and its logical device name is set to hdb.

  <controller type='usb' index='0' model='qemu-xhci' ports='15'/>
  <controller type='sata' index='0'/>
  <controller type='pci' index='0' model='pcie-root'/>
  <controller type='pci' index='1' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='1' port='0x10'/>
  </controller>
  <controller type='pci' index='2' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='2' port='0x11'/>
  </controller>
  <controller type='pci' index='3' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='3' port='0x12'/>
  </controller>
  <controller type='pci' index='4' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='4' port='0x13'/>
  </controller>
  <controller type='pci' index='5' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='5' port='0x14'/>
  </controller>
  <controller type='pci' index='6' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='6' port='0x15'/>
  </controller>
  <controller type='pci' index='7' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='7' port='0x16'/>
  </controller>
  <controller type='virtio-serial' index='0'/>

The VM uses a single controller for attaching USB devices, and a root controller for PCI-Express (PCIe) devices. In addition, a virtio-serial controller is available, which enables the VM to interact with the host in a variety of ways, such as the serial console.

For more information about virtual devices, see Types of virtual devices.

 <interface type='network'>
  <mac address='52:54:00:65:29:21'/>
  <source network='default'/>
  <model type='rtl8139'/>
 </interface>

A network interface is set up in the VM that uses the default virtual network and the rtl8139 network device model.

For information about configuring the network interface, see Optimizing virtual machine network performance.

  <serial type='pty'>
   <target type='isa-serial' port='0'>
    <model name='isa-serial'/>
   </target>
  </serial>
  <console type='pty'>
   <target type='serial' port='0'/>
  </console>
  <channel type='unix'>
   <target type='virtio' name='org.qemu.guest_agent.0'/>
   <address type='virtio-serial' controller='0' bus='0' port='1'/>
  </channel>
  <channel type='spicevmc'>
   <target type='virtio' name='com.redhat.spice.0'/>
    <address type='virtio-serial' controller='0' bus='0' port='2'/>
  </channel>

A pty serial console is set up on the VM, which enables rudimentary VM communication with the host. The console uses the UNIX channel on port 1, and the paravirtualized SPICE on port 2. This is set up automatically and changing these settings is not recommended.

For more information about interacting with VMs, see Interacting with virtual machines using the web console.

  <input type='tablet' bus='usb'>
   <address type='usb' bus='0' port='1'/>
  </input>
  <input type='mouse' bus='ps2'/>
  <input type='keyboard' bus='ps2'/>

The VM uses a virtual usb port, which is set up to receive tablet input, and a virtual ps2 port set up to receive mouse and keyboard input. This is set up automatically and changing these settings is not recommended.

  <graphics type='spice' autoport='yes' listen='127.0.0.1'>
   <listen type='address' address='127.0.0.1'/>
   <image compression='off'/>
  </graphics>
  <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
   <listen type='address' address='127.0.0.1'/>
  </graphics>

The VM uses the VNC and SPICE protocols for rendering its graphical output, and image compression is turned off.

  <sound model='ich6'>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  </sound>
  <video>
   <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </video>

An ICH6 HDA sound device is set up for the VM, and the QEMU QXL paravirtualized framebuffer device is set up as the video accelerator. This is set up automatically and changing these settings is not recommended.

  <redirdev bus='usb' type='spicevmc'>
   <address type='usb' bus='0' port='1'/>
  </redirdev>
  <redirdev bus='usb' type='spicevmc'>
   <address type='usb' bus='0' port='2'/>
  </redirdev>
  <memballoon model='virtio'>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  </memballoon>
 </devices>
</domain>

The VM has two re-directors for attaching USB devices remotely, and memory ballooning is turned on. This is set up automatically and changing these settings is not recommended.

Chapter 7. Saving and restoring virtual machines

To free up system resources, you can shut down a virtual machine (VM) running on that system. However, when you require the VM again, you must boot up the guest operating system (OS) and restart the applications, which may take a considerable amount of time. To reduce this downtime and enable the VM workload to start running sooner, you can use the save and restore feature to avoid the OS shutdown and boot sequence entirely.

This section provides information about saving VMs, as well as about restoring them to the same state without a full VM boot-up.

7.1. How saving and restoring virtual machines works

Saving a virtual machine (VM) saves its memory and device state to the host’s disk, and immediately stops the VM process. You can save a VM that is either in a running or paused state, and upon restoring, the VM will return to that state.

This process frees up RAM and CPU resources on the host system in exchange for disk space, which may improve the host system performance. When the VM is restored, because the guest OS does not need to be booted, the long boot-up period is avoided as well.

To save a VM, you can use the command-line interface (CLI). For instructions, see Saving virtual machines using the command line interface.

To restore a VM you can use the CLI or the web console GUI.

7.2. Saving a virtual machine using the command line interface

You can save a virtual machine (VM) and its current state to the host’s disk. This is useful, for example, when you need to use the host’s resources for some other purpose. The saved VM can then be quickly restored to its previous running state.

To save a VM using the command line, follow the procedure below.

Prerequisites

  • Ensure you have sufficient disk space to save the VM and its configuration. Note that the space occupied by the VM depends on the amount of RAM allocated to that VM.
  • Ensure the VM is persistent. A way to check this is shown after this list.
  • Optional: Back up important data from the VM if required.
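
For example, to confirm that a hypothetical VM named demo-guest1 is persistent, you can check the virsh dominfo output:

# virsh dominfo demo-guest1 | grep Persistent
Persistent:     yes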

Procedure

  • Use the virsh managedsave utility.

    For example, the following command stops the demo-guest1 VM and saves its configuration.

    # virsh managedsave demo-guest1
    Domain 'demo-guest1' saved by libvirt

    The saved VM file is located by default in the /var/lib/libvirt/qemu/save directory as demo-guest1.save.

    The next time the VM is started, it will automatically restore the saved state from the above file.

Verification

  • List the VMs that have managed save enabled. In the following example, the VMs listed as saved have their managed save enabled.

    # virsh list --managed-save --all
    Id    Name                           State
    ----------------------------------------------------
    -     demo-guest1                    saved
    -     demo-guest2                    shut off

    To list the VMs that have a managed save image:

    # virsh list --with-managed-save --all
    Id    Name                           State
    ----------------------------------------------------
    -     demo-guest1                    shut off

    Note that to list the saved VMs that are in a shut off state, you must use the --all or --inactive options with the command.

Troubleshooting

  • If the saved VM file becomes corrupted or unreadable, restoring the VM will initiate a standard VM boot instead.
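
    In addition, if you want to discard a saved state so that the next start of the VM performs a full boot, you can remove the managed save image. For example, for a hypothetical demo-guest1 VM:

    # virsh managedsave-remove demo-guest1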

7.3. Starting a virtual machine using the command-line interface

You can use the command line interface (CLI) to start a shut-down virtual machine (VM) or restore a saved VM. Using the CLI, you can start both local and remote VMs.

Prerequisites

  • An inactive VM that is already defined.
  • The name of the VM.
  • For remote VMs:

    • The IP address of the host where the VM is located.
    • Root access privileges to the host.

Procedure

  • For a local VM, use the virsh start utility.

    For example, the following command starts the demo-guest1 VM.

    # virsh start demo-guest1
    Domain 'demo-guest1' started
  • For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH connection to the host.

    For example, the following command starts the demo-guest1 VM on the 192.168.123.123 host.

    # virsh -c qemu+ssh://root@192.168.123.123/system start demo-guest1
    
    root@192.168.123.123's password:
    
    Domain 'demo-guest1' started

7.4. Starting virtual machines using the web console

If a virtual machine (VM) is in the shut off state, you can start it using the RHEL 8 web console. You can also configure the VM to be started automatically when the host starts.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM you want to start.

    A new page opens with detailed information about the selected VM and controls for shutting down and deleting the VM.

  2. Click Run.

    The VM starts, and you can connect to its console or graphical output.

  3. Optional: To configure the VM to start automatically when the host starts, toggle the Autostart checkbox in the Overview section.

    If you use network interfaces that are not managed by libvirt, you must also make additional changes to the systemd configuration. Otherwise, the affected VMs might fail to start. For details, see Starting virtual machines automatically when the host starts.

Chapter 8. Cloning virtual machines

To quickly create a new virtual machine (VM) with a specific set of properties, you can clone an existing VM.

Cloning creates a new VM that uses its own disk image for storage, but most of the clone’s configuration and stored data is identical to the source VM. This makes it possible to prepare multiple VMs optimized for a certain task without the need to optimize each VM individually.

8.1. How cloning virtual machines works

Cloning a virtual machine (VM) copies the XML configuration of the source VM and its disk images, and makes adjustments to the configurations to ensure the uniqueness of the new VM. This includes changing the name of the VM and ensuring it uses the disk image clones. Nevertheless, the data stored on the clone’s virtual disks is identical to the source VM.

This process is faster than creating a new VM and installing it with a guest operating system, and can be used to rapidly generate VMs with a specific configuration and content.

If you are planning to create multiple clones of a VM, first create a VM template that does not contain:

  • Unique settings, such as persistent network MAC configuration, which can prevent the clones from working correctly.
  • Sensitive data, such as SSH keys and password files.

For instructions, see Creating virtual machine templates.

8.2. Creating virtual machine templates

To create multiple virtual machine (VM) clones that work correctly, you can remove information and configurations that are unique to a source VM, such as SSH keys or persistent network MAC configuration. This creates a VM template, which you can use to easily and safely create VM clones.

You can create VM templates using the virt-sysprep utility or you can create them manually based on your requirements.

8.2.1. Creating a virtual machine template using virt-sysprep

To create a cloning template from an existing virtual machine (VM), you can use the virt-sysprep utility. This removes certain configurations that might cause the clone to work incorrectly, such as specific network settings or system registration metadata. As a result, virt-sysprep makes creating clones of the VM more efficient, and ensures that the clones work more reliably.

Prerequisites

  • The libguestfs-tools-c package, which contains the virt-sysprep utility, is installed on your host:

    # yum install libguestfs-tools-c
  • The source VM intended as a template is shut down.
  • You know where the disk image for the source VM is located, and you are the owner of the VM’s disk image file.

    Note that disk images for VMs created in the system connection of libvirt are located in the /var/lib/libvirt/images directory and owned by the root user by default:

    # ls -la /var/lib/libvirt/images
    -rw-------.  1 root root  9665380352 Jul 23 14:50 a-really-important-vm.qcow2
    -rw-------.  1 root root  8591507456 Jul 26  2017 an-actual-vm-that-i-use.qcow2
    -rw-------.  1 root root  8591507456 Jul 26  2017 totally-not-a-fake-vm.qcow2
    -rw-------.  1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2
  • Optional: Any important data on the source VM’s disk has been backed up. If you want to preserve the source VM intact, clone it first and turn the clone into a template.

Procedure

  1. Ensure you are logged in as the owner of the VM’s disk image:

    # whoami
    root
  2. Optional: Copy the disk image of the VM.

    # cp /var/lib/libvirt/images/a-really-important-vm.qcow2 /var/lib/libvirt/images/a-really-important-vm-original.qcow2

    This is used later to verify that the VM was successfully turned into a template.

  3. Use the following command, and replace /var/lib/libvirt/images/a-really-important-vm.qcow2 with the path to the disk image of the source VM.

    # virt-sysprep -a /var/lib/libvirt/images/a-really-important-vm.qcow2
    [   0.0] Examining the guest ...
    [   7.3] Performing "abrt-data" ...
    [   7.3] Performing "backup-files" ...
    [   9.6] Performing "bash-history" ...
    [   9.6] Performing "blkid-tab" ...
    [...]
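
    Optionally, to review which cleanup operations virt-sysprep performs before running it, you can list them. For example:

    # virt-sysprep --list-operations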

Verification

  • To confirm that the process was successful, compare the modified disk image to the original one. The following example shows a successful creation of a template:

    # virt-diff -a /var/lib/libvirt/images/a-really-important-vm-original.qcow2 -A /var/lib/libvirt/images/a-really-important-vm.qcow2
    - - 0644       1001 /etc/group-
    - - 0000        797 /etc/gshadow-
    = - 0444         33 /etc/machine-id
    [...]
    - - 0600        409 /home/username/.bash_history
    - d 0700          6 /home/username/.ssh
    - - 0600        868 /root/.bash_history
    [...]

Additional resources

8.2.2. Creating a virtual machine template manually

To create a template from an existing virtual machine (VM), you can manually reset or unconfigure a guest VM to prepare it for cloning.

Prerequisites

  • Ensure that you know the location of the disk image for the source VM and are the owner of the VM’s disk image file.

    Note that disk images for VMs created in the system connection of libvirt are by default located in the /var/lib/libvirt/images directory and owned by the root user:

    # ls -la /var/lib/libvirt/images
    -rw-------.  1 root root  9665380352 Jul 23 14:50 a-really-important-vm.qcow2
    -rw-------.  1 root root  8591507456 Jul 26  2017 an-actual-vm-that-i-use.qcow2
    -rw-------.  1 root root  8591507456 Jul 26  2017 totally-not-a-fake-vm.qcow2
    -rw-------.  1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2
  • Ensure that the VM is shut down.
  • Optional: Any important data on the VM’s disk has been backed up. If you want to preserve the source VM intact, clone it first and edit the clone to create a template.

Procedure

  1. Configure the VM for cloning:

    1. Install any software needed on the clone.
    2. Configure any non-unique settings for the operating system.
    3. Configure any non-unique application settings.
  2. Remove the network configuration:

    1. Remove any persistent udev rules using the following command:

      # rm -f /etc/udev/rules.d/70-persistent-net.rules
      Note

      If udev rules are not removed, the name of the first NIC might be eth1 instead of eth0.

    2. Remove unique network details from ifcfg scripts by editing /etc/sysconfig/network-scripts/ifcfg-eth[x] as follows:

      1. Remove the HWADDR and Static lines:

        Note

        If the HWADDR does not match the new guest’s MAC address, the ifcfg will be ignored.

        DEVICE=eth[x]
        BOOTPROTO=none
        ONBOOT=yes
        #NETWORK=10.0.1.0 <- REMOVE
        #NETMASK=255.255.255.0 <- REMOVE
        #IPADDR=10.0.1.20 <- REMOVE
        #HWADDR=xx:xx:xx:xx:xx <- REMOVE
        #USERCTL=no <- REMOVE
        # Remove any other unique or non-desired settings, such as UUID.
      2. Configure DHCP but do not include HWADDR or any other unique information:

        DEVICE=eth[x]
        BOOTPROTO=dhcp
        ONBOOT=yes
    3. Ensure the following files also contain the same content, if they exist on your system:

      • /etc/sysconfig/networking/devices/ifcfg-eth[x]
      • /etc/sysconfig/networking/profiles/default/ifcfg-eth[x]

        Note

        If you had used NetworkManager or any special settings with the VM, ensure that any additional unique information is removed from the ifcfg scripts.

  3. Remove registration details:

    • For VMs registered on the Red Hat Network (RHN):

      # rm /etc/sysconfig/rhn/systemid
    • For VMs registered with Red Hat Subscription Manager (RHSM):

      • If you do not plan to use the original VM:

        # subscription-manager unsubscribe --all
        # subscription-manager unregister
        # subscription-manager clean
      • If you plan to use the original VM:

        # subscription-manager clean
        Note

        The original RHSM profile remains in the Portal along with your ID code. Use the following command to reactivate your RHSM registration on the VM after it is cloned:

        # subscription-manager register --consumerid=71rd64fx-6216-4409-bf3a-e4b7c7bd8ac9
  4. Remove other unique details:

    1. Remove SSH public and private key pairs:

      # rm -rf /etc/ssh/ssh_host_example
    2. Remove the configuration of LVM devices:

      # rm /etc/lvm/devices/system.devices
    3. Remove any other application-specific identifiers or configurations that might cause conflicts if running on multiple machines.
  5. Remove the gnome-initial-setup-done file to configure the VM to run the configuration wizard on the next boot:

    # rm ~/.config/gnome-initial-setup-done
    Note

    The wizard that runs on the next boot depends on the configurations that have been removed from the VM. In addition, on the first boot of the clone, it is recommended that you change the hostname.

8.3. Cloning a virtual machine using the command-line interface

To quickly create a new virtual machine (VM) with a specific set of properties, for example for testing purposes, you can clone an existing VM. To do so using the CLI, follow the instructions below.

Prerequisites

  • The source VM is shut down.
  • Ensure that there is sufficient disk space to store the cloned disk images.
  • Optional: When creating multiple VM clones, remove unique data and settings from the source VM to ensure the cloned VMs work properly. For instructions, see Creating virtual machine templates.

Procedure

  1. Use the virt-clone utility with options that are appropriate for your environment and use case.

    Sample use cases

    • The following command clones a local VM named example-VM-1 and creates the example-VM-1-clone VM. It also creates and allocates the example-VM-1-clone.qcow2 disk image in the same location as the disk image of the original VM, and with the same data:

      # virt-clone --original example-VM-1 --auto-clone
      Allocating 'example-VM-1-clone.qcow2'                            | 50.0 GB  00:05:37
      
      Clone 'example-VM-1-clone' created successfully.
    • The following command clones a VM named example-VM-2, and creates a local VM named example-VM-3, which uses only two out of multiple disks of example-VM-2:

      # virt-clone --original example-VM-2 --name example-VM-3 --file /var/lib/libvirt/images/disk-1-example-VM-2.qcow2 --file /var/lib/libvirt/images/disk-2-example-VM-2.qcow2
      Allocating 'disk-1-example-VM-2-clone.qcow2'                                      | 78.0 GB  00:05:37
      Allocating 'disk-2-example-VM-2-clone.qcow2'                                      | 80.0 GB  00:05:37
      
      Clone 'example-VM-3' created successfully.
    • To clone your VM to a different host, migrate the VM without undefining it on the local host. For example, the following commands clone the previously created example-VM-3 VM to the 10.0.0.1 remote system, including its local disks. Note that using these commands also requires root privileges for 10.0.0.1:

      # virsh migrate --offline --persistent example-VM-3 qemu+ssh://root@10.0.0.1/system
      root@10.0.0.1's password:
      
      # scp /var/lib/libvirt/images/<disk-1-example-VM-2-clone>.qcow2 root@10.0.0.1:/var/lib/libvirt/images/
      
      # scp /var/lib/libvirt/images/<disk-2-example-VM-2-clone>.qcow2 root@10.0.0.1:/var/lib/libvirt/images/

Verification

To verify the VM has been successfully cloned and is working correctly:

  1. Confirm the clone has been added to the list of VMs on your host:

    # virsh list --all
    Id   Name                  State
    ---------------------------------------
    -    example-VM-1          shut off
    -    example-VM-1-clone    shut off
  2. Start the clone and observe if it boots up:

    # virsh start example-VM-1-clone
    Domain 'example-VM-1-clone' started

Additional resources

8.4. Cloning a virtual machine using the web console

To quickly create new virtual machines (VMs) with a specific set of properties, you can clone a VM that you had previously configured. The following instructions explain how to do so using the web console.

Note

Cloning a VM also clones the disks associated with that VM.

Prerequisites

Procedure

  1. In the Virtual Machines interface of the web console, click the Menu button of the VM that you want to clone.

    A drop down menu appears with controls for various VM operations.

  2. Click Clone.

    The Create a clone VM dialog appears.

    Create a clone VM dialog box with an option to enter a new name for the VM.
  3. Optional: Enter a new name for the VM clone.
  4. Click Clone.

    A new VM is created based on the source VM.

Verification

  • Confirm whether the cloned VM appears in the list of VMs available on your host.

Chapter 9. Migrating virtual machines

If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host.

9.1. How migrating virtual machines works

The essential part of virtual machine (VM) migration is copying the XML configuration of a VM to a different host machine. If the migrated VM is not shut down, the migration also transfers the state of the VM’s memory and any virtualized devices to a destination host machine. For the VM to remain functional on the destination host, the VM’s disk images must remain available to it.

By default, the migrated VM is transient on the destination host, and it also remains defined on the source host.

You can migrate a running VM using live or non-live migrations. To migrate a shut-off VM, you must use an offline migration. For details, see the following table.

Table 9.1. VM migration types

Migration type | Description | Use case | Storage requirements

Live migration

The VM continues to run on the source host machine while KVM is transferring the VM’s memory pages to the destination host. When the migration is nearly complete, KVM very briefly suspends the VM, and resumes it on the destination host.

Useful for VMs that require constant uptime. However, VMs that modify memory pages faster than KVM can transfer them, such as VMs under heavy I/O load, cannot be live-migrated, and non-live migration must be used instead.

The VM’s disk images must be located on a shared network, accessible both to the source host and the destination host.

Non-live migration

Suspends the VM, copies its configuration and its memory to the destination host, and resumes the VM.

Creates downtime for the VM, but is generally more reliable than live migration. Recommended for VMs under heavy memory load.

The VM’s disk images must be located on a shared network, accessible both to the source host and the destination host.

Offline migration

Moves the VM’s configuration to the destination host

Recommended for shut-off VMs and in situations when shutting down the VM does not disrupt your workloads.

The VM’s disk images do not have to be available on a shared network, and can be copied or moved manually to the destination host instead.

You can also combine live migration and non-live migration. This is recommended, for example, when live-migrating a VM that uses a large number of vCPUs or a large amount of memory, which prevents the migration from completing. In such a scenario, you can suspend the source VM. This prevents additional dirty memory pages from being generated, and thus makes it significantly more likely for the migration to complete. Based on guest workload and the number of static pages during migration, such a hybrid migration might cause significantly less downtime than a non-live migration.
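
For example, the following sketch shows one way to perform such a hybrid migration, using the example-VM-1 VM and example-destination host names that appear elsewhere in this chapter: start the live migration, and if it does not converge, suspend the VM from another terminal on the source host so that the remaining memory pages can be transferred.

# virsh migrate --live --persistent --verbose example-VM-1 qemu+ssh://example-destination/system
# virsh suspend example-VM-1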

9.2. Benefits of migrating virtual machines

Migrating virtual machines (VMs) can be useful for:

Load balancing
VMs can be moved to host machines with lower usage if their host becomes overloaded, or if another host is under-utilized.
Hardware independence
When you need to upgrade, add, or remove hardware devices on the host machine, you can safely relocate VMs to other hosts. This means that VMs do not experience any downtime for hardware improvements.
Energy saving
VMs can be redistributed to other hosts, and the unloaded host systems can thus be powered off to save energy and cut costs during low usage periods.
Geographic migration
VMs can be moved to another physical location for lower latency or when required for other reasons.

9.3. Limitations for migrating virtual machines

Before migrating virtual machines (VMs) in RHEL 8, ensure you are aware of the migration’s limitations.

  • Live storage migration cannot be performed on RHEL 8, but you can migrate storage while the VM is powered down. Note that live storage migration is available on Red Hat Virtualization.
  • Migrating VMs from or to a session connection of libvirt is unreliable and therefore not recommended.
  • VMs that use certain features and configurations will not work correctly if migrated, or the migration will fail. Such features include:

    • Device passthrough
    • SR-IOV device assignment
    • Mediated devices, such as vGPUs
  • A migration between hosts that use Non-Uniform Memory Access (NUMA) pinning works only if the hosts have similar topology. However, the performance on running workloads might be negatively affected by the migration.
  • The emulated CPUs, both on the source VM and the destination VM, must be identical, otherwise the migration might fail. Any differences between the VMs in the following CPU related areas can cause problems with the migration:

    • CPU model

    • Firmware settings
    • Microcode version
    • BIOS version
    • BIOS settings
    • QEMU version
    • Kernel version
  • Live migrating a VM that uses more than 1 TB of memory might in some cases be unreliable. For instructions on how to prevent or fix this problem, see Live migration of a VM takes a long time without completing.

9.4. Verifying host CPU compatibility for virtual machine migration

For migrated virtual machines (VMs) to work correctly on the destination host, the CPUs on the source and the destination hosts must be compatible. To ensure that this is the case, calculate a common CPU baseline before you begin the migration.

Note

The instructions in this section use an example migration scenario with the following host CPUs:

  • Source host: Intel Core i7-8650U
  • Destination host: Intel Xeon CPU E5-2620 v2

Prerequisites

  • Virtualization is installed and enabled on your system.
  • You have administrator access to the source host and the destination host for the migration.

Procedure

  1. On the source host, obtain its CPU features and paste them into a new XML file, such as domCaps-CPUs.xml.

    # virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" - > domCaps-CPUs.xml
  2. In the XML file, replace the <mode> </mode> tags with <cpu> </cpu>.
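
    For example, one possible way to do this, assuming GNU sed is available (verify the result before you continue):

    # sed -i -e 's|<mode [^>]*>|<cpu>|' -e 's|</mode>|</cpu>|' domCaps-CPUs.xml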
  3. Optional: Verify that the content of the domCaps-CPUs.xml file looks similar to the following:

    # cat domCaps-CPUs.xml
    
        <cpu>
              <model fallback="forbid">Skylake-Client-IBRS</model>
              <vendor>Intel</vendor>
              <feature policy="require" name="ss"/>
              <feature policy="require" name="vmx"/>
              <feature policy="require" name="pdcm"/>
              <feature policy="require" name="hypervisor"/>
              <feature policy="require" name="tsc_adjust"/>
              <feature policy="require" name="clflushopt"/>
              <feature policy="require" name="umip"/>
              <feature policy="require" name="md-clear"/>
              <feature policy="require" name="stibp"/>
              <feature policy="require" name="arch-capabilities"/>
              <feature policy="require" name="ssbd"/>
              <feature policy="require" name="xsaves"/>
              <feature policy="require" name="pdpe1gb"/>
              <feature policy="require" name="invtsc"/>
              <feature policy="require" name="ibpb"/>
              <feature policy="require" name="ibrs"/>
              <feature policy="require" name="amd-stibp"/>
              <feature policy="require" name="amd-ssbd"/>
              <feature policy="require" name="rsba"/>
              <feature policy="require" name="skip-l1dfl-vmentry"/>
              <feature policy="require" name="pschange-mc-no"/>
              <feature policy="disable" name="hle"/>
              <feature policy="disable" name="rtm"/>
        </cpu>
  4. On the destination host, use the following command to obtain its CPU features:

    # virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" -
    
        <mode name="host-model" supported="yes">
                <model fallback="forbid">IvyBridge-IBRS</model>
                <vendor>Intel</vendor>
                <feature policy="require" name="ss"/>
                <feature policy="require" name="vmx"/>
                <feature policy="require" name="pdcm"/>
                <feature policy="require" name="pcid"/>
                <feature policy="require" name="hypervisor"/>
                <feature policy="require" name="arat"/>
                <feature policy="require" name="tsc_adjust"/>
                <feature policy="require" name="umip"/>
                <feature policy="require" name="md-clear"/>
                <feature policy="require" name="stibp"/>
                <feature policy="require" name="arch-capabilities"/>
                <feature policy="require" name="ssbd"/>
                <feature policy="require" name="xsaveopt"/>
                <feature policy="require" name="pdpe1gb"/>
                <feature policy="require" name="invtsc"/>
                <feature policy="require" name="ibpb"/>
                <feature policy="require" name="amd-ssbd"/>
                <feature policy="require" name="skip-l1dfl-vmentry"/>
                <feature policy="require" name="pschange-mc-no"/>
        </mode>
  5. Add the obtained CPU features from the destination host to the domCaps-CPUs.xml file on the source host. Again, replace the <mode> </mode> tags with <cpu> </cpu> and save the file.
  6. Optional: Verify that the XML file now contains the CPU features from both hosts.

    # cat domCaps-CPUs.xml
    
        <cpu>
              <model fallback="forbid">Skylake-Client-IBRS</model>
              <vendor>Intel</vendor>
              <feature policy="require" name="ss"/>
              <feature policy="require" name="vmx"/>
              <feature policy="require" name="pdcm"/>
              <feature policy="require" name="hypervisor"/>
              <feature policy="require" name="tsc_adjust"/>
              <feature policy="require" name="clflushopt"/>
              <feature policy="require" name="umip"/>
              <feature policy="require" name="md-clear"/>
              <feature policy="require" name="stibp"/>
              <feature policy="require" name="arch-capabilities"/>
              <feature policy="require" name="ssbd"/>
              <feature policy="require" name="xsaves"/>
              <feature policy="require" name="pdpe1gb"/>
              <feature policy="require" name="invtsc"/>
              <feature policy="require" name="ibpb"/>
              <feature policy="require" name="ibrs"/>
              <feature policy="require" name="amd-stibp"/>
              <feature policy="require" name="amd-ssbd"/>
              <feature policy="require" name="rsba"/>
              <feature policy="require" name="skip-l1dfl-vmentry"/>
              <feature policy="require" name="pschange-mc-no"/>
              <feature policy="disable" name="hle"/>
              <feature policy="disable" name="rtm"/>
        </cpu>
        <cpu>
              <model fallback="forbid">IvyBridge-IBRS</model>
              <vendor>Intel</vendor>
              <feature policy="require" name="ss"/>
              <feature policy="require" name="vmx"/>
              <feature policy="require" name="pdcm"/>
              <feature policy="require" name="pcid"/>
              <feature policy="require" name="hypervisor"/>
              <feature policy="require" name="arat"/>
              <feature policy="require" name="tsc_adjust"/>
              <feature policy="require" name="umip"/>
              <feature policy="require" name="md-clear"/>
              <feature policy="require" name="stibp"/>
              <feature policy="require" name="arch-capabilities"/>
              <feature policy="require" name="ssbd"/>
              <feature policy="require" name="xsaveopt"/>
              <feature policy="require" name="pdpe1gb"/>
              <feature policy="require" name="invtsc"/>
              <feature policy="require" name="ibpb"/>
              <feature policy="require" name="amd-ssbd"/>
              <feature policy="require" name="skip-l1dfl-vmentry"/>
              <feature policy="require" name="pschange-mc-no"/>
        </cpu>
  7. Use the XML file to calculate the CPU feature baseline for the VM you intend to migrate.

    # virsh hypervisor-cpu-baseline domCaps-CPUs.xml
    
        <cpu mode='custom' match='exact'>
          <model fallback='forbid'>IvyBridge-IBRS</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='pcid'/>
          <feature policy='require' name='hypervisor'/>
          <feature policy='require' name='arat'/>
          <feature policy='require' name='tsc_adjust'/>
          <feature policy='require' name='umip'/>
          <feature policy='require' name='md-clear'/>
          <feature policy='require' name='stibp'/>
          <feature policy='require' name='arch-capabilities'/>
          <feature policy='require' name='ssbd'/>
          <feature policy='require' name='xsaveopt'/>
          <feature policy='require' name='pdpe1gb'/>
          <feature policy='require' name='invtsc'/>
          <feature policy='require' name='ibpb'/>
          <feature policy='require' name='amd-ssbd'/>
          <feature policy='require' name='skip-l1dfl-vmentry'/>
          <feature policy='require' name='pschange-mc-no'/>
        </cpu>
  8. Open the XML configuration of the VM you intend to migrate, and replace the contents of the <cpu> section with the settings obtained in the previous step.

    # virsh edit VM-name
  9. If the VM is running, restart it.

    # virsh reboot VM-name

9.5. Sharing virtual machine disk images with other hosts

To perform a live migration of a virtual machine (VM) between supported KVM hosts, shared VM storage is required. The following procedure provides instructions for sharing a locally stored VM image with the source host and the destination host using the NFS protocol.

Prerequisites

  • The VM intended for migration is shut down.
  • Optional: A host system that is neither the source nor the destination host is available for hosting the storage, and both the source and the destination host can reach it through the network. This is the optimal solution for shared storage and is recommended by Red Hat.
  • Make sure that NFS file locking is not used as it is not supported in KVM.
  • NFS is installed and enabled on the source and destination hosts. If it is not:

    • Install the NFS packages:

      # yum install nfs-utils
    • Make sure that the ports for NFS, such as 2049, are open in the firewall.

      # firewall-cmd --permanent --add-service=nfs
      # firewall-cmd --permanent --add-service=mountd
      # firewall-cmd --permanent --add-service=rpc-bind
      # firewall-cmd --permanent --add-port=2049/tcp
      # firewall-cmd --permanent --add-port=2049/udp
      # firewall-cmd --reload
    • Start the NFS service.

      # systemctl start nfs-server

Procedure

  1. Connect to the host that will provide shared storage. In this example, it is the example-shared-storage host:

    # ssh root@example-shared-storage
    root@example-shared-storage's password:
    Last login: Mon Sep 24 12:05:36 2019
    root~#
  2. On the shared-storage host, create a directory that will hold the disk image and will be shared with the migration hosts:

    # mkdir /var/lib/libvirt/shared-images
  3. Copy the disk image of the VM from the source host to the newly created directory. The following example copies the disk image example-disk-1 of the VM to the /var/lib/libvirt/shared-images/ directory of the example-shared-storage host:

    # scp /var/lib/libvirt/images/example-disk-1.qcow2 root@example-shared-storage:/var/lib/libvirt/shared-images/example-disk-1.qcow2
  4. On the host that you want to use for sharing the storage, add the sharing directory to the /etc/exports file. The following example shares the /var/lib/libvirt/shared-images directory with the example-source-machine and example-destination-machine hosts:

    /var/lib/libvirt/shared-images example-source-machine(rw,no_root_squash) example-destination-machine(rw,no_root_squash)
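
    If the NFS server on the sharing host is already running, you typically also need to re-export the updated configuration, for example:

    # exportfs -a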
  5. On both the source and destination host, mount the shared directory in the /var/lib/libvirt/images directory:

    # mount example-shared-storage:/var/lib/libvirt/shared-images /var/lib/libvirt/images

Verification

  • Start the VM on the source host and observe if it boots successfully.

Additional resources

9.6. Migrating a virtual machine using the command-line interface

If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host. The following procedure provides instructions and examples for various scenarios of such migrations.

Prerequisites

  • The source host and the destination host both use the KVM hypervisor.
  • The source host and the destination host are able to reach each other over the network. Use the ping utility to verify this.
  • Ensure the following ports are open on the destination host.

    • Port 22 is needed for connecting to the destination host by using SSH.
    • Port 16509 is needed for connecting to the destination host by using unencrypted TCP.
    • Port 16514 is needed for connecting to the destination host by using TLS.
    • Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
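
    For example, one way to open the libvirt and QEMU migration ports with firewalld on the destination host (a sketch; adapt it to your firewall setup, and note that the SSH port is usually already open):

      # firewall-cmd --permanent --add-port=16509/tcp
      # firewall-cmd --permanent --add-port=16514/tcp
      # firewall-cmd --permanent --add-port=49152-49215/tcp
      # firewall-cmd --reload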
  • For the migration to be supportable by Red Hat, the source host and destination host must be using specific operating systems and machine types. To ensure this is the case, see Supported hosts for virtual machine migration.
  • The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
  • The disk images of VMs that will be migrated are located on a separate networked location accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM.

    For instructions to set up such shared VM storage, see Sharing virtual machine disk images with other hosts.

  • When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.

    To obtain the dirty page rate of your VM before you start the live migration, do the following:

    • Monitor the rate of dirty page generation of the VM for a short period of time.

      # virsh domdirtyrate-calc example-VM 30
    • After the monitoring finishes, obtain its results:

      # virsh domstats example-VM --dirtyrate
      Domain: 'example-VM'
        dirtyrate.calc_status=2
        dirtyrate.calc_start_time=200942
        dirtyrate.calc_period=30
        dirtyrate.megabytes_per_second=2

      In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress if you do not pause the VM or lower its workload.

      To ensure that the live migration finishes successfully, Red Hat recommends that your network bandwidth is significantly greater than the VM’s dirty page generation rate.

  • When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not operate after migration.
  • Ensure that the libvirtd service is enabled and running.

    # systemctl enable --now libvirtd.service

Procedure

  1. Use the virsh migrate command with options appropriate for your migration requirements.

    1. The following command migrates the example-VM-1 VM from your local host to the system connection of the example-destination host using an SSH tunnel. The VM keeps running during the migration.

      # virsh migrate --persistent --live example-VM-1 qemu+ssh://example-destination/system
    2. The following commands enable you to make manual adjustments to the configuration of the example-VM-2 VM running on your local host, and then migrate the VM to the example-destination host. The migrated VM will automatically use the updated configuration.

      # virsh dumpxml --migratable example-VM-2 > example-VM-2.xml
      # vi example-VM-2.xml
      # virsh migrate --live --persistent --xml example-VM-2.xml example-VM-2 qemu+ssh://example-destination/system

      This procedure can be useful for example when the destination host needs to use a different path to access the shared VM storage or when configuring a feature specific to the destination host.

    3. The following command suspends the example-VM-3 VM from the example-source host, migrates it to the example-destination host, and instructs it to use the adjusted XML configuration, provided by the example-VM-3-alt.xml file. When the migration is completed, libvirt resumes the VM on the destination host.

      # virsh migrate example-VM-3 qemu+ssh://example-source/system qemu+ssh://example-destination/system --xml example-VM-3-alt.xml

      After the migration, the VM is in the shut off state on the source host, and the migrated copy is deleted after it is shut down.

    4. The following command deletes the shut-down example-VM-4 VM from the example-source host and moves its configuration to the example-destination host.

      # virsh migrate --offline --persistent --undefinesource example-VM-4 qemu+ssh://example-source/system qemu+ssh://example-destination/system

      Note that this type of migration does not require moving the VM’s disk image to shared storage. However, for the VM to be usable on the destination host, you also need to migrate the VM’s disk image. For example:

      # scp root@example-source:/var/lib/libvirt/images/example-VM-4.qcow2 root@example-destination:/var/lib/libvirt/images/example-VM-4.qcow2
  2. Wait for the migration to complete. The process may take some time depending on network bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh migrate, the CLI does not display any progress indicators except errors.

    When the migration is in progress, you can use the virsh domjobinfo utility to display the migration statistics.
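
    For example, to display the statistics of the example-VM-1 migration from another terminal on the source host:

    # virsh domjobinfo example-VM-1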

Verification

  • On the destination host, list the available VMs to verify if the VM has been migrated:

    # virsh list
    Id      Name             State
    ----------------------------------
    10    example-VM-1      running

    If the migration is still running, this command will list the VM state as paused.

Troubleshooting

  • In some cases, the target host will not be compatible with certain values of the migrated VM’s XML configuration, such as the network name or CPU type. As a result, the VM will fail to boot on the target host. To fix these problems, you can update the problematic values by using the virsh edit command. After updating the values, you must restart the VM for the changes to be applied.
  • If a live migration is taking a long time to complete, this may be because the VM is under heavy load and too many memory pages are changing for live migration to be possible. To fix this problem, change the migration to a non-live one by suspending the VM.

    # virsh suspend example-VM-1

Additional resources

  • The virsh migrate --help command
  • The virsh (1) man page

9.7. Live migrating a virtual machine using the web console

If you wish to migrate a virtual machine (VM) that is performing tasks which require it to be constantly running, you can migrate that VM to another KVM host without shutting it down. This is also known as live migration. The following instructions explain how to do so using the web console.

Warning

For tasks that modify memory pages faster than KVM can transfer them, such as heavy I/O load tasks, it is recommended that you do not live migrate the VM.

Prerequisites

  • The web console VM plug-in is installed on your system.
  • The source and destination hosts are running.
  • Ensure the following ports are open on the destination host.

    • Port 22 is needed for connecting to the destination host by using SSH.
    • Port 16509 is needed for connecting to the destination host by using unencrypted TCP.
    • Port 16514 is needed for connecting to the destination host by using TLS.
    • Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
  • The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
  • The VM’s disk images are located on a shared storage that is accessible to the source host as well as the destination host.
  • When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.

    To obtain the dirty page rate of your VM before you start the live migration, do the following in your command-line interface:

    1. Monitor the rate of dirty page generation of the VM for a short period of time.

      # virsh domdirtyrate-calc vm-name 30
    2. After the monitoring finishes, obtain its results:

      # virsh domstats vm-name --dirtyrate
      Domain: 'vm-name'
        dirtyrate.calc_status=2
        dirtyrate.calc_start_time=200942
        dirtyrate.calc_period=30
        dirtyrate.megabytes_per_second=2

      In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress if you do not pause the VM or lower its workload.

      To ensure that the live migration finishes successfully, Red Hat recommends that your network bandwidth is significantly greater than the VM’s dirty page generation rate.

Procedure

  1. In the Virtual Machines interface of the web console, click the Menu button of the VM that you want to migrate.

    A drop down menu appears with controls for various VM operations.

    The virtual machines main page displaying the available options when the VM is running.
  2. Click Migrate

    The Migrate VM to another host dialog appears.

    The Migrate VM to another host dialog box with fields to enter the URI of the destination host and set the migration duration.
  3. Enter the URI of the destination host.
  4. Configure the duration of the migration:

    • Permanent - Do not check the box if you wish to migrate the VM permanently. Permanent migration completely removes the VM configuration from the source host.
    • Temporary - Temporary migration migrates a copy of the VM to the destination host. This copy is deleted from the destination host when the VM is shut down. The original VM remains on the source host.
  5. Click Migrate

    Your VM is migrated to the destination host.

Verification

To verify whether the VM has been successfully migrated and is working correctly:

  • Confirm whether the VM appears in the list of VMs available on the destination host.
  • Start the migrated VM and observe if it boots up.

9.8. Troubleshooting virtual machine migrations

If you are facing one of the following problems when migrating virtual machines (VMs), see the provided instructions to fix or avoid the issue.

9.8.1. Live migration of a VM takes a long time without completing

Cause

In some cases, migrating a running VM might cause the VM to generate dirty memory pages faster than they can be migrated. When this occurs, the migration cannot complete successfully.

The following scenarios frequently cause this problem:

  • Live migrating a VM under a heavy load
  • Live migrating a VM that uses a large amount of memory, such as 1 TB or more

    Important

    Red Hat has successfully tested live migration of VMs with up to 6 TB of memory. However, for live migration scenarios that involve VMs with more than 1 TB of memory, customers should reach out to Red Hat technical support.

Diagnosis

If your VM live migration is taking longer than expected, use the virsh domjobinfo command to obtain the memory page data for the VM:

# virsh domjobinfo vm-name

Job type:         Unbounded
Operation:        Outgoing migration
Time elapsed:     168286974    ms
Data processed:   26.106 TiB
Data remaining:   34.383 MiB
Data total:       10.586 TiB
Memory processed: 26.106 TiB
Memory remaining: 34.383 MiB
Memory total:     10.586 TiB
Memory bandwidth: 29.056 MiB/s
Dirty rate: 17225 pages/s
Page size: 4096 bytes

In this output, the multiplication of Dirty rate and Page size is greater than Memory bandwidth. This means that the VM is generating dirty memory pages faster than the network can migrate them. As a consequence, the state of the VM on the destination host cannot converge with the state of the VM on the source host, which prevents the migration from completing.
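
In this example output, 17225 pages/s multiplied by the 4096-byte page size is approximately 67 MiB of newly dirtied memory per second, which exceeds the 29.056 MiB/s migration bandwidth.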

Fix

To improve the chances that a stalled live migration finishes successfully, you can do any of the following:

  • Reduce the workload of the VM, especially memory updates.

    • To do this, stop or cancel non-essential processes in the guest operating system of the source VM.
  • Increase the downtime allowed for the live migration:

    1. Display the current maximum downtime at the end of a live migration for the VM that is being migrated:

      # virsh migrate-getmaxdowntime vm-name
    2. Set a higher maximum downtime:

      # virsh migrate-setmaxdowntime vm-name downtime-in-milliseconds

      The higher you set the maximum downtime, the more likely it will be for the migration to complete.

  • Switch the live migration to post-copy mode.

    # virsh migrate-start-postcopy vm-name
    • This ensures that the memory pages of the VM can converge on the destination host, and that the migration can complete.

      However, when post-copy mode is active, the VM might slow down significantly, due to remote page requests from the destination host to the source host. In addition, if the network connection between the source host and the destination host stops working during post-copy migration, some of the VM processes may halt due to missing memory pages.

      Therefore, do not use post-copy migration if the VM availability is critical or if the migration network is unstable.

  • If your workload allows it, suspend the VM and let the migration finish as a non-live migration. This increases the downtime of the VM, but in most cases ensures that the migration completes successfully.

Prevention

The probability of successfully completing a live migration of a VM depends on the following:

  • The workload of the VM during the migration

    • Before starting the migration, stop or cancel non-essential processes in the guest operating system of the VM.
  • The network bandwidth that the host can use for migration

    • For optimal results of a live migration, the bandwidth of the network used for the migration must be significantly higher than the dirty page generation of the VM. For instructions on obtaining the VM dirty page generation rate, see the Prerequisites in Migrating a virtual machine using the command-line interface.
    • Both the source host and the destination host must have a dedicated network interface controller (NIC) for the migration. For live migrating a VM with more than 1 TB of memory, Red Hat recommends a NIC with the speed of 25 Gb/s or more.
    • You can also specify the network bandwidth assigned to the live migration by using the --bandwidth option when initiating the migration. For migrating very large VMs, assign as much bandwidth as is viable for your deployment; see the combined example after this list.
  • The mode of live migration

    • The default pre-copy migration mode copies memory pages repeatedly if they become dirty.
    • Post-copy migration copies memory pages only once.

      To enable your live migration to switch to post-copy mode if the migration stalls, use the --postcopy option with virsh migrate when starting the migration.

  • The downtime specified for the deployment

    • You can adjust this during the migration using virsh migrate-setmaxdowntime as described previously.
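
For example, the following sketch combines the bandwidth and post-copy options described above, assuming that a 1000 MiB/s cap is appropriate for your network and using the example-VM-1 VM and example-destination host names from earlier in this chapter:

# virsh migrate --live --persistent --verbose --postcopy --bandwidth 1000 example-VM-1 qemu+ssh://example-destination/system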

9.9. Supported hosts for virtual machine migration

For the virtual machine (VM) migration to work properly and be supported by Red Hat, the source and destination hosts must be specific RHEL versions and machine types. The following table shows supported VM migration paths.

Table 9.2. Live migration compatibility

Migration method | Release type | Example | Support status

Forward

Major release

7.6+ → 8.1

On supported RHEL 7 systems: machine types i440fx and q35

Backward

Major release

8.1 → 7.6+

On supported RHEL 8 systems: machine types i440fx and q35

Forward

Minor release

8.0.1+ → 8.1+

On supported RHEL 7 systems: machine types i440fx and q35 on RHEL 7.6.0 and later.

On supported RHEL 8 systems: machine type q35.

Backward

Minor release

8.1 → 8.0.1

On supported RHEL 7 systems. Fully supported for machine types i440fx and q35.

On supported RHEL 8 systems: machine type q35.

9.10. Additional resources

Chapter 10. Managing virtual devices

One of the most effective ways to manage the functionality, features, and performance of a virtual machine (VM) is to adjust its virtual devices.

The following sections provide a general overview of what virtual devices are, and instructions on how to manage them using the CLI or the web console.

10.1. How virtual devices work

Just like physical machines, virtual machines (VMs) require specialized devices to provide functions to the system, such as processing power, memory, storage, networking, or graphics. Physical systems usually use hardware devices for these purposes. However, because VMs are implemented in software, they need to use software abstractions of such devices instead, referred to as virtual devices.

The basics

Virtual devices attached to a VM can be configured when creating the VM, and can also be managed on an existing VM. Generally, virtual devices can be attached to or detached from a VM only when the VM is shut off, but some can be added or removed while the VM is running. This feature is referred to as device hot plug and hot unplug.

When creating a new VM, libvirt automatically creates and configures a default set of essential virtual devices, unless specified otherwise by the user. These are based on the host system architecture and machine type, and usually include:

  • the CPU
  • memory
  • a keyboard
  • a network interface controller (NIC)
  • various device controllers
  • a video card
  • a sound card

To manage virtual devices after the VM is created, use the command-line interface (CLI). However, to manage virtual storage devices and NICs, you can also use the RHEL 8 web console.

Performance or flexibility

For some types of devices, RHEL 8 supports multiple implementations, often with a trade-off between performance and flexibility.

For example, the physical storage used for virtual disks can be represented by files in various formats, such as qcow2 or raw, and presented to the VM using a variety of controllers:

  • an emulated controller
  • virtio-scsi
  • virtio-blk

An emulated controller is slower than a virtio controller, because virtio devices are designed specifically for virtualization purposes. On the other hand, emulated controllers make it possible to run operating systems that have no drivers for virtio devices. Similarly, virtio-scsi offers more complete support for SCSI commands, and makes it possible to attach a larger number of disks to the VM. Finally, virtio-blk provides better performance than both virtio-scsi and emulated controllers, but it supports a more limited range of use cases. For example, attaching a physical disk as a LUN device to a VM is not possible when using virtio-blk.

For more information about types of virtual devices, see Types of virtual devices.

10.2. Types of virtual devices

Virtualization in RHEL 8 can present several distinct types of virtual devices that you can attach to virtual machines (VMs):

Emulated devices

Emulated devices are software implementations of widely used physical devices. Drivers designed for physical devices are also compatible with emulated devices. Therefore, emulated devices can be used very flexibly.

However, since they need to faithfully emulate a particular type of hardware, emulated devices may suffer a significant performance loss compared with the corresponding physical devices or more optimized virtual devices.

The following types of emulated devices are supported:

  • Virtual CPUs (vCPUs), with a large choice of CPU models available. The performance impact of emulation depends significantly on the differences between the host CPU and the emulated vCPU.
  • Emulated system components, such as PCI bus controllers.
  • Emulated storage controllers, such as SATA, SCSI or even IDE.
  • Emulated sound devices, such as ICH9, ICH6 or AC97.
  • Emulated graphics cards, such as VGA or QXL cards.
  • Emulated network devices, such as rtl8139.
Paravirtualized devices

Paravirtualization provides a fast and efficient method for exposing virtual devices to VMs. Paravirtualized devices expose interfaces that are designed specifically for use in VMs, and thus significantly increase device performance. RHEL 8 provides paravirtualized devices to VMs using the virtio API as a layer between the hypervisor and the VM. The drawback of this approach is that it requires a specific device driver in the guest operating system.

It is recommended to use paravirtualized devices instead of emulated devices for VMs whenever possible, notably if they are running I/O intensive applications. Paravirtualized devices decrease I/O latency and increase I/O throughput, in some cases bringing them very close to bare-metal performance. Other paravirtualized devices also add functionality to VMs that is not otherwise available.

The following types of paravirtualized devices are supported:

  • The paravirtualized network device (virtio-net).
  • Paravirtualized storage controllers:

    • virtio-blk - provides block device emulation.
    • virtio-scsi - provides more complete SCSI emulation.
  • The paravirtualized clock.
  • The paravirtualized serial device (virtio-serial).
  • The balloon device (virtio-balloon), used to dynamically distribute memory between a VM and its host.
  • The paravirtualized random number generator (virtio-rng).
  • The paravirtualized graphics card (QXL).
Physically shared devices

Certain hardware platforms enable VMs to directly access various hardware devices and components. This process is known as device assignment or passthrough.

When attached in this way, some aspects of the physical device are directly available to the VM as they would be to a physical machine. This provides superior performance for the device when used in the VM. However, devices physically attached to a VM become unavailable to the host, and also cannot be migrated.

Nevertheless, some devices can be shared across multiple VMs. For example, a single physical device can in certain cases provide multiple mediated devices, which can then be assigned to distinct VMs.

The following types of passthrough devices are supported:

  • USB, PCI, and SCSI passthrough - expose common industry standard buses directly to VMs in order to make their specific features available to guest software.
  • Single-root I/O virtualization (SR-IOV) - a specification that enables hardware-enforced isolation of PCI Express resources. This makes it safe and efficient to partition a single physical PCI resource into virtual PCI functions. It is commonly used for network interface cards (NICs).
  • N_Port ID virtualization (NPIV) - a Fibre Channel technology to share a single physical host bus adapter (HBA) with multiple virtual ports.
  • GPUs and vGPUs - accelerators for specific kinds of graphic or compute workloads. Some GPUs can be attached directly to a VM, while certain types also offer the ability to create virtual GPUs (vGPUs) that share the underlying physical hardware.

10.3. Managing devices attached to virtual machines using the CLI

To modify the functionality of your virtual machine (VM), you can manage the devices attached to your VM using the command-line interface (CLI).

You can use the CLI to:

10.3.1. Attaching devices to virtual machines

You can add a specific functionality to your virtual machines (VMs) by attaching a new virtual device.

The following procedure creates and attaches virtual devices to your virtual machines (VMs) using the command-line interface (CLI). Some devices can also be attached to VMs using the RHEL web console.

For example, you can increase the storage capacity of a VM by attaching a new virtual disk device to it, or increase the memory of a running VM by attaching a virtual memory device, which is also referred to as memory hot plug.

Warning

Removing a memory device from a VM, also known as memory hot unplug, is not supported in RHEL 8, and Red Hat highly discourages its use.

Prerequisites

  • Obtain the required options for the device you intend to attach to a VM. To see the available options for a specific device, use the virt-xml --device=? command. For example:

    # virt-xml --network=?
    --network options:
    [...]
    address.unit
    boot_order
    clearxml
    driver_name
    [...]

Procedure

  1. To attach a device to a VM, use the virt-xml --add-device command, including the definition of the device and the required options:

    • For example, the following command creates a 20 GB newdisk.qcow2 disk image in the /var/lib/libvirt/images/ directory and attaches it as a virtual disk to the running testguest VM; the disk becomes available to the VM on its next start-up:

      # virt-xml testguest --add-device --disk /var/lib/libvirt/images/newdisk.qcow2,format=qcow2,size=20
      Domain 'testguest' defined successfully.
      Changes will take effect after the domain is fully powered off.
    • The following attaches a USB flash drive, attached as device 004 on bus 002 on the host, to the testguest2 VM while the VM is running:

      # virt-xml testguest2 --add-device --update --hostdev 002.004
      Device hotplug successful.
      Domain 'testguest2' defined successfully.

      The bus-device combination for defining the USB can be obtained using the lsusb command.
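
      For example, an lsusb entry that corresponds to the 002.004 device used above might look as follows. The output is illustrative; the vendor and product IDs match the verification step below, and the device name is a placeholder:

      # lsusb
      [...]
      Bus 002 Device 004: ID 4146:902e USB flash drive
      [...]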

Verification

To verify the device has been added, do any of the following:

  • Use the virsh dumpxml command and see if the device’s XML definition has been added to the <devices> section in the VM’s XML configuration.

    For example, the following output shows the configuration of the testguest VM and confirms that the 002.004 USB flash disk device has been added.

    # virsh dumpxml testguest
    [...]
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x4146'/>
        <product id='0x902e'/>
        <address bus='2' device='4'/>
      </source>
      <alias name='hostdev0'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    [...]
  • Run the VM and test if the device is present and works properly.

Additional resources

  • The man virt-xml command

10.3.2. Modifying devices attached to virtual machines

You can change the functionality of your virtual machines (VMs) by editing a configuration of the attached virtual devices. For example, if you want to optimize the performance of your VMs, you can change their virtual CPU models to better match the CPUs of the hosts.

The following procedure provides general instructions for modifying virtual devices using the command-line interface (CLI). Some devices attached to your VM, such as disks and NICs, can also be modified using the RHEL 8 web console.

Prerequisites

  • Obtain the required options for the device you intend to modify. To see the available options for a specific device, use the virt-xml --device=? command. For example:

    # virt-xml --network=?
    --network options:
    [...]
    address.unit
    boot_order
    clearxml
    driver_name
    [...]
  • Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and sending the output to a file. For example, the following backs up the configuration of your Motoko VM as the motoko.xml file:

    # virsh dumpxml Motoko > motoko.xml
    # cat motoko.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Motoko</name>
      <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid>
      [...]
    </domain>

Procedure

  1. Use the virt-xml --edit command, including the definition of the device and the required options:

    For example, the following clears the <cpu> configuration of the shut-off testguest VM and sets it to host-model:

    # virt-xml testguest --edit --cpu host-model,clearxml=yes
    Domain 'testguest' defined successfully.

Verification

To verify the device has been modified, do any of the following:

  • Run the VM and test if the device is present and reflects the modifications.
  • Use the virsh dumpxml command and see if the device’s XML definition has been modified in the VM’s XML configuration.

    For example, the following output shows the configuration of the testguest VM and confirms that the CPU mode has been configured as host-model.

    # virsh dumpxml testguest
    [...]
    <cpu mode='host-model' check='partial'>
      <model fallback='allow'/>
    </cpu>
    [...]

Troubleshooting

  • If modifying a device causes your VM to become unbootable, use the virsh define utility to restore the XML configuration by reloading the XML configuration file you backed up previously.

    # virsh define testguest.xml
Note

For small changes to the XML configuration of your VM, you can use the virsh edit command - for example virsh edit testguest. However, do not use this method for more extensive changes, as it is more likely to break the configuration in ways that could prevent the VM from booting.

Additional resources

  • The man virt-xml command

10.3.3. Removing devices from virtual machines

You can change the functionality of your virtual machines (VMs) by removing a virtual device. For example, you can remove a virtual disk device from one of your VMs if it is no longer needed.

The following procedure demonstrates how to remove virtual devices from your virtual machines (VMs) using the command-line interface (CLI). Some devices, such as disks or NICs, can also be removed from VMs using the RHEL 8 web console.

Prerequisites

  • Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and sending the output to a file. For example, the following backs up the configuration of your Motoko VM as the motoko.xml file:

    # virsh dumpxml Motoko > motoko.xml
    # cat motoko.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Motoko</name>
      <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid>
      [...]
    </domain>

Procedure

  1. Use the virt-xml --remove-device command, including a definition of the device. For example:

    • The following command removes the storage device marked as vdb from the running testguest VM; the change takes effect after the VM is shut down:

      # virt-xml testguest --remove-device --disk target=vdb
      Domain 'testguest' defined successfully.
      Changes will take effect after the domain is fully powered off.
    • The following immediately removes a USB flash drive device from the running testguest2 VM:

      # virt-xml testguest2 --remove-device --update --hostdev type=usb
      Device hotunplug successful.
      Domain 'testguest2' defined successfully.

Troubleshooting

  • If removing a device causes your VM to become unbootable, use the virsh define utility to restore the XML configuration by reloading the XML configuration file you backed up previously.

    # virsh define testguest.xml

Additional resources

  • The man virt-xml command

10.4. Managing host devices using the web console

To modify the functionality of your virtual machine (VM), you can manage the host devices attached to your VM using the RHEL 8 web console.

Host devices are physical devices that are attached to the host system. Based on your requirements, you can enable your VMs to directly access these hardware devices and components.

You can use the web console to:

10.4.1. Viewing devices attached to virtual machines using the web console

Before adding or modifying the devices attached to your virtual machine (VM), you may want to view the devices that are already attached to your VM. The following procedure provides instructions for viewing such devices using the web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with detailed information about the VM.

    Page displaying the virtual machine interface.
  2. Scroll to the Host devices section.

    Page displaying the Host devices section of the virtual machine.

Additional resources

10.4.2. Attaching devices to virtual machines using the web console

To add specific functionalities to your virtual machine (VM), you can use the web console to attach host devices to the VM.

Note

Attaching multiple host devices at the same time does not work. You can attach only one device at a time.

For more information, see RHEL 8 Known Issues.

Prerequisites

  • If you are attaching PCI devices, ensure that the status of the managed attribute of the hostdev element is set to yes.

    Note

    When attaching PCI devices to your VM, do not omit the managed attribute of the hostdev element, or set it to no. If you do so, PCI devices cannot automatically detach from the host when you pass them to the VM. They also cannot automatically reattach to the host when you turn off the VM.

    As a consequence, the host may become unresponsive or shut down unexpectedly.

    You can find the status of the managed attribute in your VM’s XML configuration. The following example opens the XML configuration of the example-VM-1 VM.

    # virsh edit example-VM-1
  • Back up important data from the VM.
  • Optional: Back up the XML configuration of your VM. For example, to back up the example-VM-1 VM:

    # virsh dumpxml example-VM-1 > example-VM-1.xml
  • The web console VM plug-in is installed on your system.

Procedure

  1. In the Virtual Machines interface, click the VM to which you want to attach a host device.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Host devices.

    The Host devices section displays information about the devices attached to the VM as well as options to Add or Remove devices.

    Image displaying the host devices section of the selected VM.
  3. Click Add host device.

    The Add host device dialog appears.

    Image displaying the Add host device dialog box.
  4. Select the device you wish to attach to the VM.
  5. Click Add

    The selected device is attached to the VM.

Verification

  • Run the VM and check if the device appears in the Host devices section.

10.4.3. Removing devices from virtual machines using the web console

To free up resources, modify the functionalities of your VM, or both, you can use the web console to modify the VM and remove host devices that are no longer required.

Warning

Removing attached USB host devices using the web console may fail because of incorrect correlation between the device and bus numbers of the USB device.

For more information, see RHEL 8 Known Issues.

As a workaround, remove the <hostdev> section of the USB device from the XML configuration of the VM by using the virsh utility. The following example opens the XML configuration of the example-VM-1 VM:

# virsh edit <example-VM-1>
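
Alternatively, if the VM has only one USB host device attached, you can remove the <hostdev> entry without editing the XML manually by using the virt-xml approach described in Removing devices from virtual machines, for example:

# virt-xml example-VM-1 --remove-device --hostdev type=usb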

Prerequisites

  • The web console VM plug-in is installed on your system.
  • Optional: Back up the XML configuration of your VM by using virsh dumpxml example-VM-1 and sending the output to a file. For example, the following backs up the configuration of your Motoko VM as the motoko.xml file:

    # virsh dumpxml Motoko > motoko.xml
    # cat motoko.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Motoko</name>
      <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid>
      [...]
    </domain>

Procedure

  1. In the Virtual Machines interface, click the VM from which you want to remove a host device.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Host devices.

    The Host devices section displays information about the devices attached to the VM as well as options to Add or Remove devices.

    Image displaying the host devices section of the selected VM.
  3. Click the Remove button next to the device you want to remove from the VM.

    A remove device confirmation dialog appears.

    Image displaying the option to remove an attached virtual device.
  4. Click Remove.

    The device is removed from the VM.

Troubleshooting

  • If removing a host device causes your VM to become unbootable, use the virsh define utility to restore the XML configuration by reloading the XML configuration file you backed up previously.

    # virsh define motoko.xml

10.5. Managing virtual USB devices

When using a virtual machine (VM), you can access and control a USB device, such as a flash drive or a web camera, that is attached to the host system. In this scenario, the host system passes control of the device to the VM. This is also known as USB passthrough.

The following sections provide information about using the command line to:

10.5.1. Attaching USB devices to virtual machines

To attach a USB device to a virtual machine (VM), you can include the USB device information in the XML configuration file of the VM.

Prerequisites

  • Ensure the device you want to pass through to the VM is attached to the host.

Procedure

  1. Locate the bus and device values of the USB that you want to attach to the VM.

    For example, the following command displays a list of USB devices attached to the host. The device we will use in this example is attached on bus 001 as device 005.

    # lsusb
    [...]
    Bus 001 Device 003: ID 2567:0a2b Intel Corp.
    Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
    [...]
  2. Use the virt-xml utility along with the --add-device argument.

    For example, the following command attaches a USB flash drive to the example-VM-1 VM.

    # virt-xml example-VM-1 --add-device --hostdev 001.005
    Domain 'example-VM-1' defined successfully.
Note

To attach a USB device to a running VM, add the --update argument to the previous command.
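
For example, the following command, which assumes the same bus and device values as above, attaches the USB device to the example-VM-1 VM while it is running:

# virt-xml example-VM-1 --add-device --hostdev 001.005 --update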

Verification

  • Run the VM and test if the device is present and works as expected.
  • Use the virsh dumpxml command to see if the device’s XML definition has been added to the <devices> section in the VM’s XML configuration file.

    # virsh dumpxml example-VM-1
    [...]
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0407'/>
        <product id='0x6252'/>
        <address bus='1' device='5'/>
      </source>
      <alias name='hostdev0'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    [...]

Additional resources

10.5.2. Removing USB devices from virtual machines

To remove a USB device from a virtual machine (VM), you can remove the USB device information from the XML configuration of the VM.

Procedure

  1. Locate the bus and device values of the USB that you want to remove from the VM.

    For example, the following command displays a list of USB devices attached to the host. The device we will use in this example is attached on bus 001 as device 005.

    # lsusb
    [...]
    Bus 001 Device 003: ID 2567:0a2b Intel Corp.
    Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
    [...]
  2. Use the virt-xml utility along with the --remove-device argument.

    For example, the following command removes a USB flash drive, attached to the host as device 005 on bus 001, from the example-VM-1 VM.

    # virt-xml example-VM-1 --remove-device --hostdev 001.005
    Domain 'example-VM-1' defined successfully.
Note

To remove a USB device from a running VM, add the --update argument to the previous command.

Verification

  • Run the VM and check if the device has been removed from the list of devices.

Additional resources

10.5.3. Attaching smart card readers to virtual machines

If you have a smart card reader attached to a host, you can also make it available to virtual machines (VMs) on that host. Libvirt provides a specialized virtual device that presents a smart card interface to the guest VM. It is recommended you only use the spicevmc device type, which utilizes the SPICE remote display protocol to tunnel authentication requests to the host.

Although it is possible to use standard device passthrough with smart card readers, this method does not make the device available on both the host and guest system. As a consequence, you could lock the host system when you attach the smart card reader to the VM.

Important

The SPICE remote display protocol is deprecated in RHEL 8. Because the only recommended way to attach smart card readers to VMs depends on the SPICE protocol, the usage of smart cards in guest VMs is also deprecated in RHEL 8.

In a future major version of RHEL, the functionality of attaching smart card readers to VMs will only be supported by third party remote visualization solutions.

Prerequisites

  • Ensure the smart card reader you want to pass through to the VM is attached to the host.
  • Ensure the smart card reader type is supported in RHEL 8.

Procedure

  • Create and attach a virtual smart card reader device to a VM. For example, to attach a smart card reader to the testguest VM:

    # virt-xml testguest --add-device --smartcard mode=passthrough,type=spicevmc
    Domain 'testguest' defined successfully.
    Changes will take effect after the domain is fully powered off.
    Note

    To attach a virtual smart card reader device to a running VM, add the --update argument to the previous command.

Verification

  1. View the XML configuration of the VM.

    # virsh dumpxml testguest
  2. Ensure the XML configuration contains the following smart card device definition.

    <smartcard mode='passthrough' type='spicevmc'/>

10.6. Managing virtual optical drives

When using a virtual machine (VM), you can access information stored in an ISO image on the host. To do so, attach the ISO image to the VM as a virtual optical drive, such as a CD drive or a DVD drive.

The following sections provide information about using the command line and the web console to manage virtual optical drives.

10.6.1. Attaching optical drives to virtual machines

To attach an ISO image as a virtual optical drive, edit the XML configuration file of the virtual machine (VM) and add the new drive.

Prerequisites

  • The ISO image must be stored on the host machine, and you must know its path.

Procedure

  • Use the virt-xml utility with the --add-device argument:

    For example, the following command attaches the example-ISO-name ISO image, stored in the /home/username/Downloads directory, to the example-VM-name VM.

    # virt-xml example-VM-name --add-device --disk /home/username/Downloads/example-ISO-name.iso,device=cdrom
    Domain 'example-VM-name' defined successfully.

Verification

  • Run the VM and test if the device is present and works as expected.
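  • Optionally, use the virsh dumpxml command to check whether a cdrom disk entry has been added to the VM’s XML configuration. The following is an abbreviated sketch of the expected output; the exact target and bus values depend on your configuration:

    # virsh dumpxml example-VM-name
    [...]
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/username/Downloads/example-ISO-name.iso'/>
      <target dev='sda' bus='sata'/>
    </disk>
    [...]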

Additional resources

10.6.2. Adding a CD-ROM to a running virtual machine using the web console

You can use the web console to insert a CD-ROM into a running virtual machine (VM) without specifying the media in advance.

Procedure

  1. Shut down the VM.
  2. Attach a virtual CD-ROM device without specifying a source image.

    # virt-xml vmname --add-device --disk target.dev=sda,device=cdrom
  3. Run the VM.
  4. Open the web console and in the Virtual Machines interface, click the VM to which you want to attach a CD-ROM.
  5. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM as well as options to Add, Remove, or Edit disks.

  6. Click the Insert option for the cdrom device.

    Image displaying the disk row of the cdrom device.
  7. Choose a Source for the file you want to attach:

    • Custom Path: The file is located in a custom directory on the host machine.
    • Use existing: The file is located in the storage pools that you have created.
  8. Click Insert.

Verification

  • In the Virtual Machines interface, the file will appear under the Disks section.

10.6.3. Replacing ISO images in virtual optical drives

To replace an ISO image attached as a virtual optical drive to a virtual machine (VM), edit the XML configuration file of the VM and specify the replacement.

Prerequisites

  • You must store the ISO image on the host machine.
  • You must know the path to the ISO image.

Procedure

  1. Locate the target device where the CD-ROM is attached to the VM. You can find this information in the VM’s XML configuration file.

    For example, the following command displays the example-VM-name VM’s XML configuration file, where the target device for CD-ROM is sda.

    # virsh dumpxml example-VM-name
    ...
    <disk>
      ...
      <source file='/home/username/Downloads/example-ISO-name.iso'/>
      <target dev='sda' bus='sata'/>
      ...
    </disk>
    ...
  2. Use the virt-xml utility with the --edit argument.

    For example, the following command replaces the example-ISO-name ISO image, attached to the example-VM-name VM at target sda, with the example-ISO-name-2 ISO image stored in the /dev/cdrom directory.

    # virt-xml example-VM-name --edit target=sda --disk /dev/cdrom/example-ISO-name-2.iso
    Domain 'example-VM-name' defined successfully.

Verification

  • Run the VM and test if the device is replaced and works as expected.

Additional resources

  • The man virt-xml command

10.6.4. Removing ISO images from virtual optical drives

To remove an ISO image from a virtual optical drive attached to a virtual machine (VM), edit the XML configuration file of the VM.

Procedure

  1. Locate the target device where the CD-ROM is attached to the VM. You can find this information in the VM’s XML configuration file.

    For example, the following command displays the example-VM-name VM’s XML configuration file, where the target device for CD-ROM is sda.

    # virsh dumpxml example-VM-name
    ...
    <disk>
      ...
      <source file='/home/username/Downloads/example-ISO-name.iso'/>
      <target dev='sda' bus='sata'/>
      ...
    </disk>
    ...
  2. Use the virt-xml utility with the --edit argument.

    For example, the following command removes the example-ISO-name ISO image from the CD drive attached to the example-VM-name VM.

    # virt-xml example-VM-name --edit target=sda --disk path=
    Domain 'example-VM-name' defined successfully.

Verification

  • Run the VM and check that the image is no longer available.

Additional resources

  • The man virt-xml command

10.6.5. Removing optical drives from virtual machines

To remove an optical drive attached to a virtual machine (VM), edit the XML configuration file of the VM.

Procedure

  1. Locate the target device where the CD-ROM is attached to the VM. You can find this information in the VM’s XML configuration file.

    For example, the following command displays the example-VM-name VM’s XML configuration file, where the target device for CD-ROM is sda.

    # virsh dumpxml example-VM-name
    ...
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sda' bus='sata'/>
      ...
    </disk>
    ...
  2. Use the virt-xml utility with the --remove-device argument.

    For example, the following command removes the optical drive attached as target sda from the example-VM-name VM.

    # virt-xml example-VM-name --remove-device --disk target=sda
    Domain 'example-VM-name' defined successfully.

Verification

  • Confirm that the device is no longer listed in the XML configuration file of the VM.

Additional resources

  • The man virt-xml command

10.6.6. Removing a CD-ROM from a running virtual machine using the web console

You can use the web console to eject a CD-ROM device from a running virtual machine (VM).

Procedure

  1. In the Virtual Machines interface, click the VM from which you want to remove the CD-ROM.
  2. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM as well as options to Add, Remove, or Edit disks.

    Image displaying the disks section of the VM.
  3. Click the Eject option for the cdrom device.

    The Eject media from VM? dialog box opens.

  4. Click Eject.

Verification

  • In the Virtual Machines interface, the attached file is no longer displayed under the Disks section.

10.7. Managing SR-IOV devices

An emulated virtual device often uses more CPU and memory than a hardware network device. This can limit the performance of a virtual machine (VM). However, if any devices on your virtualization host support Single Root I/O Virtualization (SR-IOV), you can use this feature to improve the device performance, and possibly also the overall performance of your VMs.

10.7.1. What is SR-IOV?

Single-root I/O virtualization (SR-IOV) is a specification that enables a single PCI Express (PCIe) device to present multiple separate PCI devices, called virtual functions (VFs), to the host system. Each of these devices:

  • Is able to provide the same or similar service as the original PCIe device.
  • Appears at a different address on the host PCI bus.
  • Can be assigned to a different VM using VFIO assignment.

For example, a single SR-IOV capable network device can present VFs to multiple VMs. While all of the VFs use the same physical card, the same network connection, and the same network cable, each of the VMs directly controls its own hardware network device, and uses no extra resources from the host.

How SR-IOV works

The SR-IOV functionality is possible thanks to the introduction of the following PCIe functions:

  • Physical functions (PFs) - A PCIe function that provides the functionality of its device (for example networking) to the host, but can also create and manage a set of VFs. Each SR-IOV capable device has one or more PFs.
  • Virtual functions (VFs) - Lightweight PCIe functions that behave as independent devices. Each VF is derived from a PF. The maximum number of VFs a device can have depends on the device hardware. Each VF can be assigned only to a single VM at a time, but a VM can have multiple VFs assigned to it.

VMs recognize VFs as virtual devices. For example, a VF created by an SR-IOV network device appears as a network card to a VM to which it is assigned, in the same way as a physical network card appears to the host system.

Figure 10.1. SR-IOV architecture


Benefits

The primary advantages of using SR-IOV VFs rather than emulated devices are:

  • Improved performance
  • Reduced use of host CPU and memory resources

For example, a VF attached to a VM as a vNIC performs at almost the same level as a physical NIC, and much better than paravirtualized or emulated NICs. In particular, when multiple VFs are used simultaneously on a single host, the performance benefits can be significant.

Disadvantages

  • To modify the configuration of a PF, you must first change the number of VFs exposed by the PF to zero. Therefore, you also need to remove the devices provided by these VFs from the VM to which they are assigned.
  • A VM with a VFIO-assigned device attached, including SR-IOV VFs, cannot be migrated to another host. In some cases, you can work around this limitation by pairing the assigned device with an emulated device. For example, you can bond an assigned networking VF to an emulated vNIC, and remove the VF before the migration, as illustrated in the sketch after this list.
  • In addition, VFIO-assigned devices require pinning of VM memory, which increases the memory consumption of the VM and prevents the use of memory ballooning on the VM.
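
For illustration only, the following is a minimal sketch of the host-side part of that workaround: detaching the VF and then migrating the VM. It assumes the VF was attached to the example-VM-1 VM by using a separate XML file named vf-interface.xml (a hypothetical file name), and that destination-host.example.com is the migration target:

# virsh detach-device example-VM-1 vf-interface.xml --live
# virsh migrate --live example-VM-1 qemu+ssh://destination-host.example.com/system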

10.7.2. Attaching SR-IOV networking devices to virtual machines

To attach an SR-IOV networking device to a virtual machine (VM) on an Intel or AMD host, you must create a virtual function (VF) from an SR-IOV capable network interface on the host and assign the VF as a device to a specified VM. For details, see the following instructions.

Prerequisites

  • The CPU and the firmware of your host support the I/O Memory Management Unit (IOMMU).

    • If using an Intel CPU, it must support the Intel Virtualization Technology for Directed I/O (VT-d).
    • If using an AMD CPU, it must support the AMD-Vi feature.
  • The host system uses Access Control Service (ACS) to provide direct memory access (DMA) isolation for PCIe topology. Verify this with the system vendor.

    For additional information, see Hardware Considerations for Implementing SR-IOV.

  • The physical network device supports SR-IOV. To verify if any network devices on your system support SR-IOV, use the lspci -v command and look for Single Root I/O Virtualization (SR-IOV) in the output.

    # lspci -v
    [...]
    02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    	Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
    	Flags: bus master, fast devsel, latency 0, IRQ 16, NUMA node 0
    	Memory at fcba0000 (32-bit, non-prefetchable) [size=128K]
    [...]
    	Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
    	Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
    	Kernel driver in use: igb
    	Kernel modules: igb
    [...]
  • The host network interface you want to use for creating VFs is running. For example, to activate the eth1 interface and verify it is running:

    # ip link set eth1 up
    # ip link show eth1
    8: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
       link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff
       vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  • For SR-IOV device assignment to work, the IOMMU feature must be enabled in the host BIOS and kernel. To do so:

    • On an Intel host, enable VT-d:

      1. Regenerate the GRUB configuration with the intel_iommu=on and iommu=pt parameters:

        # grubby --args="intel_iommu=on iommu=pt" --update-kernel=ALL
      2. Reboot the host.
    • On an AMD host, enable AMD-Vi:

      1. Regenerate the GRUB configuration with the iommu=pt parameter:

        # grubby --args="iommu=pt" --update-kernel=ALL
      2. Reboot the host.

Procedure

  1. Optional: Confirm the maximum number of VFs your network device can use. To do so, use the following command and replace eth1 with your SR-IOV compatible network device.

    # cat /sys/class/net/eth1/device/sriov_totalvfs
    7
  2. Use the following command to create a virtual function (VF):

    # echo VF-number > /sys/class/net/network-interface/device/sriov_numvfs

    In the command, replace:

    • VF-number with the number of VFs you want to create on the PF.
    • network-interface with the name of the network interface for which the VFs will be created.

    The following example creates 2 VFs from the eth1 network interface:

    # echo 2 > /sys/class/net/eth1/device/sriov_numvfs
  3. Verify the VFs have been added:

    # lspci | grep Ethernet
    82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
    82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
    82:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
    82:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  4. Make the created VFs persistent by creating a udev rule for the network interface you used to create the VFs. For example, for the eth1 interface, create the /etc/udev/rules.d/eth1.rules file, and add the following line:

    ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="ixgbe", ATTR{device/sriov_numvfs}="2"

    This ensures that the two VFs, which use the ixgbe driver, are automatically made available for the eth1 interface when the host starts. If you do not require persistent SR-IOV devices, skip this step.

    Warning

    Currently, the setting described above does not work correctly when attempting to make VFs persistent on Broadcom NetXtreme II BCM57810 adapters. In addition, attaching VFs based on these adapters to Windows VMs is currently not reliable.

  5. Hot-plug one of the newly added VF interface devices to a running VM.

    # virsh attach-interface testguest1 hostdev 0000:82:10.0 --managed --live --config

Verification

  • If the procedure is successful, the guest operating system detects a new network interface card.
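  • Optionally, on a Linux guest, list the PCI devices in the guest operating system to confirm that the VF is visible. For example:

    # lspci | grep Ethernet

    The output should include an entry similar to the Virtual Function entries shown in the earlier lspci example on the host; the PCI address displayed in the guest depends on your configuration.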

10.7.3. Supported devices for SR-IOV assignment

Not all devices can be used for SR-IOV. The following devices have been tested and verified as compatible with SR-IOV in RHEL 8.

Networking devices

  • Intel 82599ES 10 Gigabit Ethernet Controller - uses the ixgbe driver
  • Intel Ethernet Controller XL710 Series - uses the i40e driver
  • Mellanox ConnectX-5 Ethernet Adapter Cards - use the mlx5_core driver
  • Intel Ethernet Network Adapter XXV710 - uses the i40e driver
  • Intel 82576 Gigabit Ethernet Controller - uses the igb driver
  • Broadcom NetXtreme II BCM57810 - uses the bnx2x driver

10.8. Attaching DASD devices to virtual machines on IBM Z

Using the vfio-ccw feature, you can assign direct-access storage devices (DASDs) as mediated devices to your virtual machines (VMs) on IBM Z hosts. For example, this makes it possible for the VM to access a z/OS dataset, or to provide the assigned DASDs to a z/OS machine.

Prerequisites

  • Your host system is using the IBM Z hardware architecture and supports the FICON protocol.
  • The target VM is using a Linux guest operating system.
  • The driverctl package is installed.

    # yum install driverctl
  • The necessary vfio kernel modules have been loaded on the host.

    # lsmod | grep vfio

    The output of this command must contain the following modules:

    • vfio_ccw
    • vfio_mdev
    • vfio_iommu_type1
  • You have a spare DASD device for exclusive use by the VM, and you know the identifier of the device.

    This procedure uses 0.0.002c as an example. When performing the commands, replace 0.0.002c with the identifier of your DASD device.

Procedure

  1. Obtain the subchannel identifier of the DASD device.

    # lscss -d 0.0.002c
    Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
    ----------------------------------------------------------------------
    0.0.002c 0.0.29a8  3390/0c 3990/e9 yes  f0  f0  ff   02111221 00000000

    In this example, the subchannel identifier is detected as 0.0.29a8. In the following commands of this procedure, replace 0.0.29a8 with the detected subchannel identifier of your device.

  2. If the lscss command in the previous step only displayed the header output and no device information, perform the following steps:

    1. Remove the device from the cio_ignore list.

      # cio_ignore -r 0.0.002c
    2. In the guest OS, edit the kernel command line of the VM and add the device identifier with a ! mark to the line that starts with cio_ignore=, if it is not present already.

      cio_ignore=all,!condev,!0.0.002c
    3. Repeat step 1 on the host to obtain the subchannel identifier.
  3. Bind the subchannel to the vfio_ccw passthrough driver.

    # driverctl -b css set-override 0.0.29a8 vfio_ccw
    Note

    This binds the 0.0.29a8 subchannel to vfio_ccw persistently, which means the DASD will not be usable on the host. If you need to use the device on the host, you must first remove the automatic binding to 'vfio_ccw' and rebind the subchannel to the default driver:

    # driverctl -b css unset-override 0.0.29a8

  4. Define and start the DASD mediated device.

    # cat nodedev.xml
    <device>
        <parent>css_0_0_29a8</parent>
        <capability type="mdev">
            <type id="vfio_ccw-io"/>
        </capability>
    </device>
    
    # virsh nodedev-define nodedev.xml
    Node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8' defined from 'nodedev.xml'
    
    # virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    Device mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 started
  5. Shut down the VM, if it is running.
  6. Display the UUID of the previously defined device and save it for the next step.

    # virsh nodedev-dumpxml mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    
    <device>
      <name>mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8</name>
      <parent>css_0_0_29a8</parent>
      <capability type='mdev'>
        <type id='vfio_ccw-io'/>
        <uuid>30820a6f-b1a5-4503-91ca-0c10ba12345a</uuid>
        <iommuGroup number='0'/>
        <attr name='assign_adapter' value='0x02'/>
        <attr name='assign_domain' value='0x002b'/>
      </capability>
    </device>
  7. Attach the mediated device to the VM. To do so, use the virsh edit utility to edit the XML configuration of the VM, add the following section to the XML, and replace the uuid value with the UUID you obtained in the previous step.

    <hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
      <source>
        <address uuid="30820a6f-b1a5-4503-91ca-0c10ba12345a"/>
      </source>
    </hostdev>
  8. Optional: Configure the mediated device to start automatically on host boot.

    # virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8

Verification

  1. Ensure that the mediated device is configured correctly.

    # virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    Name:           mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    Parent:         css_0_0_29a8
    Active:         yes
    Persistent:     yes
    Autostart:      yes
  2. Obtain the identifier that libvirt assigned to the mediated DASD device. To do so, display the XML configuration of the VM and look for a vfio-ccw device.

    # virsh dumpxml vm-name
    
    <domain>
    [...]
        <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ccw'>
          <source>
            <address uuid='10620d2f-ed4d-437b-8aff-beda461541f9'/>
          </source>
          <alias name='hostdev0'/>
          <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0009'/>
        </hostdev>
    [...]
    </domain>

    In this example, the assigned identifier of the device is 0.0.0009.

  3. Start the VM and log in to its guest OS.
  4. In the guest OS, confirm that the DASD device is listed. For example:

    # lscss | grep 0.0.0009
    0.0.0009 0.0.0007  3390/0c 3990/e9      f0  f0  ff   12212231 00000000
  5. In the guest OS, set the device online. For example:

    # chccwdev -e 0.0.0009
    Setting device 0.0.0009 online
    Done
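  6. Optional: In the guest OS, confirm that the device is now online, for example by using the lsdasd utility, if the s390utils package is installed in the guest:

    # lsdasd | grep 0.0.0009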

10.9. Attaching a watchdog device to a virtual machine using the web console

To force the virtual machine (VM) to perform a specified action when it stops responding, you can attach virtual watchdog devices to a VM.

Prerequisites

Procedure

  1. In the command line interface, install the watchdog service.

    # yum install watchdog

  2. Shut down the VM.
  3. Add the watchdog service to the VM.

    # virt-xml vmname  --add-device --watchdog action=reset --update

  4. Run the VM.
  5. Open the web console and in the Virtual Machines interface of the web console, click on the VM to which you want to add the watchdog device.
  6. Click add next to the Watchdog field in the Overview pane.

    The Add watchdog device type dialog appears.

  7. Select the action that you want the watchdog device to perform if the VM stops responding.

    Image displaying the add watchdog device type dialog box.
  8. Click Add.

Verification

  • The action you selected is visible next to the Watchdog field in the Overview pane.

10.10. Attaching PCI devices to virtual machines on IBM Z

Using the vfio-pci device driver, you can assign PCI devices in pass-through mode to your virtual machines (VMs) on IBM Z hosts. For example, this makes it possible for the VM to use NVMe flash disks for handling databases.

Prerequisites

  • Your host system is using the IBM Z hardware architecture.
  • The target VM is using a Linux guest operating system.
  • The necessary vfio kernel modules have been loaded on the host.

    # lsmod | grep vfio

    The output of this command must contain the following modules:

    • vfio_pci
    • vfio_pci_core
    • vfio_iommu_type1

Procedure

  1. Obtain the PCI address identifier of the device that you want to use.

    # lspci -nkD
    
    0000:00:00.0 0000: 1014:04ed
    	Kernel driver in use: ism
    	Kernel modules: ism
    0001:00:00.0 0000: 1014:04ed
    	Kernel driver in use: ism
    	Kernel modules: ism
    0002:00:00.0 0200: 15b3:1016
    	Subsystem: 15b3:0062
    	Kernel driver in use: mlx5_core
    	Kernel modules: mlx5_core
    0003:00:00.0 0200: 15b3:1016
    	Subsystem: 15b3:0062
    	Kernel driver in use: mlx5_core
    	Kernel modules: mlx5_core
  2. Open the XML configuration of the VM to which you want to attach the PCI device.

    # virsh edit vm-name
  3. Add the following <hostdev> configuration to the <devices> section of the XML file.

    Replace the values on the address line with the PCI address of your device. For example, if the device address is 0003:00:00.0, use the following configuration:

    <hostdev mode="subsystem" type="pci" managed="yes">
      <driver name="vfio"/>
       <source>
        <address domain="0x0003" bus="0x00" slot="0x00" function="0x0"/>
       </source>
       <address type="pci"/>
    </hostdev>
  4. Optional: To modify how the guest operating system will detect the PCI device, you can also add a <zpci> sub-element to the <address> element. In the <zpci> line, you can adjust the uid and fid values, which modifies the PCI address and function ID of the device in the guest operating system.

    <hostdev mode="subsystem" type="pci" managed="yes">
      <driver name="vfio"/>
       <source>
        <address domain="0x0003" bus="0x00" slot="0x00" function="0x0"/>
       </source>
       <address type="pci">
         <zpci uid="0x0008" fid="0x001807"/>
       </address>
    </hostdev>

    In this example:

    • uid="0x0008" sets the domain PCI address of the device in the VM to 0008:00:00.0.
    • fid="0x001807" sets the slot value of the device to 0x001807. As a result, the device configuration in the file system of the VM is saved to /sys/bus/pci/slots/00001807/address.

      If these values are not specified, libvirt configures them automatically.

  5. Save the XML configuration.
  6. If the VM is running, shut it down.

    # virsh shutdown vm-name

Verification

  1. Start the VM and log in to its guest operating system.
  2. In the guest operating system, confirm that the PCI device is listed.

    For example, if the device address is 0003:00:00.0, use the following command:

    # lspci -nkD | grep 0003:00:00.0
    
    0003:00:00.0 8086:9a09 (rev 01)

Chapter 11. Managing storage for virtual machines

A virtual machine (VM), just like a physical machine, requires storage for data, program, and system files. As a VM administrator, you can assign physical or network-based storage to your VMs as virtual storage. You can also modify how the storage is presented to a VM regardless of the underlying hardware.

The following sections provide information about the different types of VM storage, how they work, and how you can manage them using the CLI or the web console.

11.1. Understanding virtual machine storage

If you are new to virtual machine (VM) storage, or are unsure about how it works, the following sections provide a general overview about the various components of VM storage, how it works, management basics, and the supported solutions provided by Red Hat.

You can find this information in the following sections.

11.1.1. Introduction to storage pools

A storage pool is a file, directory, or storage device, managed by libvirt to provide storage for virtual machines (VMs). You can divide storage pools into storage volumes, which store VM images or are attached to VMs as additional storage.

Furthermore, multiple VMs can share the same storage pool, allowing for better allocation of storage resources.

  • Storage pools can be persistent or transient:

    • A persistent storage pool survives a system restart of the host machine. You can use the virsh pool-define command to create a persistent storage pool.
    • A transient storage pool only exists until the host reboots. You can use the virsh pool-create command to create a transient storage pool.
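
For example, assuming a pool definition stored in a hypothetical pool.xml file, the following commands create a persistent and a transient storage pool, respectively:

# virsh pool-define pool.xml
# virsh pool-create pool.xml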

Storage pool storage types

Storage pools can be either local or network-based (shared):

  • Local storage pools

    Local storage pools are attached directly to the host server. They include local directories, directly attached disks, physical partitions, and Logical Volume Management (LVM) volume groups on local devices.

    Local storage pools are useful for development, testing, and small deployments that do not require migration or have a large number of VMs.

  • Networked (shared) storage pools

    Networked storage pools include storage devices shared over a network using standard protocols.

11.1.2. Introduction to storage volumes

Storage pools are divided into storage volumes. Storage volumes are abstractions of physical partitions, LVM logical volumes, file-based disk images, and other storage types handled by libvirt. Storage volumes are presented to VMs as local storage devices, such as disks, regardless of the underlying hardware.

On the host machine, a storage volume is referred to by its name and an identifier for the storage pool from which it derives. On the virsh command line, this takes the form --pool storage_pool volume_name.

For example, to display information about a volume named firstimage in the guest_images pool, use the following command:

# virsh vol-info --pool guest_images firstimage
  Name:             firstimage
  Type:             block
  Capacity:         20.00 GB
  Allocation:       20.00 GB

11.1.3. Storage management using libvirt

Using the libvirt remote protocol, you can manage all aspects of VM storage. These operations can also be performed on a remote host. Consequently, a management application that uses libvirt, such as the RHEL web console, can be used to perform all the required tasks of configuring the storage of a VM.

You can use the libvirt API to query the list of volumes in a storage pool or to get information regarding the capacity, allocation, and available storage in that storage pool. For storage pools that support it, you can also use the libvirt API to create, clone, resize, and delete storage volumes. Furthermore, you can use the libvirt API to upload data to storage volumes, download data from storage volumes, or wipe data from storage volumes.
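
For illustration, the following commands show some of these operations performed with the virsh utility, which uses the libvirt API. The example assumes the guest_images storage pool from the earlier example, a hypothetical vol1 volume, and a hypothetical /tmp/data.img file on the host:

# virsh vol-list guest_images
# virsh vol-create-as guest_images vol1 10G
# virsh vol-upload vol1 /tmp/data.img --pool guest_images
# virsh vol-wipe vol1 --pool guest_images
# virsh vol-delete vol1 --pool guest_images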

11.1.4. Overview of storage management

To illustrate the available options for managing storage, the following example describes a sample NFS server whose export is mounted by using mount -t nfs nfs.example.com:/path/to/share /path/to/data.

As a storage administrator:

  • You can define an NFS storage pool on the virtualization host to describe the exported server path and the client target path. Consequently, libvirt can mount the storage either automatically when libvirt is started or as needed while libvirt is running.
  • You can simply add the storage pool and storage volume to a VM by name. You do not need to add the target path to the volume. Therefore, even if the target client path changes, it does not affect the VM.
  • You can configure storage pools to autostart. When you do so, libvirt automatically mounts the NFS share on the specified directory when libvirt is started, similar to running the command mount nfs.example.com:/path/to/share /vmdata.
  • You can query the storage volume paths using the libvirt API. These storage volumes are basically the files present in the NFS shared disk. You can then copy these paths into the section of a VM’s XML definition that describes the source storage for the VM’s block devices.
  • In the case of NFS, you can use an application that uses the libvirt API to create and delete storage volumes in the storage pool (files in the NFS share) up to the limit of the size of the pool (the storage capacity of the share).

    Note that not all storage pool types support creating and deleting volumes.

  • You can stop a storage pool when no longer required. Stopping a storage pool (pool-destroy) undoes the start operation, in this case, unmounting the NFS share. The data on the share is not modified by the destroy operation, despite what the name of the command suggests. For more information, see man virsh.

11.1.5. Supported and unsupported storage pool types

Supported storage pool types

The following is a list of storage pool types supported by RHEL:

  • Directory-based storage pools
  • Disk-based storage pools
  • Partition-based storage pools
  • GlusterFS storage pools
  • iSCSI-based storage pools
  • LVM-based storage pools
  • NFS-based storage pools
  • SCSI-based storage pools with vHBA devices
  • Multipath-based storage pools
  • RBD-based storage pools

Unsupported storage pool types

The following is a list of libvirt storage pool types not supported by RHEL:

  • Sheepdog-based storage pools
  • Vstorage-based storage pools
  • ZFS-based storage pools

11.2. Managing virtual machine storage pools using the CLI

You can use the CLI to manage various aspects of your storage pools to assign storage to your virtual machines (VMs), as described in the following sections.

11.2.1. Viewing storage pool information using the CLI

Using the CLI, you can view a list of all storage pools with limited or full details about the storage pools. You can also filter the storage pools listed.

Procedure

  • Use the virsh pool-list command to view storage pool information.

    # virsh pool-list --all --details
     Name                State    Autostart  Persistent    Capacity  Allocation   Available
     default             running  yes        yes          48.97 GiB   23.93 GiB   25.03 GiB
     Downloads           running  yes        yes         175.62 GiB   62.02 GiB  113.60 GiB
     RHEL-Storage-Pool   running  yes        yes         214.62 GiB   93.02 GiB  168.60 GiB
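
  • To filter the listed storage pools, add further options to the virsh pool-list command. For example, to display only the inactive storage pools:

    # virsh pool-list --inactive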

Additional resources

  • The virsh pool-list --help command

11.2.2. Creating directory-based storage pools using the CLI

A directory-based storage pool is based on a directory in an existing mounted file system. This is useful, for example, when you want to use the remaining space on the file system for other purposes. You can use the virsh utility to create directory-based storage pools.

Prerequisites

  • Ensure your hypervisor supports directory storage pools:

    # virsh pool-capabilities | grep "'dir' supported='yes'"

    If the command displays any output, directory pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a directory-type storage pool. For example, to create a storage pool named guest_images_dir that uses the /guest_images directory:

    # virsh pool-define-as guest_images_dir dir --target "/guest_images"
    Pool guest_images_dir defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Directory-based storage pool parameters.

  2. Create the storage pool target path

    Use the virsh pool-build command to create a storage pool target path for a pre-formatted file system storage pool, initialize the storage source device, and define the format of the data.

    # virsh pool-build guest_images_dir
      Pool guest_images_dir built
    
    # ls -la /guest_images
      total 8
      drwx------.  2 root root 4096 May 31 19:38 .
      dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
  3. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_dir     inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_dir
      Pool guest_images_dir started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  5. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_dir
      Pool guest_images_dir marked as autostarted

Verification

  • Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_dir
      Name:           guest_images_dir
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.2.3. Creating disk-based storage pools using the CLI

In a disk-based storage pool, the pool is based on a disk partition. This is useful, for example, when you want to have an entire disk partition dedicated as virtual machine (VM) storage. You can use the virsh utility to create disk-based storage pools.

Prerequisites

  • Ensure your hypervisor supports disk-based storage pools:

    # virsh pool-capabilities | grep "'disk' supported='yes'"

    If the command displays any output, disk-based pools are supported.

  • Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups on it. This can result in system errors on the host.

    However, if you require using an entire block device for the storage pool, Red Hat recommends protecting any important partitions on the device from GRUB’s os-prober function. To do so, edit the /etc/default/grub file and apply one of the following configurations:

    • Disable os-prober.

      GRUB_DISABLE_OS_PROBER=true
    • Prevent os-prober from discovering a specific partition. For example:

      GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
  • Back up any data on the selected storage device before creating a storage pool. Depending on the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all data currently stored on the disk device.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a disk-type storage pool. The following example creates a storage pool named guest_images_disk that uses the /dev/sdb device and is mounted on the /dev directory.

    # virsh pool-define-as guest_images_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev
    Pool guest_images_disk defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Disk-based storage pool parameters.

  2. Create the storage pool target path

    Use the virsh pool-build command to create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.

    # virsh pool-build guest_images_disk
      Pool guest_images_disk built
    Note

    Building the target path is only necessary for disk-based, file system-based, and logical storage pools. If libvirt detects that the source storage device’s data format differs from the selected storage pool type, the build fails, unless the overwrite option is specified.

  3. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_disk    inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_disk
      Pool guest_images_disk started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  5. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_disk
      Pool guest_images_disk marked as autostarted

Verification

  • Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_disk
      Name:           guest_images_disk
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.2.4. Creating filesystem-based storage pools using the CLI

When you want to create a storage pool on a file system that is not mounted, use the filesystem-based storage pool. This storage pool is based on a given file-system mountpoint. You can use the virsh utility to create filesystem-based storage pools.

Prerequisites

  • Ensure your hypervisor supports filesystem-based storage pools:

    # virsh pool-capabilities | grep "'fs' supported='yes'"

    If the command displays any output, filesystem-based pools are supported.

  • Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups on it. This can result in system errors on the host.

    However, if you require using an entire block device for the storage pool, Red Hat recommends protecting any important partitions on the device from GRUB’s os-prober function. To do so, edit the /etc/default/grub file and apply one of the following configurations:

    • Disable os-prober.

      GRUB_DISABLE_OS_PROBER=true
    • Prevent os-prober from discovering a specific partition. For example:

      GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
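
    Note that changes to /etc/default/grub typically take effect only after the GRUB configuration is regenerated. As a minimal sketch for a BIOS-based RHEL 8 host (the output path differs on UEFI systems):

    # grub2-mkconfig -o /boot/grub2/grub.cfg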

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a filesystem-type storage pool. For example, to create a storage pool named guest_images_fs that uses the /dev/sdc1 partition, and is mounted on the /guest_images directory:

    # virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images
    Pool guest_images_fs defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Filesystem-based storage pool parameters.

  2. Define the storage pool target path

    Use the virsh pool-build command to create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.

    # virsh pool-build guest_images_fs
      Pool guest_images_fs built
    
    # ls -la /guest_images
      total 8
      drwx------.  2 root root 4096 May 31 19:38 .
      dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
  3. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_fs      inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_fs
      Pool guest_images_fs started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  5. Optional: Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_fs
      Pool guest_images_fs marked as autostarted

Verification

  1. Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_fs
      Name:           guest_images_fs
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB
  2. Verify there is a lost+found directory in the target path on the file system, indicating that the device is mounted.

    # mount | grep /guest_images
      /dev/sdc1 on /guest_images type ext4 (rw)
    
    # ls -la /guest_images
      total 24
      drwxr-xr-x.  3 root root  4096 May 31 19:47 .
      dr-xr-xr-x. 25 root root  4096 May 31 19:38 ..
      drwx------.  2 root root 16384 May 31 14:18 lost+found

11.2.5. Creating GlusterFS-based storage pools using the CLI

GlusterFS is a user-space file system that uses the File System in Userspace (FUSE) software interface. If you want to have a storage pool on a Gluster server, you can use the virsh utility to create GlusterFS-based storage pools.

Prerequisites

  • Before you can create a GlusterFS-based storage pool on a host, prepare a Gluster server.

    1. Obtain the IP address of the Gluster server by listing its status with the following command:

      # gluster volume status
      Status of volume: gluster-vol1
      Gluster process                           Port	Online	Pid
      ------------------------------------------------------------
      Brick 222.111.222.111:/gluster-vol1       49155	  Y    18634
      
      Task Status of Volume gluster-vol1
      ------------------------------------------------------------
      There are no active volume tasks
    2. If not installed, install the glusterfs-fuse package.
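
      For example, on RHEL 8 you can typically install the package with:

      # yum install glusterfs-fuse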
    3. If not enabled, enable the virt_use_fusefs boolean. Check that it is enabled.

      # setsebool virt_use_fusefs on
      # getsebool virt_use_fusefs
      virt_use_fusefs --> on
  • Ensure your hypervisor supports GlusterFS-based storage pools:

    # virsh pool-capabilities | grep "'gluster' supported='yes'"

    If the command displays any output, GlusterFS-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a GlusterFS-based storage pool. For example, to create a storage pool named guest_images_glusterfs that uses a Gluster server named gluster-vol1 with IP 111.222.111.222, and is mounted on the root directory of the Gluster server:

    # virsh pool-define-as --name guest_images_glusterfs --type gluster --source-host 111.222.111.222 --source-name gluster-vol1 --source-path /
    Pool guest_images_glusterfs defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see GlusterFS-based storage pool parameters.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                    State      Autostart
      --------------------------------------------
      default                 active     yes
      guest_images_glusterfs  inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_glusterfs
      Pool guest_images_glusterfs started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_glusterfs
      Pool guest_images_glusterfs marked as autostarted

Verification

  • Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_glusterfs
      Name:           guest_images_glusterfs
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.2.6. Creating iSCSI-based storage pools using the CLI

Internet Small Computer Systems Interface (iSCSI) is an IP-based storage networking standard for linking data storage facilities. If you want to have a storage pool on an iSCSI server, you can use the virsh utility to create iSCSI-based storage pools.

Prerequisites

  • Ensure your hypervisor supports iSCSI-based storage pools:

    # virsh pool-capabilities | grep "'iscsi' supported='yes'"

    If the command displays any output, iSCSI-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create an iSCSI-type storage pool. For example, to create a storage pool named guest_images_iscsi that uses the iqn.2010-05.com.example.server1:iscsirhel7guest IQN on the server1.example.com, and is mounted on the /dev/disk/by-path path:

    # virsh pool-define-as --name guest_images_iscsi --type iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest --target /dev/disk/by-path
    Pool guest_images_iscsi defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see iSCSI-based storage pool parameters.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_iscsi   inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_iscsi
      Pool guest_images_iscsi started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_iscsi
      Pool guest_images_iscsi marked as autostarted

Verification

  • Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_iscsi
      Name:           guest_images_iscsi
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.2.7. Creating LVM-based storage pools using the CLI

If you want to have a storage pool that is part of an LVM volume group, you can use the virsh utility to create LVM-based storage pools.

Recommendations

Be aware of the following before creating an LVM-based storage pool:

  • LVM-based storage pools do not provide the full flexibility of LVM.
  • libvirt supports thin logical volumes, but does not provide the features of thin storage pools.
  • LVM-based storage pools are volume groups. You can create volume groups by using the virsh utility, but this way you can only have one device in the created volume group. To create a volume group with multiple devices, use the LVM utilities instead, as shown in the sketch after this list. For details, see How to create a volume group in Linux with LVM.

    For more detailed information about volume groups, refer to the Red Hat Enterprise Linux Logical Volume Manager Administration Guide.

  • LVM-based storage pools require a full disk partition. If you activate a new partition or device using virsh commands, the partition will be formatted and all data will be erased. If you are using a host’s existing volume group, as in these procedures, nothing will be erased.
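
As noted above, to create a volume group that spans multiple devices, use the LVM utilities directly. The following is a minimal sketch that assumes /dev/sdb1 and /dev/sdc1 are unused partitions; replace them with your own devices:

# pvcreate /dev/sdb1 /dev/sdc1
# vgcreate lvm_vg /dev/sdb1 /dev/sdc1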

Prerequisites

  • Ensure your hypervisor supports LVM-based storage pools:

    # virsh pool-capabilities | grep "'logical' supported='yes'"

    If the command displays any output, LVM-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create an LVM-type storage pool. For example, the following command creates a storage pool named guest_images_lvm that uses the lvm_vg volume group and is mounted on the /dev/lvm_vg directory:

    # virsh pool-define-as guest_images_lvm logical --source-name lvm_vg --target /dev/lvm_vg
    Pool guest_images_lvm defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see LVM-based storage pool parameters.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                   State      Autostart
      -------------------------------------------
      default                active     yes
      guest_images_lvm       inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_lvm
      Pool guest_images_lvm started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_lvm
      Pool guest_images_lvm marked as autostarted

Verification

  • Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_lvm
      Name:           guest_images_lvm
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.2.8. Creating NFS-based storage pools using the CLI

If you want to have a storage pool on a Network File System (NFS) server, you can use the virsh utility to create NFS-based storage pools.

Prerequisites

  • Ensure your hypervisor supports NFS-based storage pools:

    # virsh pool-capabilities | grep "<value>nfs</value>"

    If the command displays any output, NFS-based pools are supported.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create an NFS-type storage pool. For example, to create a storage pool named guest_images_netfs that uses the /home/net_mount directory exported by the NFS server at IP address 111.222.111.222 and mounts it on the local target directory /var/lib/libvirt/images/nfspool:

    # virsh pool-define-as --name guest_images_netfs --type netfs --source-host='111.222.111.222' --source-path='/home/net_mount' --source-format='nfs' --target='/var/lib/libvirt/images/nfspool'

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see NFS-based storage pool parameters.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_netfs   inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_netfs
      Pool guest_images_netfs started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_netfs
      Pool guest_images_netfs marked as autostarted

Verification

  • Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_netfs
      Name:           guest_images_netfs
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.2.9. Creating SCSI-based storage pools with vHBA devices using the CLI

If you want to have a storage pool on a Small Computer System Interface (SCSI) device, your host must be able to connect to the SCSI device using a virtual host bus adapter (vHBA). You can then use the virsh utility to create SCSI-based storage pools.

Prerequisites

  • Ensure your hypervisor supports SCSI-based storage pools:

    # virsh pool-capabilities | grep "'scsi' supported='yes'"

    If the command displays any output, SCSI-based pools are supported.

  • Before creating a SCSI-based storage pool with vHBA devices, create a vHBA. For more information, see Creating vHBAs.

Procedure

  1. Create a storage pool

    Use the virsh pool-define-as command to define and create a SCSI storage pool that uses a vHBA. For example, the following command creates a storage pool named guest_images_vhba that uses a vHBA identified by the scsi_host3 parent adapter, world-wide port number 5001a4ace3ee047d, and world-wide node number 5001a4a93526d0a1. The storage pool is mounted on the /dev/disk/ directory:

    # virsh pool-define-as guest_images_vhba scsi --adapter-parent scsi_host3 --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d --target /dev/disk/
    Pool guest_images_vhba defined

    If you already have an XML configuration of the storage pool you want to create, you can also define the pool based on the XML. For details, see Parameters for SCSI-based storage pools with vHBA devices.

  2. Verify that the pool was created

    Use the virsh pool-list command to verify that the pool was created.

    # virsh pool-list --all
    
      Name                 State      Autostart
      -----------------------------------------
      default              active     yes
      guest_images_vhba    inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to mount the storage pool.

    # virsh pool-start guest_images_vhba
      Pool guest_images_vhba started
    Note

    The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

  4. [Optional] Turn on autostart

    By default, a storage pool defined with the virsh command is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

    # virsh pool-autostart guest_images_vhba
      Pool guest_images_vhba marked as autostarted

Verification

  • Use the virsh pool-info command to verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly.

    # virsh pool-info guest_images_vhba
      Name:           guest_images_vhba
      UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       458.39 GB
      Allocation:     197.91 MB
      Available:      458.20 GB

11.2.10. Deleting storage pools using the CLI

To remove a storage pool from your host system, you must stop the pool and remove its XML definition.

Procedure

  1. List the defined storage pools using the virsh pool-list command.

    # virsh pool-list --all
    Name                 State      Autostart
    -------------------------------------------
    default              active     yes
    Downloads            active     yes
    RHEL-Storage-Pool    active     yes
  2. Stop the storage pool you want to delete using the virsh pool-destroy command.

    # virsh pool-destroy Downloads
    Pool Downloads destroyed
  3. Optional: For some types of storage pools, you can remove the directory where the storage pool resides using the virsh pool-delete command. Note that to do so, the directory must be empty.

    # virsh pool-delete Downloads
    Pool Downloads deleted
  4. Delete the definition of the storage pool using the virsh pool-undefine command.

    # virsh pool-undefine Downloads
    Pool Downloads has been undefined

Verification

  • Confirm that the storage pool was deleted.

    # virsh pool-list --all
    Name                 State      Autostart
    -------------------------------------------
    default              active     yes
    RHEL-Storage-Pool    active     yes

11.3. Managing virtual machine storage pools using the web console

Using the RHEL web console, you can manage the storage pools to assign storage to your virtual machines (VMs).

You can use the web console to perform the tasks described in the following sections.

11.3.1. Viewing storage pool information using the web console

Using the web console, you can view detailed information about storage pools available on your system. Storage pools can be used to create disk images for your virtual machines.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines interface.

    The Storage pools window appears, showing a list of configured storage pools.

    Image displaying the storage pool tab of the web console with information about existing storage pools.

    The information includes the following:

    • Name - The name of the storage pool.
    • Size - The current allocation and the total capacity of the storage pool.
    • Connection - The connection used to access the storage pool.
    • State - The state of the storage pool.
  2. Click the arrow next to the storage pool whose information you want to see.

    The row expands to reveal the Overview pane with detailed information about the selected storage pool.

    Image displaying the detailed information about the selected storage pool.

    The information includes:

    • Target path - The path on the host used by storage pool types backed by directories, such as dir or netfs.
    • Persistent - Indicates whether or not the storage pool has a persistent configuration.
    • Autostart - Indicates whether or not the storage pool starts automatically when the system boots up.
    • Type - The type of the storage pool.
  3. To view a list of storage volumes associated with the storage pool, click Storage Volumes.

    The Storage Volumes pane appears, showing a list of configured storage volumes.

    Image displaying the list of storage volumes associated with the selected storage pool.

    The information includes:

    • Name - The name of the storage volume.
    • Used by - The VM that is currently using the storage volume.
    • Size - The size of the volume.

11.3.2. Creating directory-based storage pools using the web console

A directory-based storage pool is based on a directory in an existing mounted file system. This is useful, for example, when you want to use the remaining space on the file system for other purposes.

Prerequisites

Procedure

  1. In the RHEL web console, click Storage pools in the Virtual Machines tab.

    The Storage pools window appears, showing a list of configured storage pools, if any.

    Image displaying all the storage pools currently configured on the host
  2. Click Create storage pool.

    The Create storage pool dialog appears.

  3. Enter a name for the storage pool.
  4. In the Type drop down menu, select Filesystem directory.

    Image displaying the Create storage pool dialog box.
    Note

    If you do not see the Filesystem directory option in the drop down menu, then your hypervisor does not support directory-based storage pools.

  5. Enter the following information:

    • Target path - The directory on the host file system to use for the storage pool.
    • Startup - Whether or not the storage pool starts when the host boots.
  6. Click Create.

    The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.

11.3.3. Creating NFS-based storage pools using the web console

An NFS-based storage pool is based on a file system that is hosted on a server.

Prerequisites

Procedure

  1. In the RHEL web console, click Storage pools in the Virtual Machines tab.

    The Storage pools window appears, showing a list of configured storage pools, if any.

    Image displaying all the storage pools currently configured on the host
  2. Click Create storage pool.

    The Create storage pool dialog appears.

  3. Enter a name for the storage pool.
  4. In the Type drop down menu, select Network file system.

    Image displaying the Create storage pool dialog box.
    Note

    If you do not see the Network file system option in the drop down menu, then your hypervisor does not support NFS-based storage pools.

  5. Enter the rest of the information:

    • Target path - The path specifying the target. This will be the path used for the storage pool.
    • Host - The hostname of the network server where the mount point is located. This can be a hostname or an IP address.
    • Source path - The directory used on the network server.
    • Startup - Whether or not the storage pool starts when the host boots.
  6. Click Create.

    The storage pool is created. The Create storage pool dialog closes, and the new storage pool appears in the list of storage pools.

11.3.4. Creating iSCSI-based storage pools using the web console

An iSCSI-based storage pool is based on the Internet Small Computer Systems Interface (iSCSI), an IP-based storage networking standard for linking data storage facilities.

Prerequisites

Procedure

  1. In the RHEL web console, click Storage pools in the Virtual Machines tab.

    The Storage pools window appears, showing a list of configured storage pools, if any.

    Image displaying all the storage pools currently configured on the host
  2. Click Create storage pool.

    The Create storage pool dialog appears.

  3. Enter a name for the storage pool.
  4. In the Type drop down menu, select iSCSI target.

    Image displaying the Create storage pool dialog box.
  5. Enter the rest of the information:

    • Target Path - The path specifying the target. This will be the path used for the storage pool.
    • Host - The hostname or IP address of the iSCSI server.
    • Source path - The unique iSCSI Qualified Name (IQN) of the iSCSI target.
    • Startup - Whether or not the storage pool starts when the host boots.
  6. Click Create.

    The storage pool is created. The Create storage pool dialog closes, and the new storage pool appears in the list of storage pools.

11.3.5. Creating disk-based storage pools using the web console

A disk-based storage pool uses entire disk partitions.

Warning
  • Depending on the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all data currently stored on the disk device. It is strongly recommended that you back up the data on the storage device before creating a storage pool.
  • When whole disks or block devices are passed to the VM, the VM will likely partition them or create its own LVM groups on them. This can cause the host machine to detect these partitions or LVM groups and cause errors.

    These errors can also occur when you manually create partitions or LVM groups and pass them to the VM.

    To avoid these errors, use file-based storage pools instead.

Prerequisites

Procedure

  1. In the RHEL web console, click Storage pools in the Virtual Machines tab.

    The Storage pools window appears, showing a list of configured storage pools, if any.

    Image displaying all the storage pools currently configured on the host
  2. Click Create storage pool.

    The Create storage pool dialog appears.

  3. Enter a name for the storage pool.
  4. In the Type drop down menu, select Physical disk device.

    Image displaying the Create storage pool dialog box.
    Note

    If you do not see the Physical disk device option in the drop down menu, then your hypervisor does not support disk-based storage pools.

  5. Enter the rest of the information:

    • Target Path - The path specifying the target device. This will be the path used for the storage pool.
    • Source path - The path specifying the storage device. For example, /dev/sdb.
    • Format - The type of the partition table.
    • Startup - Whether or not the storage pool starts when the host boots.
  6. Click Create.

    The storage pool is created. The Create storage pool dialog closes, and the new storage pool appears in the list of storage pools.

11.3.6. Creating LVM-based storage pools using the web console

An LVM-based storage pool is based on volume groups, which you can manage using the Logical Volume Manager (LVM). A volume group is a combination of multiple physical volumes that creates a single storage structure.

Note
  • LVM-based storage pools do not provide the full flexibility of LVM.
  • libvirt supports thin logical volumes, but does not provide the features of thin storage pools.
  • LVM-based storage pools require a full disk partition. If you activate a new partition or device using virsh commands, the partition will be formatted and all data will be erased. If you are using a host’s existing volume group, as in these procedures, nothing will be erased.
  • To create a volume group with multiple devices, use the LVM utility instead, see How to create a volume group in Linux with LVM.

    For more detailed information about volume groups, refer to the Red Hat Enterprise Linux Logical Volume Manager Administration Guide.

Prerequisites

Procedure

  1. In the RHEL web console, click Storage pools in the Virtual Machines tab.

    The Storage pools window appears, showing a list of configured storage pools, if any.

    Image displaying all the storage pools currently configured on the host
  2. Click Create storage pool.

    The Create storage pool dialog appears.

  3. Enter a name for the storage pool.
  4. In the Type drop down menu, select LVM volume group.

    Image displaying the Create storage pool dialog box.
    Note

    If you do not see the LVM volume group option in the drop down menu, then your hypervisor does not support LVM-based storage pools.

  5. Enter the rest of the information:

    • Source volume group - The name of the LVM volume group that you wish to use.
    • Startup - Whether or not the storage pool starts when the host boots.
  6. Click Create.

    The storage pool is created. The Create storage pool dialog closes, and the new storage pool appears in the list of storage pools.

11.3.7. Creating SCSI-based storage pools with vHBA devices using the web console

A SCSI-based storage pool is based on a Small Computer System Interface (SCSI) device. In this configuration, your host must be able to connect to the SCSI device using a virtual host bus adapter (vHBA).

Prerequisites

Procedure

  1. In the RHEL web console, click Storage pools in the Virtual Machines tab.

    The Storage pools window appears, showing a list of configured storage pools, if any.

    Image displaying all the storage pools currently configured on the host
  2. Click Create storage pool.

    The Create storage pool dialog appears.

  3. Enter a name for the storage pool.
  4. In the Type drop down menu, select iSCSI direct target.

    Image displaying the Create storage pool dialog box.
    Note

    If you do not see the iSCSI direct target option in the drop down menu, then your hypervisor does not support SCSI-based storage pools.

  5. Enter the rest of the information:

    • Host - The hostname of the network server where the mount point is located. This can be a hostname or an IP address.
    • Source path - The unique iSCSI Qualified Name (IQN) of the iSCSI target.
    • Initiator - The unique iSCSI Qualified Name (IQN) of the iSCSI initiator, the vHBA.
    • Startup - Whether or not the storage pool starts when the host boots.
  6. Click Create.

    The storage pool is created. The Create storage pool dialog closes, and the new storage pool appears in the list of storage pools.

11.3.8. Removing storage pools using the web console

You can remove storage pools to free up resources on the host or on the network to improve system performance. Deleting storage pools also frees up resources that can then be used by other virtual machines (VMs).

Important

Unless explicitly specified, deleting a storage pool does not simultaneously delete the storage volumes inside that pool.

To temporarily deactivate a storage pool instead of deleting it, see Deactivating storage pools using the web console.

Prerequisites

Procedure

  1. Click Storage Pools on the Virtual Machines tab.

    The Storage Pools window appears, showing a list of configured storage pools.

    Image displaying all the storage pools currently configured on the host.
  2. Click the Menu button of the storage pool you want to delete and click Delete.

    A confirmation dialog appears.

    Image displaying the Delete Storage Pool default dialog box.
  3. Optional: To delete the storage volumes inside the pool, select the corresponding check boxes in the dialog.
  4. Click Delete.

    The storage pool is deleted. If you had selected the checkbox in the previous step, the associated storage volumes are deleted as well.

11.3.9. Deactivating storage pools using the web console

If you do not want to permanently delete a storage pool, you can temporarily deactivate it instead.

When you deactivate a storage pool, no new volumes can be created in that pool. However, any virtual machines (VMs) that have volumes in that pool will continue to run. This is useful in a number of situations; for example, you can limit the number of volumes created in a pool to improve system performance.

To deactivate a storage pool using the RHEL web console, see the following procedure.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    Image displaying all the storage pools currently configured on the host.
  2. Click Deactivate on the storage pool row.

    The storage pool is deactivated.
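
If you prefer the command line, a roughly equivalent operation is to stop the pool without removing its persistent definition. The following is a sketch, with guest_images_dir as a placeholder pool name; you can later reactivate the pool with virsh pool-start:

# virsh pool-destroy guest_images_dir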

11.4. Parameters for creating storage pools

Based on the type of storage pool you require, you can modify its XML configuration file and define a specific type of storage pool. This section provides information about the XML parameters required for creating various types of storage pools along with examples.

11.4.1. Directory-based storage pool parameters

When you want to create or modify a directory-based storage pool using an XML configuration file, you must include certain required parameters. See the following table for more information about these parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_dir

Parameters

The following table provides a list of required parameters for the XML file for a directory-based storage pool.

Table 11.1. Directory-based storage pool parameters

DescriptionXML

The type of storage pool

<pool type='dir'>

The name of the storage pool

<name>name</name>

The path specifying the target. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Example

The following is an example of an XML file for a storage pool based on the /guest_images directory:

<pool type='dir'>
  <name>dirpool</name>
  <target>
    <path>/guest_images</path>
  </target>
</pool>
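
As a usage sketch, assuming the example above is saved as ~/guest_images.xml, you could define, build, and start the pool as follows. The pool-build step creates the /guest_images target directory if it does not already exist:

# virsh pool-define ~/guest_images.xml
# virsh pool-build dirpool
# virsh pool-start dirpool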

11.4.2. Disk-based storage pool parameters

When you want to create or modify a disk-based storage pool using an XML configuration file, you must include certain required parameters. See the following table for more information about these parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_disk

Parameters

The following table provides a list of required parameters for the XML file for a disk-based storage pool.

Table 11.2. Disk-based storage pool parameters

DescriptionXML

The type of storage pool

<pool type='disk'>

The name of the storage pool

<name>name</name>

The path specifying the storage device. For example, /dev/sdb.

<source>
   <device path='source_path' />
</source>

The path specifying the target device. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Example

The following is an example of an XML file for a disk-based storage pool:

<pool type='disk'>
  <name>phy_disk</name>
  <source>
    <device path='/dev/sdb'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>

11.4.3. Filesystem-based storage pool parameters

When you want to create or modify a filesystem-based storage pool using an XML configuration file, you must include certain required parameters. See the following table for more information about these parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_fs

Parameters

The following table provides a list of required parameters for the XML file for a filesystem-based storage pool.

Table 11.3. Filesystem-based storage pool parameters

DescriptionXML

The type of storage pool

<pool type='fs'>

The name of the storage pool

<name>name</name>

The path specifying the partition. For example, /dev/sdc1

<source>
   <device path=device_path />

The file system type, for example ext4.

    <format type=fs_type />
</source>

The path specifying the target. This will be the path used for the storage pool.

<target>
    <path>path-to-pool</path>
</target>

Example

The following is an example of an XML file for a storage pool based on the /dev/sdc1 partition:

<pool type='fs'>
  <name>guest_images_fs</name>
  <source>
    <device path='/dev/sdc1'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/guest_images</path>
  </target>
</pool>

11.4.4. GlusterFS-based storage pool parameters

When you want to create or modify a GlusterFS-based storage pool using an XML configuration file, you must include certain required parameters. See the following table for more information about these parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_glusterfs

Parameters

The following table provides a list of required parameters for the XML file for a GlusterFS-based storage pool.

Table 11.4. GlusterFS-based storage pool parameters

DescriptionXML

The type of storage pool

<pool type='gluster'>

The name of the storage pool

<name>name</name>

The hostname or IP address of the Gluster server

<source>
   <host name=gluster-hostname />

The path on the Gluster server used for the storage pool.

    <dir path=gluster-path />

The name of the Gluster volume

    <name>gluster-volume-name</name>
</source>

Example

The following is an example of an XML file for a storage pool based on the Gluster file system at 111.222.111.222:

<pool type='gluster'>
  <name>Gluster_pool</name>
  <source>
    <host name='111.222.111.222'/>
    <dir path='/'/>
    <name>gluster-vol1</name>
  </source>
</pool>

11.4.5. iSCSI-based storage pool parameters

When you want to create or modify an iSCSI-based storage pool using an XML configuration file, you must include certain required parameters. See the following table for more information about these parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_iscsi

Parameters

The following table provides a list of required parameters for the XML file for an iSCSI-based storage pool.

Table 11.5. iSCSI-based storage pool parameters

DescriptionXML

The type of storage pool

<pool type='iscsi'>

The name of the storage pool

<name>name</name>

The name of the host

<source>
  <host name=hostname />

The iSCSI IQN

    <device path=iSCSI_IQN />
</source>

The path specifying the target. This will be the path used for the storage pool.

<target>
   <path>/dev/disk/by-path</path>
</target>

[Optional] The IQN of the iSCSI initiator. This is only needed when the ACL restricts the LUN to a particular initiator.

<initiator>
   <iqn name='initiator0' />
</initiator>

Note

The IQN of the iSCSI initiator can be determined using the virsh find-storage-pool-sources-as iscsi command.
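
For example, the following sketch queries an iSCSI host for available sources; server1.example.com is a placeholder host name:

# virsh find-storage-pool-sources-as iscsi server1.example.com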

Example

The following is an example of an XML file for a storage pool based on the specified iSCSI device:

<pool type='iscsi'>
  <name>iSCSI_pool</name>
  <source>
    <host name='server1.example.com'/>
    <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>

11.4.6. LVM-based storage pool parameters

When you want to create or modify an LVM-based storage pool using an XML configuration file, you must include certain required parameters. See the following table for more information about these parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_logical

Parameters

The following table provides a list of required parameters for the XML file for a LVM-based storage pool.

Table 11.6. LVM-based storage pool parameters

DescriptionXML

The type of storage pool

<pool type='logical'>

The name of the storage pool

<name>name</name>

The path to the device for the storage pool

<source>
   <device path='device_path' />

The name of the volume group

    <name>VG-name</name>

The format of the volume group

    <format type='lvm2' />
</source>

The target path

<target>
   <path>target_path</path>
</target>

Note

If the logical volume group is made of multiple disk partitions, there may be multiple source devices listed. For example:

<source>
  <device path='/dev/sda1'/>
  <device path='/dev/sdb3'/>
  <device path='/dev/sdc2'/>
  ...
</source>

Example

The following is an example of an XML file for a storage pool based on the specified LVM:

<pool type='logical'>
  <name>guest_images_lvm</name>
  <source>
    <device path='/dev/sdc'/>
    <name>libvirt_lvm</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/libvirt_lvm</path>
  </target>
</pool>

11.4.7. NFS-based storage pool parameters

When you want to create or modify an NFS-based storage pool using an XML configuration file, you must include certain required parameters. See the following table for more information about these parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_netfs

Parameters

The following table provides a list of required parameters for the XML file for an NFS-based storage pool.

Table 11.7. NFS-based storage pool parameters

DescriptionXML

The type of storage pool

<pool type='netfs'>

The name of the storage pool

<name>name</name>

The hostname of the network server where the mount point is located. This can be a hostname or an IP address.

<source>
   <host name=hostname />

The format of the storage pool

One of the following:

    <format type='nfs' />

    <format type='glusterfs' />

    <format type='cifs' />

The directory used on the network server

    <dir path=source_path />
</source>

The path specifying the target. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Example

The following is an example of an XML file for a storage pool based on the /home/net_mount directory of the file_server NFS server:

<pool type='netfs'>
  <name>nfspool</name>
  <source>
    <host name='file_server'/>
    <format type='nfs'/>
    <dir path='/home/net_mount'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/nfspool</path>
  </target>
</pool>

11.4.8. Parameters for SCSI-based storage pools with vHBA devices

To create or modify an XML configuration file for a SCSI-based storage pool that uses a virtual host bus adapter (vHBA) device, you must include certain required parameters in the XML configuration file. See the following table for more information about the required parameters.

You can use the virsh pool-define command to create a storage pool based on the XML configuration in a specified file. For example:

# virsh pool-define ~/guest_images.xml
  Pool defined from guest_images_vhba

Parameters

The following table provides a list of required parameters for the XML file for a SCSI-based storage pool with vHBA.

Table 11.8. Parameters for SCSI-based storage pools with vHBA devices

DescriptionXML

The type of storage pool

<pool type='scsi'>

The name of the storage pool

<name>name</name>

The identifier of the vHBA. The parent attribute is optional.

<source>
   <adapter type='fc_host'
   [parent=parent_scsi_device]
   wwnn='WWNN'
   wwpn='WWPN' />
</source>

The target path. This will be the path used for the storage pool.

<target>
   <path>target_path</path>
</target>

Important

When the <path> field is /dev/, libvirt generates a unique short device path for the volume device path. For example, /dev/sdc. Otherwise, the physical host path is used. For example, /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0. The unique short device path allows the same volume to be listed in multiple virtual machines (VMs) by multiple storage pools. If the physical host path is used by multiple VMs, duplicate device type warnings may occur.

Note

The parent attribute can be used in the <adapter> field to identify the physical HBA parent from which the NPIV LUNs by varying paths can be used. This field, scsi_hostN, is combined with the vports and max_vports attributes to complete the parent identification. The parent, parent_wwnn, parent_wwpn, or parent_fabric_wwn attributes provide varying degrees of assurance that after the host reboots the same HBA is used.

  • If no parent is specified, libvirt uses the first scsi_hostN adapter that supports NPIV.
  • If only the parent is specified, problems can arise if additional SCSI host adapters are added to the configuration.
  • If parent_wwnn or parent_wwpn is specified, after the host reboots the same HBA is used.
  • If parent_fabric_wwn is used, after the host reboots an HBA on the same fabric is selected, regardless of the scsi_hostN used.

Examples

The following are examples of XML files for SCSI-based storage pools with vHBA.

  • A storage pool that is the only storage pool on the HBA:

    <pool type='scsi'>
      <name>vhbapool_host3</name>
      <source>
        <adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>
  • A storage pool that is one of several storage pools that use a single vHBA and uses the parent attribute to identify the SCSI host device:

    <pool type='scsi'>
      <name>vhbapool_host3</name>
      <source>
        <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>

11.5. Managing virtual machine storage volumes using the CLI

You can use the CLI to manage the storage volumes that assign storage to your virtual machines (VMs), as described in the following sections.

11.5.1. Viewing storage volume information using the CLI

Using the command line, you can view a list of all storage volumes in a specified storage pool, as well as details about a specific storage volume.

Procedure

  1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.

    # virsh vol-list --pool RHEL-Storage-Pool --details
     Name                Path                                               Type   Capacity  Allocation
    ---------------------------------------------------------------------------------------------
     .bash_history       /home/VirtualMachines/.bash_history       file  18.70 KiB   20.00 KiB
     .bash_logout        /home/VirtualMachines/.bash_logout        file    18.00 B    4.00 KiB
     .bash_profile       /home/VirtualMachines/.bash_profile       file   193.00 B    4.00 KiB
     .bashrc             /home/VirtualMachines/.bashrc             file   1.29 KiB    4.00 KiB
     .git-prompt.sh      /home/VirtualMachines/.git-prompt.sh      file  15.84 KiB   16.00 KiB
     .gitconfig          /home/VirtualMachines/.gitconfig          file   167.00 B    4.00 KiB
     RHEL_Volume.qcow2   /home/VirtualMachines/RHEL8_Volume.qcow2  file  60.00 GiB   13.93 GiB
  2. Use the virsh vol-info command to display detailed information about a specific storage volume.

    # virsh vol-info --pool RHEL-Storage-Pool --vol RHEL_Volume.qcow2
    Name:           RHEL_Volume.qcow2
    Type:           file
    Capacity:       60.00 GiB
    Allocation:     13.93 GiB

11.5.2. Creating and assigning storage volumes using the CLI

To obtain a disk image and attach it to a virtual machine (VM) as a virtual disk, create a storage volume and assign its XML configuration to the VM.

Prerequisites

  • A storage pool with unallocated space is present on the host.

    • To verify, list the storage pools on the host:

      # virsh pool-list --details
      
      Name               State     Autostart   Persistent   Capacity     Allocation   Available
      --------------------------------------------------------------------------------------------
      default            running   yes         yes          48.97 GiB    36.34 GiB    12.63 GiB
      Downloads          running   yes         yes          175.92 GiB   121.20 GiB   54.72 GiB
      VM-disks           running   yes         yes          175.92 GiB   121.20 GiB   54.72 GiB
    • If you do not have an existing storage pool, create one. For more information, see Managing storage for virtual machines.

Procedure

  1. Create a storage volume using the virsh vol-create-as command. For example, to create a 20 GB qcow2 volume based on the guest-images-fs storage pool:

    # virsh vol-create-as --pool guest-images-fs --name vm-disk1 --capacity 20GB --format qcow2

    Important: Specific storage pool types do not support the virsh vol-create-as command and instead require specific processes to create storage volumes:

    • GlusterFS-based - Use the qemu-img command to create storage volumes.
    • iSCSI-based - Prepare the iSCSI LUNs in advance on the iSCSI server.
    • Multipath-based - Use the multipathd command to prepare or manage the multipath.
    • vHBA-based - Prepare the fibre channel card in advance.
  2. Create an XML file, and add the following lines in it. This file will be used to add the storage volume as a disk to a VM.

    <disk type='volume' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source pool='guest-images-fs' volume='vm-disk1'/>
        <target dev='hdk' bus='ide'/>
    </disk>

    This example specifies a virtual disk that uses the vm-disk1 volume created in the previous step, and sets the volume up as disk hdk on an ide bus. Modify the respective parameters as appropriate for your environment.

    Important: With specific storage pool types, you must use different XML formats to describe a storage volume disk.

    • For GlusterFS-based pools:

        <disk type='network' device='disk'>
          <driver name='qemu' type='raw'/>
          <source protocol='gluster' name='Volume1/Image'>
            <host name='example.org' port='6000'/>
          </source>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </disk>
    • For multipath-based pools:

      <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/mpatha' />
      <target dev='sda' bus='scsi'/>
      </disk>
    • For RBD-based storage pools:

        <disk type='network' device='disk'>
          <driver name='qemu' type='raw'/>
          <source protocol='rbd' name='pool/image'>
            <host name='mon1.example.org' port='6321'/>
          </source>
          <target dev='vdc' bus='virtio'/>
        </disk>
  3. Use the XML file to assign the storage volume as a disk to a VM. For example, to assign a disk defined in ~/vm-disk1.xml to the testguest1 VM, use the following command:

    # virsh attach-device --config testguest1 ~/vm-disk1.xml

Verification

  • In the guest operating system of the VM, confirm that the disk image has become available as an un-formatted and un-allocated disk.
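
    For example, on a Linux guest, a minimal check is to list the block devices and confirm that the newly attached disk appears without partitions or mount points. The device name depends on the configured bus and on the other disks that are present:

    # lsblk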

11.5.3. Deleting storage volumes using the CLI

To remove a storage volume from your host system, you must delete the volume by using the virsh vol-delete command. Optionally, you can first wipe the data in the volume.

Prerequisites

  • Any virtual machine that uses the storage volume you want to delete is shut down.

Procedure

  1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.

    # virsh vol-list --pool RHEL-SP
     Name                 Path
    ---------------------------------------------------------------
     .bash_history        /home/VirtualMachines/.bash_history
     .bash_logout         /home/VirtualMachines/.bash_logout
     .bash_profile        /home/VirtualMachines/.bash_profile
     .bashrc              /home/VirtualMachines/.bashrc
     .git-prompt.sh       /home/VirtualMachines/.git-prompt.sh
     .gitconfig           /home/VirtualMachines/.gitconfig
     vm-disk1             /home/VirtualMachines/vm-disk1
  2. Optional: Use the virsh vol-wipe command to wipe a storage volume. For example, to wipe a storage volume named vm-disk1 associated with the storage pool RHEL-SP:

    # virsh vol-wipe --pool RHEL-SP vm-disk1
    Vol vm-disk1 wiped
  3. Use the virsh vol-delete command to delete a storage volume. For example, to delete a storage volume named vm-disk1 associated with the storage pool RHEL-SP:

    # virsh vol-delete --pool RHEL-SP vm-disk1
    Vol vm-disk1 deleted

Verification

  • Use the virsh vol-list command again to verify that the storage volume was deleted.

    # virsh vol-list --pool RHEL-SP
     Name                 Path
    ---------------------------------------------------------------
     .bash_history        /home/VirtualMachines/.bash_history
     .bash_logout         /home/VirtualMachines/.bash_logout
     .bash_profile        /home/VirtualMachines/.bash_profile
     .bashrc              /home/VirtualMachines/.bashrc
     .git-prompt.sh       /home/VirtualMachines/.git-prompt.sh
     .gitconfig           /home/VirtualMachines/.gitconfig

11.6. Managing virtual machine storage volumes using the web console

Using the RHEL web console, you can manage the storage volumes used to allocate storage to your virtual machines (VMs).

You can use the RHEL web console to perform the tasks described in the following sections.

11.6.1. Creating storage volumes using the web console

To create a functioning virtual machine (VM), you need a local storage device assigned to the VM that can store the VM image and VM-related data. You can create a storage volume in a storage pool and assign it to a VM as a storage disk.

To create storage volumes using the web console, see the following procedure.

Prerequisites

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    Image displaying all the storage pools currently configured on the host.
  2. In the Storage Pools window, click the storage pool from which you want to create a storage volume.

    The row expands to reveal the Overview pane with basic information about the selected storage pool.

    Image displaying the detailed information about the selected storage pool.
  3. Click Storage Volumes next to the Overview tab in the expanded row.

    The Storage Volume tab appears with basic information about existing storage volumes, if any.

    Image displaying the list of storage volumes associated with the selected storage pool.
  4. Click Create Volume.

    The Create storage volume dialog appears.

    Image displaying the Create Storage Volume dialog box.
  5. Enter the following information in the Create Storage Volume dialog:

    • Name - The name of the storage volume.
    • Size - The size of the storage volume in MiB or GiB.
    • Format - The format of the storage volume. The supported types are qcow2 and raw.
  6. Click Create.

    The storage volume is created, the Create Storage Volume dialog closes, and the new storage volume appears in the list of storage volumes.

11.6.2. Removing storage volumes using the web console

You can remove storage volumes to free up space in the storage pool, or to remove storage items associated with defunct virtual machines (VMs).

To remove storage volumes using the RHEL web console, see the following procedure.

Prerequisites

  • The web console VM plug-in is installed on your system.
  • Any virtual machine that uses the storage volume you want to delete is shut down.

Procedure

  1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.

    Image displaying all the storage pools currently configured on the host.
  2. In the Storage Pools window, click the storage pool from which you want to remove a storage volume.

    The row expands to reveal the Overview pane with basic information about the selected storage pool.

    Image displaying the detailed information about the selected storage pool.
  3. Click Storage Volumes next to the Overview tab in the expanded row.

    The Storage Volume tab appears with basic information about existing storage volumes, if any.

    Image displaying the list of storage volumes associated with the selected storage pool.
  4. Select the storage volume you want to remove.

    Image displaying the option to delete the selected storage volume.
  5. Click Delete 1 Volume.

11.7. Managing virtual machine storage disks using the web console

Using the RHEL web console, you can manage the storage disks that are attached to your virtual machines (VMs).

You can use the RHEL web console to perform the tasks described in the following sections.

11.7.1. Viewing virtual machine disk information in the web console

Using the web console, you can view detailed information about disks assigned to a selected virtual machine (VM).

Prerequisites

Procedure

  1. Click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM as well as options to Add, Remove, or Edit disks.

    Image displaying the disk usage of the selected VM.

The information includes the following:

  • Device - The device type of the disk.
  • Used - The amount of disk currently allocated.
  • Capacity - The maximum size of the storage volume.
  • Bus - The type of disk device that is emulated.
  • Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the access to Writeable and shared.
  • Source - The disk device or file.
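
If you prefer the command line, you can obtain a roughly comparable overview of a VM's disks with virsh. The following is a sketch, with testguest1 as a placeholder VM name:

# virsh domblklist testguest1 --details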

11.7.2. Adding new disks to virtual machines using the web console

You can add new disks to virtual machines (VMs) by creating a new storage volume and attaching it to a VM using the RHEL 8 web console.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM for which you want to create and attach a new disk.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM as well as options to Add, Remove, or Edit disks.

    Image displaying the disk usage of the selected VM.
  3. Click Add Disk.

    The Add Disk dialog appears.

    Image displaying the Add Disk dialog box.

  4. Select the Create New option.
  5. Configure the new disk.

    • Pool - Select the storage pool from which the virtual disk will be created.
    • Name - Enter a name for the virtual disk that will be created.
    • Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created.
    • Format - Select the format for the virtual disk that will be created. The supported types are qcow2 and raw.
    • Persistence - If checked, the virtual disk is persistent. If not checked, the virtual disk is transient.

      Note

      Transient disks can only be added to VMs that are running.

    • Additional Options - Set additional configurations for the virtual disk.

      • Cache - Select the cache mechanism.
      • Bus - Select the type of disk device to emulate.
  6. Click Add.

    The virtual disk is created and connected to the VM.

11.7.3. Attaching existing disks to virtual machines using the web console

Using the web console, you can attach existing storage volumes as disks to a virtual machine (VM).

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM for which you want to create and attach a new disk.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM as well as options to Add, Remove, or Edit disks.

    Image displaying the disk usage of the selected VM.
  3. Click Add Disk.

    The Add Disk dialog appears.

    Image displaying the Add Disk dialog box.
  4. Click the Use Existing radio button.

    The appropriate configuration fields appear in the Add Disk dialog.

    Image displaying the Add Disk dialog box with Use Existing option selected. width=100%
  5. Configure the disk for the VM.

    • Pool - Select the storage pool from which the virtual disk will be attached.
    • Volume - Select the storage volume that will be attached.
    • Persistence - Available when the VM is running. Select the Always attach checkbox to make the virtual disk persistent. Clear the checkbox to make the virtual disk transient.
    • Additional Options - Set additional configurations for the virtual disk.

      • Cache - Select the cache mechanism.
      • Bus - Select the type of disk device to emulate.
  6. Click Add.

    The selected virtual disk is attached to the VM.

11.7.4. Detaching disks from virtual machines using the web console

Using the web console, you can detach disks from virtual machines (VMs).

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM from which you want to detach a disk.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Disks.

    The Disks section displays information about the disks assigned to the VM as well as options to Add, Remove, or Edit disks.

    Image displaying the disk usage of the selected VM.
  3. Click the Remove button next to the disk you want to detach from the VM. A Remove Disk confirmation dialog box appears.
  4. In the confirmation dialog box, click Remove.

    The virtual disk is detached from the VM.

11.8. Securing iSCSI storage pools with libvirt secrets

User name and password parameters can be configured with virsh to secure an iSCSI storage pool. You can configure this before or after you define the pool, but the pool must be started for the authentication settings to take effect.

The following provides instructions for securing iSCSI-based storage pools with libvirt secrets.

Note

This procedure is required if a user ID and password were defined when creating the iSCSI target.

Prerequisites

Procedure

  1. Create a libvirt secret file with a challenge-handshake authentication protocol (CHAP) user name. For example:

    <secret ephemeral='no' private='yes'>
        <description>Passphrase for the iSCSI example.com server</description>
        <usage type='iscsi'>
            <target>iscsirhel7secret</target>
        </usage>
    </secret>
  2. Define the libvirt secret with the virsh secret-define command.

    # virsh secret-define secret.xml

  3. Verify the UUID with the virsh secret-list command.

    # virsh secret-list
    UUID                                  Usage
    -------------------------------------------------------------------
    2d7891af-20be-4e5e-af83-190e8a922360  iscsi iscsirhel7secret
  4. Assign a secret to the UUID in the output of the previous step using the virsh secret-set-value command. This ensures that the CHAP username and password are in a libvirt-controlled secret list. For example:

    # virsh secret-set-value --interactive 2d7891af-20be-4e5e-af83-190e8a922360
    Enter new value for secret:
    Secret value set
  5. Add an authentication entry in the storage pool’s XML file using the virsh edit command, and add an <auth> element, specifying authentication type, username, and secret usage.

    For example:

    <pool type='iscsi'>
      <name>iscsirhel7pool</name>
        <source>
           <host name='192.168.122.1'/>
           <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
           <auth type='chap' username='redhat'>
              <secret usage='iscsirhel7secret'/>
           </auth>
        </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>
    Note

    The <auth> sub-element exists in different locations within the virtual machine’s <pool> and <disk> XML elements. For a <pool>, <auth> is specified within the <source> element, as this describes where to find the pool sources, since authentication is a property of some pool sources (iSCSI and RBD). For a <disk>, which is a sub-element of a domain, the authentication to the iSCSI or RBD disk is a property of the disk. In addition, the <auth> sub-element for a disk differs from that of a storage pool.

    <auth username='redhat'>
      <secret type='iscsi' usage='iscsirhel7secret'/>
    </auth>
  6. To apply the changes, start the storage pool. If the pool is already running, stop and restart the storage pool:

    # virsh pool-destroy iscsirhel7pool
    # virsh pool-start iscsirhel7pool

11.9. Creating vHBAs

A virtual host bus adapter (vHBA) device connects the host system to a SCSI device and is required for creating a SCSI-based storage pool.

You can create a vHBA device by defining it in an XML configuration file.

Procedure

  1. Locate the HBAs on your host system, using the virsh nodedev-list --cap vports command.

    The following example shows a host that has two HBAs that support vHBA:

    # virsh nodedev-list --cap vports
    scsi_host3
    scsi_host4
  2. View the HBA’s details, using the virsh nodedev-dumpxml HBA_device command.

    # virsh nodedev-dumpxml scsi_host3

    The output from the command lists the <name>, <wwnn>, and <wwpn> fields, which are used to create a vHBA. <max_vports> shows the maximum number of supported vHBAs. For example:

    <device>
      <name>scsi_host3</name>
      <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path>
      <parent>pci_0000_10_00_0</parent>
      <capability type='scsi_host'>
        <host>3</host>
        <unique_id>0</unique_id>
        <capability type='fc_host'>
          <wwnn>20000000c9848140</wwnn>
          <wwpn>10000000c9848140</wwpn>
          <fabric_wwn>2002000573de9a81</fabric_wwn>
        </capability>
        <capability type='vport_ops'>
          <max_vports>127</max_vports>
          <vports>0</vports>
        </capability>
      </capability>
    </device>

    In this example, the <max_vports> value shows there are a total 127 virtual ports available for use in the HBA configuration. The <vports> value shows the number of virtual ports currently being used. These values update after creating a vHBA.

  3. Create an XML file similar to one of the following for the vHBA host. In these examples, the file is named vhba_host3.xml.

    This example uses scsi_host3 to describe the parent vHBA.

    <device>
      <parent>scsi_host3</parent>
      <capability type='scsi_host'>
        <capability type='fc_host'>
        </capability>
      </capability>
    </device>

    This example uses a WWNN/WWPN pair to describe the parent vHBA.

    <device>
      <name>vhba</name>
      <parent wwnn='20000000c9848140' wwpn='10000000c9848140'/>
      <capability type='scsi_host'>
        <capability type='fc_host'>
        </capability>
      </capability>
    </device>
    Note

    The WWNN and WWPN values must match those in the HBA details seen in the previous step.

    The <parent> field specifies the HBA device to associate with this vHBA device. The details in the <device> tag are used in the next step to create a new vHBA device for the host. For more information about the nodedev XML format, see the libvirt upstream pages.

    Note

    The virsh command does not provide a way to define the parent_wwnn, parent_wwpn, or parent_fabric_wwn attributes.

  4. Create a vHBA based on the XML file created in the previous step by using the virsh nodedev-create command.

    # virsh nodedev-create vhba_host3.xml
    Node device scsi_host5 created from vhba_host3.xml

Verification

  • Verify the new vHBA’s details (scsi_host5) using the virsh nodedev-dumpxml command:

    # virsh nodedev-dumpxml scsi_host5
    <device>
      <name>scsi_host5</name>
      <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path>
      <parent>scsi_host3</parent>
      <capability type='scsi_host'>
        <host>5</host>
        <unique_id>2</unique_id>
        <capability type='fc_host'>
          <wwnn>5001a4a93526d0a1</wwnn>
          <wwpn>5001a4ace3ee047d</wwpn>
          <fabric_wwn>2002000573de9a81</fabric_wwn>
        </capability>
      </capability>
    </device>
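
  • Optionally, check the parent HBA again. Because one virtual port is now in use, the <vports> value reported for scsi_host3 should have increased, for example from 0 to 1:

    # virsh nodedev-dumpxml scsi_host3 | grep vports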

Chapter 12. Managing GPU devices in virtual machines

To enhance the graphical performance of your virtual machines (VMs) on a RHEL 8 host, you can assign a host GPU to a VM.

  • You can detach the GPU from the host and pass full control of the GPU directly to the VM.
  • You can create multiple mediated devices from a physical GPU, and assign these devices as virtual GPUs (vGPUs) to multiple guests. This is currently only supported on selected NVIDIA GPUs, and only one mediated device can be assigned to a single guest.

12.1. Assigning a GPU to a virtual machine

To access and control GPUs that are attached to the host system, you must configure the host system to pass direct control of the GPU to the virtual machine (VM).

Note

If you are looking for information about assigning a virtual GPU, see Managing NVIDIA vGPU devices.

Prerequisites

  • You must enable IOMMU support on the host machine kernel.

    • On an Intel host, you must enable VT-d:

      1. Regenerate the GRUB configuration with the intel_iommu=on and iommu=pt parameters:

        # grubby --args="intel_iommu=on iommu=pt" --update-kernel DEFAULT
      2. Reboot the host.
    • On an AMD host, you must enable AMD-Vi.

      Note that on AMD hosts, IOMMU is enabled by default, so you only need to add iommu=pt to switch it to pass-through mode:

      1. Regenerate the GRUB configuration with the iommu=pt parameter:

        # grubby --args="iommu=pt" --update-kernel DEFAULT
        Note

        The pt option only enables IOMMU for devices used in pass-through mode and provides better host performance. However, not all hardware supports the option. You can still assign devices irrespective of whether this option is enabled.

      2. Reboot the host.
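
    In both cases, after the host reboots, you can optionally check the kernel log to confirm that the IOMMU was initialized. For example:

      # dmesg | grep -i -e DMAR -e IOMMU -e AMD-Vi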

Procedure

  1. Prevent the driver from binding to the GPU.

    1. Identify the PCI bus address to which the GPU is attached.

      # lspci -Dnn | grep VGA
      0000:02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [10de:11fa] (rev a1)
    2. Prevent the host’s graphics driver from using the GPU. To do so, use the GPU’s PCI ID with the pci-stub driver.

      For example, the following command prevents the driver from binding to the GPU with the 10de:11fa PCI ID:

      # grubby --args="pci-stub.ids=10de:11fa" --update-kernel DEFAULT
    3. Reboot the host.
  2. Optional: If certain GPU functions, such as audio, cannot be passed through to the VM due to support limitations, you can modify the driver bindings of the endpoints within an IOMMU group to pass through only the necessary GPU functions.

    1. Convert the GPU settings to XML and note the PCI address of the endpoints that you want to prevent from attaching to the host drivers.

      To do so, convert the GPU’s PCI bus address to a libvirt-compatible format by adding the pci_ prefix to the address, and converting the delimiters to underscores.

      For example, the following command displays the XML configuration of the GPU attached at the 0000:02:00.0 bus address.

      # virsh nodedev-dumpxml pci_0000_02_00_0
      <device>
       <name>pci_0000_02_00_0</name>
       <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path>
       <parent>pci_0000_00_03_0</parent>
       <driver>
        <name>pci-stub</name>
       </driver>
       <capability type='pci'>
        <domain>0</domain>
        <bus>2</bus>
        <slot>0</slot>
        <function>0</function>
        <product id='0x11fa'>GK106GL [Quadro K4000]</product>
        <vendor id='0x10de'>NVIDIA Corporation</vendor>
        <iommuGroup number='13'>
         <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
        </iommuGroup>
        <pci-express>
         <link validity='cap' port='0' speed='8' width='16'/>
         <link validity='sta' speed='2.5' width='16'/>
        </pci-express>
       </capability>
      </device>
    2. Prevent the endpoints from attaching to the host driver.

      In this example, to assign the GPU to a VM, prevent the endpoints that correspond to the audio function, <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>, from attaching to the host audio driver, and instead attach the endpoints to VFIO-PCI.

      # driverctl set-override 0000:02:00.1 vfio-pci
  3. Attach the GPU to the VM.

    1. Create an XML configuration file for the GPU by using the PCI bus address.

      For example, you can create the following XML file, GPU-Assign.xml, by using parameters from the GPU’s bus address.

      <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
       </source>
      </hostdev>
    2. Save the file on the host system.
    3. Merge the file with the VM’s XML configuration.

      For example, the following command merges the GPU XML file, GPU-Assign.xml, with the XML configuration file of the System1 VM.

      # virsh attach-device System1 --file /home/GPU-Assign.xml --persistent
      Device attached successfully.
      Note

      The GPU is attached as a secondary graphics device to the VM. Assigning a GPU as the primary graphics device is not supported, and Red Hat does not recommend removing the primary emulated graphics device in the VM’s XML configuration.

Verification

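  • As one way to verify the assignment, you can confirm that the <hostdev> entry is now present in the VM’s configuration. For example, assuming the System1 VM from the example above:

    # virsh dumpxml System1 | grep -A 4 "<hostdev"
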
Known Issues

  • The number of GPUs that can be attached to a VM is limited by the maximum number of assigned PCI devices, which in RHEL 8 is currently 64. However, attaching multiple GPUs to a VM is likely to cause problems with memory-mapped I/O (MMIO) on the guest, which may result in the GPUs not being available to the VM.

    To work around these problems, set a larger 64-bit MMIO space and configure the vCPU physical address bits to make the extended 64-bit MMIO space addressable.

  • Attaching an NVIDIA GPU device to a VM that uses a RHEL 8 guest operating system currently disables the Wayland session on that VM, and loads an Xorg session instead. This is because of incompatibilities between NVIDIA drivers and Wayland.

12.2. Managing NVIDIA vGPU devices

The vGPU feature makes it possible to divide a physical NVIDIA GPU device into multiple virtual devices, referred to as mediated devices. These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs can share the performance of a single physical GPU.

Important

Assigning a physical GPU to VMs, with or without using mediated devices, makes it impossible for the host to use the GPU.

12.2.1. Setting up NVIDIA vGPU devices

To set up the NVIDIA vGPU feature, you need to download NVIDIA vGPU drivers for your GPU device, create mediated devices, and assign them to the intended virtual machines. For detailed instructions, see below.

Prerequisites

  • Your GPU supports vGPU mediated devices. For an up-to-date list of NVIDIA GPUs that support creating vGPUs, see the NVIDIA vGPU software documentation.

    • If you do not know which GPU your host is using, install the lshw package and use the lshw -C display command. The following example shows the system is using an NVIDIA Tesla P4 GPU, compatible with vGPU.

      # lshw -C display
      
      *-display
             description: 3D controller
             product: GP104GL [Tesla P4]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress cap_list
             configuration: driver=vfio-pci latency=0
             resources: irq:16 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff

Procedure

  1. Download the NVIDIA vGPU drivers and install them on your system. For instructions, see the NVIDIA documentation.
  2. If the NVIDIA software installer did not create the /etc/modprobe.d/nvidia-installer-disable-nouveau.conf file, create a conf file of any name in /etc/modprobe.d/, and add the following lines in the file:

    blacklist nouveau
    options nouveau modeset=0
  3. Regenerate the initial ramdisk for the current kernel, then reboot.

    # dracut --force
    # reboot
  4. Check that the kernel has loaded the nvidia_vgpu_vfio module and that the nvidia-vgpu-mgr.service service is running.

    # lsmod | grep nvidia_vgpu_vfio
    nvidia_vgpu_vfio 45011 0
    nvidia 14333621 10 nvidia_vgpu_vfio
    mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio
    vfio 32695 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1
    
    # systemctl status nvidia-vgpu-mgr.service
    nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon
       Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled)
       Active: active (running) since Fri 2018-03-16 10:17:36 CET; 5h 8min ago
     Main PID: 1553 (nvidia-vgpu-mgr)
     [...]

    In addition, if you are creating vGPUs based on an NVIDIA Ampere GPU device, ensure that virtual functions are enabled for the physical GPU. For instructions, see the NVIDIA documentation.

  5. Generate a device UUID.

    # uuidgen
    30820a6f-b1a5-4503-91ca-0c10ba58692a
  6. Prepare an XML file with a configuration of the mediated device, based on the detected GPU hardware. For example, the following configures a mediated device of the nvidia-63 vGPU type on an NVIDIA Tesla P4 card that runs on the 0000:01:00.0 PCI bus and uses the UUID generated in the previous step.

    <device>
        <parent>pci_0000_01_00_0</parent>
        <capability type="mdev">
            <type id="nvidia-63"/>
            <uuid>30820a6f-b1a5-4503-91ca-0c10ba58692a</uuid>
        </capability>
    </device>
  7. Define a vGPU mediated device based on the XML file you prepared. For example:

    # virsh nodedev-define vgpu-test.xml
    Node device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 created from vgpu-test.xml
  8. Optional: Verify that the mediated device is listed as inactive.

    # virsh nodedev-list --cap mdev --inactive
    mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
  9. Start the vGPU mediated device you created.

    # virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 started
  10. Optional: Ensure that the mediated device is listed as active.

    # virsh nodedev-list --cap mdev
    mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
  11. Set the vGPU device to start automatically after the host reboots.

    # virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 marked as autostarted
  12. Attach the mediated device to a VM with which you want to share the vGPU resources. To do so, add the following lines, along with the previously generated UUID, to the <devices/> section in the XML configuration of the VM.

    <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
      <source>
        <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
      </source>
    </hostdev>

    Note that each UUID can only be assigned to one VM at a time. In addition, if the VM does not have QEMU video devices, such as virtio-vga, also add the ramfb='on' parameter on the <hostdev> line. If you prefer not to edit the XML by hand, see the attach-device sketch after this procedure.

  13. For full functionality of the vGPU mediated devices to be available on the assigned VMs, set up NVIDIA vGPU guest software licensing on the VMs. For further information and instructions, see the NVIDIA Virtual GPU Software License Server User Guide.
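
As an alternative to editing the VM’s XML by hand in step 12, you can save the <hostdev> snippet to a file and attach it with virsh while the VM is shut off. A minimal sketch, where vgpu.xml is an arbitrary file name and vm-name is the name of your VM:

# virsh attach-device vm-name vgpu.xml --config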

Verification

  1. Query the capabilities of the vGPU you created, and ensure it is listed as active and persistent.

    # virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Name:           mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Parent:         pci_0000_01_00_0
    Active:         yes
    Persistent:     yes
    Autostart:      yes
  2. Start the VM and verify that the guest operating system detects the mediated device as an NVIDIA GPU. For example, if the VM uses Linux:

    # lspci -d 10de: -k
    07:00.0 VGA compatible controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1)
            Subsystem: NVIDIA Corporation Device 12ce
            Kernel driver in use: nvidia
            Kernel modules: nouveau, nvidia_drm, nvidia

Known Issues

  • Assigning an NVIDIA vGPU mediated device to a VM that uses a RHEL 8 guest operating system currently disables the Wayland session on that VM, and loads an Xorg session instead. This is because of incompatibilities between NVIDIA drivers and Wayland.

Additional resources

12.2.2. Removing NVIDIA vGPU devices

To change the configuration of assigned vGPU mediated devices, you need to remove the existing devices from the assigned VMs. For instructions, see below:

Prerequisites

  • The VM from which you want to remove the device is shut down.

Procedure

  1. Obtain the ID of the mediated device that you want to remove.

    # virsh nodedev-list --cap mdev
    mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
  2. Stop the running instance of the vGPU mediated device.

    # virsh nodedev-destroy mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Destroyed node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'
  3. Optional: Ensure the mediated device has been deactivated.

    # virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Name:           mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Parent:         pci_0000_01_00_0
    Active:         no
    Persistent:     yes
    Autostart:      yes
  4. Remove the device from the XML configuration of the VM. To do so, use the virsh edit utility to edit the XML configuration of the VM, and remove the mdev’s configuration segment. The segment will look similar to the following:

    <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
      <source>
        <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
      </source>
    </hostdev>

    Note that stopping and detaching the mediated device does not delete it, but rather keeps it as defined. As such, you can restart and attach the device to a different VM.

  5. Optional: To delete the stopped mediated device, remove its definition.

    # virsh nodedev-undefine mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
    Undefined node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'

Verification

  • If you only stopped and detached the device, ensure the mediated device is listed as inactive.

    # virsh nodedev-list --cap mdev --inactive
    mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
  • If you also deleted the device, ensure the following command does not display it.

    # virsh nodedev-list --cap mdev

Additional resources

  • The man virsh command

12.2.3. Obtaining NVIDIA vGPU information about your system

To evaluate the capabilities of the vGPU features available to you, you can obtain additional information about the mediated devices on your system, such as:

  • How many mediated devices of a given type can be created
  • What mediated devices are already configured on your system.

Procedure

  • To see the available GPU devices on your host that can support vGPU mediated devices, use the virsh nodedev-list --cap mdev_types command. For example, the following shows a system with two NVIDIA Quadro RTX6000 devices.

    # virsh nodedev-list --cap mdev_types
    pci_0000_5b_00_0
    pci_0000_9b_00_0
  • To display vGPU types supported by a specific GPU device, as well as additional metadata, use the virsh nodedev-dumpxml command.

    # virsh nodedev-dumpxml pci_0000_9b_00_0
    <device>
      <name>pci_0000_9b_00_0</name>
      <path>/sys/devices/pci0000:9a/0000:9a:00.0/0000:9b:00.0</path>
      <parent>pci_0000_9a_00_0</parent>
      <driver>
        <name>nvidia</name>
      </driver>
      <capability type='pci'>
        <class>0x030000</class>
        <domain>0</domain>
        <bus>155</bus>
        <slot>0</slot>
        <function>0</function>
        <product id='0x1e30'>TU102GL [Quadro RTX 6000/8000]</product>
        <vendor id='0x10de'>NVIDIA Corporation</vendor>
        <capability type='mdev_types'>
          <type id='nvidia-346'>
            <name>GRID RTX6000-12C</name>
            <deviceAPI>vfio-pci</deviceAPI>
            <availableInstances>2</availableInstances>
          </type>
          <type id='nvidia-439'>
            <name>GRID RTX6000-3A</name>
            <deviceAPI>vfio-pci</deviceAPI>
            <availableInstances>8</availableInstances>
          </type>
          [...]
          <type id='nvidia-440'>
            <name>GRID RTX6000-4A</name>
            <deviceAPI>vfio-pci</deviceAPI>
            <availableInstances>6</availableInstances>
          </type>
          <type id='nvidia-261'>
            <name>GRID RTX6000-8Q</name>
            <deviceAPI>vfio-pci</deviceAPI>
            <availableInstances>3</availableInstances>
          </type>
        </capability>
        <iommuGroup number='216'>
          <address domain='0x0000' bus='0x9b' slot='0x00' function='0x3'/>
          <address domain='0x0000' bus='0x9b' slot='0x00' function='0x1'/>
          <address domain='0x0000' bus='0x9b' slot='0x00' function='0x2'/>
          <address domain='0x0000' bus='0x9b' slot='0x00' function='0x0'/>
        </iommuGroup>
        <numa node='2'/>
        <pci-express>
          <link validity='cap' port='0' speed='8' width='16'/>
          <link validity='sta' speed='2.5' width='8'/>
        </pci-express>
      </capability>
    </device>

Additional resources

  • The man virsh command

12.2.4. Remote desktop streaming services for NVIDIA vGPU

The following remote desktop streaming services are supported on the RHEL 8 hypervisor with NVIDIA vGPU or NVIDIA GPU passthrough enabled:

  • HP ZCentral Remote Boost/Teradici
  • NICE DCV
  • Mechdyne TGX

For support details, see the appropriate vendor support matrix.

Chapter 13. Configuring virtual machine network connections

For your virtual machines (VMs) to connect over a network to your host, to other VMs on your host, and to locations on an external network, the VM networking must be configured accordingly. To provide VM networking, the RHEL 8 hypervisor and newly created VMs have a default network configuration, which can also be modified further. For example:

  • You can enable the VMs on your host to be discovered and connected to by locations outside the host, as if the VMs were on the same network as the host.
  • You can partially or completely isolate a VM from inbound network traffic to increase its security and minimize the risk of any problems with the VM impacting the host.

The following sections explain the various types of VM network configuration and provide instructions for setting up selected VM network configurations.

13.1. Understanding virtual networking

The connection of virtual machines (VMs) to other devices and locations on a network has to be facilitated by the host hardware. The following sections explain the mechanisms of VM network connections and describe the default VM network setting.

13.1.1. How virtual networks work

Virtual networking uses the concept of a virtual network switch. A virtual network switch is a software construct that operates on a host machine. VMs connect to the network through the virtual network switch. Based on the configuration of the virtual switch, a VM can use an existing virtual network managed by the hypervisor, or a different network connection method.

The following figure shows a virtual network switch connecting two VMs to the network:

vn 02 switchandtwoguests

From the perspective of a guest operating system, a virtual network connection is the same as a physical network connection. Host machines view virtual network switches as network interfaces. When the libvirtd service is first installed and started, it creates virbr0, the default network interface for VMs.

To view information about this interface, use the ip utility on the host.

$ ip addr show virbr0
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
 UNKNOWN link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff
 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

By default, all VMs on a single host are connected to the same NAT-type virtual network, named default, which uses the virbr0 interface. For details, see Virtual networking default configuration.

For basic outbound-only network access from VMs, no additional network setup is usually needed, because the default network is installed along with the libvirt-daemon-config-network package, and is automatically started when the libvirtd service is started.

If a different VM network functionality is needed, you can create additional virtual networks and network interfaces and configure your VMs to use them. In addition to the default NAT, these networks and interfaces can be configured to use one of the following modes:

  • Routed mode
  • Bridged mode
  • Isolated mode
  • Open mode

For details about these modes, see Types of virtual machine network connections.

13.1.2. Virtual networking default configuration

When the libvirtd service is first installed on a virtualization host, it contains an initial virtual network configuration in network address translation (NAT) mode. By default, all VMs on the host are connected to the same libvirt virtual network, named default. VMs on this network can connect to locations both on the host and on the network beyond the host, but with the following limitations:

  • VMs on the network are visible to the host and other VMs on the host, but the network traffic is affected by the firewalls in the guest operating system’s network stack and by the libvirt network filtering rules attached to the guest interface.
  • VMs on the network can connect to locations outside the host but are not visible to them. Outbound traffic is affected by the NAT rules, as well as the host system’s firewall.

The following diagram illustrates the default VM network configuration:

vn 08 network overview
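
You can also inspect the definition of the default network directly on the host. The UUID and MAC address differ between hosts, but the output is typically similar to the following:

# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>...</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:...'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>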

13.2. Using the web console for managing virtual machine network interfaces

Using the RHEL 8 web console, you can manage the virtual network interfaces for the virtual machines to which the web console is connected. You can:

  • View information about virtual network interfaces and edit them.
  • Add virtual network interfaces to a VM and connect them.
  • Disconnect or remove virtual network interfaces from a VM.

13.2.1. Viewing and editing virtual network interface information in the web console

Using the RHEL 8 web console, you can view and modify the virtual network interfaces on a selected virtual machine (VM):

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interface configured for the VM as well as options to Add, Delete, Edit, or Unplug network interfaces.

    Image displaying the network interface details of the selected virtual machine.

    The information includes the following:

    • Type - The type of network interface for the VM. The types include virtual network, bridge to LAN, and direct attachment.

      Note

      Generic Ethernet connection is not supported in RHEL 8 and later.

    • Model type - The model of the virtual network interface.
    • MAC Address - The MAC address of the virtual network interface.
    • IP Address - The IP address of the virtual network interface.
    • Source - The source of the network interface. This is dependent on the network type.
    • State - The state of the virtual network interface.
  3. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings dialog opens.

    Image displaying the various options that can be edited for the selected network interface.
  4. Change the interface type, source, model, or MAC address.
  5. Click Save. The network interface is modified.

    Note

    Changes to the virtual network interface settings take effect only after restarting the VM.

    Additionally, the MAC address can only be modified when the VM is shut off.

13.2.2. Adding and connecting virtual network interfaces in the web console

Using the RHEL 8 web console, you can create a virtual network interface and connect a virtual machine (VM) to it.

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interface configured for the VM as well as options to Add, Delete, Edit, or Plug network interfaces.

    Image displaying the network interface details of the selected virtual machine.

  3. Click Plug in the row of the virtual network interface you want to connect.

    The selected virtual network interface connects to the VM.

13.2.3. Disconnecting and removing virtual network interfaces in the web console

Using the RHEL 8 web console, you can disconnect the virtual network interfaces connected to a selected virtual machine (VM).

Prerequisites

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interface configured for the VM as well as options to Add, Delete, Edit, or Unplug network interfaces.

    Image displaying the network interface details of the selected virtual machine.
  3. Click Unplug in the row of the virtual network interface you want to disconnect.

    The selected virtual network interface disconnects from the VM.

13.4. Types of virtual machine network connections

To modify the networking properties and behavior of your VMs, change the type of virtual network or interface the VMs use. The following sections describe the connection types available to VMs in RHEL 8.

13.4.1. Virtual networking with network address translation

By default, virtual network switches operate in network address translation (NAT) mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected VMs to use the host machine’s IP address for communication with any external network. When the virtual network switch is operating in NAT mode, computers external to the host cannot communicate with the VMs inside the host.

vn 04 hostwithnatswitch
Warning

Virtual network switches use NAT configured by firewall rules. Editing these rules while the switch is running is not recommended, because incorrect rules may result in the switch being unable to communicate.

13.4.2. Virtual networking in routed mode

When using Routed mode, the virtual switch connects to the physical LAN connected to the host machine, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, the virtual machines (VMs) are all in a single subnet, separate from the host machine. The VM subnet is routed through a virtual switch, which exists on the host machine. This enables incoming connections, but requires extra routing-table entries for systems on the external network.

Routed mode uses routing based on the IP address:

vn 06 routed switch

A common topology that uses routed mode is virtual server hosting (VSH). A VSH provider may have several host machines, each with two physical network connections. One interface is used for management and accounting, the other for the VMs to connect through. Each VM has its own public IP address, but the host machines use private IP addresses so that only internal administrators can manage the VMs.

vn 10 routed mode datacenter

13.4.3. Virtual networking in bridged mode

In most VM networking modes, VMs automatically create and connect to the virbr0 virtual bridge. In contrast, in bridged mode, the VM connects to an existing Linux bridge on the host. As a result, the VM is directly visible on the physical network. This enables incoming connections, but does not require any extra routing-table entries.

Bridged mode uses connection switching based on the MAC address:

vn Bridged Mode Diagram

In bridged mode, the VM appears within the same subnet as the host machine. All other physical machines on the same physical network can detect the VM and access it.

Bridged network bonding

It is possible to use multiple physical bridge interfaces on the hypervisor by joining them together with a bond. The bond can then be added to a bridge, after which the VMs can be added to the bridge as well. However, the bonding driver has several modes of operation, and not all of these modes work with a bridge where VMs are in use.

The following bonding modes are usable:

  • mode 1
  • mode 2
  • mode 4

In contrast, using modes 0, 3, 5, or 6 is likely to cause the connection to fail. Also note that media-independent interface (MII) monitoring should be used to monitor bonding modes, as Address Resolution Protocol (ARP) monitoring does not work correctly.

For more information about bonding modes, refer to the Red Hat Knowledgebase.

Common scenarios

The most common use cases for bridged mode include:

  • Deploying VMs in an existing network alongside host machines, making the difference between virtual and physical machines invisible to the end user.
  • Deploying VMs without making any changes to existing physical network configuration settings.
  • Deploying VMs that must be easily accessible to an existing physical network. Placing VMs on a physical network where they must access DHCP services.
  • Connecting VMs to an existing network where virtual LANs (VLANs) are used.
  • A demilitarized zone (DMZ) network. For a DMZ deployment with VMs, Red Hat recommends setting up the DMZ at the physical network router and switches, and connecting the VMs to the physical network using bridged mode.

13.4.4. Virtual networking in isolated mode

When using isolated mode, virtual machines connected to the virtual switch can communicate with each other and with the host machine, but their traffic will not pass outside of the host machine, and they cannot receive traffic from outside the host machine. Using dnsmasq in this mode is required for basic functionality such as DHCP.

vn 07 isolated switch

13.4.5. Virtual networking in open mode

When using open mode for networking, libvirt does not generate any firewall rules for the network. As a result, libvirt does not overwrite firewall rules provided by the host, and the user can therefore manually manage the VM’s firewall rules.
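
When you define such networks yourself, the mode is selected by the <forward> element in the libvirt network XML. As a rough illustration, not a complete network definition:

<forward mode='nat'/>      <!-- NAT mode, used by the default network -->
<forward mode='route'/>    <!-- routed mode -->
<forward mode='bridge'/>   <!-- bridged mode, combined with an existing host bridge -->
<forward mode='open'/>     <!-- open mode -->
<!-- omitting the <forward> element entirely creates an isolated network -->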

13.4.6. Comparison of virtual machine connection types

The following table provides information about the locations to which selected types of virtual machine (VM) network configurations can connect, and to which they are visible.

Table 13.1. Virtual machine connection types

                  Connection      Connection to other   Connection to       Visible to
                  to the host     VMs on the host       outside locations   outside locations

Bridged mode      YES             YES                   YES                 YES
NAT               YES             YES                   YES                 no
Routed mode       YES             YES                   YES                 YES
Isolated mode     YES             YES                   no                  no
Open mode         Depends on the host’s firewall rules

13.5. Booting virtual machines from a PXE server

Virtual machines (VMs) that use Preboot Execution Environment (PXE) can boot and load their configuration from a network. This section describes how to use libvirt to boot VMs from a PXE server on a virtual or bridged network.

Warning

These procedures are provided only as an example. Ensure that you have sufficient backups before proceeding.

13.5.1. Setting up a PXE boot server on a virtual network

This procedure describes how to configure a libvirt virtual network to provide Preboot Execution Environment (PXE). This enables virtual machines on your host to be configured to boot from a boot image available on the virtual network.

Prerequisites

  • A local PXE server (DHCP and TFTP), such as:

    • libvirt internal server
    • manually configured dhcpd and tftpd
    • dnsmasq
    • Cobbler server
  • PXE boot images, such as PXELINUX configured by Cobbler or manually.

Procedure

  1. Place the PXE boot images and configuration in the /var/lib/tftpboot folder.
  2. Set folder permissions:

    # chmod -R a+r /var/lib/tftpboot
  3. Set folder ownership:

    # chown -R nobody: /var/lib/tftpboot
  4. Update SELinux context:

    # chcon -R --reference /usr/sbin/dnsmasq /var/lib/tftpboot
    # chcon -R --reference /usr/libexec/libvirt_leaseshelper /var/lib/tftpboot
  5. Shut down the virtual network:

    # virsh net-destroy default
  6. Open the virtual network configuration file in your default editor:

    # virsh net-edit default
  7. Edit the <ip> element to include the appropriate address, network mask, DHCP address range, and boot file, where BOOT_FILENAME is the name of the boot image file.

    <ip address='192.168.122.1' netmask='255.255.255.0'>
       <tftp root='/var/lib/tftpboot' />
       <dhcp>
          <range start='192.168.122.2' end='192.168.122.254' />
          <bootp file='BOOT_FILENAME' />
       </dhcp>
    </ip>
  8. Start the virtual network:

    # virsh net-start default

Verification

  • Verify that the default virtual network is active:

    # virsh net-list
    Name             State    Autostart   Persistent
    ---------------------------------------------------
    default          active   no          no

13.5.2. Booting virtual machines using PXE and a virtual network

To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a virtual network, you must enable PXE booting.

Prerequisites

Procedure

  • Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the default virtual network, into a new 10 GB qcow2 image file:

    # virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10
    • Alternatively, you can manually edit the XML configuration file of an existing VM:

      1. Ensure the <os> element has a <boot dev='network'/> element inside:

        <os>
           <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
           <boot dev='network'/>
           <boot dev='hd'/>
        </os>
      2. Ensure the guest network is configured to use your virtual network:

        <interface type='network'>
           <mac address='52:54:00:66:79:14'/>
           <source network='default'/>
           <target dev='vnet0'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

Verification

  • Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.

13.5.3. Booting virtual machines using PXE and a bridged network

To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a bridged network, you must enable PXE booting.

Prerequisites

  • Network bridging is enabled.
  • A PXE boot server is available on the bridged network.

Procedure

  • Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the breth0 bridged network, into a new 10 GB qcow2 image file:

    # virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10
    • Alternatively, you can manually edit the XML configuration file of an existing VM:

      1. Ensure the <os> element has a <boot dev='network'/> element inside:

        <os>
           <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
           <boot dev='network'/>
           <boot dev='hd'/>
        </os>
      2. Ensure the VM is configured to use your bridged network:

        <interface type='bridge'>
           <mac address='52:54:00:5a:ad:cb'/>
           <source bridge='breth0'/>
           <target dev='vnet0'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

Verification

  • Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.

Additional resources

13.6. Additional resources

Chapter 14. Sharing files between the host and its virtual machines

You may frequently need to share data between your host system and the virtual machines (VMs) it runs. To do so quickly and efficiently, you can set up NFS file shares on your system.

14.1. Sharing files between the host and its virtual machines using NFS

For efficient file sharing between your RHEL 8 host system and the virtual machines (VMs) it is connected to, you can export an NFS share that your VMs can mount and access.

Prerequisites

  • The nfs-utils package is installed on the host.

    # yum install nfs-utils -y
  • A directory that you want to share with your VMs. If you do not want to share any of your existing directories, create a new one, for example named shared-files.

    # mkdir shared-files
  • The host is visible and reachable over a network for the VMs. This is generally the case if the VMs use the NAT or bridged type of virtual network.
  • Optional: For improved security, ensure your VMs are compatible with NFS version 4 or later.

Procedure

  1. On the host, export a directory with the files you want to share as a network file system (NFS).

    1. Obtain the IP address of each VM with which you want to share files. The following example obtains the IPs of testguest1 and testguest2.

      # virsh domifaddr testguest1
      Name       MAC address          Protocol     Address
      ----------------------------------------------------------------
      vnet0      52:53:00:84:57:90    ipv4         192.168.124.220/24
      
      # virsh domifaddr testguest2
      Name       MAC address          Protocol     Address
      ----------------------------------------------------------------
      vnet1      52:53:00:65:29:21    ipv4         192.168.124.17/24
    2. Edit the /etc/exports file on the host and add a line that includes the directory you want to share, IPs of VMs you want to share it with, and sharing options.

      <shared_directory> <VM1-IP(options)> <VM2-IP(options)> [...]

      For example, the following shares the /usr/local/shared-files directory on the host with testguest1 and testguest2, and enables the VMs to edit the content of the directory:

      /usr/local/shared-files/ 192.168.124.220(rw,sync) 192.168.124.17(rw,sync)
      Note

      If you want to share a directory with a Windows VM, you must ensure that the Windows NFS client has write permissions in the shared directory. A simple way to do so is to use the all_squash, anonuid, and anongid options in the /etc/exports file.

      For example:

      /usr/local/shared-files/ 192.168.124.220(rw,sync,all_squash,anonuid=<directory-owner-UID>,anongid=<directory-owner-GID>)

      The <directory-owner-UID> and <directory-owner-GID> are the UID and GID of the local user that owns the shared directory on the host.

      To explore other options for managing NFS client permissions, follow the Securing NFS guide.

    3. Export the updated file system.

      # exportfs -a
    4. Ensure the nfs-server service is running.

      # systemctl start nfs-server
    5. Obtain the IP address of the host system. This will be used for mounting the shared directory on the VMs later.

      # ip addr
      [...]
      5: virbr0: [BROADCAST,MULTICAST,UP,LOWER_UP] mtu 1500 qdisc noqueue state UP group default qlen 1000
          link/ether 52:54:00:32:ff:a5 brd ff:ff:ff:ff:ff:ff
          inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
             valid_lft forever preferred_lft forever
      [...]

      Note that the relevant network is the one that is used for connecting to the host by the VMs you want to share files with. Usually, this is virbr0.

  2. Mount the shared directory on a Linux VM that is specified in the /etc/exports file.

    # mount 192.168.124.1:/usr/local/shared-files /mnt/host-share

    In this example:

    • 192.168.124.1 is the IP address of the host.
    • /usr/local/shared-files is a file-system path to the exported directory on the host.
    • /mnt/host-share is a mount point on the VM. The mount point must be an empty directory.
  3. To mount the shared directory on a Windows VM that is specified in the /etc/exports file:

    1. Open a PowerShell shell prompt as an administrator.
    2. Install the NFS-Client package. The installation command is different for the server and desktop versions of Windows.

      On a server version of Windows:

      # Install-WindowsFeature NFS-Client

      On a desktop version of Windows:

      # Enable-WindowsOptionalFeature -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure -Online -NoRestart
    3. Mount the directory exported by the host on a Windows VM.

      # C:\Windows\system32\mount.exe -o anon \\192.168.124.1\usr\local\shared-files Z:

      In this example:

      • 192.168.124.1 is the IP address of the host.
      • /usr/local/shared-files is a file system path to the exported directory on the host.
      • Z: is the drive letter that will be used as a mount point. You must choose a drive letter that is not in use on the system.

Verification

  • To verify you can share files between the host and the VM, list the content of the shared directory on the VM. In the following example, replace <mount_point> with a file system path to the mounted shared directory.

    $ ls <mount_point>
    shared-file1  shared-file2  shared-file3
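
If you want a Linux VM to mount the share automatically at each boot, you can also add an entry to the VM’s /etc/fstab file. A minimal sketch, assuming the same host IP address, export path, and mount point as in the procedure above:

192.168.124.1:/usr/local/shared-files  /mnt/host-share  nfs  defaults  0 0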

Chapter 15. Securing virtual machines

As an administrator of a RHEL 8 system with virtual machines (VMs), ensuring that your VMs are as secure as possible significantly lowers the risk of your guest and host OSs being infected by malicious software.

This document outlines the mechanics of securing VMs on a RHEL 8 host and provides a list of methods to increase the security of your VMs.

15.1. How security works in virtual machines

When using virtual machines (VMs), multiple operating systems can be housed within a single host machine. These systems are connected with the host through the hypervisor, and usually also through a virtual network. As a consequence, each VM can be used as a vector for attacking the host with malicious software, and the host can be used as a vector for attacking any of the VMs.

Figure 15.1. A potential malware attack vector on a virtualization host

virt sec successful attack

Because the hypervisor uses the host kernel to manage VMs, services running on the VM’s operating system are frequently used for injecting malicious code into the host system. However, you can protect your system against such security threats by using a number of security features on your host and your guest systems.

These features, such as SELinux or QEMU sandboxing, provide various measures that make it more difficult for malicious code to attack the hypervisor and transfer between your host and your VMs.

Figure 15.2. Prevented malware attacks on a virtualization host

virt sec prevented attack

Many of the features that RHEL 8 provides for VM security are always active and do not have to be enabled or configured. For details, see Automatic features for virtual machine security.

In addition, you can adhere to a variety of best practices to minimize the vulnerability of your VMs and your hypervisor. For more information, see Best practices for securing virtual machines.

15.2. Best practices for securing virtual machines

Following the instructions below significantly decreases the risk of your virtual machines being infected with malicious code and used as attack vectors to infect your host system.

On the guest side:

  • Secure the virtual machine as if it was a physical machine. The specific methods available to enhance security depend on the guest OS.

    If your VM is running RHEL 8, see Securing Red Hat Enterprise Linux 8 for detailed instructions on improving the security of your guest system.

On the host side:

  • When managing VMs remotely, use cryptographic utilities such as SSH and network protocols such as SSL for connecting to the VMs.
  • Ensure SELinux is in Enforcing mode:

    # getenforce
    Enforcing

    If SELinux is disabled or in Permissive mode, see the Using SELinux document for instructions on activating Enforcing mode.

    Note

    SELinux Enforcing mode also enables the sVirt RHEL 8 feature. This is a set of specialized SELinux booleans for virtualization, which can be manually adjusted for fine-grained VM security management.

  • Use VMs with SecureBoot:

    SecureBoot is a feature that ensures that your VM is running a cryptographically signed OS. This prevents VMs whose OS has been altered by a malware attack from booting.

    SecureBoot can only be applied when installing a Linux VM that uses OVMF firmware. For instructions, see Creating a SecureBoot virtual machine.

  • Do not use qemu-* commands, such as qemu-kvm.

    QEMU is an essential component of the virtualization architecture in RHEL 8, but it is difficult to manage manually, and improper QEMU configurations may cause security vulnerabilities. Therefore, using qemu-* commands is not supported by Red Hat. Instead, use libvirt utilities, such as virsh, virt-install, and virt-xml, as these orchestrate QEMU according to the best practices.

15.3. Creating a SecureBoot virtual machine

You can create a Linux virtual machine (VM) that uses the SecureBoot feature, which ensures that your VM is running a cryptographically signed OS. This can be useful if the guest OS of a VM has been altered by malware. In such a scenario, SecureBoot prevents the VM from booting, which stops the potential spread of the malware to your host machine.

Prerequisites

  • The VM is using the Q35 machine type.
  • The edk2-ovmf package is installed:

    # yum install edk2-ovmf
  • An operating system (OS) installation source is available locally or on a network. This can be one of the following formats:

    • An ISO image of an installation medium
    • A disk image of an existing VM installation

      Warning

      Installing from a host CD-ROM or DVD-ROM device is not possible in RHEL 8. If you select a CD-ROM or DVD-ROM as the installation source when using any VM installation method available in RHEL 8, the installation will fail. For more information, see the Red Hat Knowledgebase.

  • Optional: A Kickstart file can be provided for faster and easier configuration of the installation.

Procedure

  1. Use the virt-install command to create a VM as detailed in Creating virtual machines using the command-line interface. For the --boot option, use the uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd value. This uses the OVMF_VARS.secboot.fd and OVMF_CODE.secboot.fd files as templates for the VM’s non-volatile RAM (NVRAM) settings, which enables the SecureBoot feature.

    For example:

    # virt-install --name rhel8sb --memory 4096 --vcpus 4 --os-variant rhel8.0 --boot uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd --disk boot_order=2,size=10 --disk boot_order=1,device=cdrom,bus=scsi,path=/images/RHEL-8.0-installation.iso
  2. Follow the OS installation procedure according to the instructions on the screen.

Verification

  1. After the guest OS is installed, access the VM’s command line by opening the terminal in the graphical guest console or connecting to the guest OS using SSH.
  2. To confirm that SecureBoot has been enabled on the VM, use the mokutil --sb-state command:

    # mokutil --sb-state
    SecureBoot enabled

15.4. Limiting what actions are available to virtual machine users

In some cases, actions that users of virtual machines (VMs) hosted on RHEL 8 can perform by default may pose a security risk. If that is the case, you can limit the actions available to VM users by configuring the libvirt daemons to use the polkit policy toolkit on the host machine.

Procedure

  1. Optional: Ensure your system’s polkit control policies related to libvirt are set up according to your preferences.

    1. Find all libvirt-related files in the /usr/share/polkit-1/actions/ and /usr/share/polkit-1/rules.d/ directories.

      # ls /usr/share/polkit-1/actions | grep libvirt
      # ls /usr/share/polkit-1/rules.d | grep libvirt
    2. Open the files and review the rule settings.

      For information about reading the syntax of polkit control policies, use man polkit.

    3. Modify the libvirt control policies. To do so:

      1. Create a new .rules file in the /etc/polkit-1/rules.d/ directory.
      2. Add your custom policies to this file, and save it. For an illustrative example, see the rule sketch after this procedure.

        For further information and examples of libvirt control policies, see the libvirt upstream documentation.

  2. Configure your VMs to use access policies determined by polkit.

    To do so, uncomment the access_drivers = [ "polkit" ] line in the /etc/libvirt/libvirtd.conf file.

    # sed -i 's/#access_drivers = \[ "polkit" \]/access_drivers = \[ "polkit" \]/' /etc/libvirt/libvirtd.conf
  3. Restart the libvirtd service.

    # systemctl restart libvirtd
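
For reference, a custom policy file for step 1 might look similar to the following sketch, which denies VM management on the system connection to users outside the wheel group. The file name is arbitrary, org.libvirt.unix.manage is the standard libvirt management action, and you should adjust the group and the result to your own policy:

// /etc/polkit-1/rules.d/100-libvirt-acl.rules (example file name)
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        !subject.isInGroup("wheel")) {
        return polkit.Result.NO;
    }
});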

Verification

  • As a user whose VM actions you intended to limit, perform one of the restricted actions.

    For example, if unprivileged users are restricted from viewing VMs created in the system session:

    $ virsh -c qemu:///system list --all
    Id   Name           State
    -------------------------------

    If this command does not list any VMs even though one or more VMs exist on your system, polkit successfully restricts the action for unprivileged users.

Troubleshooting

Additional resources

15.5. Automatic features for virtual machine security

In addition to manual means of improving the security of your virtual machines listed in Best practices for securing virtual machines, a number of security features are provided by the libvirt software suite and are automatically enabled when using virtualization in RHEL 8. These include:

System and session connections

To access all the available utilities for virtual machine management in RHEL 8, you need to use the system connection of libvirt (qemu:///system). To do so, you must have root privileges on the system or be a part of the libvirt user group.

Non-root users that are not in the libvirt group can only access a session connection of libvirt (qemu:///session), which has to respect the access rights of the local user when accessing resources. For example, using the session connection, you cannot detect or access VMs created in the system connection or by other users. Also, available VM networking configuration options are significantly limited.

Note

The RHEL 8 documentation assumes you have system connection privileges.

Virtual machine separation
Individual VMs run as isolated processes on the host, and rely on security enforced by the host kernel. Therefore, a VM cannot read or access the memory or storage of other VMs on the same host.
QEMU sandboxing
A feature that prevents QEMU code from executing system calls that can compromise the security of the host.
Kernel Address Space Randomization (KASLR)
Enables randomizing the physical and virtual addresses at which the kernel image is decompressed. Thus, KASLR prevents guest security exploits based on the location of kernel objects.

15.6. SELinux booleans for virtualization

For fine-grained configuration of virtual machine security on a RHEL 8 system, you can configure SELinux booleans on the host to ensure the hypervisor acts in a specific way.

To list all virtualization-related booleans and their statuses, use the getsebool -a | grep virt command:

$ getsebool -a | grep virt
[...]
virt_sandbox_use_netlink --> off
virt_sandbox_use_sys_admin --> off
virt_transition_userdomain --> off
virt_use_comm --> off
virt_use_execmem --> off
virt_use_fusefs --> off
[...]

To enable a specific boolean, use the setsebool -P boolean_name on command as root. To disable a boolean, use setsebool -P boolean_name off.
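
For example, if your VM disk images reside on an NFS share, you can allow the virtualization processes to manage NFS-mounted files by enabling the virt_use_nfs boolean:

# setsebool -P virt_use_nfs on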

The following table lists virtualization-related booleans available in RHEL 8 and what they do when enabled:

Table 15.1. SELinux virtualization booleans

SELinux Boolean              Description

staff_use_svirt              Enables non-root users to create and transition VMs to sVirt.
unprivuser_use_svirt         Enables unprivileged users to create and transition VMs to sVirt.
virt_sandbox_use_audit       Enables sandbox containers to send audit messages.
virt_sandbox_use_netlink     Enables sandbox containers to use netlink system calls.
virt_sandbox_use_sys_admin   Enables sandbox containers to use sys_admin system calls, such as mount.
virt_transition_userdomain   Enables virtual processes to run as user domains.
virt_use_comm                Enables virt to use serial/parallel communication ports.
virt_use_execmem             Enables confined virtual guests to use executable memory and executable stack.
virt_use_fusefs              Enables virt to read FUSE mounted files.
virt_use_nfs                 Enables virt to manage NFS mounted files.
virt_use_rawip               Enables virt to interact with rawip sockets.
virt_use_samba               Enables virt to manage CIFS mounted files.
virt_use_sanlock             Enables confined virtual guests to interact with sanlock.
virt_use_usb                 Enables virt to use USB devices.
virt_use_xserver             Enables virtual machines to interact with the X Window System.

15.7. Setting up IBM Secure Execution on IBM Z

When using IBM Z hardware to run a RHEL 8 host, you can improve the security of your virtual machines (VMs) by configuring IBM Secure Execution for the VMs.

IBM Secure Execution, also known as Protected Virtualization, prevents the host system from accessing a VM’s state and memory contents. As a result, even if the host is compromised, it cannot be used as a vector for attacking the guest operating system. In addition, Secure Execution can be used to prevent untrusted hosts from obtaining sensitive information from the VM.

The following procedure describes how to convert an existing VM on an IBM Z host into a secured VM.

Prerequisites

  • The system hardware is one of the following:

    • IBM z15 or later
    • IBM LinuxONE III or later
  • The Secure Execution feature is enabled for your system. To verify, use:

    # grep facilities /proc/cpuinfo | grep 158

    If this command displays any output, your CPU is compatible with Secure Execution.

  • The kernel includes support for Secure Execution. To confirm, use:

    # ls /sys/firmware | grep uv

    If the command generates any output, your kernel supports Secure Execution.

  • The host CPU model contains the unpack facility. To confirm, use:

    # virsh domcapabilities | grep unpack
    <feature policy='require' name='unpack'/>

    If the command generates the above output, your CPU host model is compatible with Secure Execution.

  • The CPU mode of the VM is set to host-model. To confirm this, use the following command and replace vm-name with the name of your VM.

    # virsh dumpxml vm-name | grep "<cpu mode='host-model'/>"

    If the command generates any output, the VM’s CPU mode is set correctly.

  • You have obtained and verified the IBM Z host key document. For instructions to do so, see Verifying the host key document in IBM documentation.

Procedure

Do the following steps on your host:

  1. Add the prot_virt=1 kernel parameter to the boot configuration of the host.

    # grubby --update-kernel=ALL --args="prot_virt=1"
  2. Update the boot menu:

    # zipl

  3. Use virsh edit to modify the XML configuration of the VM you want to secure.
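
    For example, replace vm-name with the name of your VM:

    # virsh edit vm-name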
  4. Add <launchSecurity type="s390-pv"/> under the </devices> line. For example:

    [...]
        </memballoon>
      </devices>
      <launchSecurity type="s390-pv"/>
    </domain>
  5. If the <devices> section of the configuration includes a virtio-rng device (<rng model="virtio">), remove all lines of the <rng> </rng> block.
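
    For reference, a typical virtio-rng block looks similar to the following, although the backend details can differ; remove the entire block:

    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
    </rng>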

Do the following steps in the guest operating system of the VM you want to secure:

  1. Create a parameter file. For example:

    # touch ~/secure-parameters
  2. In the /boot/loader/entries directory, identify the boot loader entry with the latest version:

    # ls /boot/loader/entries -l
    [...]
    -rw-r--r--. 1 root root  281 Oct  9 15:51 3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf
  3. Retrieve the kernel options line of the boot loader entry:

    # cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf | grep options
    options root=/dev/mapper/rhel-root
    crashkernel=auto
    rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap
  4. Add the content of the options line and swiotlb=262144 to the parameter file that you created.

    # echo "root=/dev/mapper/rhel-root crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap swiotlb=262144" > ~/secure-parameters
  5. Generate an IBM Secure Execution image.

    For example, the following command creates the /boot/secure-image secured image based on the /boot/vmlinuz-4.18.0-240.el8.s390x image, using the secure-parameters file, the /boot/initramfs-4.18.0-240.el8.s390x.img initial RAM disk file, and the HKD-8651-00020089A8.crt host key document.

    # genprotimg -i /boot/vmlinuz-4.18.0-240.el8.s390x -r /boot/initramfs-4.18.0-240.el8.s390x.img -p ~/secure-parameters -k HKD-8651-00020089A8.crt -o /boot/secure-image

    The genprotimg utility creates the secure image, which combines the kernel parameters, the initial RAM disk, and the boot image.
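
    To check that the secure image was generated, you can, for example, list the output file:

    # ls -l /boot/secure-image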

  6. Update the VM’s boot menu to boot from the secure image. In addition, remove the lines starting with initrd and options, as they are not needed.

    For example, in a RHEL 8.3 VM, the boot menu can be edited in the /boot/loader/entries/ directory:

    # cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf
    title Red Hat Enterprise Linux 8.3
    version 4.18.0-240.el8.s390x
    linux /boot/secure-image
    [...]
  7. Create the bootable disk image:

    # zipl -V
  8. Securely remove the original unprotected files. For example:

    # shred /boot/vmlinuz-4.18.0-240.el8.s390x
    # shred /boot/initramfs-4.18.0-240.el8.s390x.img
    # shred ~/secure-parameters

    The original boot image, the initial RAM disk, and the kernel parameter file are unprotected. If they are not removed, VMs with Secure Execution enabled can still be vulnerable to hacking attempts or sensitive data mining.

Verification

  • On the host, use the virsh dumpxml utility to confirm the XML configuration of the secured VM. The configuration must include the <launchSecurity type="s390-pv"/> element, and no <rng model="virtio"> lines.

    # virsh dumpxml vm-name
    [...]
      <cpu mode='host-model'/>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none' io='native'/>
          <source file='/var/lib/libvirt/images/secure-guest.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <console type='pty'/>
        <memballoon model='none'/>
      </devices>
      <launchSecurity type="s390-pv"/>
    </domain>

15.8. Attaching cryptographic coprocessors to virtual machines on IBM Z

To use hardware encryption in your virtual machine (VM) on an IBM Z host, create mediated devices from a cryptographic coprocessor device and assign them to the intended VMs. For detailed instructions, see below.

Prerequisites

  • Your host is running on IBM Z hardware.
  • The cryptographic coprocessor is compatible with device assignment. To confirm this, ensure that the type of your coprocessor is listed as CEX4 or later.

    # lszcrypt -V
    
    CARD.DOMAIN TYPE  MODE        STATUS  REQUESTS  PENDING HWTYPE QDEPTH FUNCTIONS  DRIVER
    --------------------------------------------------------------------------------------------
    05         CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4card
    05.0004    CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4queue
    05.00ab    CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4queue
  • The vfio_ap kernel module is loaded. To verify, use:

    # lsmod | grep vfio_ap
    vfio_ap         24576  0
    [...]

    To load the module, use:

    # modprobe vfio_ap
  • Your version of s390utils supports handling of AP devices. To verify, use:

    # lszdev --list-types
    ...
    ap           Cryptographic Adjunct Processor (AP) device
    ...

Procedure

  1. Obtain the decimal values for the devices that you want to assign to the VM. For example, for the devices 05.0004 and 05.00ab:

    # echo "obase=10; ibase=16; 04" | bc
    4
    # echo "obase=10; ibase=16; AB" | bc
    171
  2. On the host, reassign the devices to the vfio-ap drivers:

    # chzdev -t ap apmask=-5 aqmask=-4,-171
    Note

    To assign the devices persistently, use the -p flag.
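
    For example, to make the same reassignment persistent, you can append the flag to the previous command:

    # chzdev -t ap apmask=-5 aqmask=-4,-171 -p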

  3. Verify that the cryptographic devices have been reassigned correctly.

    # lszcrypt -V
    
    CARD.DOMAIN TYPE  MODE        STATUS  REQUESTS  PENDING HWTYPE QDEPTH FUNCTIONS  DRIVER
    --------------------------------------------------------------------------------------------
    05          CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  cex4card
    05.0004     CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  vfio_ap
    05.00ab     CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  vfio_ap

    If the DRIVER values of the domain queues changed to vfio_ap, the reassignment succeeded.

  4. Create an XML snippet that defines a new mediated device.

    The following example shows how to define a persistent mediated device and assign queues to it. Specifically, the vfio_ap.xml XML snippet in this example assigns the adapter 0x05, the domain queues 0x0004 and 0x00ab, and the control domain 0x00ab to the mediated device.

    # vim vfio_ap.xml
    
    <device>
      <parent>ap_matrix</parent>
      <capability type="mdev">
        <type id="vfio_ap-passthrough"/>
        <attr name='assign_adapter' value='0x05'/>
        <attr name='assign_domain' value='0x0004'/>
        <attr name='assign_domain' value='0x00ab'/>
        <attr name='assign_control_domain' value='0x00ab'/>
      </capability>
    </device>
  5. Create a new mediated device from the vfio_ap.xml XML snippet.

    #