Chapter 10. Managing virtual devices

One of the most effective ways to manage the functionality, features, and performance of a virtual machine (VM) is to adjust its virtual devices.

The following sections provide a general overview of what virtual devices are, and instructions on how they can be attached, modified, or removed from a VM.

10.1. How virtual devices work

The basics

Just like physical machines, virtual machines (VMs) require specialized devices to provide functions to the system, such as processing power, memory, storage, networking, or graphics. Physical systems usually use hardware devices for these purposes. However, because VMs are implemented in software, they need to use software abstractions of such devices instead, referred to as virtual devices.

Virtual devices attached to a VM can be configured when creating the VM, and can also be managed on an existing VM. Generally, virtual devices can be attached to or detached from a VM only when the VM is shut off, but some can be added or removed while the VM is running. This feature is referred to as device hot plug and hot unplug.

When creating a new VM, libvirt automatically creates and configures a default set of essential virtual devices, unless specified otherwise by the user. These are based on the host system architecture and machine type, and usually include:

  • the CPU
  • memory
  • a keyboard
  • a network interface controller (NIC)
  • various device controllers
  • a video card
  • a sound card

To manage virtual devices after the VM is created, use the command-line interface (CLI). To manage virtual storage devices and NICs, you can also use the RHEL 8 web console.
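
For example, to review the virtual devices currently defined for a VM, you can display the <devices> section of its XML configuration. The following is a minimal sketch; testguest is a placeholder VM name and the output shown is only illustrative:

    # virsh dumpxml testguest | sed -n '/<devices>/,/<\/devices>/p'
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      [...]
    </devices>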

Performance or flexibility

For some types of devices, RHEL 8 supports multiple implementations, often with a trade-off between performance and flexibility.

For example, the physical storage used for virtual disks can be represented by files in various formats, such as qcow2 or raw, and presented to the VM using a variety of controllers:

  • an emulated controller
  • virtio-scsi
  • virtio-blk

An emulated controller is slower than a virtio controller, because virtio devices are designed specifically for virtualization purposes. On the other hand, emulated controllers make it possible to run operating systems that have no drivers for virtio devices. Similarly, virtio-scsi offers more complete support for SCSI commands, and makes it possible to attach a larger number of disks to the VM. Finally, virtio-blk provides better performance than both virtio-scsi and emulated controllers, but has a more limited range of use cases. For example, attaching a physical disk as a LUN device to a VM is not possible when using virtio-blk.
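
In the VM's XML configuration, the choice of controller is reflected mainly in the target bus of each disk and, for virtio-scsi, in a dedicated controller element. The following is an illustrative sketch, not taken from a specific VM; the image paths and target device names are assumptions:

    <!-- virtio-blk: the disk is exposed directly as a virtio block device -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk-blk.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- virtio-scsi: the disk is attached through a paravirtualized SCSI controller -->
    <controller type='scsi' index='0' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk-scsi.qcow2'/>
      <target dev='sda' bus='scsi'/>
    </disk>

    <!-- Emulated controller: for example, a SATA bus, which works without virtio guest drivers -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk-sata.qcow2'/>
      <target dev='sdb' bus='sata'/>
    </disk>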

For more information on types of virtual devices, see Section 10.5, “Types of virtual devices”.

10.2. Attaching devices to virtual machines

The following provides general information about creating and attaching virtual devices to your virtual machines (VMs) using the command-line interface (CLI). Some devices can also be attached to VMs using the RHEL 8 web console.

Prerequisites

  • Obtain the required options for the device you intend to attach to a VM. To see the available options for a specific device, use the virt-xml --device=? command. For example:

    # virt-xml --network=?
    --network options:
    [...]
    address.unit
    boot_order
    clearxml
    driver_name
    [...]

Procedure

  1. To attach a device to a VM, use the virt-xml --add-device command, including the definition of the device and the required options:

    • For example, the following command creates a 20GB disk image named newdisk.qcow2 in the /var/lib/libvirt/images/ directory, and attaches it as a virtual disk to the running testguest VM. The change takes effect on the next start-up of the VM:

      # virt-xml testguest --add-device --disk /var/lib/libvirt/images/newdisk.qcow2,format=qcow2,size=20
      Domain 'testguest' defined successfully.
      Changes will take effect after the domain is fully powered off.
    • The following attaches a USB flash drive, which is present as device 004 on bus 002 on the host, to the testguest2 VM while the VM is running:

      # virt-xml testguest2 --add-device --update --hostdev 002.004
      Device hotplug successful.
      Domain 'testguest2' defined successfully.

      The bus and device numbers needed to define the USB device can be obtained by using the lsusb command.
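
      For example, the relevant line of lsusb output looks similar to the following; the device description is illustrative:

      # lsusb
      [...]
      Bus 002 Device 004: ID 4146:902e USB flash drive
      [...]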

Verification

To verify the device has been added, do any of the following:

  • Use the virsh dumpxml command and see if the device’s XML definition has been added to the <devices> section in the VM’s XML configuration.

    For example, the following output shows the configuration of the testguest VM and confirms that the 002.004 USB flash disk device has been added.

    # virsh dumpxml testguest
    [...]
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x4146'/>
        <product id='0x902e'/>
        <address bus='2' device='4'/>
      </source>
      <alias name='hostdev0'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    [...]
  • Run the VM and test if the device is present and works properly.

Additional resources

  • For further information on using the virt-xml command, use man virt-xml.

10.3. Modifying devices attached to virtual machines

The following procedure provides general instructions for modifying virtual devices using the command-line interface (CLI). Some devices attached to your VM, such as disks and NICs, can also be modified using the RHEL 8 web console.

Prerequisites

  • Obtain the required options for the new configuration of the device. To see the available options for a specific device, use the virt-xml --device=? command. For example:

    # virt-xml --network=?
    --network options:
    [...]
    address.unit
    boot_order
    clearxml
    driver_name
    [...]
  • Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and sending the output to a file. For example, the following backs up the configuration of your Motoko VM as the motoko.xml file:

    # virsh dumpxml Motoko > motoko.xml
    # cat motoko.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Motoko</name>
      <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid>
      [...]
    </domain>

Procedure

  1. Use the virt-xml --edit command, including the definition of the device and the required options:

    For example, the following clears the <cpu> configuration of the shut-off testguest VM and sets it to host-model:

    # virt-xml testguest --edit --cpu host-model,clearxml=yes
    Domain 'testguest' defined successfully.

Verification

To verify the device has been modified, do any of the following:

  • Run the VM and test if the device is present and reflects the modifications.
  • Use the virsh dumpxml command and see if the device’s XML definition has been modified in the VM’s XML configuration.

    For example, the following output shows the configuration of the testguest VM and confirms that the CPU mode has been configured as host-model.

    # virsh dumpxml testguest
    [...]
    <cpu mode='host-model' check='partial'>
      <model fallback='allow'/>
    </cpu>
    [...]

Troubleshooting

  • If modifying a device causes your VM to become unbootable, use the virsh define utility to restore the XML configuration by reloading the XML configuration file you backed up previously.

    # virsh define testguest.xml

Note

For small changes to the XML configuration of your VM, you can use the virsh edit command, for example virsh edit testguest. However, do not use this method for more extensive changes, because it is more likely to break the configuration in ways that could prevent the VM from booting.

Additional resources

  • For details on using the virt-xml command, use man virt-xml.

10.4. Removing devices from virtual machines

The following provides general information for removing virtual devices from your virtual machines (VMs) using the command-line interface (CLI). Some devices, such as disks or NICs, can also be removed from VMs using the RHEL 8 web console.

Prerequisites

  • Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and sending the output to a file. For example, the following backs up the configuration of your Motoko VM as the motoko.xml file:

    # virsh dumpxml Motoko > motoko.xml
    # cat motoko.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Motoko</name>
      <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid>
      [...]
    </domain>

Procedure

  1. Use the virt-xml --remove-device command, including a definition of the device. For example:

    • The following removes the storage device marked as vdb from the running testguest VM. The change takes effect after the VM shuts down:

      # virt-xml testguest --remove-device --disk target=vdb
      Domain 'testguest' defined successfully.
      Changes will take effect after the domain is fully powered off.
    • The following immediately removes a USB flash drive device from the running testguest2 VM:

      # virt-xml testguest2 --remove-device --update --hostdev type=usb
      Device hotunplug successful.
      Domain 'testguest2' defined successfully.

Troubleshooting

  • If removing a device causes your VM to become unbootable, use the virsh define utility to restore the XML configuration by reloading the XML configuration file you backed up previously.

    # virsh define testguest.xml

Additional resources

  • For details on using the virt-xml command, use man virt-xml.

10.5. Types of virtual devices

Virtualization in RHEL 8 can present several distinct types of virtual devices that you can attach to virtual machines (VMs):

Emulated devices

Emulated devices are software implementations of widely used physical devices. Drivers designed for physical devices are also compatible with emulated devices. Therefore, emulated devices can be used very flexibly.

However, since they need to faithfully emulate a particular type of hardware, emulated devices may suffer a significant performance loss compared with the corresponding physical devices or more optimized virtual devices.

The following types of emulated devices are supported:

  • Virtual CPUs (vCPUs), with a large choice of CPU models available. The performance impact of emulation depends significantly on the differences between the host CPU and the emulated vCPU.
  • Emulated system components, such as PCI bus controllers
  • Emulated storage controllers, such as SATA, SCSI or even IDE
  • Emulated sound devices, such as ICH9, ICH6 or AC97
  • Emulated graphics cards, such as VGA or QXL cards
  • Emulated network devices, such as rtl8139
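
For example, an emulated network device is defined in the VM's XML configuration by selecting an emulated model, such as rtl8139. The following is a minimal sketch; the default network name is an assumption:

    <interface type='network'>
      <source network='default'/>
      <model type='rtl8139'/>
    </interface>
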
Paravirtualized devices

Paravirtualization provides a fast and efficient method for exposing virtual devices to VMs. Paravirtualized devices expose interfaces that are designed specifically for use in VMs, and thus significantly increase device performance. RHEL 8 provides paravirtualized devices to VMs using the virtio API as a layer between the hypervisor and the VM. The drawback of this approach is that it requires a specific device driver in the guest operating system.

It is recommended to use paravirtualized devices instead of emulated devices for VMs whenever possible, notably if they are running I/O-intensive applications. Paravirtualized devices decrease I/O latency and increase I/O throughput, in some cases bringing them very close to bare-metal performance. Other paravirtualized devices also add functionality to VMs that is not otherwise available.

The following types of paravirtualized devices are supported:

  • The paravirtualized network device (virtio-net).
  • Paravirtualized storage controllers:

    • virtio-blk - provides block device emulation.
    • virtio-scsi - provides more complete SCSI emulation.
  • The paravirtualized clock.
  • The paravirtualized serial device (virtio-serial).
  • The balloon device (virtio-balloon), used to share information about guest memory usage with the hypervisor.

    Note, however, that the balloon device also requires the balloon service to be installed.

  • The paravirtualized random number generator (virtio-rng).
  • The paravirtualized graphics card (QXL).
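
For example, the following minimal sketch defines a virtio-net interface and a virtio-rng device in the VM's XML configuration; the network name and the backend path are assumptions:

    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
    </rng>
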
Physically shared devices

Certain hardware platforms enable VMs to directly access various hardware devices and components. This process is known as device assignment or passthrough.

When attached in this way, some aspects of the physical device are directly available to the VM as they would be to a physical machine. This provides superior performance for the device when used in the VM. However, devices physically attached to a VM become unavailable to the host, and also cannot be migrated.

Nevertheless, some devices can be shared across multiple VMs. For example, a single physical device can in certain cases provide multiple mediated devices, which can then be assigned to distinct VMs.
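
For example, on hosts and devices that support mediated devices, you can typically create one by writing a new UUID to the mdev sysfs interface of the parent device. This is a hedged sketch only; the parent PCI address and the mediated device type name are assumptions and depend on the hardware and driver:

    # uuidgen
    30820a6f-b1a5-4503-91ca-0c10ba58692a
    # echo "30820a6f-b1a5-4503-91ca-0c10ba58692a" > /sys/class/mdev_bus/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create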

The following types of passthrough devices are supported:

  • Virtual Function I/O (VFIO) device assignment - safely exposes devices to applications or VMs using hardware-enforced DMA and interrupt isolation.
  • USB, PCI, and SCSI passthrough - expose common industry standard buses directly to VMs in order to make their specific features available to guest software.
  • Single-root I/O virtualization (SR-IOV) - a specification that enables hardware-enforced isolation of PCI Express resources. This makes it safe and efficient to partition a single physical PCI resource into virtual PCI functions. It is commonly used for network interface cards (NICs).
  • N_Port ID virtualization (NPIV) - a Fibre Channel technology to share a single physical host bus adapter (HBA) with multiple virtual ports.
  • GPUs and vGPUs - accelerators for specific kinds of graphic or compute workloads. Some GPUs can be attached directly to a VM, while certain types also offer the ability to create virtual GPUs (vGPUs) that share the underlying physical hardware.

10.6. Managing SR-IOV devices

An emulated virtual device often uses more CPU and memory than a hardware network device. This can limit the performance of a virtual machine (VM). However, if any devices on your virtualization host support Single Root I/O Virtualization (SR-IOV), you can use this feature to improve the device performance, and possibly also the overall performance of your VMs.

10.6.1. What is SR-IOV?

Single-root I/O virtualization (SR-IOV) is a specification that enables a single PCI Express (PCIe) device to present multiple separate PCI devices, called virtual functions (VFs), to the host system. Each of these devices:

  • is able to provide the same or similar service as the original PCIe device.
  • appears at a different address on the host PCI bus.
  • can be assigned to a different VM using VFIO assignment.

For example, a single SR-IOV capable network device can present VFs to multiple VMs. While all of the VFs use the same physical card, the same network connection, and the same network cable, each of the VMs directly controls its own hardware network device, and uses no extra resources from the host.

How SR-IOV works

The SR-IOV functionality is possible thanks to the introduction of the following PCIe functions:

  • Physical functions (PFs) - A PCIe function that provides the functionality of its device (for example networking) to the host, but can also create and manage a set of VFs. Each SR-IOV capable device has one or more PFs.
  • Virtual functions (VFs) - Lightweight PCIe functions that behave as independent devices. Each VF is derived from a PF. The maximum number of VFs a device can have depends on the device hardware. Each VF can be assigned only to a single VM at a time, but a VM can have multiple VFs assigned to it.

VMs recognize VFs as virtual devices. For example, a VF created by an SR-IOV network device appears as a network card to a VM to which it is assigned, in the same way as a physical network card appears to the host system.

Figure 10.1. SR-IOV architecture


Benefits

The primary advantages of using SR-IOV VFs rather than emulated devices are:

  • Improved performance
  • Reduced use of host CPU and memory resources

For example, a VF attached to a VM as a vNIC performs at almost the same level as a physical NIC, and much better than paravirtualized or emulated NICs. In particular, when multiple VFs are used simultaneously on a single host, the performance benefits can be significant.

Disadvantages

  • To modify the configuration of a PF, you must first change the number of VFs exposed by the PF to zero. Therefore, you also need to remove the devices provided by these VFs from the VMs to which they are assigned, as shown in the sketch after this list.
  • A VM with VFIO-assigned devices attached, including SR-IOV VFs, cannot be migrated to another host. In some cases, you can work around this limitation by pairing the assigned device with an emulated device. For example, you can bond an assigned networking VF to an emulated vNIC, and remove the VF before the migration.
  • In addition, VFIO-assigned devices require pinning of VM memory, which increases the memory consumption of the VM and prevents the use of memory ballooning on the VM.
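
For example, to be able to modify PF settings, you would first remove the VFs of the PF, as in this minimal sketch; eth1 is a placeholder interface name:

    # echo 0 > /sys/class/net/eth1/device/sriov_numvfs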

10.6.2. Attaching SR-IOV networking devices to virtual machines

To attach an SR-IOV networking device to a virtual machine (VM) on an Intel or AMD host, you must create a virtual function (VF) from an SR-IOV capable network interface on the host and assign the VF as a device to a specified VM. For details, see the following instructions.

Prerequisites

  • The CPU and the firmware of your host must support the I/O Memory Management Unit (IOMMU).

    • If using an Intel CPU, it must support the Intel Virtualization Technology for Directed I/O (VT-d).
    • If using an AMD CPU, it must support the AMD-Vi feature.
  • Verify with the system vendor that the system uses Access Control Service (ACS) to provide direct memory access (DMA) isolation for PCIe topology.

    For additional information, see Hardware Considerations for Implementing SR-IOV.

  • The physical network device must support SR-IOV. To verify if any network devices on your system support SR-IOV, use the lspci -v command and look for Single Root I/O Virtualization (SR-IOV) in the output.

    # lspci -v
    [...]
    02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    	Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
    	Flags: bus master, fast devsel, latency 0, IRQ 16, NUMA node 0
    	Memory at fcba0000 (32-bit, non-prefetchable) [size=128K]
    [...]
    	Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
    	Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
    	Kernel driver in use: igb
    	Kernel modules: igb
    [...]
  • The host network interface you want to use for creating VFs must be running. For example, to activate the eth1 interface and verify it is running:

    # ip link set eth1 up
    # ip link show eth1
    8: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
       link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff
       vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  • For SR-IOV device assignment to work, the IOMMU feature must be enabled in the host BIOS and kernel. To do so:

    • On an Intel host, enable VT-d:

      • If your Intel host uses multiple boot entries:

        1. Edit the /etc/default/grub file and add the intel_iommu=on and iommu=pt parameters at the end of the GRUB_CMDLINE_LINUX line:

          GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rhel_dell-per730-27-swap rd.lvm.lv=rhel_dell-per730-27/root rd.lvm.lv=rhel_dell-per730-27/swap console=ttyS0,115200n81 intel_iommu=on iommu=pt"
        2. Regenerate the GRUB configuration:

          # grub2-mkconfig -o /boot/grub2/grub.cfg
        3. Reboot the host.
      • If your Intel host uses a single boot entry:

        1. Regenerate the GRUB configuration with the intel_iommu=on parameter:

          # grubby --args="intel_iommu=on" --update-kernel DEFAULT
        2. Reboot the host.
    • On an AMD host, enable AMD-Vi:

      • If your AMD host uses multiple boot entries:

        1. Edit the /etc/default/grub file and add the iommu=pt and amd_iommu=on parameters at the end of the GRUB_CMDLINE_LINUX line:

          GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rhel_dell-per730-27-swap rd.lvm.lv=rhel_dell-per730-27/root rd.lvm.lv=rhel_dell-per730-27/swap console=ttyS0,115200n81 iommu=pt amd_iommu=on"
        2. Regenerate the GRUB configuration:

          # grub2-mkconfig -o /boot/grub2/grub.cfg
        3. Reboot the host.
      • If your AMD host uses a single boot entry:

        1. Regenerate the GRUB configuration with the iommu=pt parameter:

          # grubby --args="iommu=pt" --update-kernel DEFAULT
        2. Reboot the host.

Procedure

  1. Optional: Confirm the maximum number of VFs your network device can use. To do so, use the following command and replace eth1 with your SR-IOV compatible network device.

    # cat /sys/class/net/eth1/device/sriov_totalvfs
    7
  2. Use the following command to create a virtual function (VF):

    # echo VF-number > /sys/class/net/network-interface/device/sriov_numvfs

    In the command, replace:

    • VF-number with the number of VFs you want to create on the PF.
    • network-interface with the name of the network interface for which the VFs will be created.

    The following example creates 2 VFs from the eth1 network interface:

    # echo 2 > /sys/class/net/eth1/device/sriov_numvfs
  3. Verify the VFs have been added:

    # lspci | grep Ethernet
    01:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
    01:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
    07:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
    07:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
    07:10.0 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
    07:10.1 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
  4. Make the created VFs persistent by creating a udev rule for the network interface you used to create the VFs. For example, for the eth1 interface, create the /etc/udev/rules.d/eth1.rules file, and add the following line:

    ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="ixgbe", ATTR{device/sriov_numvfs}="2"

    This ensures that two VFs are automatically created for the eth1 interface, which in this example uses the ixgbe driver, when the host starts.

    Warning

    Currently, this setting does not work correctly when attempting to make VFs persistent on Broadcom NetXtreme II BCM57810 adapters. In addition, attaching VFs based on these adapters to Windows VMs is currently not reliable.

  5. Use the virsh nodedev-list command to verify that libvirt recognizes the added VF devices. For example, the following output lists PCI devices that correspond to the 01:00.0 and 07:00.0 PFs from the previous example as well as to their newly created VFs:

    # virsh nodedev-list | grep pci_
    pci_0000_01_00_0
    pci_0000_01_00_1
    pci_0000_07_10_0
    pci_0000_07_10_1
    [...]
  6. Obtain the bus, slot, and function values of a PF and one of its corresponding VFs. For example, for pci_0000_01_00_0 and pci_0000_01_00_1:

    # virsh nodedev-dumpxml pci_0000_01_00_0
    <device>
      <name>pci_0000_01_00_0</name>
      <path>/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0</path>
      <parent>pci_0000_00_01_0</parent>
      <driver>
        <name>ixgbe</name>
      </driver>
      <capability type='pci'>
        <domain>0</domain>
        <bus>1</bus>
        <slot>0</slot>
        <function>0</function>
    [...]
    # virsh nodedev-dumpxml pci_0000_01_00_1
    <device>
      <name>pci_0000_01_00_1</name>
      <path>/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.1</path>
      <parent>pci_0000_00_01_0</parent>
      <driver>
        <name>vfio-pci</name>
      </driver>
      <capability type='pci'>
        <domain>0</domain>
        <bus>1</bus>
        <slot>0</slot>
        <function>1</function>
    [...]
  7. Create a temporary XML file and add a configuration to it, using the bus, slot, and function values you obtained in the previous step. For example:

    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x2'/>
      </source>
    </interface>
  8. Add the VF to a VM using the temporary XML file. For example, the following attaches a VF defined in the /tmp/holdmyfunction.xml file to a running testguest1 VM and ensures it is available after the VM restarts:

    # virsh attach-device testguest1 /tmp/holdmyfunction.xml --live --config
    Device attached successfully.

    If this is successful, the guest operating system detects a new network interface card.
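
    For example, to verify this inside a Linux guest, you can list the PCI Ethernet devices of the guest; the output shown is illustrative and depends on the assigned hardware:

    # lspci | grep Ethernet
    00:07.0 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)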

10.6.3. Supported devices for SR-IOV assignment

Not all devices can be used for SR-IOV. The following devices have been tested and verified as compatible with SR-IOV in RHEL 8.

Networking devices

  • Intel 82599ES 10 Gigabit Ethernet Controller - uses the ixgbe driver
  • Intel Ethernet Controller XL710 Series - uses the i40e driver
  • Mellanox ConnectX-5 Ethernet Adapter Cards - use the mlx5_core driver
  • Intel Ethernet Network Adapter XXV710 - uses the i40e driver
  • Intel 82576 Gigabit Ethernet Controller - uses the igb driver
  • Broadcom NetXtreme II BCM57810 - uses the bnx2x driver