3.4. Virtualized Hardware Devices

Virtualization on Red Hat Enterprise Linux 7 allows virtual machines to use the host's physical hardware as three distinct types of devices:
  • Virtualized and emulated devices
  • Paravirtualized devices
  • Physically shared devices
These hardware devices all appear to be physically attached to the virtual machine, but the device drivers work in different ways.

3.4.1. Virtualized and Emulated Devices

KVM implements many core devices for virtual machines as software. These emulated hardware devices are crucial for virtualizing operating systems. Emulated devices are virtual devices which exist entirely in software.
In addition, KVM provides emulated drivers. These form a translation layer between the virtual machine and the Linux kernel (which manages the source device). The device-level instructions are completely translated by the KVM hypervisor. Any device of the same type (storage, network, keyboard, or mouse) that is recognized by the Linux kernel can be used as the backing source device for the emulated drivers.
Virtual CPUs (vCPUs)
On Red Hat Enterprise Linux 7.2 and later, the host system can present up to 240 virtual CPUs (vCPUs) to a guest, regardless of the number of host CPUs. This is up from 160 vCPUs in Red Hat Enterprise Linux 7.0.
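The vCPU count is set in the guest's libvirt domain XML. As a minimal sketch (the count of 4 is an arbitrary example):

    <!-- Fragment of a libvirt domain XML definition -->
    <vcpu placement='static'>4</vcpu>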
Emulated system components
The following core system components are emulated to provide basic system functions:
  • Intel i440FX host PCI bridge
  • PIIX3 PCI to ISA bridge
  • PS/2 mouse and keyboard
  • EvTouch USB graphics tablet
  • PCI UHCI USB controller and a virtualized USB hub
  • Emulated serial ports
  • EHCI controller, virtualized USB storage and a USB mouse
  • USB 3.0 xHCI host controller (Technology Preview in Red Hat Enterprise Linux 7.3)
Emulated storage drivers
Storage devices and storage pools can use emulated drivers to attach storage to virtual machines; the guest uses the emulated storage driver to access the attached storage.
Note that, like all virtual devices, the storage drivers are not storage devices themselves. The drivers are used to attach a backing storage device, file, or storage pool volume of any supported type to a virtual machine.
The emulated IDE driver
KVM provides two emulated PCI IDE interfaces. An emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM and DVD-ROM drives to each virtual machine.
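As a minimal sketch, an emulated IDE disk is described in the guest's libvirt domain XML with a target bus of 'ide'; the image path and target device name below are placeholders:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest1.qcow2'/>  <!-- placeholder image path -->
      <target dev='hda' bus='ide'/>  <!-- 'ide' selects the emulated IDE driver -->
    </disk>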
The emulated floppy disk drive driver
The emulated floppy disk drive driver is used for creating virtualized floppy drives.
Emulated sound devices
An emulated (Intel) HDA sound device, intel-hda, is supported in the following guest operating systems:
  • Red Hat Enterprise Linux 7, for the AMD64 and Intel 64 architecture
  • Red Hat Enterprise Linux 4, 5, and 6, for the 32-bit AMD and Intel architecture and the AMD64 and Intel 64 architecture

Note

The following emulated sound device is also available, but is not recommended due to compatibility issues with certain guest operating systems:
  • ac97, an emulated Intel 82801AA AC97 Audio compatible sound card
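In the guest's libvirt domain XML, the emulated sound device is selected with the <sound> element; as a minimal sketch, the 'ich6' model corresponds to the emulated Intel HDA device described above:

    <sound model='ich6'/>  <!-- 'ac97' is also accepted, but is not recommended -->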
Emulated graphics cards
The following emulated graphics cards are provided:
  • A Cirrus CLGD 5446 PCI VGA card
  • A standard VGA graphics card with Bochs VESA extensions (hardware level, including all non-standard modes)
Guests can connect to these devices with the Simple Protocol for Independent Computing Environments (SPICE) protocol or with the Virtual Network Computing (VNC) system.
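As a sketch, an emulated graphics card and a remote display protocol are configured together in the guest's libvirt domain XML; the 'cirrus' model corresponds to the Cirrus CLGD 5446 card, and 'vga' selects the standard VGA card:

    <video>
      <model type='cirrus' vram='16384' heads='1'/>  <!-- or type='vga' -->
    </video>
    <graphics type='spice' autoport='yes'/>  <!-- or type='vnc' -->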
Emulated network devices
The following two emulated network devices are provided:
  • The e1000 device emulates an Intel E1000 network adapter (Intel 82540EM, 82573L, 82544GC).
  • The rtl8139 device emulates a Realtek 8139 network adapter.
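A minimal sketch of selecting an emulated network device in libvirt domain XML; the 'default' network is the usual libvirt-provided network and is used here as a placeholder:

    <interface type='network'>
      <source network='default'/>  <!-- placeholder network name -->
      <model type='e1000'/>  <!-- or type='rtl8139' -->
    </interface>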
Emulated watchdog devices
A watchdog can be used to automatically reboot a virtual machine when the machine becomes overloaded or unresponsive.
Red Hat Enterprise Linux 7 provides the following emulated watchdog devices:
  • i6300esb, an emulated Intel 6300 ESB PCI watchdog device. It is supported in guests running Red Hat Enterprise Linux 6.0 and above, and is the recommended device to use.
  • ib700, an emulated iBase 700 ISA watchdog device. The ib700 watchdog device is only supported in guests using Red Hat Enterprise Linux 6.2 and above.
Both watchdog devices are supported on 32-bit and 64-bit AMD and Intel architectures for guest operating systems Red Hat Enterprise Linux 6.2 and above.
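As a sketch, a watchdog device is added to the guest's libvirt domain XML with the <watchdog> element; the 'reset' action reboots the guest when the watchdog fires:

    <watchdog model='i6300esb' action='reset'/>  <!-- or model='ib700' -->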

3.4.2. Paravirtualized Devices

Paravirtualization provides a fast and efficient means of communication for guests to use devices on the host machine. KVM provides paravirtualized devices to virtual machines using the virtio API as a layer between the hypervisor and guest.
Some paravirtualized devices decrease I/O latency and increase I/O throughput to near bare-metal levels, while other paravirtualized devices add functionality to virtual machines that is not otherwise available. It is recommended to use paravirtualized devices instead of emulated devices for virtual machines running I/O-intensive applications.
All virtio devices have two parts: the host device and the guest driver. Paravirtualized device drivers allow the guest operating system to access physical devices on the host system.
To use a paravirtualized device, the paravirtualized device drivers must be installed on the guest operating system. By default, the paravirtualized device drivers are included in Red Hat Enterprise Linux 4.7 and later, Red Hat Enterprise Linux 5.4 and later, and Red Hat Enterprise Linux 6.0 and later.

Note

For more information on using the paravirtualized devices and drivers, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
The paravirtualized network device (virtio-net)
The paravirtualized network device is a virtual network device that provides network access to virtual machines with increased I/O performance and lower latency.
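A minimal sketch of selecting virtio-net in libvirt domain XML; only the model line differs from an emulated network device, and the 'default' network name is a placeholder:

    <interface type='network'>
      <source network='default'/>  <!-- placeholder network name -->
      <model type='virtio'/>  <!-- selects the paravirtualized virtio-net driver -->
    </interface>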
The paravirtualized block device (virtio-blk)
The paravirtualized block device is a high-performance virtual storage device that provides storage to virtual machines with increased I/O performance and lower latency. The paravirtualized block device is supported by the hypervisor and is attached to the virtual machine (except for floppy disk drives, which must be emulated).
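As a sketch, a virtio-blk disk is defined in libvirt domain XML by setting the target bus to 'virtio'; the image path is a placeholder:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/data.qcow2'/>  <!-- placeholder image path -->
      <target dev='vda' bus='virtio'/>  <!-- 'virtio' selects virtio-blk -->
    </disk>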
The paravirtualized controller device (virtio-scsi)
The paravirtualized SCSI controller device provides a more flexible and scalable alternative to virtio-blk. virtio-scsi is capable of inheriting the feature set of the target device, and can handle hundreds of devices, compared to virtio-blk, which can only handle 28 devices.
virtio-scsi is fully supported for the following guest operating systems:
  • Red Hat Enterprise Linux 7
  • Red Hat Enterprise Linux 6.4 and above
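A hedged sketch of a virtio-scsi configuration in libvirt domain XML: the controller supplies the paravirtualized SCSI bus, and disks attach to it with bus='scsi'. The image path is a placeholder:

    <controller type='scsi' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/scsi0.img'/>  <!-- placeholder image path -->
      <target dev='sda' bus='scsi'/>  <!-- attaches to the virtio-scsi controller -->
    </disk>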
The paravirtualized clock
Guests using the Time Stamp Counter (TSC) as a clock source may suffer timing issues. KVM works around hosts that do not have a constant Time Stamp Counter by providing guests with a paravirtualized clock. Additionally, the paravirtualized clock assists with time adjustments needed after a guest performs a sleep (S3, suspend to RAM) operation.
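As a minimal sketch, the paravirtualized clock is exposed to the guest through a kvmclock timer in the libvirt domain XML:

    <clock offset='utc'>
      <timer name='kvmclock' present='yes'/>
    </clock>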
The paravirtualized serial device (virtio-serial)
The paravirtualized serial device is a bytestream-oriented, character stream device, and provides a simple communication interface between the host's user space and the guest's user space.
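A common use of virtio-serial is the channel that connects the QEMU guest agent to the host. As a sketch, in libvirt domain XML:

    <controller type='virtio-serial' index='0'/>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>  <!-- standard guest agent channel name -->
    </channel>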
The balloon device (virtio-balloon)
The balloon device can designate part of a virtual machine's RAM as not being used (a process known as inflating the balloon), so that the memory can be freed for the host (or for other virtual machines on that host) to use. When the virtual machine needs the memory again, the balloon can be deflated and the host can distribute the RAM back to the virtual machine.
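The balloon device is described by the <memballoon> element in the libvirt domain XML, as in this minimal sketch:

    <memballoon model='virtio'/>

The balloon can then be inflated or deflated at run time, for example by changing the guest's current memory allocation with the virsh setmem command.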
The paravirtualized random number generator (virtio-rng)
The paravirtualized random number generator enables virtual machines to collect entropy, or randomness, directly from the host to use for encrypted data and security. Virtual machines can often be starved of entropy because typical inputs (such as hardware usage) are unavailable. Sourcing entropy can be time-consuming. virtio-rng makes this process faster by injecting entropy directly into guest virtual machines from the host.
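As a sketch, virtio-rng is added in libvirt domain XML with a host entropy source as its backend; /dev/urandom here is an example source:

    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>  <!-- example host entropy source -->
    </rng>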
The paravirtualized graphics card (QXL)
The paravirtualized graphics card works with the QXL driver to provide an efficient way to display a virtual machine's graphics from a remote host. The QXL driver is required to use SPICE.
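A minimal sketch of pairing a QXL video device with a SPICE display in libvirt domain XML:

    <video>
      <model type='qxl' ram='65536' vram='65536' heads='1'/>
    </video>
    <graphics type='spice' autoport='yes'/>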

3.4.3. Physical Host Devices

Certain hardware platforms enable virtual machines to directly access various hardware devices and components. In virtualization, this process is known as device assignment, or passthrough.
VFIO device assignment
Virtual Function I/O (VFIO) is a new kernel driver in Red Hat Enterprise Linux 7 that provides virtual machines with high performance access to physical hardware.
VFIO attaches PCI devices on the host system directly to virtual machines, providing guests with exclusive access to PCI devices for a range of tasks. This enables PCI devices to appear and behave as if they were physically attached to the guest virtual machine.
VFIO improves on previous PCI device assignment architecture by moving device assignment out of the KVM hypervisor, and enforcing device isolation at the kernel level. VFIO offers better security and is compatible with secure boot. It is the default device assignment mechanism in Red Hat Enterprise Linux 7.
VFIO increases the number of assigned devices to 32 in Red Hat Enterprise Linux 7, up from a maximum of 8 devices in Red Hat Enterprise Linux 6. VFIO also supports assignment of NVIDIA GPUs.
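As a hedged sketch, a PCI device is assigned to a guest with a <hostdev> element in the libvirt domain XML; with managed='yes', libvirt detaches the device from its host driver and rebinds it to vfio-pci automatically. The PCI address below is a placeholder:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>  <!-- placeholder host PCI address -->
      </source>
    </hostdev>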

Note

For more information on VFIO device assignment, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
USB, PCI, and SCSI passthrough
The KVM hypervisor supports attaching USB, PCI, and SCSI devices on the host system to virtual machines. USB, PCI, and SCSI device assignment makes it possible for the devices to appear and behave as if they were physically attached to the virtual machine. Thus, it provides guests with exclusive access to these devices for a variety of tasks.
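A sketch of USB passthrough in libvirt domain XML; the vendor and product IDs identify the host device and are placeholders here:

    <hostdev mode='subsystem' type='usb'>
      <source>
        <vendor id='0x0951'/>   <!-- placeholder vendor ID -->
        <product id='0x1625'/>  <!-- placeholder product ID -->
      </source>
    </hostdev>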

Note

For more information on USB, PCI, and SCSI passthrough, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
SR-IOV
SR-IOV (Single Root I/O Virtualization) is a PCI Express (PCI-e) standard that extends a single physical PCI function to share its PCI resources as separate virtual functions (VFs). Each function can be used by a different virtual machine via PCI device assignment.
An SR-IOV-capable PCI-e device provides a Single Root function (for example, a single Ethernet port) and presents multiple, separate virtual devices as unique PCI device functions. Each virtual device may have its own unique PCI configuration space, memory-mapped registers, and individual MSI-based interrupts.
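As a sketch, an SR-IOV virtual function is typically assigned as a network interface of type 'hostdev' in the libvirt domain XML; the VF's PCI address below is a placeholder:

    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x1'/>  <!-- placeholder VF address -->
      </source>
    </interface>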
NPIV
N_Port ID Virtualization (NPIV) is a functionality available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs, providing Fibre Channel Host Bus Adapters (HBAs) with functionality similar to what SR-IOV provides for PCIe interfaces. With NPIV, virtual machines can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs).
NPIV can provide high density virtualized environments with enterprise-level storage solutions.
For more information on NPIV, see vHBA-based storage pools using SCSI devices.
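As a hedged sketch, a virtual HBA is created from an NPIV-capable HBA by defining a libvirt node device and passing the XML to the virsh nodedev-create command; the parent HBA name is a placeholder:

    <device>
      <parent>scsi_host3</parent>  <!-- placeholder: an NPIV-capable host HBA -->
      <capability type='scsi_host'>
        <capability type='fc_host'/>
      </capability>
    </device>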

3.4.4. Guest CPU Models

CPU models define which host CPU features are exposed to the guest operating system. KVM and libvirt contain definitions for a number of processor models, allowing users to enable CPU features that are available only in newer CPU models. The set of CPU features that can be exposed to guests depends on support in the host CPU, the kernel, and KVM code.
To ensure safe migration of virtual machines between hosts with different sets of CPU features, KVM does not expose all features of the host CPU to the guest operating system by default. Instead, CPU features are exposed based on the selected CPU model. If a virtual machine has a given CPU feature enabled, it cannot be migrated to a host that does not support exposing that feature to guests.
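As a minimal sketch, the CPU model and individual features are selected in the guest's libvirt domain XML; 'SandyBridge' and the 'aes' feature are arbitrary examples:

    <cpu mode='custom' match='exact'>
      <model fallback='forbid'>SandyBridge</model>  <!-- example named CPU model -->
      <feature policy='require' name='aes'/>        <!-- example explicitly required feature -->
    </cpu>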

Note

For more information on guest CPU models, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.