Chapter 1. GPU device passthrough: Assigning a host GPU to a single virtual machine

Red Hat Virtualization supports PCI VFIO, also called device passthrough, for some NVIDIA PCIe-based GPU devices as non-VGA graphics devices.

You can attach one or more host GPUs to a single virtual machine by passing through the host GPU to the virtual machine, in addition to one of the standard emulated graphics interfaces. The virtual machine uses the emulated graphics device for pre-boot and installation, and the GPU takes control when its graphics drivers are loaded.

For information on the exact number of host GPUs that you can pass through to a single virtual machine, see the NVIDIA website.

To assign a GPU to a virtual machine, follow the steps in the procedures below.

Prerequisites

  • Your GPU device supports GPU passthrough mode.
  • Your system is listed as a validated server hardware platform.
  • Your host chipset supports Intel VT-d or AMD-Vi.

For more information about supported hardware and software, see Validated Platforms in the NVIDIA GPU Software Release Notes.
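
Before you change any host configuration, you can optionally confirm that the GPU is visible on the host PCI bus. This is a minimal check and assumes a RHEL-based host; enabling Intel VT-d or AMD-Vi itself is done in the system firmware (BIOS/UEFI) and is verified later, after the IOMMU kernel options are set:

# lspci -nn | grep -i nvidia

The command lists the NVIDIA devices that the host detects, including the vendor and device IDs that you need later when detaching the GPU from the host.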

1.1. Enabling host IOMMU support and blacklisting nouveau

I/O Memory Management Unit (IOMMU) support on the host machine is necessary to use a GPU on a virtual machine.

Procedure

  1. In the Administration Portal, click Compute → Hosts. Select a host and click Edit. The Edit Hosts pane appears.
  2. Click the Kernel tab.
  3. Check the Hostdev Passthrough & SR-IOV checkbox. This checkbox enables IOMMU support for a host with Intel VT-d or AMD-Vi by adding intel_iommu=on or amd_iommu=on to the kernel command line.
  4. Check the Blacklist Nouveau checkbox.
  5. Click OK.
  6. Select the host and click Management → Maintenance and OK.
  7. Click Installation → Reinstall.
  8. After the reinstallation is finished, reboot the host machine.
  9. When the host machine has rebooted, click Management → Activate.
Note

To enable IOMMU support using the command line, edit the grub.conf file on the host (./entries/rhvh-4.4.<machine id>.conf) to include the option intel_iommu=on.
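
As an alternative to editing the file directly, on a RHEL 8-based host you can usually append the option to all installed kernel entries with grubby. This is a sketch only, not part of the Administration Portal procedure; use amd_iommu=on on AMD hosts:

# grubby --update-kernel=ALL --args="intel_iommu=on"
# reboot

After the host comes back up, you can check the kernel messages for IOMMU initialization:

# dmesg | grep -i -e DMAR -e IOMMU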

1.2. Detaching the GPU from the host

You cannot add the GPU to a virtual machine while the GPU is bound to the host kernel driver, so you must unbind the GPU device from the host before you can add it to the virtual machine. Host drivers often do not support dynamic unbinding of the GPU, so the recommended approach is to manually prevent the device from binding to the host drivers.

Procedure

  1. On the host, identify the device slot name and IDs of the device by running the lspci command. In the following example, a graphics controller such as an NVIDIA Quadro or GRID card is used:

    # lspci -Dnn | grep -i NVIDIA
    0000:03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [Quadro K4200] [10de:11b4] (rev a1)
    0000:03:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)

    The output shows that the NVIDIA GK104 device is installed. It has a graphics controller and an audio controller with the following properties:

    • The device slot name of the graphics controller is 0000:03:00.0, and the vendor-id:device-id for the graphics controller are 10de:11b4.
    • The device slot name of the audio controller is 0000:03:00.1, and the vendor-id:device-id for the audio controller are 10de:0e0a.
  2. Prevent the host machine driver from using the GPU device. You can use a vendor-id:device-id with the pci-stub driver. To do this, append the pci-stub.ids option, with the vendor-id:device-id as its value, to the GRUB_CMDLINE_LINUX environment variable located in the /etc/sysconfig/grub configuration file, for example:

    GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/vg0-lv_swap rd.lvm.lv=vg0/lv_root rd.lvm.lv=vg0/lv_swap rhgb quiet intel_iommu=on pci-stub.ids=10de:11b4,10de:0e0a"

    When adding additional vendor IDs and device IDs for pci-stub, separate them with a comma.

  3. Regenerate the boot loader configuration using grub2-mkconfig to include this option:

    # grub2-mkconfig -o /etc/grub2.cfg
    Note

    When using a UEFI-based host, the target file should be /etc/grub2-efi.cfg.

  4. Reboot the host machine.
  5. Confirm that IOMMU is enabled, the host device is added to the list of pci-stub.ids, and Nouveau is blacklisted:

    # cat /proc/cmdline
    BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-147.el8.x86_64 root=/dev/mapper/vg0-lv_root ro crashkernel=auto resume=/dev/mapper/vg0-lv_swap rd.lvm.lv=vg0/lv_root rd.lvm.lv=vg0/lv_swap rhgb quiet intel_iommu=on pci-stub.ids=10de:11b4,10de:0e0a rdblacklist=nouveau

    In this output, intel_iommu=on confirms that IOMMU is enabled, pci-stub.ids=10de:11b4,10de:0e0a confirms that the host device is added to the list of pci-stub IDs, and rdblacklist=nouveau confirms that Nouveau is blacklisted.
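
Optionally, before attaching the GPU to a virtual machine, confirm that the device is now claimed by the pci-stub driver rather than by nouveau or nvidia. The following sketch reuses the example vendor-id:device-id from above; the exact output depends on your hardware:

# lspci -Dnnk -d 10de:11b4
0000:03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [Quadro K4200] [10de:11b4] (rev a1)
	Kernel driver in use: pci-stub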

1.3. Attaching the GPU to a Virtual Machine

After unbinding the GPU from the host kernel driver, you can add it to the virtual machine and enable the correct driver.

Procedure

  1. Follow the steps in Adding a Host Device to a Virtual Machine in the Virtual Machine Management Guide.
  2. Run the virtual machine and log in to it.
  3. Install the NVIDIA GPU driver on the virtual machine.
  4. Verify that the correct kernel driver is in use for the GPU with the lspci -nnk command. For example:

    # lspci -nnk
    00:07.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [Quadro K4200] [10de:11b4] (rev a1)
    Subsystem: Hewlett-Packard Company Device [103c:1096]
    Kernel driver in use: nvidia
    Kernel modules: nouveau, nvidia_drm, nvidia
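
If the NVIDIA driver is installed and loaded, the nvidia-smi utility that ships with the driver should also report the passed-through GPU from inside the virtual machine. This is an optional check; the reported model and UUID depend on your GPU:

# nvidia-smi -L
GPU 0: Quadro K4200 (UUID: GPU-<uuid>)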

1.4. Installing the GPU driver on the virtual machine

Procedure

  1. Run the virtual machine and connect to it using the VNC or SPICE console.
  2. Download the driver to the virtual machine. For information on getting the driver, see the Drivers page on the NVIDIA website.
  3. Install the GPU driver. For a Linux guest, see the example installer invocation after this procedure.

    Important

    Linux only: When installing the driver on a Linux guest operating system, you are prompted to update xorg.conf. If you do not update xorg.conf during the installation, you need to update it manually.

  4. After the driver finishes installing, reboot the machine. For Windows virtual machines, fully power off the guest from the Administration portal or the VM portal, not from within the guest operating system.

    Important

    Windows only: Powering off the virtual machine from within the Windows guest operating system sometimes sends the virtual machine into hibernate mode, which does not completely clear the memory, possibly leading to subsequent problems. Using the Administration portal or the VM portal to power off the virtual machine forces it to fully clean the memory.

  5. Connect a monitor to the host GPU output interface and run the virtual machine.
  6. Set up NVIDIA vGPU guest software licensing for each vGPU and add the license credentials in the NVIDIA control panel. For more information, see How NVIDIA vGPU Software Licensing Is Enforced in the NVIDIA Virtual GPU Software Documentation.
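
For a Linux guest, installing the driver with the NVIDIA .run installer typically looks like the following sketch. The file name is an example only; use the file you downloaded from the NVIDIA website. The installer might ask you to stop the X server and to disable the nouveau driver inside the guest:

# chmod +x NVIDIA-Linux-x86_64-<version>.run
# sh ./NVIDIA-Linux-x86_64-<version>.run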

1.5. Updating and Enabling xorg (Linux Virtual Machines)

Before you can use the GPU on the virtual machine, you must update and enable xorg in the guest. The NVIDIA driver installation should do this automatically. Check whether xorg is updated and enabled by viewing /etc/X11/xorg.conf:

# cat /etc/X11/xorg.conf

The first two lines indicate whether the file was generated by nvidia-xconfig. For example:

# cat /etc/X11/xorg.conf
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 390.87 (buildmeister@swio-display-x64-rhel04-14) Tue Aug 21 17:33:38 PDT 2018

Procedure

  1. On the virtual machine, generate the xorg.conf file using the following command:

    # X -configure
  2. Copy the xorg.conf file to /etc/X11/xorg.conf using the following command:

    # cp /root/xorg.conf.new /etc/X11/xorg.conf
  3. Reboot the virtual machine.
  4. Verify that xorg is updated and enabled by viewing /etc/X11/xorg.conf:

    # cat /etc/X11/xorg.conf

    Search for the Device section. You should see an entry similar to the following section:

    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
        VendorName     "NVIDIA Corporation"
    EndSection
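
As an optional final check on a Linux guest, you can confirm that the X server loaded the NVIDIA driver by searching its log. The log path below is the common default and can vary by distribution and display manager:

# grep -i nvidia /var/log/Xorg.0.log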

The GPU is now assigned to the virtual machine.