How do I patch an RHV environment for the Meltdown and Spectre CVEs (CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715)?

Solution Verified


Environment

  • Red Hat Enterprise Virtualization 3.6 ELS
  • Red Hat Virtualization 4.1



Resolution

To fully mitigate these potential attacks, customers need to do three things:

  • firmware level update
  • OS level updates (kernel, qemu-kvm, libvirt)
  • application level updates (RHV patches in vdsm, rhevm-setup-plugins, RHV-H, and the RHV-M appliance)

Both hosts and guests should be updated and rebooted. Order is not important, but after updating firmware and hypervisors, every virtual machine will need to be powered off and restarted to recognize the new hardware type, so it might be most convenient to patch and power cycle virtual machine guests last to save reboots.

After applying firmware and OS patches, customers will also need to upgrade to the latest RHVM and hypervisors/vdsm. These are available in the normal repositories; details are in the latest Installation Guide on the Red Hat Customer Portal. The latest RHV updates define new CPU Types to distinguish updated hardware from vulnerable hardware. After updating firmware, then upgrading and rebooting each hypervisor, verify that each hypervisor's CPU is identified as a model ending in -IBRS.

Suggested Upgrade Outline:

  1. Upgrade the manager and reboot (firmware if applicable, kernel patches, RHV patches).
  2. Update each host and reboot (firmware, kernel, RHV).
  3. Change the Cluster CPU Type to -IBRS variant (via updated RHV-Manager).
  4. Update each guest and reboot (kernel patches).
  5. Check your environment is fully patched.


1. Upgrade the manager and reboot.
1.1. On the manager, run:

# yum update rhevm-setup-plugins

1.2. Continue upgrading the manager as described in the upgrade documentation. For this fix to take effect, engine-setup must be run and a new kernel installed.
Refer to the Upgrade Guide's "updates between minor versions" section for upgrade flow details, or to the Self-Hosted Engine Guide.
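The manager-side flow can be summarized as the command transcript below (a hedged sketch; follow the Upgrade Guide for the authoritative steps). As elsewhere in this article, `#` is a root prompt:

```shell
# yum update rhevm-setup-plugins
# engine-setup
# yum update
# reboot
```

The `engine-setup` run is what applies the RHV-side fix; the final `yum update` and reboot bring in the patched kernel.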

2. Update each host and reboot.
Follow the "updating virtualization hosts" section in the Upgrade Guide.

  • Once rebooted, verify the host CPU is identified properly as a model with -IBRS:
# virsh -r cpu-models x86_64 | grep IBRS 
  • Within the manager, navigate to Host -> General -> Hardware subtab and verify CPU Type contains -IBRS variant
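The command-line check above can be wrapped in a small script to run on each hypervisor (a minimal sketch; the `virsh` output format is assumed to list one CPU model per line):

```shell
#!/bin/sh
# Hedged sketch: verify a patched hypervisor reports -IBRS CPU models.
# check_models succeeds only if the given model list contains an -IBRS variant.
check_models() {
    echo "$1" | grep -q -- '-IBRS'
}

# Query libvirt read-only for the supported CPU models (empty if virsh
# is unavailable or the host is not yet fully patched).
models=$(virsh -r cpu-models x86_64 2>/dev/null)
if check_models "$models"; then
    echo "OK: host reports -IBRS CPU models"
else
    echo "WARNING: no -IBRS models; check firmware/microcode, kernel,"
    echo "libvirt and qemu-kvm-rhev updates, then reboot the host"
fi
```

An empty result usually means one of the layers (microcode, kernel, libvirt, qemu-kvm-rhev) is still missing, as discussed in the comments below.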

3. Change the Cluster CPU Type to -IBRS variant:

After upgrading a sufficient number of hypervisors, customers will need to update their clusters or define new clusters with the new hardware types. To change the Cluster CPU Type to the -IBRS variant, in RHVM, navigate to Cluster -> Edit and change CPU Type to -IBRS variant. Note that all hypervisors in a cluster must be running the -IBRS CPU type variant before changing the cluster CPU type.

After updating hypervisors and clusters, power cycle every VM in the environment to use clusters with the new -IBRS CPU types. For convenience, it might make sense to combine this with VM patch and reboot cycles below.

4. Update each guest and reboot.
Note: VMs started or migrated before the Cluster CPU change will keep using the old CPU model.
Download the appropriate updates and follow normal patching procedures to apply them. Kernel patches require a VM reboot. For VMs not already associated with a cluster with the updated hardware type, it might make sense to power each VM off, move it to an updated cluster, and then power it back on.
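Inside a RHEL guest (or on a RHEL hypervisor), the kernel-side mitigation state can be inspected with a sketch like the following. The debugfs paths are an assumption based on the tunables the RHEL backport kernels of that era exposed; verify them against your kernel version:

```shell
#!/bin/sh
# Hedged sketch: report kernel-side mitigation state on a patched RHEL
# system (host or guest). The paths below are the assumed RHEL backport
# debugfs tunables (1 = enabled, 0 = disabled).
report_mitigations() {
    for f in pti_enabled ibrs_enabled ibpb_enabled; do
        path="/sys/kernel/debug/x86/$f"
        if [ -r "$path" ]; then
            echo "$f=$(cat "$path")"
        else
            echo "$f=unknown (kernel not patched, or debugfs not mounted)"
        fi
    done
}

report_mitigations
```

A guest will only report IBRS-based mitigations as active once it has been power cycled onto an -IBRS cluster CPU type, per the note above.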

Root Cause

Three CVEs were recently made public (CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715) which allow a local attacker to access unauthorized data. CVE-2017-5715 documents the variant of this attack which allows virtualized guests to interact with the host and other guests on the same physical system.

Modern processors make use of parallel execution of instructions to achieve superior performance. The processor uses multiple cache buffers to facilitate a smooth flow of instructions and data. The processor core employs techniques such as Branch Prediction and Speculative Execution to execute instructions, at times even before the correct sequential execution flow is determined. Results of such execution are stored in temporary buffers (cache area) until the correct sequence of execution is determined.

A targeted side-channel attack was devised to trick the processor into speculatively executing instructions thus allowing unprivileged users to gain access to the cache buffers. Unprivileged guest users could use this flaw to read host memory and/or guest kernel memory.

The first two variants trigger the speculative execution by performing a bounds-check bypass (CVE-2017-5753) or by utilizing branch target injection (CVE-2017-5715). Both variants rely on the presence of a precisely-defined instruction sequence in the privileged code, as well as the fact that memory accesses populate the cache even when the speculatively executed block is being dropped and never committed (executed). As a result, an unprivileged attacker could use these two flaws to read privileged memory by conducting targeted cache side-channel attacks. These variants could be used not only to cross the syscall boundary (variant #1 and variant #2) but also the guest-host boundary (variant #2).

The third variant (CVE-2017-5754, variant #3) relies on the fact that, during speculative execution, permission fault evaluation is suppressed until the retirement of the whole instruction block, and that memory accesses populate the cache even when the block is being dropped and never committed (executed). As a result, an unprivileged local attacker could use this flaw to read privileged (kernel space) memory by conducting targeted cache side-channel attacks. A guest VM cannot use this issue to read the VM host's memory.

A virtualized environment has three primary attack scenarios: an unprivileged guest user attempting to access the host's memory (guest->host), a privileged guest user attempting to access another guest's memory (guest->(other)guest), and an unprivileged guest user attempting to access the guest kernel's memory (guest->(same)guest). To prevent the first two scenarios, updates need to be applied on the host; to prevent the third scenario, updates need to be applied on both the host and the guest system.

Diagnostic Steps

How do I know if my RHV environment has been patched for Meltdown and Spectre CVE?

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.


Attachment missing - script


This statement confuses me: "A guest VM can not use this issue to read the VM host's memory." Should I go through the update processes of hosted-engine and hosts anyway?


Yes, please. That one is referring to CVE-2017-5754.

Ok. Thanks! :)

The script content is missing.

The script is now attached to the article.

It's available now. Thank you.

Currently, microcode_ctl is not installed on my hypervisors. Does this mean I should install said package, or only update it if it's already installed?

microcode_ctl is needed for Intel CPUs. Do you have an AMD chip? The point is to update the CPU microcode; you can check with your vendor about the correct way to do that for your particular model.
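To see which microcode revision the CPUs are actually running (after a microcode_ctl update and reboot, or a vendor firmware update), the value can be read from /proc/cpuinfo; a minimal sketch:

```shell
#!/bin/sh
# Hedged sketch: show the microcode revision the CPUs are running.
# microcode_rev reads /proc/cpuinfo-formatted text on stdin and prints
# the revision of the first CPU (all cores normally report the same one).
microcode_rev() {
    awk -F': *' '/^microcode/ { print $2; exit }'
}

[ -r /proc/cpuinfo ] && microcode_rev < /proc/cpuinfo
rpm -q microcode_ctl 2>/dev/null || echo "microcode_ctl not installed"
```

Comparing the revision before and after the update confirms the new microcode was actually loaded at boot.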

What should I do if "virsh -r capabilities | grep -i IBRS" returns empty output on the hypervisor after "yum clean all; yum update; reboot"?

You need to wait for all relevant errata to actually ship. As of right now, only the kernel has been pushed live; other packages are in progress. For the host to report the right capabilities, you need to apply the microcode update (which happens either automatically via a microcode_ctl update, or after you apply your vendor-specific firmware update solution), plus the kernel, libvirt, and qemu-kvm-rhev packages.

For full mitigation, further changes would be needed, though.
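The layers listed in the reply above can be inventoried on the host with a quick check (a sketch; the package list comes from the reply, and versions must be compared against the shipped errata):

```shell
#!/bin/sh
# Hedged sketch: inventory every layer that must be updated before the
# host can report IBRS capabilities (package list from the reply above).
echo "running kernel: $(uname -r)"
for pkg in kernel microcode_ctl libvirt qemu-kvm-rhev vdsm; do
    rpm -q "$pkg" 2>/dev/null || echo "$pkg: not installed (or rpm unavailable)"
done
```

Note that `uname -r` must show the patched kernel; an updated kernel package that has not been booted yet does not count.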

Red Hat has released updated libvirt and qemu-kvm-rhev packages now, in addition to the kernel update. Having patched again, I still see no traces of IBRS.

You should see it once you apply the host kernel update and, on Intel, the microcode update. Once your host is protected (for the variants applicable to your CPU), you can expose the IBRS capability to your virtualized guests by using the newly introduced -IBRS CPU models in RHV, to further address variant #2 in the guest.

It turns out that this is hardware specific: we have Dell servers where we get no traces of IBRS no matter how much we patch, and more recent HPE servers where we get IBRS after having updated RHV-M and the hypervisors. The discussion threads have more information.

According to this article, the PCID CPU feature is becoming rather important due to Meltdown:

I just checked on two different RHV clusters. In both clusters, the underlying hypervisor has "pcid" in the contents of /proc/cpuinfo. But the guest VMs do not. Does this reflect some error in my setup, or that RHV doesn't make PCID available to the guests?

Maybe this is irrelevant when running RHEL guests, because PCID's performance benefits would require kernel backports? - But then, at least other guest types might make use of PCID, if it were exposed by RHV?

PCID exposure to guests depends on the CPU type you use for your Cluster/guests. It is added in Haswell models onwards; there is a discussion within KVM about adding it to earlier CPU models that already have hardware support. For those, you can use host CPU passthrough (and give up live migration) or add the flag using vdsm-hook-cpuflags.
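For the vdsm-hook-cpuflags route, the rough flow looks like the transcript below. This is a sketch: the `cpuflags` custom property name and the engine-config invocation are assumptions to verify against the hook's documentation. Lines starting with `##` are annotations; `#` is a root prompt:

```shell
## On each host, install the hook:
# yum install vdsm-hook-cpuflags

## On the manager, allow a "cpuflags" custom VM property, then restart:
# engine-config -s 'UserDefinedVMProperties=cpuflags=^.*$' --cver=4.1
# systemctl restart ovirt-engine

## Then set the VM's "cpuflags" custom property (e.g. to "pcid") in the
## Admin Portal, and power the VM off and on again.
```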