Four new microprocessor flaws have been discovered, the most severe of which is rated by Red Hat Product Security as having an Important impact. If exploited by an attacker with local shell access to a system, these flaws could allow data in the CPU’s cache to be exposed to unauthorized processes. While these attacks are difficult to execute, a skilled attacker could use them to read memory from a virtual or containerized instance, or from the underlying host system. Red Hat has mitigations prepared for affected systems and has detailed the steps customers should take as they evaluate their exposure and formulate their response.
Issue Details and Background Information
Red Hat has been made aware of a series of microarchitectural (hardware) implementation issues that could allow an unprivileged local attacker to bypass conventional memory security restrictions and gain read access to privileged memory that would otherwise be inaccessible. These flaws could also be exploited by malicious code running within a container. They affect many modern Intel microprocessors and require updates to the Linux kernel, the virtualization stack, and CPU microcode. The issues have been assigned CVE-2018-12130, rated with a severity impact of Important, while CVE-2018-12126, CVE-2018-12127, and CVE-2019-11091 are considered Moderate severity.
At this time, these specific flaws are only known to affect Intel-based processors, although Red Hat Product Security expects researchers to continue probing for unrelated Simultaneous Multi Threading (SMT) vulnerabilities across a wide range of vendors.
Flaws were found in the manner in which Intel microprocessor designs implement several performance micro-optimizations. Exploitation of the vulnerabilities provides attackers a side channel for accessing recently used data on the system belonging to other processes, containers, virtual machines, or the kernel.
These vulnerabilities are referred to as Microarchitectural Data Sampling (MDS) because they leverage speculation to obtain state left behind in internal CPU structures.
CVE-2018-12126 - Microarchitectural Store Buffer Data Sampling (MSBDS)
A flaw was found in many Intel microprocessor designs related to a possible information leak of the processor store buffer structure which contains recent stores (writes) to memory.
Modern Intel microprocessors implement hardware-level micro-optimizations to improve the performance of writing data back to CPU caches. The write operation is split into STA (STore Address) and STD (STore Data) sub-operations, which lets the processor hand off address generation and data movement independently for optimized writes. Both of these sub-operations write to a shared, distributed processor structure called the 'processor store buffer'.
The processor store buffer is conceptually a table of address, value, and 'is valid' entries. As the sub-operations can execute independently of each other, they can each update the address, and/or value columns of the table independently. This means that at different points in time the address or value may be invalid.
The processor may speculatively forward entries from the store buffer. The split design used allows for such forwarding to speculatively use stale values, such as the wrong address, returning data from a previous unrelated store. Since this only occurs for loads that will be reissued following the fault/assist resolution, the program is not architecturally impacted, but store buffer state can be leaked to malicious code carefully crafted to retrieve this data via side-channel analysis.
The processor store buffer entries are equally divided between the number of active Hyper-Threads. Conditions such as power-state change can reallocate the processor store buffer entries in a half-updated state to another thread without ensuring that the entries have been cleared.
This issue is referred to by the researchers as Fallout.
CVE-2018-12127 - Microarchitectural Load Port Data Sampling (MLPDS)
Microprocessors use ‘load ports’ to perform load operations from memory or IO. During a load operation, the load port receives data from the memory or IO subsystem and then provides the data to the CPU registers and operations in the CPU’s pipelines.
In some implementations, the writeback data bus within each load port can retain data values from older load operations until newer load operations overwrite that data.
MLPDS can reveal stale load port data to malicious actors when:
- A faulting/assisting SSE/AVX/AVX-512 load that is more than 64 bits in size
- A faulting/assisting load which spans a 64-byte boundary.
In the above cases, the load operation speculatively provides stale data values from the internal data structures to dependent operations. Speculatively forwarding this data does not modify program execution, but it can be used as a gadget to speculatively infer the contents of a victim process’s data values by timing access to the load port.
CVE-2018-12130 - Microarchitectural Fill Buffer Data Sampling (MFBDS)
This issue carries the most risk of the four and is the one Red Hat has rated as Important. Researchers found a flaw in the implementation of the fill buffers used by Intel microprocessors.
A fill buffer holds data that has missed the processor’s L1 data cache as a result of an attempt to use a value that is not present. When an L1 data cache miss occurs within an Intel core, the fill buffer design allows the processor to continue with other operations while the value being accessed is loaded from higher levels of cache. The design also allows the result to be forwarded directly to the execution unit, satisfying the load without it being written into the L1 data cache.
A load operation is not decoupled in the same way that a store is, but it does involve an Address Generation Unit (AGU) operation. If the AGU generates a fault (#PF, etc.) or an assist (A/D bits), the classical Intel design would block the load and later reissue it. In contemporary designs, it instead allows subsequent speculative operations to temporarily see a forwarded data value from the fill buffer slot before the load actually takes place. Thus it is possible to read data that was recently accessed by another thread if the fill buffer entry has not been overwritten.
CVE-2019-11091 - Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
A flaw was found in the implementation of the fill buffer, a mechanism used by modern CPUs when a cache miss occurs in the L1 data cache. If an attacker can generate a load operation that would create a page fault, execution continues speculatively with incorrect data from the fill buffer while the data is fetched from higher-level caches. The response time can be measured to infer data in the fill buffer.
Red Hat thanks Intel and industry partners for reporting these issues and collaborating on their mitigations.
Additionally, Red Hat thanks the original reporters:
Microarchitectural Store Buffer Data Sampling (MSBDS) - CVE-2018-12126
This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently reported by Lei Shi - Qihoo - 360 CERT and by Marina Minkin, Daniel Moghimi, Moritz Lipp, Michael Schwarz, Jo Van Bulck, Daniel Genkin, Daniel Gruss, Berk Sunar, Frank Piessens, Yuval Yarom (University of Michigan, Worcester Polytechnic Institute, Graz University of Technology, imec-DistriNet, KU Leuven, University of Adelaide).
Microarchitectural Load Port Data Sampling (MLPDS) - CVE-2018-12127
This vulnerability was found internally by Intel employees and Microsoft. Intel would like to thank Brandon Falk – Microsoft Windows Platform Security Team, Ke Sun, Henrique Kawakami, Kekai Hu, and Rodrigo Branco - Intel. It was independently reported by Matt Miller – Microsoft, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam.
Microarchitectural Fill Buffer Data Sampling (MFBDS) - CVE-2018-12130
This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently reported by Giorgi Maisuradze – Microsoft Research, and by Dan Horea Lutas, and Andrei Lutas - Bitdefender, and by Volodymyr Pikhur, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam, and by Moritz Lipp, Michael Schwarz, and Daniel Gruss - Graz University of Technology.
Microarchitectural Data Sampling Uncacheable Memory (MDSUM) - CVE-2019-11091
This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently found by Volodymyr Pikhur, and by Moritz Lipp, Michael Schwarz, Daniel Gruss - Graz University of Technology, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam.
For more information about this class of issues, please refer to Intel’s website.
Red Hat Product Security has rated this update as having a security impact of Important.
The following Red Hat product versions are impacted:
Red Hat Enterprise Linux 5
Red Hat Enterprise Linux 6
Red Hat Enterprise Linux 7
Red Hat Enterprise Linux 8
Red Hat Atomic Host
Red Hat Enterprise MRG 2
Red Hat OpenShift Online v3
Red Hat Virtualization (RHV/RHV-H)
Red Hat OpenStack Platform
While Red Hat's Linux Containers are not directly impacted by third-party hardware vulnerabilities, their security relies upon the integrity of the host kernel environment. Red Hat recommends that you use the most recent versions of your container images. The Container Health Index, part of the Red Hat Container Catalog, can always be used to verify the security status of Red Hat containers. To protect the privacy of the containers in use, you will need to ensure that the Container host (such as Red Hat Enterprise Linux or Atomic Host) has been updated against these attacks. Red Hat has released an updated Atomic Host for this use case.
| Attack Vector | Vulnerable? | If Vulnerable, How? | Mitigation |
|---|---|---|---|
| Local user process against host | yes | read protected memory | kernel MDS patches + microcode + disable HT |
| Local user process against another user process | yes | read protected memory | kernel MDS patches + microcode + disable HT |
| Guest against another guest | yes | read protected memory | kernel MDS patches + microcode + disable HT |
| Guest against host | yes | read protected memory | kernel MDS patches + microcode + disable HT |
| Host user against guest | yes | read protected memory | kernel MDS patches + microcode + disable HT |
| Container against host | yes | read protected memory | kernel MDS patches + microcode + disable HT |
| Container against another container | yes | read protected memory | kernel MDS patches + microcode + disable HT |
In multi-tenant systems where the host has disabled HT, different guests will not share threads on the same core and should not be vulnerable to cross-guest attacks, though host performance and the overall availability of resources will be impacted.
In multi-tenant systems where the host has HT enabled and the hypervisor is vulnerable, guests will be vulnerable whether or not they have HT disabled.
In multi-tenant systems where the host has HT enabled and the hypervisor is not vulnerable, guests should consider disabling HT to protect themselves.
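Whether SMT is currently active on a given host can be checked from sysfs. This is a quick sketch; it assumes the SMT control interface introduced in upstream Linux and backported to RHEL kernels, which older kernels may not expose:

```shell
# Check whether SMT/Hyper-Threading is active on this host.
# The sysfs SMT interface may be absent on kernels without the backports.
if [ -r /sys/devices/system/cpu/smt/active ]; then
    cat /sys/devices/system/cpu/smt/active   # 1 = SMT active, 0 = inactive
else
    # Fallback: more than one thread per core implies SMT is enabled.
    lscpu | grep 'Thread(s) per core'
fi
```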
Diagnose your vulnerability
Use the detection script to determine if your system is currently vulnerable to these flaws. To verify the legitimacy of the script, you can download the detached GPG signature as well.
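A detached signature is checked with standard `gpg` tooling. The file names below are placeholders; substitute the actual script and signature downloaded from the Red Hat customer portal, and ensure Red Hat's public signing key is in your keyring first:

```shell
# Placeholder file names -- use the actual downloaded script and its
# detached signature. A "Good signature" result confirms integrity.
gpg --verify mds-detect.sh.asc mds-detect.sh
```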
For subscribers running Red Hat Virtualization products, a knowledgebase article has been created to verify OEM-supplied microcode/firmware has been applied.
After applying the relevant updates, users can check that the patches are in effect by running either of the following:
# dmesg | grep "MDS:"
[ 0.162571] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[ 181.862076] MDS: Mitigation: Clear CPU buffers
# cat /sys/devices/system/cpu/vulnerabilities/mds
Mitigation: Clear CPU buffers; SMT vulnerable
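The status strings reported above can be interpreted mechanically. The sketch below is illustrative only (the function name is hypothetical, and the set of strings handled is not exhaustive; consult the kernel documentation for the full list):

```shell
# Map an MDS status string, as read from
# /sys/devices/system/cpu/vulnerabilities/mds, to a human summary.
interpret_mds() {
  case "$1" in
    "Not affected")                 echo "CPU not affected" ;;
    Mitigation:*"SMT vulnerable")   echo "buffers cleared, but disable SMT for full protection" ;;
    Mitigation:*)                   echo "mitigated" ;;
    Vulnerable*)                    echo "vulnerable: update kernel and microcode" ;;
    *)                              echo "unknown status" ;;
  esac
}

interpret_mds "Mitigation: Clear CPU buffers; SMT vulnerable"
```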
Red Hat customers running affected versions of these Red Hat products are strongly recommended to update as soon as errata are available. Customers are urged to apply the available updates immediately and to enable the mitigations they deem appropriate.
The order in which the patches are applied is not important, but after updating firmware and hypervisors, every system/virtual machine will need to be powered off and restarted to recognize the new hardware type.
Updates for Affected Products
| Product | Component | Advisory |
|---|---|---|
| Red Hat Enterprise Linux 8 (z-stream) | kernel | RHSA-2019:1167 |
| Red Hat Enterprise Linux 8 | kernel-rt | RHSA-2019:1174 |
| Red Hat Enterprise Linux 8 | virt:rhel | RHSA-2019:1175 |
| Red Hat Enterprise Linux 8 | microcode_ctl | RHEA-2019:1211 |
| Red Hat Enterprise Linux 7 (z-stream) | kernel | RHSA-2019:1168 |
| Red Hat Enterprise Linux 7 | kernel-rt | RHSA-2019:1176 |
| Red Hat Enterprise Linux 7 | qemu-kvm | RHSA-2019:1178 |
| Red Hat Enterprise Linux 7 | qemu-kvm-rhev | RHSA-2019:1179 |
| Red Hat Enterprise Linux 7 | libvirt | RHSA-2019:1177 |
| Red Hat Enterprise Linux 7 | microcode_ctl | RHEA-2019:1210 |
| Red Hat Enterprise Linux 7.5 Extended Update Support | kernel | RHSA-2019:1155 |
| Red Hat Enterprise Linux 7.5 Extended Update Support | qemu-kvm | RHSA-2019:1183 |
| Red Hat Enterprise Linux 7.5 Extended Update Support | libvirt | RHSA-2019:1182 |
| Red Hat Enterprise Linux 7.5 Extended Update Support | microcode_ctl | RHEA-2019:1213 |
| Red Hat Enterprise Linux 7.4 Extended Update Support | kernel | RHSA-2019:1170 |
| Red Hat Enterprise Linux 7.4 Extended Update Support | qemu-kvm | RHSA-2019:1185 |
| Red Hat Enterprise Linux 7.4 Extended Update Support | libvirt | RHSA-2019:1184 |
| Red Hat Enterprise Linux 7.4 Extended Update Support | microcode_ctl | RHEA-2019:1214 |
| Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions & Advanced Update Support | kernel | RHSA-2019:1171 |
| Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions & Advanced Update Support | qemu-kvm | RHSA-2019:1189 |
| Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions & Advanced Update Support | libvirt | RHSA-2019:1187 |
| Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions & Advanced Update Support | microcode_ctl | RHEA-2019:1215 |
| Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions & Advanced Update Support | kernel | RHSA-2019:1172 |
| Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions & Advanced Update Support | qemu-kvm | RHSA-2019:1188 |
| Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions & Advanced Update Support | libvirt | RHSA-2019:1186 |
| Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions & Advanced Update Support | microcode_ctl | RHEA-2019:1216 |
| Red Hat Enterprise Linux 6 (z-stream) | kernel | RHSA-2019:1169 |
| Red Hat Enterprise Linux 6 | qemu-kvm | RHSA-2019:1181 |
| Red Hat Enterprise Linux 6 | libvirt | RHSA-2019:1180 |
| Red Hat Enterprise Linux 6 | microcode_ctl | RHEA-2019:1212 |
| Red Hat Enterprise Linux 6.6 Advanced Update Support | kernel | RHSA-2019:1193 |
| Red Hat Enterprise Linux 6.6 Advanced Update Support | qemu-kvm | RHSA-2019:1195 |
| Red Hat Enterprise Linux 6.6 Advanced Update Support | libvirt | RHSA-2019:1194 |
| Red Hat Enterprise Linux 6.6 Advanced Update Support | microcode_ctl | RHEA-2019:1218 |
| Red Hat Enterprise Linux 6.5 Advanced Update Support | kernel | RHSA-2019:1196 |
| Red Hat Enterprise Linux 6.5 Advanced Update Support | qemu-kvm | RHSA-2019:1198 |
| Red Hat Enterprise Linux 6.5 Advanced Update Support | libvirt | RHSA-2019:1197 |
| Red Hat Enterprise Linux 6.5 Advanced Update Support | microcode_ctl | RHEA-2019:1219 |
| Red Hat Enterprise Linux 5 Extended Lifecycle Support | kernel | see below |
| RHEL Atomic Host | kernel | respin pending |
| Red Hat Enterprise MRG 2 | kernel-rt | RHSA-2019:1190 |
| Red Hat Virtualization 4 | vdsm | RHSA-2019:1203 |
| Red Hat Virtualization 4.2 | vdsm | RHSA-2019:1204 |
| Red Hat Virtualization 4.3 | rhvm-setup-plugins | RHSA-2019:1205 |
| Red Hat Virtualization 4.2 | rhvm-setup-plugins | RHSA-2019:1206 |
| Red Hat Virtualization 4 | virtualization host | RHSA-2019:1207 |
| Red Hat Virtualization 4 | rhvm-appliance | RHSA-2019:1208 |
| Red Hat Virtualization 4.2 | virtualization host | RHSA-2019:1209 |
| Red Hat OpenStack Platform 14 (Rocky) | qemu-kvm-rhev | RHSA-2019:1202 |
| Red Hat OpenStack Platform 14 (Rocky) | container image | RHBA-2019:1242 |
| Red Hat OpenStack Platform 13 (Queens) | qemu-kvm-rhev | RHSA-2019:1201 |
| Red Hat OpenStack Platform 13 (Queens) | container image | RHBA-2019:1241 |
| Red Hat OpenStack Platform 10 (Newton) | qemu-kvm-rhev | RHSA-2019:1200 |
| Red Hat OpenStack Platform 9 (Mitaka) | qemu-kvm-rhev | RHSA-2019:1199 |
 An active EUS subscription is required for access to this patch. Please contact Red Hat sales or your specific sales representative for more information if your account does not have an active EUS subscription.
 An active AUS subscription is required for access to this patch in RHEL AUS.
 An active Update Services for SAP Solutions Add-on or TUS subscription is required for access to this patch in RHEL E4S / TUS.
For details on how to update Red Hat Enterprise Atomic Host, please see Deploying a specific version of Red Hat Enterprise Atomic Host.
At this time, based on the severity of these issues, the point Red Hat Enterprise Linux 5 has reached in its support lifecycle, and the low number of CPU types for which the required microcode will be available, these issues will not be addressed in RHEL 5. Please contact Red Hat Support for available upgrade paths and options.
NOTE: Subscribers may need to contact their hardware OEMs to get the most up-to-date versions of CPU microcode/firmware.
There is no known complete mitigation other than applying vendor software updates combined with hardware OEM-provided CPU microcode/firmware or using non-vulnerable microprocessors. All Red Hat customers should apply vendor solutions to patch their CPUs and update the kernel as soon as patches are available. Please consult with your system OEM provider or CPU manufacturer for additional details on microcode availability for supported platforms.
Disabling SMT on affected systems reduces some of the attack surface but does not eliminate all threats from these vulnerabilities. To mitigate the risks, systems need updated microcode, an updated kernel, and virtualization patches, and administrators will need to evaluate whether disabling SMT/HT is the right choice for their deployments. These mitigations may also impact application performance. See the article Disabling Hyper-Threading for information on disabling SMT.
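On kernels that expose the sysfs SMT control knob (an assumption; older kernels without the backports do not have it), SMT can be turned off at runtime without a reboot:

```shell
# Disable SMT immediately on the running system (root required).
# This knob is only present on kernels carrying the SMT-control backports.
echo off > /sys/devices/system/cpu/smt/control

# Verify: "0" means no sibling threads remain online.
cat /sys/devices/system/cpu/smt/active
```

Note that this runtime change does not persist across reboots on its own; a kernel command-line change is still needed for a permanent setting.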
Customers are advised to take a risk-based approach to mitigating this issue. Systems that require high degrees of security and trust should be addressed first and should be isolated from untrusted systems until such time as treatments can be applied to those systems to reduce the risk of exploit.
Additionally, an Ansible playbook, disable_mds_smt_mitigate.yml, is provided below. This playbook will disable SMT on the running system, disable SMT for future system reboots, and apply related updates. To use the playbook, specify the hosts you'd like to disable SMT on with the HOSTS extra var:
ansible-playbook -e HOSTS=web,mail,ldap04 disable_mds_smt_mitigate.yml
The playbook can also add command line arguments to prevent runtime re-enabling of SMT by setting the FORCEOFF variable true:
ansible-playbook -e HOSTS=hypervisors -e FORCEOFF=true disable_mds_smt_mitigate.yml
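For orientation only, a playbook with this behavior might be sketched roughly as follows. This is not the Red Hat-provided playbook; download and verify the official disable_mds_smt_mitigate.yml instead:

```yaml
# Hypothetical sketch, NOT the official disable_mds_smt_mitigate.yml.
# Assumes the sysfs SMT control knob and grubby are available on the hosts.
- hosts: "{{ HOSTS }}"
  become: true
  tasks:
    - name: Disable SMT on the running system
      shell: echo off > /sys/devices/system/cpu/smt/control

    - name: Disable SMT and enable full MDS mitigation for future boots
      command: grubby --update-kernel=ALL --args="mds=full,nosmt"

    - name: Apply related kernel and microcode updates
      yum:
        name:
          - kernel
          - microcode_ctl
        state: latest
```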
To verify the legitimacy of the playbook, you can download the detached GPG signature.
Performance Impact and Disabling MDS
The MDS CVE mitigations have been shown to cause a performance impact. The impact is felt more in applications with high rates of user-kernel-user space transitions, for example system calls, NMIs, and interrupts.
Although there is no way to say what the impact will be for any given workload, in our testing we measured:
- Applications that spend a lot of time in user mode tended to show the smallest slowdown, usually in the 0-5% range.
- Applications that did a lot of small block or small packet network I/O showed slowdowns in the 10-25% range.
- Some microbenchmarks that did nothing other than enter and return from user space to kernel space showed higher slowdowns.
The performance impact of the MDS mitigation can be measured by running your application with the mitigation enabled and then disabled. The mitigation is enabled by default. It can be fully enabled, with SMT also disabled, by adding the "mds=full,nosmt" flag to the kernel boot command line, and it can be fully disabled by adding the "mds=off" flag. There is no way to disable it at runtime.
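On RHEL systems these boot-time flags can be made persistent with `grubby`; for example (run as root, then reboot for the change to take effect):

```shell
# Enable the full MDS mitigation and disable SMT on all installed kernels.
grubby --update-kernel=ALL --args="mds=full,nosmt"

# Alternatively, opt out of the MDS mitigation entirely for benchmarking
# (not recommended on systems that run untrusted code).
grubby --update-kernel=ALL --args="mds=off"
```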
For the performance impact of disabling hyperthreading, see the “Disabling Hyper-Threading” section at https://access.redhat.com/security/vulnerabilities/L1TF-perf