MDS - Microarchitectural Data Sampling - CVE-2018-12130, CVE-2018-12126, CVE-2018-12127, and CVE-2019-11091
Executive Summary
Four new microprocessor flaws have been discovered, the most severe of which is rated by Red Hat Product Security as having an Important impact. If exploited by an attacker with local shell access to a system, these flaws could allow data held in the CPU’s internal buffers to be exposed to unauthorized processes. While difficult to execute, a skilled attacker could use these flaws to read memory from a virtual or containerized instance, or from the underlying host system. Red Hat has mitigations prepared for affected systems and has detailed the steps customers should take as they evaluate their exposure risk and formulate their response.
Issue Details and Background Information
Red Hat has been made aware of a series of microarchitectural (hardware) implementation issues that could allow an unprivileged local attacker to bypass conventional memory security restrictions and gain read access to privileged memory that would otherwise be inaccessible. These flaws could also be exploited by malicious code running within a container. These issues affect many modern Intel microprocessors and require updates to the Linux kernel, the virtualization stack, and CPU microcode. The issues have been assigned CVE-2018-12130, with a severity impact of Important, while CVE-2018-12126, CVE-2018-12127, and CVE-2019-11091 are considered Moderate severity.
At this time, these specific flaws are only known to affect Intel-based processors, although Red Hat Product Security expects researchers to continue probing for similar Simultaneous Multi-Threading (SMT) vulnerabilities across a wide range of vendors.
Flaws were found in the manner in which Intel microprocessor designs implement several performance micro-optimizations. Exploitation of the vulnerabilities provides attackers a side channel through which to access recently used data on the system belonging to other processes, containers, virtual machines, or the kernel.
These vulnerabilities are referred to as Microarchitectural Data Sampling (MDS) due to the fact that they rely upon leveraging speculation to obtain state left within internal CPU structures.
CVE-2018-12126 - Microarchitectural Store Buffer Data Sampling ( MSBDS )
A flaw was found in many Intel microprocessor designs related to a possible information leak of the processor store buffer structure which contains recent stores (writes) to memory.
Modern Intel microprocessors implement hardware-level micro-optimizations to improve the performance of writing data back to CPU caches. The write operation is split into STA (STore Address) and STD (STore Data) sub-operations, which allow the processor to hand off address generation separately from the data for optimized writes. Both of these sub-operations write to a shared distributed processor structure called the 'processor store buffer'.
The processor store buffer is conceptually a table of address, value, and 'is valid' entries. As the sub-operations can execute independently of each other, they can each update the address, and/or value columns of the table independently. This means that at different points in time the address or value may be invalid.
The processor may speculatively forward entries from the store buffer. The split design used allows for such forwarding to speculatively use stale values, such as the wrong address, returning data from a previous unrelated store. Since this only occurs for loads that will be reissued following the fault/assist resolution, the program is not architecturally impacted, but store buffer state can be leaked to malicious code carefully crafted to retrieve this data via side-channel analysis.
The processor store buffer entries are divided equally among the active Hyper-Threads. Conditions such as a power-state change can reallocate store buffer entries in a half-updated state to another thread without ensuring that the entries have been cleared.
This issue is referred to by the researchers as Fallout.
CVE-2018-12127 - Microarchitectural Load Port Data Sampling ( MLPDS )
Microprocessors use ‘load ports’ to perform load operations from memory or IO. During a load operation, the load port receives data from the memory or IO subsystem and then provides the data to the CPU registers and operations in the CPU’s pipelines.
In some implementations, the writeback data bus within each load port can retain data values from older load operations until newer load operations overwrite that data.
MLPDS can reveal stale load port data to malicious actors when:
- A faulting/assisting SSE/AVX/AVX-512 load is more than 64 bits in size, or
- A faulting/assisting load spans a 64-byte boundary.
In the above cases, the load operation speculatively provides stale data values from the internal data structures to dependent operations. Speculatively forwarding this data does not modify program execution, but it can be used as a gadget to speculatively infer the contents of a victim process’s data values by timing accesses to the load port.
CVE-2018-12130 - Microarchitectural Fill Buffer Data Sampling ( MFBDS )
This issue carries the most risk and is the one Red Hat has rated as Important. A flaw was found by researchers in the implementation of the fill buffers used by Intel microprocessors.
A fill buffer holds data associated with a miss in the processor's Level 1 (L1) data cache, that is, an attempt to use a value that is not currently present. When an L1 data cache miss occurs within an Intel core, the fill buffer design allows the processor to continue with other operations while the value being accessed is loaded from higher levels of cache. The design also allows the result to be forwarded directly to the Execution Unit, satisfying the load without the data first being written into the L1 data cache.
A load operation is not decoupled in the same way that a store is, but it does involve an Address Generation Unit (AGU) operation. If the AGU generates a fault (#PF, etc.) or an assist (A/D bits), the classical Intel design would block the load and later reissue it. In contemporary designs, subsequent speculative operations are instead temporarily allowed to see a forwarded data value from the fill buffer slot before the load actually takes place. It is therefore possible to read data that was recently accessed by another thread if the fill buffer entry has not been overwritten.
This issue is referred to by researchers as RIDL or ZombieLoad.
CVE-2019-11091 - Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
A flaw was found in the implementation of the "fill buffer," a mechanism used by modern CPUs when a cache-miss is made on L1 CPU cache. If an attacker can generate a load operation that would create a page fault, the execution will continue speculatively with incorrect data from the fill buffer, while the data is fetched from higher-level caches. This response time can be measured to infer data in the fill buffer.
Acknowledgements
Red Hat thanks Intel and industry partners for reporting this issue and for collaborating on the mitigations.
Additionally, Red Hat thanks the original reporters:
Microarchitectural Store Buffer Data Sampling (MSBDS) - CVE-2018-12126
This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently reported by Lei Shi - Qihoo - 360 CERT and by Marina Minkin, Daniel Moghimi, Moritz Lipp, Michael Schwarz, Jo Van Bulck, Daniel Genkin, Daniel Gruss, Berk Sunar, Frank Piessens, Yuval Yarom (University of Michigan, Worcester Polytechnic Institute, Graz University of Technology, imec-DistriNet, KU Leuven, University of Adelaide).
Microarchitectural Load Port Data Sampling (MLPDS) - CVE-2018-12127
This vulnerability was found internally by Intel employees and Microsoft. Intel would like to thank Brandon Falk – Microsoft Windows Platform Security Team, Ke Sun, Henrique Kawakami, Kekai Hu, and Rodrigo Branco - Intel. It was independently reported by Matt Miller – Microsoft, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam.
Microarchitectural Fill Buffer Data Sampling (MFBDS) - CVE-2018-12130
This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently reported by Giorgi Maisuradze – Microsoft Research, and by Dan Horea Lutas, and Andrei Lutas - Bitdefender, and by Volodymyr Pikhur, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam, and by Moritz Lipp, Michael Schwarz, and Daniel Gruss - Graz University of Technology.
Microarchitectural Data Sampling Uncacheable Memory (MDSUM) - CVE-2019-11091
This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently found by Volodymyr Pikhur, and by Moritz Lipp, Michael Schwarz, Daniel Gruss - Graz University of Technology, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam.
Additional References
For more information about this class of issues, please refer to Intel’s Website
KCS: Simultaneous Multithreading in Red Hat Enterprise Linux
KCS: Disabling Hyper-Threading
KCS: CPU Side Channel Attack Index Page
KCS: Microcode availability for Pre-Haswell CPUs
KCS: Applying MDS CVE's patches on RHV hosts and manager node
KCS: RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws
Video: All about MDS in about 3 minutes
Video: Longform MDS Technical Explanation
Blog: Deeper Look at the MDS Vulnerability
Blog: Modern IT security: Sometimes caring is NOT sharing
Impacted Products
Red Hat Product Security has rated this update as having a security impact of Important.
The following Red Hat product versions are impacted:
Red Hat Enterprise Linux 5
Red Hat Enterprise Linux 6
Red Hat Enterprise Linux 7
Red Hat Enterprise Linux 8
Red Hat Atomic Host
Red Hat Enterprise MRG 2
Red Hat OpenShift Online v3
Red Hat Virtualization (RHV/RHV-H)
Red Hat OpenStack Platform
While Red Hat's Linux Containers are not directly impacted by third-party hardware vulnerabilities, their security relies upon the integrity of the host kernel environment. Red Hat recommends that you use the most recent versions of your container images. The Container Health Index, part of the Red Hat Container Catalog, can always be used to verify the security status of Red Hat containers. To protect the confidentiality of data in the containers in use, you will need to ensure that the container host (such as Red Hat Enterprise Linux or Atomic Host) has been updated against these attacks. Red Hat has released an updated Atomic Host for this use case.
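For Atomic Host specifically, a minimal sketch of picking up the respun image, assuming the host is registered and can reach the updated content, is to upgrade and reboot into the new deployment:
# atomic host upgrade
# systemctl reboot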
Attack Vector | Vulnerable? | If Vulnerable, how? | Mitigation |
---|---|---|---|
Local user process against host | yes | read protected memory | kernel MDS patches + microcode + disable HT |
Local user process against another user process | yes | read protected memory | kernel MDS patches + microcode + disable HT |
Guest against another guest | yes | read protected memory | kernel MDS patches + microcode + disable HT |
Guest against host | yes | read protected memory | kernel MDS patches + microcode + disable HT |
Host user against guest | yes | read protected memory | kernel MDS patches + microcode + disable HT |
Container against host | yes | read protected memory | kernel MDS patches + microcode + disable HT |
Container against another container | yes | read protected memory | kernel MDS patches + microcode + disable HT |
In multi-tenant systems where the Host has disabled HT, different guests should not have access to threads on the same core and should not be vulnerable. Host performance and overall availability of resources will be impacted.
In multi-tenant systems where the Host has HT enabled and the hypervisor is vulnerable, guests will also be vulnerable whether or not they have HT disabled.
In multi-tenant systems where the Host has HT enabled and the Hypervisor is not vulnerable, guests should consider disabling HT to protect themselves.
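As a quick check, on kernels new enough to expose the SMT control interface in sysfs (an assumption; older kernels do not provide these files), the current SMT state of a host or guest can be read with:
# cat /sys/devices/system/cpu/smt/active
# cat /sys/devices/system/cpu/smt/control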
Diagnose your vulnerability
Use the detection script to determine if your system is currently vulnerable to this flaw. To verify the legitimacy of the script, you can download the detached GPG signature as well.
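For example, assuming the script and its signature were saved locally as mds-detect.sh and mds-detect.sh.asc (illustrative filenames; the downloaded names will differ) and that the Red Hat security public key has been imported into your GPG keyring, the signature can be checked with:
# gpg --verify mds-detect.sh.asc mds-detect.sh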
For subscribers running Red Hat Virtualization products, a knowledgebase article has been created to verify OEM-supplied microcode/firmware has been applied.
After applying the relevant updates, users can check that the mitigations are in effect by running either of the following:
# dmesg | grep "MDS:"
[ 0.162571] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[ 181.862076] MDS: Mitigation: Clear CPU buffers
# cat /sys/devices/system/cpu/vulnerabilities/mds
Mitigation: Clear CPU buffers; SMT vulnerable
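Independently of the MDS-specific entries above, it can be useful to confirm which microcode revision is actually loaded before and after updating microcode_ctl; these are generic checks, not MDS-specific:
# grep -m1 microcode /proc/cpuinfo
# dmesg | grep microcode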
Take Action
Red Hat customers running affected versions of these Red Hat products are strongly recommended to update them as soon as errata are available. Customers are urged to apply the available updates immediately and enable the mitigations they feel are appropriate.
The order in which the patches are applied is not important, but after updating firmware and the hypervisor, every system/virtual machine will need to be powered off and restarted to recognize the new hardware type.
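As an illustration only (the exact package set depends on the products installed; see the table below for the advisories that apply to each product), a bare-metal RHEL host would typically pick up the kernel and microcode updates with:
# yum update kernel microcode_ctl
# reboot
Virtualization hosts additionally need the updated qemu-kvm/libvirt packages listed below, and guests must be fully powered off and started again rather than simply rebooted.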
Updates for Affected Products
Product | Package | Advisory/Update |
---|---|---|
Red Hat Enterprise Linux 8 (z-stream) | kernel | RHSA-2019:1167 |
Red Hat Enterprise Linux 8 | kernel-rt | RHSA-2019:1174 |
Red Hat Enterprise Linux 8 | virt:rhel | RHSA-2019:1175 |
Red Hat Enterprise Linux 8 [6] | microcode_ctl | RHEA-2019:1211 |
Red Hat Enterprise Linux 7 (z-stream) | kernel | RHSA-2019:1168 |
Red Hat Enterprise Linux 7 | kernel-rt | RHSA-2019:1176 |
Red Hat Enterprise Linux 7 | qemu-kvm | RHSA-2019:1178 |
Red Hat Enterprise Linux 7 | qemu-kvm-rhev | RHSA-2019:1179 |
Red Hat Enterprise Linux 7 | libvirt | RHSA-2019:1177 |
Red Hat Enterprise Linux 7 [6] | microcode_ctl | RHEA-2019:1210 |
Red Hat Enterprise Linux 7.5 Extended Update Support [1] | kernel | RHSA-2019:1155 |
Red Hat Enterprise Linux 7.5 Extended Update Support [1] | qemu-kvm | RHSA-2019:1183 |
Red Hat Enterprise Linux 7.5 Extended Update Support [1] | libvirt | RHSA-2019:1182 |
Red Hat Enterprise Linux 7.5 Extended Update Support [1], [6] | microcode_ctl | RHEA-2019:1213 |
Red Hat Enterprise Linux 7.4 Extended Update Support [1] | kernel | RHSA-2019:1170 |
Red Hat Enterprise Linux 7.4 Extended Update Support [1] | qemu-kvm | RHSA-2019:1185 |
Red Hat Enterprise Linux 7.4 Extended Update Support [1] | libvirt | RHSA-2019:1184 |
Red Hat Enterprise Linux 7.4 Extended Update Support [1], [6] | microcode_ctl | RHEA-2019:1214 |
Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2], [3] | kernel | RHSA-2019:1171 |
Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2], [3] | qemu-kvm | RHSA-2019:1189 |
Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2], [3] | libvirt | RHSA-2019:1187 |
Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2], [3], [6] | microcode_ctl | RHEA-2019:1215 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2], [3] | kernel | RHSA-2019:1172 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2], [3] | qemu-kvm | RHSA-2019:1188 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2], [3] | libvirt | RHSA-2019:1186 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2], [3], [6] | microcode_ctl | RHEA-2019:1216 |
Red Hat Enterprise Linux 6 (z-stream) | kernel | RHSA-2019:1169 |
Red Hat Enterprise Linux 6 | qemu-kvm | RHSA-2019:1181 |
Red Hat Enterprise Linux 6 | libvirt | RHSA-2019:1180 |
Red Hat Enterprise Linux 6 [6] | microcode_ctl | RHEA-2019:1212 |
Red Hat Enterprise Linux 6.6 Advanced Update Support [2] | kernel | RHSA-2019:1193 |
Red Hat Enterprise Linux 6.6 Advanced Update Support [2] | qemu-kvm | RHSA-2019:1195 |
Red Hat Enterprise Linux 6.6 Advanced Update Support [2] | libvirt | RHSA-2019:1194 |
Red Hat Enterprise Linux 6.6 Advanced Update Support [2], [6] | microcode_ctl | RHEA-2019:1218 |
Red Hat Enterprise Linux 6.5 Advanced Update Support [2] | kernel | RHSA-2019:1196 |
Red Hat Enterprise Linux 6.5 Advanced Update Support [2] | qemu-kvm | RHSA-2019:1198 |
Red Hat Enterprise Linux 6.5 Advanced Update Support [2] | libvirt | RHSA-2019:1197 |
Red Hat Enterprise Linux 6.5 Advanced Update Support [2], [6] | microcode_ctl | RHEA-2019:1219 |
Red Hat Enterprise Linux 5 Extended Lifecycle Support [5] | kernel | see below |
RHEL Atomic Host [4] | kernel | respin complete |
Red Hat Enterprise MRG 2 | kernel-rt | RHSA-2019:1190 |
Red Hat Virtualization 4 [7] | vdsm | RHSA-2019:1203 |
Red Hat Virtualization 4.2 [7] | vdsm | RHSA-2019:1204 |
Red Hat Virtualization 4.3 [7] | rhvm-setup-plugins | RHSA-2019:1205 |
Red Hat Virtualization 4.2 [7] | rhvm-setup-plugins | RHSA-2019:1206 |
Red Hat Virtualization 4 [7] | virtualization host | RHSA-2019:1207 |
Red Hat Virtualization 4 [7] | rhvm-appliance | RHSA-2019:1208 |
Red Hat Virtualization 4.2 [7] | virtualization host | RHSA-2019:1209 |
Red Hat OpenStack Platform 14 (Rocky) [8] | qemu-kvm-rhev | RHSA-2019:1202 |
Red Hat OpenStack Platform 14 (Rocky) [8] | container image | RHBA-2019:1242 |
Red Hat OpenStack Platform 13 (Queens) [8] | qemu-kvm-rhev | RHSA-2019:1201 |
Red Hat OpenStack Platform 13 (Queens) [8] | container image | RHBA-2019:1241 |
Red Hat OpenStack Platform 10 (Newton) [8] | qemu-kvm-rhev | RHSA-2019:1200 |
Red Hat OpenStack Platform 9 (Mitaka) [8] | qemu-kvm-rhev | RHSA-2019:1199 |
[1] An active EUS subscription is required for access to this patch. Please contact Red Hat sales or your specific sales representative for more information if your account does not have an active EUS subscription.
What is the Red Hat Enterprise Linux Extended Update Support Subscription?
[2] An active AUS subscription is required for access to this patch in RHEL AUS.
What is Advanced mission critical Update Support (AUS)?
[3] An active Update Services for SAP Solutions Add-on or TUS subscription is required for access to this patch in RHEL E4S / TUS.
[4] For details on how to update Red Hat Enterprise Atomic Host, please see Deploying a specific version of Red Hat Enterprise Atomic Host.
[5] At this time, based on the severity of these issues, where Red Hat Enterprise Linux 5 is in its support lifecycle, and the low number of CPU types that will have available microcode that is required for these mitigations, RHEL5 will not be addressed. Please contact Red Hat Support for available upgrade paths and options.
FAQ: Red Hat Enterprise Linux 5 Extended Life Cycle Support (ELS) Add-On
[6] Availability of Updated Intel CPU Microcode Addressing Microarchitectural Data Sampling (MDS) Vulnerability / Microcode availability for Pre-Haswell CPUs
[7] Applying MDS CVE's patches on RHV hosts and manager node
[8] RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws
Mitigation
There is no known complete mitigation other than applying vendor software updates combined with hardware OEM-provided CPU microcode/firmware or using non-vulnerable microprocessors. All Red Hat customers should apply vendor solutions to patch their CPUs and update the kernel as soon as patches are available. Please consult with your system OEM provider or CPU manufacturer for additional details on microcode availability for supported platforms.
Disabling SMT for affected systems will reduce some of the attack surface, but will not completely eliminate all threats from these vulnerabilities. To mitigate the risks these vulnerabilities introduce, systems will need updated microcode, an updated kernel, and virtualization patches, and administrators will need to evaluate whether disabling SMT/HT is the right choice for their deployments. Additionally, the mitigations may have a performance impact on applications. See the article Disabling Hyper-Threading for information on disabling SMT.
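As a sketch of what disabling SMT looks like in practice (assuming a kernel new enough to expose the runtime SMT control file; older kernels only support the boot-time parameter described in the article above):
# echo off > /sys/devices/system/cpu/smt/control
The change takes effect immediately but does not persist across reboots; the article above covers making it persistent via the nosmt kernel boot parameter or firmware settings.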
Customers are advised to take a risk-based approach to mitigating this issue. Systems that require high degrees of security and trust should be addressed first and should be isolated from untrusted systems until such time as treatments can be applied to those systems to reduce the risk of exploit.
Ansible playbook
Additionally, an Ansible playbook, disable_mds_smt_mitigate.yml, is provided below. This playbook will disable SMT on the running system, disable SMT for future system reboots, and apply related updates. To use the playbook, specify the hosts you'd like to disable SMT on with the HOSTS extra var:
ansible-playbook -e HOSTS=web,mail,ldap04 disable_mds_smt_mitigate.yml
The playbook can also add command line arguments to prevent runtime re-enabling of SMT by setting the FORCEOFF variable true:
ansible-playbook -e HOSTS=hypervisors -e FORCEOFF=true disable_mds_smt_mitigate.yml
To verify the legitimacy of the playbook, you can download the detached GPG signature.
Performance Impact and Disabling MDS
The MDS CVE mitigations have been shown to cause a performance impact. The impact is felt more in applications with high rates of user-kernel-user space transitions, such as system calls, NMIs, and interrupts.
Although there is no way to say what the impact will be for any given workload, in our testing we measured:
- Applications that spend a lot of time in user mode tended to show the smallest slowdown, usually in the 0-5% range.
- Applications that did a lot of small block or small packet network I/O showed slowdowns in the 10-25% range.
- Some microbenchmarks that did nothing other than enter and return from user space to kernel space showed higher slowdowns.
The performance impact of the MDS mitigation can be measured by running your application with the mitigation enabled and then disabled. MDS mitigation is enabled by default. It can be fully enabled, with SMT also disabled, by adding the “mds=full,nosmt” flag to the kernel boot command line, and it can be fully disabled by adding the “mds=off” flag to the kernel boot command line. There is no way to disable it at runtime.
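For example, assuming grubby is used to manage kernel boot entries on the system, the flag can be added persistently and the result checked after a reboot:
# grubby --update-kernel=ALL --args="mds=full,nosmt"
# reboot
# cat /sys/devices/system/cpu/vulnerabilities/mds
Replace the argument with "mds=off" to measure the workload with the mitigation disabled.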
For the performance impact of disabling hyperthreading, see the “Disabling Hyper-Threading” section at https://access.redhat.com/security/vulnerabilities/L1TF-perf
Comments
I'm a bit confused on the fix process. Are the Ansible scripts only for mitigation if you don't yet apply the updates? Or:
1) Do you have to apply the updates AND run the Ansible scripts?
2) Do the package updates alone disable MT after power-down and boot?
3) Do you have to also manually disable MT or anything at the hardware BIOS level, like in VMWARE virtual BIOS, or are the package updates enough (and a full power-down and boot)?
Thanks!
Hi Michael, Updating the packages alone won't disable SMT. The playbook will disable SMT for the running system, it will change the system's boot settings so that SMT is disabled at boot time, and it will apply updates related to this issue. You'll still need to reboot the system, as only new kernels have mitigations in place. Regarding disabling hyperthreading in your system BIOS, that varies from system to system, but generally speaking the playbook should make that unnecessary.
After you've updated and run the playbook or disabled SMT by other means, you can run the detection script provided on the Diagnose tab of this page to check your system's status.
OK -- Thanks! Just to confirm (not sure if I'm taking the docs too literally) -- does this actually require a cold boot or full powerdown, and not a simple reboot, or can a regular reboot work as long as the Ansible stuff has been run beforehand? Or is the cold boot only needed if HT is disabled at the BIOS level rather than at the kernel param level via Ansible?
Correct, a reboot should suffice if SMT is disabled by a kernel argument.
I've got a quick question about the mitigation procedure.
After applying the Ansible playbook and rebooting, I still get the output below. Any idea why my system is still Vulnerable?
[root@server tmp]# ./cve-2018-12130--2019-05-14-1319.sh
This script (v1.0) is primarily designed to detect CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, and CVE-2019-11091 on supported Red Hat Enterprise Linux systems and kernel packages. Result may be inaccurate for other RPM based systems.
Detected CPU vendor: Intel
CPU: Intel(R) Xeon(R) CPU E7-8867 v3 @ 2.50GHz
CPU model: 63 (0x3f)
Running kernel: 3.10.0-957.12.2.el7.x86_64
Architecture: x86_64
Virtualization: vmware
Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
* CPU microcode update is not detected
[root@server tmp]# dmesg | grep "MDS:"
[ 0.059810] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[root@server tmp]# cat /sys/devices/system/cpu/vulnerabilities/mds
Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
One side note -- the Ansible playbook does not seem to work with the default Python on RHEL6 (Python 2.6) for the Ansible controller. My Ansible system is still on RHEL6, since I was waiting to go straight to RHEL8, and the syntax blew up immediately. I had to install a 3rd-party Python and use Ansible with that before it would work. Both Python 2.7 and 3.7 worked for me. I was an early adopter of Ansible, using it pre-v1.0, but I'm guessing there probably aren't many using it with RHEL6; in case there are still some people doing so and the playbook doesn't work -- it's the Python version. Note that I am referring to the Python for the Ansible controller, not managed/target systems.
You can decide the right way to deploy the fixes in your environment. The Playbook automates the patches, if you're comfortable with that. If not, "yum update" also is a great tried and true method if you need to move at a different pace.
HT is NOT disabled without manual intervention for users not deploying the Playbook. That's a risk assessment you need to make for your particular deployment, and then you can disable it via the command line. Via the command line you can either set it to be persistent and then reboot for it to take effect, or, if you have a newer kernel, you can dynamically turn it off and on.
Thanks! I had one last question in reply above.
Can a yum update alone fix the above vulnerability, or do other things also need to be updated? Please help.
Hi Bhupendra, under the Resolve tab there is a section called Mitigation which discusses this further - customers will need to either be using non-vulnerable CPUs, or, if vulnerable, take steps to apply microcode updates to the CPU along with kernel and virtualisation packages. Additionally, due to the SMT aspect (as discussed under the Diagnose tab), you will want to evaluate your workloads/environments to determine whether changes are needed to your SMT settings. Under Overview we also have additional resources, such as blog posts and technical articles.
I'm sorry to say that I'm still a bit confused. To completely mitigate these vulnerabilities, we have to:
Disabling SMT/HT is just a way to lessen (not eliminate) the attack surface until the 3 above steps can be done. Right? Disabling SMT/HT is not a 4th step that must be done in order to fully mitigate these, right?
Just like all the previous issues, you must apply the CPU microcode_ctl update, you must apply the relevant Red Hat patches (kernel, libvirt, qemu-kvm, etc.), and to guarantee your system recognizes the uCode, a reboot is strongly recommended. Your particular circumstance may or may not see the uCode without a reboot, but for consistency we ask for one to be done. Disabling SMT/HT is an option that should be considered if you are running multi-tenant workloads with sensitive data.
Your title ("MDS - Microarchitectural Store Buffer Data (...)") suggests MDS stands for "Microarchitectural Store Buffer Data, yet several other sources like https://en.wikipedia.org/wiki/Microarchitectural_Data_Sampling think MDS stands for "Microarchitectural Data Sampling".
Is the title correct or false?
Hi, MDS as you mention specifically stands for Microarchitectural Data Sampling. I can see how the title as it is, could cause some confusion.
Just for Info: There is an error in the ansible playbook in following line (perhaps also in other b64decoded strings):
Instead you should check for
We will look into it. Thanks for the report.
This issue was addressed in version 1.1 of the playbook. Thanks for letting us know.
Is there any ETA for fixed package/image for "Red Hat OpenStack Platform 10+"?
Hi, the qemu-kvm-rhev RPM's were released overnight for the various OpenStack versions. We are respining the container images and we will update the table when we are notified that they are available to use. The table now links to the various qemu-kvm-rhev RHSA's. Regards, Cliff
Hi, I'm pleased to say that the OpenStack 13 and 14 Container Images have also been published and are now available to download from our Container Catalog. Links to the RHBA's are also listed under the Resolve tab. Regards, Cliff
Is it sufficient to apply microcode released by RHEL microcode_ctl ? OR Do we need to apply micro codes from vendors [like Lenovo] as well like it was recommended in case of Spectre/Meltdown Variant 2?
The microcode_ctl package has been supplied straight from Intel to us. It has the most current publicly available updates from them (newer updates are planned for additional CPUs over the coming days/weeks, and we'll reissue the package accordingly). OEMs may make additional changes to their BIOS code to unlock special vendor features or performance tweaks. Check with your system OEM/integrator to ensure they have not made additional modifications (or to learn what those changes might be). The package we supply will work with the CPUs in scope, but might not contain those additional changes.
Hello,
Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
microcode_ctl-1.17-33.11.el6_10.x86_64
dmesg | grep "MDS:"MDS: Vulnerable: Clear CPU buffers attempted, no microcode
cat /sys/devices/system/cpu/smt/controloff
Could you please confirm if microcode_ctl package from RedHat mitigates this vulnerability or we need some other update here ?
Thanks, Ruchi Kaushal
We have been facing the same problem as Ruchi. Any updates?
Hi, for both Ruchi Kaushal and Yukinori Yokoshima - I'm sorry that this isn't working as expected. It is possible that the microcode provided to us by Intel doesn't cover your CPU. Please review the article https://access.redhat.com/articles/4138151, which lists the covered CPUs in detail. While it covers most CPUs, it doesn't cover all CPUs currently supported by Intel. If you feel that your CPU is on the list and so should be working, please contact Red Hat Support and allow us to investigate closer with you. Red Hat does intend to release future microcode with additional CPUs, as we get them from Intel and they pass our internal testing. Regards, Cliff
Hi Cliff, Thank you for your reply. I confirmed our CPU is on the list ( model 0x3f, stepping 0x02 ). We are contacting our support now. Regards, Yukinori.
Hello RedHat,
when I ran: dmesg | grep "MDS:" or this one: cat /sys/devices/system/cpu/vulnerabilities/mds I don't see any output, which means I do not have to worry, correct? Also, does this affect only RHEL 6, or other versions too? Thanks,
Hi, ALL versions of RHEL are impacted, if running on most Intel CPUs. If you have not yet applied the updated kernel and rebooted, then you would see no MDS messages. If you are using an Intel CPU, and seeing this after a reboot with new kernel, then please can you contact Red Hat Support for assistance, and/or troubleshooting. Regards, Cliff
It's my understanding that at least for some of these vulnerabilities, the server has to have SMT enabled, meaning that we allow multiple threads per core. So, I'm guessing that if SMT is either disabled or unsupported, or we're only configured to allow one thread per core, then we should be fine. Is this correct?
SMT is only one factor, which if left enabled could expose data for untrusted or multi-tenant environments where multiple threads share the same core. You will still need to apply updated kernel, microcode and virtualization packages to secure the environment from the non-SMT attacks, which can happen for all listed CVEs.
Hi... On the 'diagnose' tab, it states: "In multi-tenant systems where the Host has HT enabled and the Hypervisor is not vulnerable, guests should consider disabling HT to protect themselves." We are having a hard time envisioning this. Can you give an example of such a Hypervisor where the guest can independently disable HT? Not looking for details, just an example scenario.
When can we expect the update for RHEL Atomic Host? It has been saying 'respin pending' for days now.
Hi Ferry - sorry for the lag in getting the page updated. The atomic respin happened a few days back and should be available to you now. If you subscribe to our errata feed you should have seen an RHBA for it.
For instances running on Amazon EC2, the reason the script reports "Vulnerable" is that "no microcode" is shown. Amazon EC2 engineering reports that the identification of instances as vulnerable is the result of the missing md_clear CPUID flag. Currently, this flag is not passed to customer instances. Amazon EC2 engineering reports that customers are protected once the kernel patches are applied.
Even though this diagnostic script may identify the infrastructure as "Vulnerable," after the RHEL kernel is updated, instances are patched. All EC2 host infrastructure has been updated with the new protections Intel published in a new security advisory notification on 2019-05-14 at 17:00 UTC (INTEL-SA-00233), and no further action is required at the infrastructure level.
https://aws.amazon.com/security/security-bulletins/AWS-2019-004/
The above bulletin addresses the following CVE Identifiers: CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091 and also addresses Xen Security Advisories: XSA-297
Hi David - I'm going to arrange a conversation with our DevOps team to discuss how the CPU ID looks that's passed to the kernel in an AWS instance. Look from a email from us shortly so we can work through this.
It says that it can be fixed by yum update or by updating the kernel to the latest version, yet it is still showing the same vulnerability? I also ran the insights-client just to make sure the status is current. Thanks
I've downloaded and run the playbook successfully which also took care of rebooting the system. Insights was still showing the same vulnerabilities so I downloaded and ran the detection script which shows:
This script (v1.2) is primarily designed to detect CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, and CVE-2019-11091 on supported Red Hat Enterprise Linux systems and kernel packages. Result may be inaccurate for other RPM based systems.
Detected CPU vendor: Intel
CPU: Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
CPU model: 85 (0x55)
Running kernel: 3.10.0-1160.59.1.el7.x86_64
Architecture: x86_64
Virtualization: hyperv
Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
For more information about the vulnerabilities see: * https://access.redhat.com/security/vulnerabilities/mds
For more information about correctly enabling mitigations in VMs, see: * https://access.redhat.com/articles/3331571 (This is Spectre related article, but the steps are the same)
So it's looking like the playbook reports success but wasn't actually successful? Any ideas?