MDS - Microarchitectural Data Sampling - CVE-2018-12130, CVE-2018-12126, CVE-2018-12127, and CVE-2019-11091

Status: Resolved
Impact: Important

Executive Summary

Four new microprocessor flaws have been discovered, the most severe of which is rated by Red Hat Product Security as having an Important impact. These flaws, if exploited by an attacker with local shell access to a system, could allow data in the CPU's cache to be exposed to unauthorized processes. While these attacks are difficult to execute, a skilled attacker could use the flaws to read memory from a virtual or containerized instance, or from the underlying host system. Red Hat has mitigations prepared for affected systems and has detailed the steps customers should take as they evaluate their exposure and formulate their response.

Issue Details and Background Information

Red Hat has been made aware of a series of microarchitectural (hardware) implementation issues that could allow an unprivileged local attacker to bypass conventional memory security restrictions and gain read access to privileged memory that would otherwise be inaccessible. These flaws could also be exploited by malicious code running within a container. The issues affect many modern Intel microprocessors and require updates to the Linux kernel, the virtualization stack, and CPU microcode. CVE-2018-12130 has been assigned a severity impact of Important; CVE-2018-12126, CVE-2018-12127, and CVE-2019-11091 are considered Moderate severity.

At this time, these specific flaws are only known to affect Intel-based processors, although Red Hat Product Security expects researchers to continue probing for unrelated Simultaneous Multi Threading (SMT) vulnerabilities across a wide range of vendors.

Flaws were found in the manner in which Intel microprocessor designs implement several performance micro-optimizations. Exploitation of the vulnerabilities provides attackers with a side channel through which to access recently used data on the system belonging to other processes, containers, virtual machines, or the kernel.

These vulnerabilities are referred to as Microarchitectural Data Sampling (MDS) because they rely on speculation to obtain state left within internal CPU structures.

CVE-2018-12126 - Microarchitectural Store Buffer Data Sampling ( MSBDS )

A flaw was found in many Intel microprocessor designs related to a possible information leak from the processor store buffer, a structure which contains recent stores (writes) to memory.

Modern Intel microprocessors implement hardware-level micro-optimizations to improve the performance of writing data back to CPU caches. The write operation is split into STA (STore Address) and STD (STore Data) sub-operations, allowing the processor to hand off address generation and data movement independently for optimized writes. Both of these sub-operations write to a shared distributed processor structure called the 'processor store buffer'.

The processor store buffer is conceptually a table of address, value, and 'is valid' entries. As the sub-operations can execute independently of each other, they can each update the address and/or value columns of the table independently. This means that at different points in time the address or value may be invalid.


The processor may speculatively forward entries from the store buffer. The split design used allows for such forwarding to speculatively use stale values, such as the wrong address, returning data from a previous unrelated store. Since this only occurs for loads that will be reissued following the fault/assist resolution, the program is not architecturally impacted, but store buffer state can be leaked to malicious code carefully crafted to retrieve this data via side-channel analysis.

The processor store buffer entries are equally divided between the number of active Hyper-Threads. Conditions such as power-state change can reallocate the processor store buffer entries in a half-updated state to another thread without ensuring that the entries have been cleared.

This issue is referred to by the researchers as Fallout.


CVE-2018-12127 - Microarchitectural Load Port Data Sampling ( MLPDS )

Microprocessors use ‘load ports’ to perform load operations from memory or IO. During a load operation, the load port receives data from the memory or IO subsystem and then provides the data to the CPU registers and operations in the CPU’s pipelines.

In some implementations, the writeback data bus within each load port can retain data values from older load operations until newer load operations overwrite that data.

MLPDS can reveal stale load port data to malicious actors when:

  •  A faulting/assisting SSE/AVX/AVX-512 load that is more than 64 bits in size occurs, or
  •  A faulting/assisting load spans a 64-byte boundary.

In the above cases, the load operation speculatively provides stale data values from the internal data structures to dependent operations. Speculatively forwarding this data does not modify program execution, but it can be used as a gadget to speculatively infer the contents of a victim process's data by timing access to the load port.


CVE-2018-12130 - Microarchitectural Fill Buffer Data Sampling ( MFBDS )

This issue carries the most associated risk, and Red Hat has rated it as Important. A flaw was found by researchers in the implementation of fill buffers used by Intel microprocessors.

A fill buffer holds data that has missed in the processor L1 data cache, as a result of an attempt to use a value that is not present. When a Level 1 data cache miss occurs within an Intel core, the fill buffer design allows the processor to continue with other operations while the value to be accessed is loaded from higher levels of cache. The design also allows the result to be forwarded to the Execution Unit, acquiring the load directly without being written into the Level 1 data cache.

A load operation is not decoupled in the same way that a store is, but it does involve an Address Generation Unit (AGU) operation. If the AGU generates a fault (#PF, etc.) or an assist (A/D bits) then the classical Intel design would block the load and later reissue it. In contemporary designs, it instead allows subsequent speculation operations to temporarily see a forwarded data value from the fill buffer slot prior to the load actually taking place. Thus it is possible to read data that was recently accessed by another thread if the fill buffer entry is not overwritten.

This issue is referred to by researchers as RIDL or ZombieLoad.

CVE-2019-11091 - Microarchitectural Data Sampling Uncacheable Memory (MDSUM)

A flaw was found in the implementation of the "fill buffer," a mechanism used by modern CPUs when a miss occurs in the L1 CPU cache. If an attacker can generate a load operation that would create a page fault, execution continues speculatively with incorrect data from the fill buffer while the data is fetched from higher-level caches. This response time can be measured to infer data in the fill buffer.

Acknowledgements

Red Hat thanks Intel and industry partners for reporting this issue and collaborating on the mitigations.
 
Additionally Red Hat thanks the original reporters:
Microarchitectural Store Buffer Data Sampling (MSBDS) - CVE-2018-12126
This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently reported by Lei Shi - Qihoo - 360 CERT and by Marina Minkin, Daniel Moghimi, Moritz Lipp, Michael Schwarz, Jo Van Bulck, Daniel Genkin, Daniel Gruss, Berk Sunar, Frank Piessens, Yuval Yarom (University of Michigan, Worcester Polytechnic Institute, Graz University of Technology, imec-DistriNet, KU Leuven, University of Adelaide).
 
Microarchitectural Load Port Data Sampling (MLPDS) - CVE-2018-12127
This vulnerability was found internally by Intel employees and Microsoft.  Intel would like to thank Brandon Falk – Microsoft Windows Platform Security Team, Ke Sun, Henrique Kawakami, Kekai Hu, and Rodrigo Branco - Intel. It was independently reported by Matt Miller – Microsoft, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam.
 
Microarchitectural Fill Buffer Data Sampling (MFBDS) - CVE-2018-12130

This vulnerability was found internally by Intel employees.  Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently reported by Giorgi Maisuradze – Microsoft Research, and by Dan Horea Lutas, and Andrei Lutas - Bitdefender, and by Volodymyr Pikhur, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam, and by Moritz Lipp, Michael Schwarz, and Daniel Gruss - Graz University of Technology.

Microarchitectural Data Sampling Uncacheable Memory (MDSUM) - CVE-2019-11091

This vulnerability was found internally by Intel employees. Intel would like to thank Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco. It was independently found by Volodymyr Pikhur, and by Moritz Lipp, Michael Schwarz, Daniel Gruss - Graz University of Technology, and by Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida - VUSec group at VU Amsterdam.

Additional References

For more information about this class of issues, please refer to Intel’s Website

KCS: Simultaneous Multithreading in Red Hat Enterprise Linux

KCS: Disabling Hyper-Threading

KCS: CPU Side Channel Attack Index Page

KCS: Microcode availability for Pre-Haswell CPUs

KCS: Availability of Updated Intel CPU Microcode Addressing Microarchitectural Data Sampling (MDS) Vulnerability 

KCS: Applying MDS CVE patches on RHV hosts and manager node

Video:  All about MDS in about 3 minutes

Video:  Longform MDS Technical Explanation

Blog: Deeper Look at the MDS Vulnerability

Blog: Modern IT security: Sometimes caring is NOT sharing 


Impacted Products

Red Hat Product Security has rated this update as having a security impact of Important.

The following Red Hat product versions are impacted:

  • Red Hat Enterprise Linux 5

  • Red Hat Enterprise Linux 6

  • Red Hat Enterprise Linux 7

  • Red Hat Enterprise Linux 8

  • Red Hat Atomic Host

  • Red Hat Enterprise MRG 2

  • Red Hat OpenShift Online v3

  • Red Hat Virtualization (RHV/RHV-H)

  • Red Hat OpenStack Platform 


    While Red Hat's Linux Containers are not directly impacted by third-party hardware vulnerabilities, their security relies upon the integrity of the host kernel environment. Red Hat recommends that you use the most recent versions of your container images. The Container Health Index, part of the Red Hat Container Catalog, can always be used to verify the security status of Red Hat containers. To protect the privacy of the containers in use, you will need to ensure that the Container host (such as Red Hat Enterprise Linux or Atomic Host) has been updated against these attacks. Red Hat has released an updated Atomic Host for this use case.


    Attack Vector Matrix
    Attack Vector | Vulnerable? | If Vulnerable, How? | Mitigation
    Local user process against host | yes | read protected memory | kernel MDS patches + microcode + disable HT
    Local user process against another user process | yes | read protected memory | kernel MDS patches + microcode + disable HT
    Guest against another guest | yes | read protected memory | kernel MDS patches + microcode + disable HT
    Guest against host | yes | read protected memory | kernel MDS patches + microcode + disable HT
    Host user against guest | yes | read protected memory | kernel MDS patches + microcode + disable HT
    Container against host | yes | read protected memory | kernel MDS patches + microcode + disable HT
    Container against another container | yes | read protected memory | kernel MDS patches + microcode + disable HT

    In multi-tenant systems where the Host has disabled HT, different guests should not have access to threads on the same core and should not be vulnerable. Host performance and overall availability of resources will be impacted.

    In multi-tenant systems where the Host has HT enabled and the hypervisor is vulnerable, guests will also be vulnerable whether or not they have HT disabled.

    In multi-tenant systems where the Host has HT enabled and the Hypervisor is not vulnerable, guests should consider disabling HT to protect themselves.
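    On updated kernels, whether SMT/HT is currently active can be read from sysfs. The sketch below (the helper name smt_state is ours, not from the advisory) reads the standard file and falls back gracefully on kernels that predate the interface:

```shell
# Report whether SMT is active (1) or inactive (0) on this host.
# The default path is the standard sysfs location on patched kernels;
# older kernels do not provide it.
smt_state() {
    f=${1:-/sys/devices/system/cpu/smt/active}
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "unknown (SMT sysfs interface not present)"
    fi
}

smt_state
```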


    Diagnose your vulnerability

    Use the detection script to determine if your system is currently vulnerable to this flaw. To verify the legitimacy of the script, you can download the detached GPG signature as well.

    Determine if your system is vulnerable

    Current version: 1.0

    For subscribers running Red Hat Virtualization products, a knowledgebase article has been created to verify OEM-supplied microcode/firmware has been applied.

    After applying the relevant updates, users can confirm the patches are in effect by running either of the following:

    # dmesg | grep "MDS:"
    [    0.162571] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
    [  181.862076] MDS: Mitigation: Clear CPU buffers

    # cat /sys/devices/system/cpu/vulnerabilities/mds
    Mitigation: Clear CPU buffers; SMT vulnerable
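    All of the kernel's speculative-execution vulnerability reports live in the same sysfs directory, so the status of MDS and its sibling issues can be summarized in one pass. A minimal sketch (the helper name mds_status and the directory override are ours, for illustration):

```shell
# Print the kernel's reported status for every known CPU vulnerability.
# The default directory is the standard sysfs location on patched kernels.
mds_status() {
    dir=${1:-/sys/devices/system/cpu/vulnerabilities}
    if [ -d "$dir" ]; then
        for f in "$dir"/*; do
            printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
        done
    else
        echo "no vulnerabilities directory: kernel predates this interface"
    fi
}

mds_status
```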

    Take Action

    Red Hat customers running affected versions of these Red Hat products are strongly recommended to update them as soon as errata are available, and to enable the mitigations they deem appropriate.
     
    The order in which the patches are applied is not important, but after updating firmware and hypervisors, every system and virtual machine will need to be powered off and restarted to recognize the new hardware type.
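    After the restart, one hedged way to confirm that updated CPU microcode was actually loaded is to check the revision the kernel reports (the helper name cpuinfo_microcode is ours; compare the revision against the value listed for your CPU in the microcode_ctl erratum):

```shell
# Show the microcode revision the running kernel sees for the first CPU.
# On x86, /proc/cpuinfo carries a 'microcode' field per logical CPU.
cpuinfo_microcode() {
    grep -m1 '^microcode' "${1:-/proc/cpuinfo}" || echo "no microcode field reported"
}

cpuinfo_microcode
```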

    Updates for Affected Products

    Product | Package | Advisory/Update
    Red Hat Enterprise Linux 8 (z-stream) | kernel | RHSA-2019:1167
    Red Hat Enterprise Linux 8 | kernel-rt | RHSA-2019:1174
    Red Hat Enterprise Linux 8 | virt:rhel | RHSA-2019:1175
    Red Hat Enterprise Linux 8 | microcode_ctl | RHEA-2019:1211
    Red Hat Enterprise Linux 7 (z-stream) | kernel | RHSA-2019:1168
    Red Hat Enterprise Linux 7 | kernel-rt | RHSA-2019:1176
    Red Hat Enterprise Linux 7 | qemu-kvm | RHSA-2019:1178
    Red Hat Enterprise Linux 7 | qemu-kvm-rhev | RHSA-2019:1179
    Red Hat Enterprise Linux 7 | libvirt | RHSA-2019:1177
    Red Hat Enterprise Linux 7 | microcode_ctl | RHEA-2019:1210
    Red Hat Enterprise Linux 7.5 Extended Update Support [1] | kernel | RHSA-2019:1155
    Red Hat Enterprise Linux 7.5 Extended Update Support [1] | qemu-kvm | RHSA-2019:1183
    Red Hat Enterprise Linux 7.5 Extended Update Support [1] | libvirt | RHSA-2019:1182
    Red Hat Enterprise Linux 7.5 Extended Update Support [1] | microcode_ctl | RHEA-2019:1213
    Red Hat Enterprise Linux 7.4 Extended Update Support [1] | kernel | RHSA-2019:1170
    Red Hat Enterprise Linux 7.4 Extended Update Support [1] | qemu-kvm | RHSA-2019:1185
    Red Hat Enterprise Linux 7.4 Extended Update Support [1] | libvirt | RHSA-2019:1184
    Red Hat Enterprise Linux 7.4 Extended Update Support [1] | microcode_ctl | RHEA-2019:1214
    Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2][3] | kernel | RHSA-2019:1171
    Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2][3] | qemu-kvm | RHSA-2019:1189
    Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2][3] | libvirt | RHSA-2019:1187
    Red Hat Enterprise Linux 7.3 Update Services for SAP Solutions, & Advanced Update Support [2][3] | microcode_ctl | RHEA-2019:1215
    Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2][3] | kernel | RHSA-2019:1172
    Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2][3] | qemu-kvm | RHSA-2019:1188
    Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2][3] | libvirt | RHSA-2019:1186
    Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [2][3] | microcode_ctl | RHEA-2019:1216
    Red Hat Enterprise Linux 6 (z-stream) | kernel | RHSA-2019:1169
    Red Hat Enterprise Linux 6 | qemu-kvm | RHSA-2019:1181
    Red Hat Enterprise Linux 6 | libvirt | RHSA-2019:1180
    Red Hat Enterprise Linux 6 | microcode_ctl | RHEA-2019:1212
    Red Hat Enterprise Linux 6.6 Advanced Update Support [2] | kernel | RHSA-2019:1193
    Red Hat Enterprise Linux 6.6 Advanced Update Support [2] | qemu-kvm | RHSA-2019:1195
    Red Hat Enterprise Linux 6.6 Advanced Update Support [2] | libvirt | RHSA-2019:1194
    Red Hat Enterprise Linux 6.6 Advanced Update Support [2] | microcode_ctl | RHEA-2019:1218
    Red Hat Enterprise Linux 6.5 Advanced Update Support [2] | kernel | RHSA-2019:1196
    Red Hat Enterprise Linux 6.5 Advanced Update Support [2] | qemu-kvm | RHSA-2019:1198
    Red Hat Enterprise Linux 6.5 Advanced Update Support [2] | libvirt | RHSA-2019:1197
    Red Hat Enterprise Linux 6.5 Advanced Update Support [2] | microcode_ctl | RHEA-2019:1219
    Red Hat Enterprise Linux 5 Extended Lifecycle Support [5] | kernel | see below
    RHEL Atomic Host [4] | kernel | respin pending
    Red Hat Enterprise MRG 2 | kernel-rt | RHSA-2019:1190
    Red Hat Virtualization 4 | vdsm | RHSA-2019:1203
    Red Hat Virtualization 4.2 | vdsm | RHSA-2019:1204
    Red Hat Virtualization 4.3 | rhvm-setup-plugins | RHSA-2019:1205
    Red Hat Virtualization 4.2 | rhvm-setup-plugins | RHSA-2019:1206
    Red Hat Virtualization 4 | virtualization host | RHSA-2019:1207
    Red Hat Virtualization 4 | rhvm-appliance | RHSA-2019:1208
    Red Hat Virtualization 4.2 | virtualization host | RHSA-2019:1209
    Red Hat OpenStack Platform 14 (Rocky) | qemu-kvm-rhev | RHSA-2019:1202
    Red Hat OpenStack Platform 14 (Rocky) | container image | RHBA-2019:1242
    Red Hat OpenStack Platform 13 (Queens) | qemu-kvm-rhev | RHSA-2019:1201
    Red Hat OpenStack Platform 13 (Queens) | container image | RHBA-2019:1241
    Red Hat OpenStack Platform 10 (Newton) | qemu-kvm-rhev | RHSA-2019:1200
    Red Hat OpenStack Platform 9 (Mitaka) | qemu-kvm-rhev | RHSA-2019:1199


    [1] An active EUS subscription is required for access to this patch.  Please contact Red Hat sales or your specific sales representative for more information if your account does not have an active EUS subscription.

    What is the Red Hat Enterprise Linux Extended Update Support Subscription?

    [2] An active AUS subscription is required for access to this patch in RHEL AUS.

    What is Advanced mission critical Update Support (AUS)?

    [3] An active Update Services for SAP Solutions Add-on or TUS subscription is required for access to this patch in RHEL E4S / TUS.

    [4] For details on how to update Red Hat Enterprise Atomic Host, please see Deploying a specific version of Red Hat Enterprise Atomic Host.

    FAQ: Red Hat Enterprise Linux 5 Extended Life Cycle Support (ELS) Add-On

    [5] At this time, based on the severity of these issues, where Red Hat Enterprise Linux 5 is in its support lifecycle, and the low number of CPU types that will have available microcode that is required for these mitigations, RHEL5 will not be addressed.  Please contact Red Hat Support for available upgrade paths and options.

    NOTE: Subscribers may need to contact their hardware OEMs to get the most up-to-date versions of CPU microcode/firmware.

    Mitigation

    There is no known complete mitigation other than applying vendor software updates combined with hardware OEM-provided CPU microcode/firmware or using non-vulnerable microprocessors. All Red Hat customers should apply vendor solutions to patch their CPUs and update the kernel as soon as patches are available.  Please consult with your system OEM provider or CPU manufacturer for additional details on microcode availability for supported platforms.

    Disabling SMT for affected systems will reduce some of the attack surface, but will not completely eliminate all threats from these vulnerabilities. To mitigate the risks these vulnerabilities introduce, systems will need updated microcode, an updated kernel, and virtualization patches, and administrators will need to evaluate whether disabling SMT/HT is the right choice for their deployments. Additionally, applications may see a performance impact. See the article Disabling Hyper-Threading for information on disabling SMT.
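    For administrators who decide that disabling SMT is appropriate, newer kernels allow it at runtime through sysfs, in addition to the persistent nosmt kernel argument covered in the Disabling Hyper-Threading article. A sketch, assuming a kernel that exposes the control file; the helper name smt_disable is ours, and the write requires root:

```shell
# Turn SMT off at runtime via the kernel's control file, then echo the
# resulting state. Writing 'forceoff' instead would also prevent SMT
# from being re-enabled until the next reboot.
smt_disable() {
    ctl=${1:-/sys/devices/system/cpu/smt/control}
    if [ -w "$ctl" ]; then
        echo off > "$ctl"
        cat "$ctl"
    else
        echo "SMT control interface not available or not writable"
    fi
}

# smt_disable   # uncomment to act on the real sysfs file (as root)
```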

    Customers are advised to take a risk-based approach to mitigating this issue. Systems that require high degrees of security and trust should be addressed first and should be isolated from untrusted systems until such time as treatments can be applied to those systems to reduce the risk of exploit.

    Ansible playbook

    Additionally, an Ansible playbook, disable_mds_smt_mitigate.yml, is provided below. This playbook will disable SMT on the running system, disable SMT for future system reboots, and apply related updates. To use the playbook, specify the hosts you'd like to disable SMT on with the HOSTS extra var:

    ansible-playbook -e HOSTS=web,mail,ldap04 disable_mds_smt_mitigate.yml

    The playbook can also add command line arguments to prevent runtime re-enabling of SMT by setting the FORCEOFF variable true:

    ansible-playbook -e HOSTS=hypervisors -e FORCEOFF=true disable_mds_smt_mitigate.yml

    To verify the legitimacy of the playbook, you can download the detached GPG signature.

    Automate the mitigation

    Current version: 1.1

    Performance Impact and Disabling MDS

    The MDS CVE mitigations have been shown to cause a performance impact. The impact will be felt more in applications with high rates of user-kernel-user space transitions, for example system calls, NMIs, and interrupts.

    Although there is no way to say what the impact will be for any given workload, in our testing we measured:

    • Applications that spend a lot of time in user mode tended to show the smallest slowdown, usually in the 0-5% range.
    • Applications that did a lot of small block or small packet network I/O showed slowdowns in the 10-25% range.
    • Some microbenchmarks that did nothing other than enter and return from user space to kernel space showed higher slowdowns.

    The performance impact from the MDS mitigation can be measured by running your application with the mitigation enabled and then disabled. The MDS mitigation is enabled by default. It can be fully enabled, with SMT also disabled, by adding the "mds=full,nosmt" flag to the kernel boot command line. It can be fully disabled by adding the "mds=off" flag to the kernel boot command line. There is no way to disable it at runtime.
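    On RHEL 7 and 8, one way to persist the chosen mds= mode across reboots is to append it to the kernel command line with grubby. The sketch below defaults to only printing the command (clear DRY_RUN and run as root to apply; "mds=full" is the kernel's default mode):

```shell
# Persist an MDS mitigation mode on the kernel command line.
# MODE must be exactly one of: mds=full, mds=full,nosmt, mds=off
MODE=${MODE:-mds=full,nosmt}
# DRY_RUN defaults to 'echo' so the command is shown, not executed;
# set DRY_RUN= (empty) and run as root to actually update boot entries.
DRY_RUN=${DRY_RUN:-echo}
$DRY_RUN grubby --update-kernel=ALL --args="$MODE"
# After the next reboot, verify with:
#   cat /proc/cmdline
#   cat /sys/devices/system/cpu/vulnerabilities/mds
```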

    For the performance impact of disabling hyperthreading, see the “Disabling Hyper-Threading” section at https://access.redhat.com/security/vulnerabilities/L1TF-perf


    26 Comments


    I'm a bit confused on the fix process. Are the Ansible scripts only for mitigation if you haven't yet applied the updates? Or:
    1) Do you have to apply the updates AND run the Ansible scripts?
    2) Do the package updates alone disable MT after power-down and boot?
    3) Do you have to also manually disable MT or anything at the hardware BIOS level, like in VMWARE virtual BIOS, or are the package updates enough (and a full power-down and boot)?

    Thanks!

    Hi Michael, Updating the packages alone won't disable SMT. The playbook will disable SMT for the running system, it will change the system's boot settings so that SMT is disabled at boot time, and it will apply updates related to this issue. You'll still need to reboot the system, as only new kernels have mitigations in place. Regarding disabling hyperthreading in your system BIOS, that varies from system to system, but generally speaking the playbook should make that unnecessary.

    After you've updated and run the playbook or disabled SMT by other means, you can run the detection script provided on the Diagnose tab of this page to check your system's status.

    OK -- Thanks! Just to confirm (not sure if I'm taking the docs too literally) -- does this actually require a cold boot or full power-down, and not a simple reboot, or can a regular reboot work as long as the Ansible stuff has been run beforehand? Or is the cold boot only needed if HT is disabled at the BIOS level rather than at the kernel param level via Ansible?

    Correct, a reboot should suffice if SMT is disabled by a kernel argument.

    I've got a quick question about the mitigation procedure.

    After applying the Ansible playbook and rebooting, I still get the below output. Any idea why my system is still Vulnerable?

    [root@server tmp]# ./cve-2018-12130--2019-05-14-1319.sh

    This script (v1.0) is primarily designed to detect CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, and CVE-2019-11091 on supported Red Hat Enterprise Linux systems and kernel packages. Result may be inaccurate for other RPM based systems.

    Detected CPU vendor: Intel
    CPU: Intel(R) Xeon(R) CPU E7-8867 v3 @ 2.50GHz
    CPU model: 63 (0x3f)
    Running kernel: 3.10.0-957.12.2.el7.x86_64
    Architecture: x86_64
    Virtualization: vmware
    Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown

    * CPU microcode update is not detected

    [root@server tmp]# dmesg | grep "MDS:"
    [ 0.059810] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
    [root@server tmp]# cat /sys/devices/system/cpu/vulnerabilities/mds
    Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown

    One side note -- the Ansible playbook does not seem to work with the default Python on RHEL6 (Python 2.6) for the Ansible controller. My Ansible system is still on RHEL6, since I was waiting to go straight to RHEL8, and the syntax blew up immediately. I had to install a third-party Python and use Ansible with that before it would work. Both Python 2.7 and 3.7 worked for me. I was an early adopter of Ansible, using it pre-v1.0, but I'm guessing there probably aren't many using it with RHEL6. In case there are still some people doing so and the playbook doesn't work -- it's the Python version. Note that I am referring to the Python for the Ansible controller, not managed/target systems.

    You can decide the right way to deploy the fixes in your environment. The Playbook automates the patches, if you're comfortable with that. If not, "yum update" also is a great tried and true method if you need to move at a different pace.

    HT is NOT disabled without manual intervention for users not deploying the Playbook. That's a risk assessment you need to make for your particular deployment, and then you can disable it via the command line. Via the command line you can either set it to be persistent and then reboot for it to take effect, or if you have a newer kernel you can dynamically turn it off and on.

    Thanks! I had one last question in reply above.

    Can yum update alone fix the above-said vulnerability, or does other stuff also need updating? Please help.

    Hi Bhupendra, under the Resolve tab there is a section called Mitigation which discusses this further - customers will need either to be using non-vulnerable CPUs, or, if vulnerable, to take steps to apply microcode updates to the CPU along with kernel and virtualisation packages. Additionally, due to the SMT aspect (as discussed under the Diagnose tab) you will want to evaluate your workloads/environments to decide if changes are needed to your SMT settings. Under Overview we also have additional resources, such as blog and technical articles.

    I'm sorry to say that I'm still a bit confused.  To completely mitigate these vulnerabilities, we have to:

    • apply Red Hat errata,
    • apply hardware OEM-provided CPU microcode/firmware updates, and
    • reboot physical and virtual systems.

    Disabling SMT/HT is just a way to lessen (not eliminate) the attack surface until the 3 above steps can be done.  Right?  Disabling SMT/HT is not a 4th step that must be done in order to fully mitigate these, right?

    Just like all the previous issues, you must apply the CPU microcode_ctl update, you must apply the relevant Red Hat patches (kernel, libvirt, qemu-kvm, etc.), and to guarantee your system recognizes the uCode, a reboot is strongly recommended. Your particular circumstance may or may not see the uCode without a reboot, but for consistency we ask for one to be done. Disabling SMT/HT is an option that should be considered if you are running multi-tenant workloads with sensitive data.

    Your title ("MDS - Microarchitectural Store Buffer Data (...)") suggests MDS stands for "Microarchitectural Store Buffer Data", yet several other sources like https://en.wikipedia.org/wiki/Microarchitectural_Data_Sampling think MDS stands for "Microarchitectural Data Sampling".

    Is the title correct or false?

    Hi, MDS as you mention specifically stands for Microarchitectural Data Sampling. I can see how the title as it is, could cause some confusion.

    Just for Info: There is an error in the ansible playbook in following line (perhaps also in other b64decoded strings):

    • when: '"notsupported" == smtcontrol_contents.content|b64decode'

    Instead you should check for

    • when: '"notsupported\n" == smtcontrol_contents.content|b64decode'

    We will look into it. Thanks for the report.

    This issue was addressed in version 1.1 of the playbook. Thanks for letting us know.

    Is there any ETA for fixed package/image for "Red Hat OpenStack Platform 10+"?

    Hi, the qemu-kvm-rhev RPMs were released overnight for the various OpenStack versions. We are respinning the container images and we will update the table when we are notified that they are available to use. The table now links to the various qemu-kvm-rhev RHSAs. Regards, Cliff

    Hi, I'm pleased to say that the OpenStack 13 and 14 Container Images have also been published and are now available to download from our Container Catalog. Links to the RHBA's are also listed under the Resolve tab. Regards, Cliff

    Is it sufficient to apply the microcode released by the RHEL microcode_ctl package? Or do we need to apply microcode from vendors [like Lenovo] as well, as was recommended in the case of Spectre/Meltdown Variant 2?

    The microcode_ctl package has been supplied straight from Intel to us. It has the most current publicly available updates from them (newer updates are planned for additional CPUs over the coming days/weeks, and we'll reissue the package accordingly). OEMs may make additional changes to their BIOS code to unlock special vendor features or performance tweaks. Check with your system OEM/integrator to ensure they have not made additional modifications (or to learn what those changes might be). The package we supply will work with the CPUs in scope, but might not contain those additional changes.

    Hello RedHat,

    when I ran dmesg | grep "MDS:" or cat /sys/devices/system/cpu/vulnerabilities/mds I don't see any output, which means I do not have to worry, correct? Also, does it affect only RHEL 6? Thanks,

    Hi, ALL versions of RHEL are impacted, if running on most Intel CPUs. If you have not yet applied the updated kernel and rebooted, then you would see no MDS messages. If you are using an Intel CPU, and seeing this after a reboot with new kernel, then please can you contact Red Hat Support for assistance, and/or troubleshooting. Regards, Cliff

    It's my understanding that at least for some of these vulnerabilities, the server has to have SMT enabled, meaning that we allow multiple threads per core. So, I'm guessing that if SMT is either disabled or unsupported, or we're only configured to allow one thread per core, then we should be fine. Is this correct?

    SMT is only one factor, which if left enabled could expose data for untrusted or multi-tenant environments where multiple threads share the same core. You will still need to apply updated kernel, microcode and virtualization packages to secure the environment from the non-SMT attacks, which can happen for all listed CVEs.