Meltdown & Spectre - Kernel Side-Channel Attacks - CVE-2017-5754 CVE-2017-5753 CVE-2017-5715
Red Hat has been made aware of multiple microarchitectural (hardware) implementation issues affecting many modern microprocessors, requiring updates to the Linux kernel and virtualization-related components, in some cases in combination with a microcode update. An unprivileged attacker can use these flaws to bypass conventional memory security restrictions in order to gain read access to privileged memory that would otherwise be inaccessible. There are three known CVEs related to this issue affecting Intel, AMD, and ARM architectures. Additional exploits for other architectures are also known to exist, including IBM System Z, POWER8 (Big Endian and Little Endian), and POWER9 (Little Endian).
Background Information
An industry-wide issue was found with the manner in which many modern microprocessor designs have implemented speculative execution of instructions (a commonly used performance optimization). There are three primary variants of the issue which differ in the way the speculative execution can be exploited. All three rely upon the fact that modern high performance microprocessors implement both speculative execution, and utilize VIPT (Virtually Indexed, Physically Tagged) level 1 data caches that may become allocated with data in the kernel virtual address space during such speculation.
The first two variants abuse speculative execution to perform bounds-check bypass (CVE-2017-5753), or utilize branch target injection (CVE-2017-5715) to cause kernel code at an address under attacker control to execute speculatively. Collectively these are known as "Spectre". Both variants rely upon the presence of a precisely-defined instruction sequence in the privileged code, as well as the fact that memory accesses may cause allocation into the microprocessor’s level 1 data cache even for speculatively executed instructions that never actually commit (retire). As a result, an unprivileged attacker could use these two flaws to read privileged memory by conducting targeted cache side-channel attacks. These variants could be used not only to cross the syscall boundary (variants 1 and 2) but also the guest/host boundary (variant 2).
The third variant (CVE-2017-5754) relies on the fact that, on impacted microprocessors, exception generation triggered by a faulting access is suppressed during speculative execution until retirement of the whole instruction block. Researchers have called this exploit "Meltdown". Subsequent memory accesses may cause an allocation into the L1 data cache even when they reference otherwise inaccessible memory locations. As a result, an unprivileged local attacker could read privileged (kernel space) memory (including arbitrary physical memory locations on a host) by conducting targeted cache side-channel attacks.
Acknowledgements
Red Hat would like to thank Google Project Zero for reporting these flaws.
Additional References
Is CPU microcode available to address CVE-2017-5715 via the microcode_ctl package?
Have questions? Watch the webinar on demand.
Find out more about the Speculative Store Bypass, which was announced in May 2018.
What are Meltdown and Spectre? Here’s what you need to know.
https://googleprojectzero.blogspot.ca/2018/01/reading-privileged-memory-with-side.html
Speculative Execution Exploit Performance Impacts - Describing the performance impacts to security patches for CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715
Controlling the Performance Impact of Microcode and Security Patches for CVE-2017-5754, CVE-2017-5715, and CVE-2017-5753 using Red Hat Enterprise Linux Tunables
AMD Recommendations for CVE-2017-5715
Additional Red Hat Product References
Options to address CVE-2017-5753 on XEN platform
Impacts of CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 to Red Hat Virtualization products
Impact of CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 to Red Hat OpenStack
Impacted Products
Red Hat Product Security has rated this update as having a security impact of Important.
The following Red Hat product versions are impacted:
Red Hat Enterprise Linux 5
Red Hat Enterprise Linux 6
Red Hat Enterprise Linux 7
Red Hat Atomic Host
Red Hat Enterprise MRG 2
Red Hat OpenShift Online v2
Red Hat OpenShift Online v3
Red Hat Virtualization (RHEV-H/RHV-H)
Red Hat Enterprise Linux OpenStack Platform 6.0 (Juno)
Red Hat Enterprise Linux OpenStack Platform 7.0 (Kilo) for RHEL7
Red Hat Enterprise Linux OpenStack Platform 7.0 (Kilo) director for RHEL7
Red Hat OpenStack Platform 8.0 (Liberty)
Red Hat OpenStack Platform 8.0 (Liberty) director
Red Hat OpenStack Platform 9.0 (Mitaka)
Red Hat OpenStack Platform 9.0 (Mitaka) director
Red Hat OpenStack Platform 10.0 (Newton)
Red Hat OpenStack Platform 11.0 (Ocata)
Red Hat OpenStack Platform 12.0 (Pike)
While Red Hat's Linux Containers are not directly impacted by kernel issues, their security relies upon the integrity of the host kernel environment. Red Hat recommends that you use the most recent versions of your container images. The Container Health Index, part of the Red Hat Container Catalog, can always be used to verify the security status of Red Hat containers. To protect the containers in use, you will need to ensure that the container host (such as Red Hat Enterprise Linux or Atomic Host) has been updated against these attacks. Red Hat has released an updated Atomic Host for this use case.
Attack Description and Impact
The attacks described in this article abuse the speculative execution capability of modern high-performance microprocessors. Modern microprocessors implement a design optimization known as “Out-of-Order” (OoO) execution, meaning the microprocessor will begin to execute independent instructions as soon as their data dependencies become available (so-called “data flow” model) rather than always executing instructions in the literal order provided by the programmer through their application binary. The illusion of a sequential execution is maintained within the processor by means of various internal reordering structures that buffer these intermediate execution states of the processor and present the in-order results. Out-of-Order (OoO) Execution was originally invented by Robert Tomasulo in 1967 for use in the early IBM mainframe systems. In the intervening decades, it has become a standard feature of nearly all microprocessors.
An extension of the Out-of-Order execution model adds highly sophisticated branch prediction algorithms that aim to predict whether a given path (branch) of software code will be executed. A branch can be thought of as changing the flow of instructions being executed by the processor in response to an “if” statement of the form “if this, then do A, otherwise do B”. The condition upon which a branch is taken or not taken often depends upon data that may not immediately be available (for example, because it requires a load from slower RAM into the internal microprocessor cache hierarchy). Since the branch condition may not be ready in a timely manner, the processor may begin to speculatively execute the most likely path, based on input from the branch predictor. Results from this execution are stored in such a manner that the entire path can be discarded if the speculated branch direction later turns out to be incorrect. Thus, speculative execution is normally completely invisible to the programmer, or to other users of the same system.
The attacks described in this article rely upon breaking open the black box that is the internal state of the microprocessor during speculative execution. In particular, they rely upon a technique known as cache side-channel analysis. During speculative execution, a processor does not make intermediate results in memory or registers visible to the programmer, to other processors, or to other running applications. Yet, in order to access memory within speculated code paths, it must bring the data into the processor cache. Side-channel analysis allows an attacker to observe speculated allocations (loads) into the system caches, even those coming from execution paths that were ultimately discarded. A specially crafted program can then be designed to speculatively perform loads into the cache from privileged memory locations and monitor the results, which can be used to infer the content of that privileged memory.
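To make the side-channel concrete, the cache-timing primitive that underlies all of these attacks can be sketched in a few lines of user-space C. This is an illustration only, not Red Hat detection or exploit code; the rdtscp/clflush intrinsics and the 80-cycle threshold are assumptions that would need calibration on a particular CPU.

/* Minimal cache-timing sketch (x86_64, GCC intrinsics). It demonstrates only
 * the measurement primitive used by cache side-channel analysis: a line that
 * is already cached loads much faster than one fetched from RAM. The
 * CACHE_HIT_THRESHOLD is a machine-dependent guess, not a universal value. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>                  /* __rdtscp, _mm_clflush, _mm_mfence */

#define CACHE_HIT_THRESHOLD 80          /* cycles; calibrate per machine */

static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start, end;

    _mm_mfence();
    start = __rdtscp(&aux);
    (void)*addr;                        /* the load being timed */
    end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    static uint8_t probe[4096];
    uint64_t cached, flushed;

    probe[0] = 1;                       /* bring the line into the cache */
    cached = time_access(&probe[0]);

    _mm_clflush((void *)&probe[0]);     /* evict the line */
    _mm_mfence();
    flushed = time_access(&probe[0]);

    printf("cached: %llu cycles, flushed: %llu cycles (hit threshold %d)\n",
           (unsigned long long)cached, (unsigned long long)flushed,
           CACHE_HIT_THRESHOLD);
    return 0;
}

An attacker repeats this measurement over many candidate cache lines: lines that the speculated (and later discarded) code touched come back below the threshold, and everything else comes back above it.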
One case that triggers CPU speculative execution is a branch. An attacker can start by training the branch predictor so that a particular branch in kernel code appears heavily taken (or not taken). The next time the branch executes, the processor will start executing the code in the direction chosen by the attacker. If the attacker chooses a path that loads a value from memory, that load will be executed speculatively. Attacks against branch prediction can (in some affected microprocessor implementations) be extended across the kernel/hypervisor boundary, allowing unprivileged guest operating systems to exert influence over the execution of the hypervisor and, when combined with side-channel analysis, to extract sensitive hypervisor memory.
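The "precisely-defined instruction sequence" that variant 1 depends on is usually illustrated with the bounds-check-bypass pattern from the public Spectre research. The sketch below paraphrases that published pattern; the array names, sizes, and the 512-byte stride are illustrative choices rather than code from any Red Hat product.

/* Spectre variant 1 (bounds-check bypass) pattern, paraphrased from the
 * publicly published proof-of-concept. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];         /* probe array later read via cache timing */
volatile uint8_t temp;             /* keeps the compiler from removing the load */

void victim_function(size_t x)
{
    /* If the branch predictor has been trained to expect "x < array1_size",
     * the CPU may speculatively execute the body even for an out-of-bounds x,
     * loading array1[x] (an attacker-chosen address) and then touching a
     * cache line of array2 whose index depends on that secret byte. */
    if (x < array1_size)
        temp &= array2[array1[x] * 512];
}

int main(void)
{
    size_t i;

    for (i = 0; i < 16; i++)       /* in-bounds calls train the predictor */
        victim_function(i);
    /* A real attack would now call victim_function() with a malicious
     * out-of-bounds x and time accesses to array2 (as in the earlier sketch)
     * to recover the byte; that probing step is deliberately omitted here. */
    return 0;
}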
The effects of speculative execution however can be even more wide-ranging. Because the internal state of the microprocessor is not visible to the programmer, or to other users or applications running on the system, the processor may perform speculative data accesses even before checking whether they are permitted. Permission checks will occur in parallel and ultimately trigger an abort of the speculation prior to retiring the speculated instructions and making their execution results visible outside the processor. If the processor speculatively uses cached data from memory prior to completing the permission checks, then it becomes possible to observe that data by using it in subsequent memory accesses.
One example of such permission checks is page access checks from the memory management unit (MMU). Paging, also known as virtual memory, is a common feature of high performance microprocessors; it lets the operating system control the mapping of virtual addresses into the physical addresses of the system RAM, and also limit accesses to the virtual addresses through access control bits. For example, a page can be marked as “read-only” (so that writes cause a page fault exception) or as “kernel memory” (so that user-mode accesses cause a page fault exception).
Because the processor’s permission checks enforce that user applications cannot access kernel memory, it is standard practice in the industry for operating system kernels (including Linux) to map kernel virtual memory addresses into the same address space as user applications. Doing so creates a significant performance advantage since applications make frequent use of kernel-provided system calls, and switching address spaces during each system call would incur a significant performance overhead. Each switch would require flushing (invalidating) the content of many internal CPU structures, such as TLBs (Translation Lookaside Buffers) that cache virtual-to-physical memory translations and accelerate the use of virtual memory.
Sharing the page tables between the kernel and user applications, however, enables another kind of attack against speculative execution. In this case, the preparatory phase has the attacker trick the kernel into loading an “interesting” virtual address into the processor’s Level 1 (L1) data cache. The L1 data cache is commonly organized using a technique known as VIPT (Virtually Indexed, Physically Tagged), which lets the virtual-to-physical address translation and permission checks occur in parallel with access. In the presence of a shared virtual address space, kernel privileged virtual addresses might be accessed speculatively through the L1 cache by untrusted user code during speculative execution, and the loaded values can be used further down the speculatively-executed path. A second speculative memory access can thus fill the cache in a manner that depends on privileged memory contents, and the effects can be observed to derive those contents.
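The variant 3 (Meltdown) sequence described above can be sketched in the same hedged way. The fragment below is a conceptual paraphrase of published research code, not a working exploit: the kernel address is a placeholder, the flush+reload probe over probe_array is omitted, and nothing is leaked on patched or unaffected hardware.

/* Conceptual Meltdown (variant 3) access pattern, heavily abridged. */
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>

static sigjmp_buf fault_jmp;
static uint8_t    probe_array[256 * 4096];
static volatile uint8_t sink;

static void segv_handler(int sig)
{
    (void)sig;
    siglongjmp(fault_jmp, 1);          /* recover from the permission fault */
}

static void transient_read(const volatile uint8_t *kernel_addr)
{
    signal(SIGSEGV, segv_handler);

    if (sigsetjmp(fault_jmp, 1) == 0) {
        /* Architecturally this load just faults. On an affected CPU the value
         * may briefly be forwarded to the dependent access below before the
         * fault is raised, leaving a footprint in the cache that a later
         * flush+reload pass over probe_array could observe. */
        uint8_t secret = *kernel_addr;
        sink = probe_array[secret * 4096];
    }
    /* The fault lands here; the cache-timing probe step would follow. */
}

int main(void)
{
    /* Placeholder kernel-text address, purely for illustration. */
    transient_read((const volatile uint8_t *)0xffffffff81000000UL);
    puts("survived the fault");
    return 0;
}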
These microprocessor side-channel attacks may allow an untrusted user with access to a machine to extract sensitive information from privileged kernel or hypervisor memory, as well as from other applications or virtual machines running on the same system. Mitigation involves a number of discrete steps, some or all of which may be required depending upon the exact make and model of the microprocessor, each of which may be vulnerable to a differing extent to each variant of the attack:
- Separating the kernel and user virtual address spaces: this is performed using a design change to the Operating System kernel known as KPTI (Kernel Page Table Isolation), sometimes referred to using the older name “KAISER”.
- Disabling indirect branch prediction upon entry into the kernel or into the hypervisor: New capabilities have been added to many microprocessors across the industry through microcode, millicode, firmware, and other updates. These new capabilities are leveraged by updates to Red Hat Enterprise Linux which control their use.
- Fencing speculative loads of certain memory locations: such loads have to be annotated through small changes to the Linux kernel, which have been integrated into Red Hat updates (a minimal illustration of the concept follows this list).
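To illustrate the third item in user space: the sketch below places a serializing fence between a bounds check and the dependent load, so the load cannot be issued speculatively before the check resolves. The Linux kernel uses its own annotations for this; the intrinsic and the names here are illustrative assumptions, not the actual kernel changes.

/* Fencing a speculative load: conceptual user-space illustration only. */
#include <stddef.h>
#include <stdint.h>
#include <x86intrin.h>              /* _mm_lfence */

uint8_t data[16];
size_t  data_len = 16;
volatile uint8_t out;

void bounded_read(size_t idx)
{
    if (idx < data_len) {
        _mm_lfence();               /* speculation barrier: idx is now resolved */
        out = data[idx];            /* the load can no longer run ahead of the check */
    }
}

int main(void)
{
    bounded_read(3);
    return 0;
}

The cost of such fences is one reason the fixes are applied only at the annotated locations rather than everywhere.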
These software solutions, in combination with microcode, millicode, and firmware updates can mitigate the attacks described in this article. However, the mitigation comes at some cost to system performance. Depending upon the specific system, make, and model of the microprocessors, as well as the characteristics of the workloads, the performance impact can be significant. Red Hat is taking a proactive position that favors security over performance, while allowing users the flexibility to assess their own environment and make appropriate tradeoffs through selectively enabling and disabling the various mitigations.
The Red Hat Performance Engineering team has created a Knowledgebase article reporting observed performance impact for a variety of representative workloads, and describing options users can take to disable parts of the security fixes to regain the desired level of performance if the customer is confident their computers are physically isolated. Additional performance data will be published as this incident develops.
The Red Hat Performance Engineering and Product Engineering teams have developed a Knowledgebase article detailing the tunables available to selectively enable or disable these new security features.
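For readers who want to script a quick status check, a minimal sketch follows. It assumes the debugfs tunables described in the Knowledgebase article above (pti_enabled, ibpb_enabled, and ibrs_enabled under /sys/kernel/debug/x86/) are present; that requires an updated kernel, a mounted debugfs, and typically root privileges, and on systems without them the program simply reports the files as unavailable.

/* Read the Spectre/Meltdown tunables, if the kernel exposes them. */
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    const char *files[] = {
        "/sys/kernel/debug/x86/pti_enabled",   /* variant 3: page table isolation */
        "/sys/kernel/debug/x86/ibpb_enabled",  /* variant 2: indirect branch prediction barrier */
        "/sys/kernel/debug/x86/ibrs_enabled",  /* variant 2: restricted indirect branch speculation */
    };
    size_t i;

    for (i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
        FILE *f = fopen(files[i], "r");
        int value;

        if (f == NULL) {
            printf("%s: not available (older kernel, debugfs not mounted, or no permission)\n",
                   files[i]);
            continue;
        }
        if (fscanf(f, "%d", &value) == 1)
            printf("%s = %d\n", files[i], value);
        fclose(f);
    }
    return 0;
}

According to that article, writing 0 or 1 to these files toggles the corresponding mitigation at runtime; consult it before changing anything, since disabling a mitigation trades security for performance.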
Additional guidance for subscribers using AMD-based systems can be found in this Knowledgebase article.
Detection & Diagnosis
Determine if your system is vulnerable
Red Hat has created a Labs app to assist in detection of exposure to these vulnerabilities. Subscribers can use the following link to access our Labs tool.
Use the detection script to determine if your system is currently vulnerable to this flaw. To verify the legitimacy of the script, you can download the detached GPG signature as well. The current version of the script is 3.3.
For subscribers running Red Hat Virtualization products, a Knowledgebase article has been created to verify OEM-supplied microcode/firmware has been applied.
Take Action
Red Hat customers running affected versions of these Red Hat products are strongly advised to update them as soon as errata are available. Customers are urged to apply the appropriate updates immediately. All impacted products should apply fixes to mitigate all three variants: CVE-2017-5753 (variant 1), CVE-2017-5715 (variant 2), and CVE-2017-5754 (variant 3).
NOTES
- Meltdown is the branded name for CVE-2017-5754 (variant 3)
- Spectre is the branded name for the combined CVE-2017-5753 (variant 1) & CVE-2017-5715 (variant 2)
- Due to the nature of changes required, a kpatch for customers running Red Hat Enterprise Linux 7.2 or greater will NOT be available.
- Variant 2 can be exploited both locally (within the same OS) and through the virtualization guest boundary. Fixes require CPU microcode/firmware to activate. Subscribers are advised to contact their hardware OEM to receive the appropriate microcode/firmware for their processor. An additional Knowledgebase article is available with more details.
- The errata for eligible releases that address CVE-2017-5754 (Red Hat Enterprise Linux 6), CVE-2017-5753 (Red Hat Enterprise Linux 5 and 6), and CVE-2017-5715 (Red Hat Enterprise Linux 5 and 6) also include support for 32-bit x86 (i686) kernels.
- For customers using Red Hat OpenStack Platform, an additional Knowledgebase article is also available.
- For customers using Red Hat Satellite 6, an additional Satellite 6 Knowledgebase article is also available.
NOTE: Please use the table below to help quickly identify the specific RHSA IDs that you want to apply to your systems.
CVE-2017-5754 (variant 3, aka Meltdown) patches for 32-bit Red Hat Enterprise Linux 5
Red Hat has no current plans to provide mitigations for the Meltdown vulnerability in 32-bit Red Hat Enterprise Linux 5 environments.
Following many hours of engineering investigation and analysis, Red Hat has determined that introducing changes to the Red Hat Enterprise Linux 5 environment would destabilize customer deployments and violate our application binary interface (ABI) and kernel ABI commitments to customers who rely on Red Hat Enterprise Linux 5 to be absolutely stable.
Although Red Hat has delivered patches to mitigate the Meltdown vulnerability in other supported product offerings, the 32-bit Red Hat Enterprise Linux 5 environment presents unique challenges. The combination of limited address space in 32-bit environments plus the mechanism for passing control from the userspace to kernel and limitations on the stack during this transfer make the projected changes too invasive and disruptive for deployments that require the highest level of system stability. By contrast, 32-bit Meltdown mitigations have been delivered for Red Hat Enterprise Linux 6, where the changes are far less invasive and risky.
Updates for Affected Products
Product | Package | Advisory/Update | Applicable to Variant |
Red Hat Enterprise Linux 7 (z-stream) | kernel | RHSA-2018:0007 | 1,2,3 |
Red Hat Enterprise Linux 7 | kernel-rt | RHSA-2018:0016 | 1,2,3 |
Red Hat Enterprise Linux 7 | libvirt | RHSA-2018:0029 | 2 |
Red Hat Enterprise Linux 7 | qemu-kvm | RHSA-2018:0023 | 2 |
Red Hat Enterprise Linux 7 | dracut | RHBA-2018:0042 | 2 |
Red Hat Enterprise Linux 7.3 Extended Update Support [2] | kernel | RHSA-2018:0009 | 1,2,3 |
Red Hat Enterprise Linux 7.3 Extended Update Support [2] | libvirt | RHSA-2018:0031 | 2 |
Red Hat Enterprise Linux 7.3 Extended Update Support [2] | qemu-kvm | RHSA-2018:0027 | 2 |
Red Hat Enterprise Linux 7.3 Extended Update Support [2] | dracut | RHBA-2018:0043 | 2 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [3],[4] | kernel | RHSA-2018:0010 | 1,2,3 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [3],[4] | libvirt | RHSA-2018:0032 | 2 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [3],[4] | qemu-kvm | RHSA-2018:0026 | 2 |
Red Hat Enterprise Linux 7.2 Update Services for SAP Solutions, & Advanced Update Support [3],[4] | dracut | RHBA-2018:0062 | 2 |
Red Hat Enterprise Linux 6 (z-stream) | kernel | RHSA-2018:1319 | 1,2,3 |
Red Hat Enterprise Linux 6 | libvirt | RHSA-2018:0030 | 2 |
Red Hat Enterprise Linux 6 | qemu-kvm | RHSA-2018:0024 | 2 |
Red Hat Enterprise Linux 6.7 Extended Update Support [2] | kernel | RHSA-2018:1346 | 1,2,3 |
Red Hat Enterprise Linux 6.7 Extended Update Support [2] | libvirt | RHSA-2018:0108 | 2 |
Red Hat Enterprise Linux 6.7 Extended Update Support [2] | qemu-kvm | RHSA-2018:0103 | 2 |
Red Hat Enterprise Linux 6.6 Advanced Update Support [3],[4] | kernel | RHSA-2018:0017 | 1,2,3 |
Red Hat Enterprise Linux 6.6 Advanced Update Support [3],[4] | libvirt | RHSA-2018:0109 | 2 |
Red Hat Enterprise Linux 6.6 Advanced Update Support [3],[4] | qemu-kvm | RHSA-2018:0104 | 2 |
Red Hat Enterprise Linux 6.5 Advanced Update Support [3] | kernel | RHSA-2018:0022 | 1,2,3 |
Red Hat Enterprise Linux 6.5 Advanced Update Support [3] | libvirt | RHSA-2018:0110 | 2 |
Red Hat Enterprise Linux 6.5 Advanced Update Support [3] | qemu-kvm | RHSA-2018:0105 | 2 |
Red Hat Enterprise Linux 6.4 Advanced Update Support [3] | kernel | RHSA-2018:0018 | 1,2,3 |
Red Hat Enterprise Linux 6.4 Advanced Update Support [3] | libvirt | RHSA-2018:0111 | 2 |
Red Hat Enterprise Linux 6.4 Advanced Update Support [3] | qemu-kvm | RHSA-2018:0106 | 2 |
Red Hat Enterprise Linux 6.2 Advanced Update Support [3] | kernel | RHSA-2018:0020 | 1,2,3 |
Red Hat Enterprise Linux 6.2 Advanced Update Support [3] | libvirt | RHSA-2018:0112 | 2 |
Red Hat Enterprise Linux 6.2 Advanced Update Support [3] | qemu-kvm | RHSA-2018:0107 | 2 |
Red Hat Enterprise Linux 5 Extended Lifecycle Support [1] | kernel | RHSA-2018:1196 | 1,2,3 |
Red Hat Enterprise Linux 5.9 Advanced Update Support [3] | kernel | RHSA-2018:1252 | 1,2,3 |
RHEL Atomic Host | kernel | Images respun on 5 January 2018 | 1,2,3 |
Red Hat Enterprise MRG 2 | kernel-rt | RHSA-2018:0021 | 1,2,3 |
Red Hat Virtualization 4 (RHEV-H/RHV-H) [6] | redhat-virtualization-host | RHSA-2018:0047 | 1,2,3 |
Red Hat Virtualization 4 (RHEV-H/RHV-H) [6] | rhvm-appliance | RHSA-2018:0045 | 1,2,3 |
Red Hat Virtualization 4 (RHEV-H/RHV-H) [6] | qemu-kvm-rhev | RHSA-2018:0025 | 2 |
Red Hat Virtualization 4 (RHEV-H/RHV-H) [6] | vdsm | RHSA-2018:0050 | 2 |
Red Hat Virtualization 4 (RHEV-H/RHV-H) [6] | ovirt-guest-agent-docker | RHSA-2018:0047 | 2 |
Red Hat Virtualization 4 (RHEV-H/RHV-H) [6] | rhevm-setup-plugins | RHSA-2018:0051 | 2 |
Red Hat Virtualization 3 (RHEV-H/RHV-H) [6] | redhat-virtualization-host | RHSA-2018:0044 | 1,2,3 |
Red Hat Virtualization 3 ELS (RHEV-H/RHV-H) [6] | rhev-hypervisor7 | RHSA-2018:0046 | 1,2,3 |
Red Hat Virtualization 3 ELS (RHEV-H/RHV-H) [6] | qemu-kvm-rhev | RHSA-2018:0028 | 2 |
Red Hat Virtualization 3 ELS (RHEV-H/RHV-H) [6] | vdsm | RHSA-2018:0048 | 2 |
Red Hat Virtualization 3 ELS (RHEV-H/RHV-H) [6] | rhevm-setup-plugins | RHSA-2018:0052 | 2 |
Red Hat Enterprise Linux OpenStack Platform 6.0 (Juno) for RHEL7 [7] | qemu-kvm-rhev | RHSA-2018:0054 | 2 |
Red Hat Enterprise Linux OpenStack Platform 7.0 (Kilo) for RHEL7 [7] | qemu-kvm-rhev | RHSA-2018:0055 | 2 |
Red Hat Enterprise Linux OpenStack Platform 7.0 (Kilo) director for RHEL7 [7] | director images | RHBA-2018:0064 | 1,2,3 |
Red Hat OpenStack Platform 8.0 (Liberty) [7] | qemu-kvm-rhev | RHSA-2018:0056 | 2 |
Red Hat OpenStack Platform 8.0 (Liberty) [7] | director images | RHBA-2018:0065 | 1,2,3 |
Red Hat OpenStack Platform 9.0 (Mitaka) [7] | qemu-kvm-rhev | RHSA-2018:0057 | 2 |
Red Hat OpenStack Platform 9.0 (Mitaka) [7] | director images | RHBA-2018:0069 | 1,2,3 |
Red Hat OpenStack Platform 10.0 (Newton) [7] | qemu-kvm-rhev | RHSA-2018:0058 | 2 |
Red Hat OpenStack Platform 10.0 (Newton) [7] | director images | RHBA-2018:0067 | 1,2,3 |
Red Hat OpenStack Platform 11.0 (Ocata) [7] | qemu-kvm-rhev | RHSA-2018:0059 | 2 |
Red Hat OpenStack Platform 11.0 (Ocata) [7] | director images | RHBA-2018:0068 | 1,2,3 |
Red Hat OpenStack Platform 12.0 (Pike) [7] | qemu-kvm-rhev | RHSA-2018:0060 | 2 |
Red Hat OpenStack Platform 12.0 (Pike) [7] | director images | RHBA-2018:0066 | 1,2,3 |
Red Hat OpenStack Platform 12.0 (Pike) [7] | containers | RHBA-2018:0070 | 1,2,3 |
[1] An active ELS subscription is required for access to this patch. Please contact Red Hat sales or your specific sales representative for more information if your account does not have an active ELS subscription.
[2] An active EUS subscription is required for access to this patch. Please contact Red Hat sales or your specific sales representative for more information if your account does not have an active EUS subscription.
What is the Red Hat Enterprise Linux Extended Update Support Subscription?
[3] An active AUS subscription is required for access to this patch in RHEL AUS.
What is Advanced mission critical Update Support (AUS)?
[4] An active TUS subscription is required for access to this patch in RHEL TUS.
[5] Subscribers should contact their hardware OEMs to get the most up-to-date versions of CPU microcode/firmware.
[6] Impacts of CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 to Red Hat Virtualization products
[7] Impact of CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 to Red Hat OpenStack
Mitigation
There is no known mitigation other than applying vendor software updates combined with the hardware OEM's CPU microcode/firmware. All Red Hat customers should apply vendor solutions to patch their CPUs and update the kernel as soon as patches are available.
Customers are advised to take a risk-based approach in mitigating this issue. Systems that require high degrees of security and trust should be addressed first, and should be isolated from untrusted systems until such time as treatments can be applied to those systems to reduce the risk of exploit.
Customers running Xen hypervisors should be aware of technical limitations of that software that cannot completely eliminate the variant 2 exploit, and cannot eliminate the variant 3 exploit on paravirtualized guests. Red Hat has prepared a Knowledgebase article detailing the Xen situation and options available to Xen hypervisor users.
Comments
Yes, but the Bugzillas are currently in a private state and will be made public once all QE has completed and we publish new RHSAs indicating the removal. Regards, Cliff
Dears, I downloaded the diagnostic script and ran it on 3 servers and I got the following:
Variant #1 (Spectre): Mitigated
Variant #2 (Spectre): Vulnerable
Variant #3 (Meltdown): Mitigated
Variant #2 always shows as Vulnerable. What patch am I missing? Is this the vendor one?
Also, I am wondering if there is a way to benchmark performance before and after applying the yum update patches?
Thanks!
Hi Saad, yes, for Variant 2 to be mitigated, the microcode/firmware from your OEM/hardware vendor needs to be applied. Regards, Cliff
Pretty disappointed in Red Hat.
A lot of your customers utilize Nutanix for their hardware platform, and 7.4 is not supported yet.
You've only released kernel-3.10.0-514.36.5.el7 to EUS subscribers, which pretty much leaves us hostage to either purchasing the subscription or waiting for our hypervisor to support 7.4; meanwhile we're vulnerable.
We can't even QA the patches...
I think you should discuss that in a support case or with your account manager. On the other hand, aren't you disappointed that Nutanix doesn't support a minor OS release that is already 6 months old?
Regarding CVE-2017-5715, should I downgrade the linux-firmware and microcode_ctl packages that were already upgraded for Spectre and Meltdown?
Hi JK, did you get any response from anyone/RH for your question?
J.K., please review our microcode KCS article, https://access.redhat.com/solutions/3315431, for more information about microcode. TL;DR: please contact your hardware supplier/microprocessor manufacturer for the appropriate microcode for your specific CPU.
After the patch update (both kernel and hardware), did you experience any issues with CPU performance? Did it slow down by 30%?
Thanks,
We're not a big shop, but in our case no one noticed the differences of applying the software patches. Please note that our servers are not heavily used.
Hello, are patches released for RHEL 5? I still see them as "pending" under the Resolve tab for RHEL 5.
RHEL 5 fixes are still being developed and tested. Check back to the Resolve tab of this article for updates.
Dears, Is there a script to benchmark performance before and after applying the patch?
I found a simple C test program, if anyone wants to benchmark before and after:

#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>

int main(void)
{
    long value;
    for (int i = 0; i < 100000000; i++)
        value = syscall(SYS_getpid);   /* 100 million getpid system calls */
    printf("%d\n", (int)value);
    return 0;
}

Save it as test.cc, build it with gcc -o test test.cc (or g++ -o test test.cc), and run it with time ./test.
I did run it on 3 RHEL 7.4 servers and here are the outputs:

Server 1 (fully patched):
Variant #1 (Spectre): Mitigated
Variant #2 (Spectre): Mitigated
Variant #3 (Meltdown): Mitigated
Bench: 38109
real 1m47.325s user 0m44.354s sys 1m2.971s

RHEL 7.4 (half patched):
Variant #1 (Spectre): Mitigated
Variant #2 (Spectre): Vulnerable
Variant #3 (Meltdown): Mitigated
Bench: 12976
real 0m49.508s user 0m26.314s sys 0m23.194s

RHEL 7.4 (not patched):
Variant #1 (Spectre): Vulnerable
Variant #2 (Spectre): Vulnerable
Variant #3 (Meltdown): Vulnerable
19869
real 0m11.315s user 0m2.610s sys 0m8.702s

Significant decrease of performance... what do you guys think?
Confirmed - we get similar results with your code on multiple RHEL6 and RHEL7 hosts. Example:
RHEL7 - unmitigated real 0m12.753s user 0m3.086s sys 0m9.540s
versus
RHEL7 - mitigated for all 3.. real 1m33.863s user 0m25.401s sys 1m8.408s
RHEL6 - unmitigated: real 0m15.305s user 0m2.729s sys 0m12.560s
versus
RHEL6 - mitigated for all 3.. real 1m36.795s user 0m21.246s sys 1m15.448s
Ran same compiled binary (g++ -o test /tmp/test.cc) on both platforms.
OK, so we just received new patches on the 15th that back this out. So do we apply the latest patches or the ones from the 3rd? Which ones? This is very confusing, and this article does not address it.
RHSA-2018:0093 - new
RHSA-2018:0094 - new
RHSA-2018:0014 - old
RHSA-2018:0013 - old
RHSA-2018:0012 - old
Any clarification? We have both RHEL6 and RHEL7 servers.
Please see the updates to https://access.redhat.com/solutions/3315431. Red Hat has simply followed Intel's advice that the latest firmware was causing higher-than-expected reboots, and has reverted the microcode updates by releasing a new package.
I selected to get notified when this content is updated and never received anything even though this document was updated many times since. Is that a known issue?
+1 I never got any updates either.
I'll open a bug with our customer portal team to investigate. Thank you for bringing this to my attention.
Thanks Christopher. I also didn't get any notification about the fact that someone replied to my comment.
Yesterday an announcement was made that Red Hat will be reverting the microcode_ctl package. All my hardened OS images have to be re-spun. The changes made were originally for fixing CVE-2017-5715. However, on the page for CVE-2017-5715, I don't see any mention of the revert. Other than the announcement email, where can I read more about it? Are more changes coming regarding these?
Sam - the only change we made was to revert our supplied microcode to the known-good, tested version that was available prior to January 3rd, 2018. The software patches we supplied for that CVE have not changed. Please contact your system supplier/CPU OEM to get the appropriate microcode for your particular system.
Thanks Chris for clearing that up. Are you aware if any additional changes are in the works for either fixing or reverting the changes aimed at Spectre and Meltdown vulnerabilities?
Hello, I use a RHEL 7.3 server, so I updated my kernel. My problem is:
RHEL doesn't supply the CPU microcode anymore, which is needed for Variant #2. You have to ask your server vendor for a microcode update (in my case (HPE), it was a BIOS update). But even then some hardware vendors don't provide the microcode for now. For example, for HPE servers, the update was available and is still available for Gen10 servers. For Gen9 servers, they did provide it for a while but removed it (like Red Hat did). For Gen8 and older servers, it has never been available.
This is becoming more confusing with the recent reversal of the microcode_ctl and linux-firmware packages from Red Hat. Intel provides the microcode data file at https://downloadcenter.intel.com/download/27431/Linux-Processor-Microcode-Data-File?product=873, but the instructions state that the file should be placed in the /etc/firmware directory. That has no effect when the zipped archive is placed there. The contents of the file are similar to what the microcode_ctl package provides.
So, I have no idea what's going on right now, who's doing what, or why the updates were reverted. Is there a possibility to undo the revert, or are there instructions on how to use the available tools on RHEL to put the microcode updates in place in a sane way?
For what it's worth, a dirty workaround is to download the files from the Intel website, extract the archive, and copy the intel-ucode directory contents to /lib/firmware/intel-ucode/.
Hello, that is what we are doing (so far): applying the Red Hat RPMs (including the latest microcode_ctl), and then just copying the latest Intel microcode to /lib/firmware/intel-ucode. This allows Red Hat's script to pass on all our 2U servers except for some newer Dell PowerEdge systems (processor 06-4f-01). For those systems, the Intel update does not fix the issue ("updated microcode: NO").
Hello, are patches released for RHEL 6.8 and 6.9? Thanks in advance.
RHEL 6.9 has patches available as Red Hat Enterprise Linux 6, as you can see in the Resolve tab.
RHEL 6.8 is unsupported and there will be no patches coming out for it.
Hello,
Is there any ETA for RHEL 5 patch ?
Thanks !!!
I'd like to support Alexander Kim's question on the ETA for RHEL5.
Hi guys, did any of you install the microcode packages from the RHSA-2018:0093 advisory?
I have a case open about which version of microcode to use, and support is telling me to revert back manually to:
microcode_ctl-2.1-22.el7 [https://access.redhat.com/errata/RHEA-2017:1851]
instead of using the latest one:
microcode_ctl-2.1-22.5.el7_4 [https://access.redhat.com/errata/RHSA-2018:0093]
even though their file lists do not differ:
rpm -qpl microcode_ctl-2.1-22.el7.x86_64.rpm > 22 && rpm -qpl microcode_ctl-2.1-22.5.el7_4.x86_64.rpm > 22.5 && diff 22 22.5
Hope you're not left in the dark. I guess you might already be aware of this: 1. https://newsroom.intel.com/news/root-cause-of-reboot-issue-identified-updated-guidance-for-customers-and-partners/ 2. https://access.redhat.com/solutions/3315431 As the first microcode update may cause irregular reboots at several sites, Red Hat has pulled it back. Intel is still working on a new version.
You can also check the previous comments on this thread for advice from other members, especially those written on 12-Jan.
b.rgds
I tried the Intel microcode upgrade for an E3-1245, and the procedure described by Intel is incorrect, so it does not work for me.
Hi, sorry, I need an update for my RHEL 6.4 WS, but I can't download it from https://access.redhat.com/errata/RHSA-2018:0018 (Updated Packages). Why are the package names without links?
Thank you!
My one ask (for such a long-running case with a multiplicity of products) is that Red Hat put a change log at the top or bottom of the Overview tab (or better yet, create a 'Changelog' tab) that a user can click to see a synopsis of the updates as they occur.
For example, the page currently states:
Public Date: January 2 2018 at 7:00 PM Updated Monday at 1:50 PM - English
What was updated Monday at 1:50pm? I imagine users would want to see that.
I noticed that CPU performance has decreased by about 30%. Here are the test results with CPU usage overloaded to up to 99% per core, before and after the kernel upgrade and patching of the 7 z-stream. What do you think, guys?
BEFORE
real 0m22.869s user 0m3.251s sys 0m13.005s
real 0m21.607s user 0m3.355s sys 0m12.728s
real 0m24.133s user 0m3.285s sys 0m12.789s
real 0m23.475s user 0m3.315s sys 0m12.548s
real 0m21.986s user 0m3.287s sys 0m12.782s
real 0m22.195s user 0m3.277s sys 0m12.572s
AFTER
real 0m34.918s user 0m10.688s sys 0m24.226s
real 0m36.652s user 0m10.583s sys 0m26.065s
real 0m36.435s user 0m10.540s sys 0m25.891s
Itani's test reports how much time the CPU needs to issue the getpid system call 100 million times. With that, one can compare how much time is needed before and after applying the mitigation.
From your examples above, your CPU takes more time to complete the benchmark after applying the mitigation, meaning the performance of your CPU is much worse after the patch.
I guess your example is in line with the expectations of most people, if not much worse. For me, I'm still waiting for the microcode update from Intel before making further moves.
Right, it became worse after the patching. I just tested it on one of my dev machines.
It looks like patches for RHEL 5 are out: https://access.redhat.com/errata/RHSA-2018:0292
It says the Red Hat Enterprise Linux 5 Extended Lifecycle Support kernel is only applicable to variants 1 and 3. However, all the other kernels for RHEL 6 and RHEL 7 show applicable to variants 1, 2, 3. If I go to the errata page here: https://access.redhat.com/errata/RHSA-2018:0292 it says it mitigates all 3 variants. So is it just a typo on this page, or am I missing something?
Also, do I need the microcode for RHEL 5 ELS to mitigate CVE-2017-5715 (variant 2)? This help page here: https://access.redhat.com/solutions/3315431 discussing it does not mention RHEL 5, only RHEL 6 and RHEL 7.
Thanks.
Edit: Nevermind I have found answers to all my questions.
Hi, now that the RHEL 5 patches are out, is there an update planned for the detection script to work with RHEL 5, please?
Yes, it is.
The updated script is now available.