Speculative Execution Exploit Performance Impacts - Describing the performance impacts of the security patches for CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715

Updated - This is the second version of the Performance Considerations article, with results from testing updated kernels for Red Hat Enterprise Linux 7 and 6, based on "Retpoline" optimizations recently accepted upstream.

Kernel Side-Channel Attacks - CVE-2017-5754, CVE-2017-5753, CVE-2017-5715

The recent speculative execution CVEs address three potential attacks across a wide variety of architectures and hardware platforms, each requiring slightly different fixes. In many cases, these fixes also require microcode updates from the hardware vendors. Red Hat has delivered updated Red Hat Enterprise Linux kernels that focus on securing customer deployments. The nature of these vulnerabilities and their fixes introduces the possibility of reduced performance on patched systems. The performance impact depends on the hardware and the applications in place. We are actively working with our technology partners to reduce or eliminate these performance impacts as quickly as possible.


The Red Hat Performance Engineering team characterized application workloads to help guide partners and customers on the potential impact of the fixes supplied to correct CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715, including "Retpoline" kernels that secure pre-Skylake-class machines against the IBRS portion of the Spectre vulnerability. These machines still need OEM microcode to mitigate the IBPB portion of Spectre, which has little to no impact on performance. The performance impact of these patches still varies considerably based on the workload under test and the hardware configuration. Measurements are reported based on Industry Standard Benchmarks (ISB) representing a set of workloads that most closely mirror common customer deployments.

Red Hat has tested complete solutions, including updated kernels and updated microcode, on variants of the following modern high-volume Intel systems: Haswell / Broadwell (not including Skylake in this report). In each instance, there is a performance impact caused by the additional overhead required for security hardening in user-to-kernel and kernel-to-user transitions. The impact varies with workload, hardware implementation, and configuration. As is typical with performance, the impact we measured in January 2018 ranged between 1-20% and has now improved to within 1-8% for the ISB set of application workloads tested.

In order to provide more detail, Red Hat’s performance team is sharing performance results measured on RHEL 7 (with similar behavior on RHEL 6/5) for a wide variety of benchmarks, grouped by performance impact:

  • Measurable: previously 8-19% - updated w/ retpoline to be 4-8% - Highly cached random memory access with buffered I/O, OLTP database workloads, HPC (High Performance Computing) scale-out environments using MPI, and benchmarks with high rates of kernel-to-user space transitions were measured to be impacted the most. Examples include OLTP workloads (tpc), sysbench, and pgbench.

  • Modest: previously 3-7% - updated w/ retpoline to be 2-5% - Database analytics, Decision Support System (DSS), and Java VMs were measured to be impacted less than the “Measurable” category. These applications may have significant sequential disk or network traffic, but kernel/device drivers are able to aggregate requests, moderating the level of kernel-to-user transitions. Examples include SPECjbb2005, Queries/Hour, and overall analytic timing (sec).

  • Small: previously 2-5% - updated w/ retpoline to be 1-2% - Computational workloads like HPC (High Performance Computing) in scale-up using OpenMP, and CPU-intensive workloads that spend little time in the kernel, were measured to have a very small performance penalty. Examples include Linpack NxN on x86 and SPECcpu2006.

  • Minimal: Linux accelerator technologies that generally bypass the kernel in favor of direct user access were measured to have the least performance impact, with less than 1% overhead measured. Examples tested include DPDK (VsPERF at 64 bytes), OpenOnload (STAC-N), and Mellanox RDMA. Userspace accesses to the VDSO, such as gettimeofday (64-bit), were not impacted.

  • NOTE: Because microbenchmarks like netperf/uperf, iozone, and fio are designed to stress a specific hardware component or operation, their results may not be generally representative of customer workloads. Some microbenchmarks have shown larger performance impacts.
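As a rough illustration of why syscall-heavy workloads sit at the high end of these ranges, a tight loop that crosses the user/kernel boundary on every iteration can be timed on a patched kernel (a hypothetical sketch; `dd` with 1-byte blocks is just a stand-in for a syscall-bound workload, not one of the benchmarks above):

```shell
#!/bin/sh
# Each 1-byte block costs one read() plus one write() syscall, so this
# loop is dominated by user-to-kernel transitions -- exactly the path
# that the PTI/IBRS mitigations make more expensive.
time dd if=/dev/zero of=/dev/null bs=1 count=100000 2>/dev/null
# Re-run after disabling a mitigation at runtime (as root, where the
# tunables are available) to compare timings on affected hardware.
```

Comparing the elapsed time with the mitigations on and off gives a quick upper-bound feel for the overhead, but real applications batch far more work per syscall, which is why the ISB numbers above are much lower than what raw syscall loops show.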

Because containers are implemented as generic Linux processes, applications deployed in containers incur the same performance impact as those deployed on bare metal. We expect the impact on applications deployed in virtual guests to be higher than bare metal because of the increased frequency of user-to-kernel transitions.

The actual performance impact that customers see may vary considerably based on the nature of their workload, hardware/devices, and system constraints such as whether the workloads are CPU bound or memory bound. If an application is running on a system that has consumed the full capacity of memory and CPU, the overhead of this fix may max out the configuration, resulting in more significant performance degradation. Consequently, the only deterministic way to characterize the impact is to run your workloads in your environment.

Red Hat Enterprise Linux settings for these patches default to maximum security. Recognizing, however, that customers' needs vary, these patches may be enabled or disabled at boot time or at runtime. As a diagnostic approach, some customers may want to measure results on the patched kernel in configurations with and without the CVE patches enabled. In order to facilitate this, the kernel team has added dynamic tunables to enable/disable most of the CVE microcode/security patches through debugfs tunables as described below.
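As a sketch of what such a diagnostic session might look like (assuming the debugfs tunable names documented for the patched RHEL 7 kernels, such as pti_enabled, ibrs_enabled, and ibpb_enabled; check /sys/kernel/debug/x86/ on your own system, and run as root):

```shell
#!/bin/sh
# Show the current state of each mitigation (1 = enabled, 0 = disabled).
# Tunable names are those documented for patched RHEL 7 kernels and may
# differ on other kernel versions.
for f in pti_enabled ibrs_enabled ibpb_enabled; do
    printf '%s: ' "$f"
    cat /sys/kernel/debug/x86/$f
done

# Temporarily disable one mitigation for A/B performance testing.
# This change does not persist across reboots.
echo 0 > /sys/kernel/debug/x86/pti_enabled

# ... run the workload under test here ...

# Re-enable the mitigation afterwards to return to maximum security.
echo 1 > /sys/kernel/debug/x86/pti_enabled
```

debugfs is normally mounted at /sys/kernel/debug on RHEL 7; disabling these tunables leaves the system exposed to the corresponding CVEs, so this is for measurement only.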

Controlling the Performance Impact of Microcode and Security Patches for CVE-2017-5754, CVE-2017-5715, and CVE-2017-5753 Using Red Hat Enterprise Linux Tunables

Red Hat will continue to optimize the patched kernel in future versions of Red Hat Enterprise Linux. We fully expect that hardware vendors will prevent these vulnerabilities in new implementations of silicon/microcode. Meanwhile, Red Hat continues to focus on improving customer application performance by better characterizing relevant workloads and isolating factors that affect performance. As always, our experts will be available for consultation about the specifics of your applications and environments.


This article mentions boot options and sysctl tunables to enable or disable the patches to test performance impacts. However, it does not describe these parameters and does not, as far as I can see, provide a further reference for them.

Hi Floyd,

One option to disable the PTI patches is to add the parameter nopti or pti=off manually to the GRUB boot parameters as a temporary solution, or to add one of them to /etc/default/grub as a permanent solution. But I don't recommend doing that; do it only when it's unavoidable. :)
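A sketch of both approaches on RHEL 7 (assuming a standard grub2 setup; run as root, and note that disabling PTI leaves the system exposed to CVE-2017-5754):

```shell
#!/bin/sh
# Temporary: at the GRUB menu, press 'e' on the boot entry and append
#   nopti
# to the kernel command line; this lasts for one boot only.

# Permanent: append the parameter to GRUB_CMDLINE_LINUX in
# /etc/default/grub, then regenerate the GRUB configuration.
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 nopti"/' /etc/default/grub

grub2-mkconfig -o /boot/grub2/grub.cfg              # BIOS systems
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg   # UEFI systems
```

After the next reboot, the kernel command line in /proc/cmdline should show the added parameter.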


Thanks, Christian. But the article mentioned runtime controls, and I was hoping there would be some explanation of those.


The runtime options were just added. You should see a new link if you refresh. Copied below: https://access.redhat.com/articles/3311301

Best, Nick

I added the line "echo 0 > /sys/kernel/debug/x86/pti_enabled" to /etc/rc.local, but I see "rc.local[xxx]: /etc/rc.local: line xxx: /sys/kernel/debug/x86/pti_enabled: Permission denied" in the message log when booting RHEL 7.

Are there any plans to back-port the PCID support from Linux kernel 4.14 ?

My understanding is that the Red Hat Enterprise Linux 6 and 5 kernels are also affected by the performance impact? Thanks.