Speculative Execution Exploit Performance Impacts - Describing the performance impacts to security patches for CVE-2017-5754 CVE-2017-5753 and CVE-2017-5715

Summary:

The recent speculative execution CVEs address three potential attacks across a wide variety of architectures and hardware platforms, each requiring slightly different fixes. In many cases, these fixes also require microcode updates from the hardware vendors. Red Hat has delivered updated Red Hat Enterprise Linux kernels that focus on securing customer deployments. The nature of these vulnerabilities and their fixes introduces the possibility of reduced performance on patched systems. The performance impact depends on the hardware and the applications in place.

The attacks target commonly used optimizations that were designed to improve performance; accordingly, correcting the security vulnerabilities comes at a performance price for some workloads. This report from the Red Hat Performance Engineering team describes our research into the performance impact of various microcode and kernel patch combinations. Performance characterizations and development of optimizations will be an ongoing effort that may result in subsequent kernel enhancements and updated reports. In addition, we are actively working with our technology partners to reduce or eliminate these performance impacts as quickly as possible.

Details:

The Red Hat Performance Engineering team characterized application workloads to help guide partners and customers on the potential impact of the fixes supplied to correct CVE-2017-5754, CVE-2017-5753 and CVE-2017-5715, including OEM microcode, Red Hat Enterprise Linux kernel, and virtualization patches. The performance impact of these patches may vary considerably based on workload and hardware configuration. Measurements are reported based on Industry Standard Benchmarks (ISB) representing a set of workloads that most closely mirror common customer deployments.

Red Hat has tested complete solutions, including updated kernels and updated microcode, on variants of the following modern high-volume Intel systems: Haswell, Broadwell, and Skylake. In each instance, there is a performance impact caused by the additional overhead required for security hardening in user-to-kernel and kernel-to-user transitions. The impact varies with workload and with the hardware implementation and configuration. As is typical with performance, the impact can best be characterized as a range of 1-20% for the ISB set of application workloads tested.
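
To illustrate where this overhead comes from, the following is a minimal sketch (not one of the ISB workloads above) that times a tight loop of read() system calls; each call is one user-to-kernel-to-user round trip, so comparing the per-call time with the mitigations enabled and disabled shows the added transition cost. The iteration count and the use of /dev/zero are arbitrary choices for illustration.

    # Minimal sketch: estimate the per-call cost of a user-to-kernel round trip
    # by timing a tight loop of read() calls against /dev/zero. Run it with the
    # mitigations enabled and then disabled and compare the results; the Python
    # interpreter overhead is the same in both runs.
    import os
    import timeit

    ITERATIONS = 1000000

    def per_syscall_seconds(iterations=ITERATIONS):
        fd = os.open("/dev/zero", os.O_RDONLY)
        try:
            elapsed = timeit.timeit(lambda: os.read(fd, 1), number=iterations)
        finally:
            os.close(fd)
        return elapsed / iterations

    if __name__ == "__main__":
        print("approx. %.0f ns per read() round trip" % (per_syscall_seconds() * 1e9))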

To provide more detail, Red Hat’s performance team has categorized the performance results for Red Hat Enterprise Linux 7 (with similar behavior on Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 5) across a wide variety of benchmarks, grouped by performance impact:

  • Measurable: 8-19% - Workloads with highly cached random memory access and buffered I/O, OLTP database workloads, and benchmarks with high kernel-to-user space transitions are impacted between 8-19%. Examples include OLTP workloads (TPC), sysbench, pgbench, netperf (< 256 byte), and fio (random I/O to NVMe).

  • Modest: 3-7% - Database analytics, Decision Support System (DSS), and Java VMs are impacted less than the “Measurable” category. These applications may have significant sequential disk or network traffic, but kernel/device drivers are able to aggregate requests to a moderate level of kernel-to-user transitions. Examples include SPECjbb2005, Queries/Hour, and overall analytic timing (sec).

  • Small: 2-5% - HPC (High Performance Computing) CPU-intensive workloads see only a 2-5% performance impact because jobs run mostly in user space and are scheduled using CPU pinning or NUMA control. Examples include Linpack NxN on x86 and SPECcpu2006.

  • Minimal: Linux accelerator technologies that generally bypass the kernel in favor of direct user-space access are the least affected, with less than 2% overhead measured. Examples tested include DPDK (VsPERF at 64 byte) and OpenOnload (STAC-N). User-space accesses through the vDSO, such as gettimeofday(), are not impacted (see the sketch after this list). We expect similar minimal impact for other offloads.

  • NOTE: Because microbenchmarks like netperf/uperf, iozone, and fio are designed to stress a specific hardware component or operation, their results are not generally representative of customer workloads. Some microbenchmarks have shown a larger performance impact, related to the specific area they stress.
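
The following is a minimal sketch contrasting a time lookup that the C library typically serves from the vDSO (no kernel entry) with a read() that always enters the kernel, which is why vDSO-backed calls fall into the “Minimal” category above. Whether a given call is actually vDSO-backed depends on the kernel and libc in use, so the numbers are illustrative only.

    # Minimal sketch: compare a call that is usually answered from the vDSO with
    # one that always crosses into the kernel. Absolute numbers vary by system.
    import os
    import time
    import timeit

    ITERATIONS = 1000000

    fd = os.open("/dev/zero", os.O_RDONLY)
    try:
        vdso_cost = timeit.timeit(time.time, number=ITERATIONS) / ITERATIONS
        syscall_cost = timeit.timeit(lambda: os.read(fd, 1), number=ITERATIONS) / ITERATIONS
    finally:
        os.close(fd)

    print("time.time() (usually vDSO-backed): ~%.0f ns/call" % (vdso_cost * 1e9))
    print("read() (always a system call):     ~%.0f ns/call" % (syscall_cost * 1e9))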

Because containerized applications are implemented as generic Linux processes, applications deployed in containers incur the same performance impact as those deployed on bare metal. We expect the impact on applications deployed in virtual guests to be higher than on bare metal due to the increased frequency of user-to-kernel transitions. Those results will be available soon in an updated version of this document.

The actual performance impact that customers see may vary considerably based on the nature of their workload, hardware/devices, and system constraints such as whether the workloads are CPU bound or memory bound. If an application is running on a system that has already consumed its full memory and CPU capacity, the overhead of these fixes may push the system past that capacity, resulting in more significant performance degradation. Consequently, the only deterministic way to characterize the impact is to evaluate the actual workloads in the target environment.

Red Hat Enterprise Linux settings for these patches default to maximum security. Recognizing, however, that customers' needs vary, the patches may be enabled or disabled at boot time or at runtime. As a diagnostic approach, some customers may want to measure results on the patched kernel with and without the CVE mitigations enabled. To facilitate this, the kernel team has added dynamic tunables that enable or disable most of the CVE microcode/security mitigations, either directly or through new tuned profiles. Please review the following for details regarding these tunables.
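
As an illustration only, here is a minimal Python sketch that reports, and optionally flips, the runtime mitigation tunables. It assumes the debugfs interface described in the article linked below (/sys/kernel/debug/x86/pti_enabled, ibrs_enabled, and ibpb_enabled); consult that article for the authoritative paths, values, and tuned-profile options for your kernel.

    # Minimal sketch (run as root): report, and optionally set, the runtime
    # mitigation tunables, assuming the debugfs files named below exist on the
    # running kernel. Changes made this way take effect immediately but do not
    # persist across reboots.
    import os
    import sys

    DEBUGFS_X86 = "/sys/kernel/debug/x86"
    TUNABLES = ("pti_enabled", "ibrs_enabled", "ibpb_enabled")

    def show():
        for name in TUNABLES:
            path = os.path.join(DEBUGFS_X86, name)
            if os.path.exists(path):
                with open(path) as f:
                    print("%s: %s" % (name, f.read().strip()))
            else:
                print("%s: not present" % name)

    def set_all(value):
        for name in TUNABLES:
            path = os.path.join(DEBUGFS_X86, name)
            if os.path.exists(path):
                with open(path, "w") as f:
                    f.write(value + "\n")   # "0" disables, "1" enables

    if __name__ == "__main__":
        if len(sys.argv) == 2 and sys.argv[1] in ("0", "1"):
            set_all(sys.argv[1])
        show()

For changes that persist across reboots, use the boot-time options or tuned profiles described in the linked article rather than writing to debugfs.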

Controlling the Performance Impact of Microcode and Security Patches for CVE-2017-5754 CVE-2017-5715 and CVE-2017-5753 using Red Hat Enterprise Linux Tunables

Using the tunables to disable these mitigations will recover much of the lost performance. Even when they are disabled, however, the additional code and the microcode updates may still have a slight performance impact.

Red Hat will continue to optimize the patched kernel in future versions of Red Hat Enterprise Linux. And over time, we fully expect that hardware vendors will prevent these vulnerabilities in new implementations of silicon/microcode. Meanwhile, Red Hat continues to focus on improving customer application performance by better characterizing relevant workloads and isolating factors that affect performance. As always, our experts will be available for consultation about the specifics of your applications and environments.
