Difference in CPU USE% between RHEL8.0GA and RHEL8.1BETA

Latest response

On RHEL8.1BETA, start an iperf3 server with iperf3 -s -p 12345, then run the client with iperf3 -c xxx.xxx.xxx.xxx -p 12345 -l 256 -P 1 -i 2 -t 50, and run the same commands on RHEL8.0GA. The user-space CPU usage (USE%) of the iperf3 process on RHEL8.1BETA turned out to be more than twice that on RHEL8.0GA.
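To make the comparison concrete, the per-process user CPU can be sampled with pidstat (from the sysstat package) while the client runs. A minimal sketch, assuming the server process is named iperf3; the 2-second interval mirrors the -i 2 reporting interval above:

```shell
# Sample the iperf3 server's per-process CPU; the %usr column is the
# user-space share being compared between the two releases.
# pidstat comes from the sysstat package; print a hint if it cannot run.
IPERF_PID=$(pgrep -x iperf3 | head -n1)
if [ -n "$IPERF_PID" ] && command -v pidstat >/dev/null 2>&1; then
    pidstat -u -p "$IPERF_PID" 2 5   # 2-second interval, 5 samples
else
    echo "start the iperf3 server (and install sysstat) before sampling"
fi
```

Running this on both releases during identical client runs gives directly comparable %usr figures.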

We would like the user-space CPU usage of the iperf3 process on RHEL8.1BETA to match that on RHEL8.0GA.

Our investigation suggests the difference may come from the kernel versions: RHEL8.1BETA ships kernel 4.18.0-107, while RHEL8.0GA ships kernel 4.18.0-100. However, what actually causes the difference is still unknown.

Thanks.

Responses

It could be a difference in CPU time accounting, or it could be a change that actually affects the amount of work the system does.

The -l 256 really isn't going to help you. That's a very small amount of data to read per call from the socket buffer, and it forces a context switch for every read, which is a fairly expensive operation.
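Some rough arithmetic shows why the small read length dominates: the number of read()/write() calls needed to move 1 GiB of payload at 256 bytes per call, versus iperf3's documented 128 KiB default length for TCP. Each call is a user/kernel transition.

```shell
# Syscalls needed to move 1 GiB of payload at two read lengths.
GIB=$((1024 * 1024 * 1024))
SMALL=$((GIB / 256))            # the -l 256 from the test above
DEFAULT=$((GIB / (128 * 1024))) # iperf3's default TCP length
echo "calls with -l 256:  $SMALL"    # 4194304
echo "calls with -l 128K: $DEFAULT"  # 8192
```

That is roughly 500 times more transitions for the same payload, so any per-syscall cost difference between the two kernels is magnified by the same factor.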

It's possible there could be a change relating to syscall cost which accounts for what you see. Some of the security vulnerability mitigations about speculative execution have this sort of pattern. The solution there is not to make so many context switches.
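Two quick checks, one per hypothesis: the kernel's mitigation state is exposed under sysfs, and the context-switch rate of the iperf3 process can be counted with perf stat. A sketch, assuming the server process is named iperf3; the 5-second window is an arbitrary choice:

```shell
# 1) Which speculative-execution mitigations each kernel applies
#    (run on both releases and compare line by line):
if [ -d /sys/devices/system/cpu/vulnerabilities ]; then
    grep -H . /sys/devices/system/cpu/vulnerabilities/*
fi
# 2) How many context switches the iperf3 server makes while the test
#    runs (requires perf and a running server; skipped otherwise):
IPERF_PID=$(pgrep -x iperf3 | head -n1)
if [ -n "$IPERF_PID" ] && command -v perf >/dev/null 2>&1; then
    perf stat -e context-switches -p "$IPERF_PID" -- sleep 5
fi
```

If the mitigation lines differ between the kernels while the context-switch counts are similar, a higher per-switch cost is the more likely explanation.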

If your production application doesn't read its buffer in 256-byte lengths, then there's not really any point running iperf like that either.

If your production application does read its buffer in 256-byte lengths, then reconsider that behaviour if you can.

If you install the debug symbols for iperf and all its related userspace libraries, you can probably account for the CPU time difference with perf, as described at:

However, we don't have a knowledgebase page teaching you how to analyse perf data.
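As a starting point, such a perf session might look like the following sketch, assuming the server process is named iperf3 and the debuginfo packages are installed; the 10-second window is arbitrary:

```shell
# Profile the running iperf3 server while the client test is in flight,
# then compare the two releases' reports. -g records call graphs,
# -p attaches to an existing PID.
IPERF_PID=$(pgrep -x iperf3 | head -n1)
if [ -n "$IPERF_PID" ] && command -v perf >/dev/null 2>&1; then
    perf record -g -p "$IPERF_PID" -o iperf3.perf.data -- sleep 10
    perf report -i iperf3.perf.data --stdio | head -n 40
else
    echo "install perf and start the iperf3 server before profiling"
fi
```

Comparing the top entries of the two reports should show which functions (or which syscall paths) account for the extra user CPU time.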

If you have access to an account with support entitlements, feel free to open a support case to investigate further with us.