Chapter 9. Networking

The networking subsystem comprises a number of different parts with performance-sensitive connections between them. Red Hat Enterprise Linux 7 networking is therefore designed to provide optimal performance for most workloads, and to optimize that performance automatically. As such, it is not usually necessary to tune network performance manually. This chapter discusses further optimizations that can be made to functional networking systems.
Network performance problems are most often the result of hardware malfunction or faulty infrastructure; resolving these issues is beyond the scope of this document.

9.1. Considerations

To make good tuning decisions, you need a thorough understanding of packet reception in Red Hat Enterprise Linux. This section explains how network packets are received and processed, and where potential bottlenecks can occur.
A packet sent to a Red Hat Enterprise Linux system is received by the network interface card (NIC) and placed in either an internal hardware buffer or a ring buffer. The NIC then raises a hardware interrupt request, prompting the kernel to schedule a software interrupt (SoftIRQ) operation to handle the request.
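For example, the current and maximum ring buffer sizes of an interface can be viewed with the ethtool utility (the device name eth0 is a placeholder; run as root):
    ethtool -g eth0    # shows the pre-set maximum and current RX/TX ring sizes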
As part of the software interrupt operation, the packet is transferred from the buffer to the network stack. Depending on the packet and your network configuration, the packet is then forwarded, discarded, or passed to a socket receive queue for an application and then removed from the network stack. This process continues until either there are no packets left in the NIC hardware buffer, or a certain number of packets (specified in /proc/sys/net/core/dev_weight) are transferred.
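A minimal sketch of inspecting and raising this per-poll packet quota, assuming root privileges (the value 128 is illustrative; the kernel default is 64):
    cat /proc/sys/net/core/dev_weight    # current per-device packet quota per SoftIRQ poll
    sysctl -w net.core.dev_weight=128    # illustrative value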
The Red Hat Enterprise Linux Network Performance Tuning Guide, available on the Red Hat Customer Portal, contains information on packet reception in the Linux kernel, and covers the following areas of NIC tuning:
- SoftIRQ misses (netdev budget)
- tuned tuning daemon
- numad NUMA daemon
- CPU power states
- Interrupt balancing
- Pause frames
- Interrupt coalescence
- Adapter queue (netdev backlog)
- Adapter RX and TX buffers
- Adapter TX queue
- Module parameters
- Adapter offloading
- Jumbo Frames
- TCP and UDP protocol tuning
- NUMA locality
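As an illustration of the first of these areas, SoftIRQ misses can be observed in /proc/net/softnet_stat, and the overall SoftIRQ packet budget can be raised through the net.core.netdev_budget sysctl (600 is an illustrative value; the default is 300):
    cat /proc/net/softnet_stat              # 3rd column counts polls that exhausted their budget
    sysctl -w net.core.netdev_budget=600    # illustrative value; run as root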

9.1.1. Before You Tune

Network performance problems are most often the result of hardware malfunction or faulty infrastructure. Red Hat highly recommends verifying that your hardware and infrastructure are working as expected before beginning to tune the network stack.
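For example, confirming that a link has negotiated the expected speed and duplex, and that the driver is not already reporting errors, might look like the following (eth0 is a placeholder; statistic names vary by driver):
    ethtool eth0 | grep -E 'Speed|Duplex|Link detected'    # verify negotiated link settings
    ethtool -S eth0 | grep -iE 'err|drop|fifo'             # scan driver counters for problems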

9.1.2. Bottlenecks in Packet Reception

While the network stack is largely self-optimizing, there are a number of points during network packet processing that can become bottlenecks and reduce performance.
The NIC hardware buffer or ring buffer
The hardware buffer might be a bottleneck if a large number of packets are being dropped. For information about monitoring your system for dropped packets, see Section 9.2.4, “ethtool”.
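A sketch of a typical diagnosis and mitigation, assuming an interface named eth0 and a driver that supports ring resizing:
    ethtool -S eth0 | grep -i drop    # look for rising drop counters such as rx_fifo_errors
    ethtool -g eth0                   # compare the current RX ring size to the pre-set maximum
    ethtool -G eth0 rx 4096           # illustrative value; must not exceed the pre-set maximum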
The hardware or software interrupt queues
Interrupts can increase latency and processor contention. For information on how interrupts are handled by the processor, see Section 6.1.3, “Interrupt Request (IRQ) Handling”. For information on how to monitor interrupt handling in your system, see Section 6.2.3, “/proc/interrupts”. For configuration options that affect interrupt handling, see Section 6.3.7, “Setting Interrupt Affinity on AMD64 and Intel 64”.
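For instance, the IRQ lines assigned to a NIC can be located in /proc/interrupts and pinned to a specific CPU through the corresponding smp_affinity bitmask (the IRQ number 30 and the mask 2, meaning CPU 1, are illustrative):
    grep eth0 /proc/interrupts            # find the IRQ number(s) used by the interface
    echo 2 > /proc/irq/30/smp_affinity    # pin IRQ 30 to CPU 1; run as root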
The socket receive queue for the application
A bottleneck in an application's receive queue is indicated by a large number of packets that are not copied to the requesting application, or by an increase in UDP input errors (InErrors) in /proc/net/snmp. For information about monitoring your system for these errors, see Section 9.2.1, “ss” and Section 9.2.5, “/proc/net/snmp”.
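A minimal check, as a sketch: list UDP sockets with their receive queues and watch the InErrors counter (the second Udp: line in /proc/net/snmp holds the values for the headers in the first):
    ss -ulnp                     # a persistently non-zero Recv-Q means the application cannot keep up
    grep Udp: /proc/net/snmp     # compare InErrors across samples; a rising value indicates drops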