Network performance tuning - FAQ


*Why are you interested in network performance tuning?
For years, I have been supporting customers in different industries, and I have come to see that Linux is now a mature operating system. However, tuning its performance to meet business requirements is still a pain point. Enterprises, especially in the finance sector, are quite sensitive to performance issues, and I started paying attention to these issues in order to support them. Network performance tuning is a wide topic and I still have much to learn; I hope to share my experience with it through this opportunity.

 

*Where is a good place to start with network performance tuning?
Network performance obviously depends on the underlying hardware and system environment, because it is very hard to overcome a hardware limit by tuning from the software side. Once a proper hardware system is prepared, we can start to tune the system. Power management features provided by hardware devices trade performance for energy savings, so select the 'Maximum Performance' profile in the system firmware. In addition, Red Hat Enterprise Linux 6 has enhanced power management features, including a tickless kernel. I recommend disabling those features by adding the 'processor.max_cstate=1' and 'nohz=off' kernel parameters. For more information about the tickless kernel, refer to http://red.ht/SyHwAZ
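On Red Hat Enterprise Linux 6, these parameters are appended to the kernel line in the GRUB configuration. A sketch of what that looks like (the kernel version and root device below are illustrative, not from any particular system):

```
# /boot/grub/grub.conf -- append the parameters to the existing kernel line
# (kernel version and root device here are illustrative examples)
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root processor.max_cstate=1 nohz=off
```

The change takes effect after a reboot; you can confirm the running parameters with `cat /proc/cmdline`.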

 

*Are there tuning options for TCP network throughput?
TCP network throughput is limited by the congestion control and flow control mechanisms. If incoming packets arrive much faster than the receiver can process them, the receiver eventually exceeds its capacity and starts to drop packets; network bandwidth is then wasted on retransmissions. On a long, slow network there is another performance issue: it takes a long time to get acknowledgements back from the receiver. The TCP receive window exists for these situations. Its size is automatically tuned, up to a maximum of 65,535 bytes; with the TCP window scaling option, that limit can be extended further.
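The effect of the 65,535-byte cap can be seen with a bandwidth-delay product calculation. A minimal sketch (the link speed and round-trip time are example values, not measurements):

```python
# Bandwidth-delay product (BDP): the amount of data that must be "in flight"
# to keep a link fully utilized. If the BDP exceeds the receive window,
# the sender stalls waiting for acknowledgements and throughput suffers.
def bdp_bytes(bandwidth_bps, rtt_seconds):
    return bandwidth_bps * rtt_seconds / 8

# Example: 1 Gbit/s link with a 100 ms round-trip time (a "long fat network")
bdp = bdp_bytes(1_000_000_000, 0.100)   # bytes needed in flight
window_limit = 65_535                    # max receive window without scaling

# Throughput achievable with an unscaled window: window size / RTT
throughput_bps = window_limit * 8 / 0.100
print(f"BDP: {bdp:,.0f} bytes; a 64 KB window caps throughput at "
      f"{throughput_bps / 1_000_000:.1f} Mbit/s")
```

On this example link the BDP is 12.5 MB, so an unscaled 64 KB window limits throughput to roughly 5 Mbit/s, which is why window scaling matters on high-latency paths.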

 

*How can I determine the TCP initial window size and scaling option?
The TCP initial window size is important for TCP network throughput because of slow start, TCP's congestion control strategy. If your network bandwidth is large enough and the network is used only by your systems, you can use a larger TCP initial window. Many factors determine the TCP initial window size, and knowing them is one of the keys to tuning TCP network throughput; the net.ipv4.tcp_rmem and net.ipv4.tcp_window_scaling options in particular have a large effect. http://red.ht/SyISM3 has a detailed explanation.
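As a sketch, the relevant settings might look like this in /etc/sysctl.conf (the buffer sizes are example values, not recommendations; size tcp_rmem for your own network's bandwidth-delay product):

```
# /etc/sysctl.conf -- example values only
net.ipv4.tcp_window_scaling = 1
# min, default, max receive buffer in bytes
net.ipv4.tcp_rmem = 4096 87380 16777216
```

Apply the changes with `sysctl -p` and verify them with `sysctl net.ipv4.tcp_rmem`.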

 

*How can I improve UDP throughput?
UDP is commonly used for heavy network loads such as multimedia streaming services, and it is prone to packet drop and buffer overrun issues. To avoid those performance problems: 1) use an Ethernet device that supports a multiqueue feature, 2) increase the size of the NIC hardware buffer with ethtool, and 3) increase the UDP buffer size with net.core.rmem_default. The Red Hat Enterprise Linux 6 Performance Tuning Guide explains more about the NIC hardware buffer: http://red.ht/SyJlhi. This knowledge article describes how to increase the UDP buffer size: http://red.ht/VA5DEO
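Besides raising the system-wide default with net.core.rmem_default, an application can request a larger receive buffer per socket with SO_RCVBUF. A minimal sketch:

```python
import socket

# Request a 1 MiB receive buffer on a UDP socket. The Linux kernel doubles
# the requested value (to account for bookkeeping overhead) and caps it at
# net.core.rmem_max, so read the setting back to see what was granted.
requested = 1 << 20
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested {requested} bytes, kernel granted {granted}")
sock.close()
```

The NIC hardware ring buffer is adjusted separately, e.g. `ethtool -g eth0` to view the current and maximum sizes and `ethtool -G eth0 rx 4096` to change them (the interface name and size here are illustrative).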

 

*What tuning options can I use to reduce network latency?
Generally, TSO (TCP Segmentation Offload) and GSO (Generic Segmentation Offload) are very useful for improving network throughput, but they can introduce delay in some cases. In those cases, disabling these features with ethtool can reduce network latency. To use network bandwidth efficiently, TCP uses Nagle's algorithm to collect small outgoing packets and send them all at once; however, this too can affect network latency. If you need to send a packet immediately regardless of its size, enable TCP_NODELAY on the socket. For more information about this, refer to http://red.ht/SyJRvS
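Disabling Nagle's algorithm is a per-socket setting. A minimal sketch:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, so small writes are
# sent immediately instead of being coalesced into larger segments.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify: a nonzero value means Nagle's algorithm is off for this socket.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(f"TCP_NODELAY is {'on' if nodelay else 'off'}")
sock.close()
```

The offload features can be inspected with `ethtool -k eth0` and disabled with `ethtool -K eth0 tso off gso off` (the interface name is illustrative).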

 

*Do you have any other suggestions regarding network performance tuning?
Please don't assume every setting will work for you. These options need to be applied carefully, because almost any of them can have a negative effect depending on your environment and workload. Make sure to carefully consider all of this information.

Responses

Nice content, thanks Rogan.

 

We also have the following article, which gives a very brief overview of calculating the correct buffer size for best throughput over a WAN connection:

 

 How do I tune RHEL for better TCP performance over a WAN connection?

 https://access.redhat.com/knowledge/solutions/168483