RHEL 7 on IBM Power: Taking advantage of TCP large send and receive

Large send offload (LSO) is a technique for increasing the egress throughput of high-bandwidth network connections by reducing CPU overhead. It works by passing a multipacket buffer to the network interface card (NIC); the NIC then splits the buffer into separate packets. The technique is also called TCP segmentation offload (TSO) when applied to TCP, or generic segmentation offload (GSO). The inbound counterpart of large send offload is large receive offload (LRO). (source: Wikipedia).
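
On Linux, these offloads correspond to the ethtool features tcp-segmentation-offload (TSO), generic-segmentation-offload (GSO), generic-receive-offload (GRO), and large-receive-offload (LRO). Their current state can be checked per interface; for example, with eth0 as a placeholder interface name:

# ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload|large-receive-offload'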

For this to work, the other network devices in the path (the Ethernet switches through which all traffic flows) have to agree on the frame size. The server cannot send frames that are larger than the Maximum Transmission Unit (MTU) supported by the switches. (source: Peer Wisdom).
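
The current MTU of a Linux interface can be checked, and changed if required, with the ip command. The interface name eth0 and the 9000-byte value below are only illustrative; the value must match what the switches support:

# ip link show eth0                # the current MTU is shown in the output
# ip link set dev eth0 mtu 9000    # example: set a 9000-byte (jumbo frame) MTU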

The large send offload option is enabled by default on Ethernet adapters that support it when they operate in dedicated mode. This option improves performance on 10 Gigabit Ethernet and faster adapters for workloads that stream bulk data, such as File Transfer Protocol (FTP), RCP, tape backup, and similar data movement applications. The virtual Ethernet adapter and Shared Ethernet Adapter (SEA) devices are exceptions: on these, the large send offload option is disabled by default because of interoperability problems with the operating system.

Overview of requirements outside of Red Hat Enterprise Linux:

  • IBM Virtual Input-Output Server (VIOS) version 2.2.5.20 or later
  • VIOS contains appropriate NIC resource
  • VIOS has virtual NIC configured as Shared Ethernet Adapter (SEA)
  • VIOS has the "large_receive" and "large_send" features enabled

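On the VIOS side, large send and large receive are attributes of the SEA device. The following is a sketch of checking and enabling them from the padmin shell; ent5 is a placeholder for the SEA device name, and attribute names can vary between VIOS levels, so verify them with lsdev first:

$ lsdev -dev ent5 -attr                      # list the SEA attributes and their current values
$ chdev -dev ent5 -attr large_receive=yes    # enable large receive on the SEA
$ chdev -dev ent5 -attr largesend=1          # enable large send on the SEA
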
Requirements related to Red Hat Enterprise Linux:

  • Version: 7.4 or later; 7.3 with kernel 3.10.0-514.10.2 or later; or 7.2 with kernel 3.10.0-327.49.2 or later
  • PowerVM Linux LPARs. This feature is not supported by KVM for Power guests.
  • All Linux LPARs on the Central Electronics Complex (CEC) must be configured as described below. If some LPARs do not have this configuration, traffic to those LPARs will be lost; it is not possible to run a mixed environment in which some LPARs use large send/receive and others do not.
    • LPARs with Virtual Ethernet network interfaces (ibmveth driver)
    • LPARs have the "options ibmveth old_large_send=1" line in a configuration file, such as /etc/modprobe.d/ibmveth.conf
    • LPARs have GSO and TSO enabled using ethtool, applied automatically at boot as described in the How to make NIC ethtool settings persistent article
    • LPARs do not need GRO or LRO enabled via ethtool; the large receive functionality is handled in the firmware and driver, separately from the large receive ethtool options
    • The maximum and ideal Maximum Transmission Unit (MTU) between LPARs is 64 KB

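Before changing anything, an LPAR can be checked against these requirements with a few commands; eth0 is a placeholder for the virtual Ethernet interface:

# uname -r                                   # confirm the kernel meets the minimum version listed above
# ethtool -i eth0 | grep driver              # confirm the interface is driven by ibmveth
# modinfo ibmveth | grep -i old_large_send   # confirm the driver provides the old_large_send parameter
# ethtool -k eth0 | grep segmentation        # check the current TSO/GSO state
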
For each connection in the Linux LPARs, enter the following commands:

# rmmod ibmveth                          # unload the ibmveth driver
# modprobe ibmveth old_large_send=1      # reload the driver with old_large_send enabled
# ethtool -K eth0 tso on                 # turn on large send (TSO) on the virtual Ethernet interface
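
To make these settings persistent across reboots, the module option goes in /etc/modprobe.d/ibmveth.conf (as listed in the requirements above) and the ethtool setting is applied at boot as described in the persistence article referenced above. The following is a minimal sketch for an initscripts-managed eth0; the ETHTOOL_OPTS approach is one of several options, and NetworkManager-managed interfaces use a dispatcher script instead:

# cat /etc/modprobe.d/ibmveth.conf
options ibmveth old_large_send=1

# grep ETHTOOL_OPTS /etc/sysconfig/network-scripts/ifcfg-eth0
ETHTOOL_OPTS="-K eth0 tso on"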

To show large_send and large_receive packet numbers, use the ethtool -S command.
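
For example, the relevant counters can be filtered out of the per-interface statistics; eth0 is a placeholder, and the exact counter names depend on the driver version:

# ethtool -S eth0 | grep -i large    # show the large send / large receive packet counters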

For more information, see the Taking advantage of networking large-send large-receive article on The Linux on Power Community Wiki.
