RHEL 7 on IBM Power: Taking advantage of TCP large send and receive


Large send offload (LSO) is a technique for increasing the egress throughput of high-bandwidth network connections by reducing CPU overhead. It works by passing a multipacket buffer to the network interface card (NIC); the NIC then splits the buffer into separate packets. The technique is also called TCP segmentation offload (TSO) when applied to TCP, or generic segmentation offload (GSO) more generally. The inbound counterpart of large send offload is large receive offload (LRO). (source: Wikipedia)

In order for this to work, the other network devices — the Ethernet switches through which all traffic flows — all have to agree on the frame size. The server cannot send frames that are larger than the Maximum Transmission Unit (MTU) supported by the switches. (source: Peer Wisdom).

The large send offload option is enabled by default on Ethernet adapters that support it when you are working in dedicated mode. This option improves performance on 10 Gigabit Ethernet and faster adapters for workloads that stream data (such as File Transfer Protocol (FTP), RCP, tape backup, and similar bulk data movement applications). The virtual Ethernet adapter and SEA devices are exceptions: on those, the large send offload option is disabled by default due to interoperability problems with the operating system.
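Whether these offloads are currently active on an interface can be checked with ethtool. A minimal sketch, assuming the interface is named eth0 (substitute your own interface name, e.g. from `ip link`):

```shell
#!/bin/sh
# Show the segmentation/receive-offload state of an interface.
# "eth0" is a placeholder; list your interfaces with `ip link`.
IFACE=${IFACE:-eth0}
if command -v ethtool >/dev/null 2>&1 && ethtool -k "$IFACE" >/dev/null 2>&1; then
    ethtool -k "$IFACE" | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|large-receive-offload'
else
    echo "cannot query $IFACE (interface or ethtool not available)"
fi
```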

Overview of requirements outside of Red Hat Enterprise Linux:

  • IBM Virtual Input-Output Server (VIOS) version or later
  • VIOS contains appropriate NIC resource
  • VIOS has virtual NIC configured as Shared Ethernet Adapter (SEA)
  • VIOS has the "large_receive" and "large_send" features enabled

Requirements related to Red Hat Enterprise Linux:

  • Version: 7.4 or later, 7.3 with kernel 3.10.0-514.10.2 or later, 7.2 with kernel 3.10.0-327.49.2 or later
  • PowerVM Linux LPARs. This feature is not supported by KVM for Power guests.
  • All Linux LPARs on the Central Electronics Complex (CEC) must be configured as below. If some LPARs do not have this configuration, traffic to unconfigured LPARs will be lost. It is not possible to run a mixed environment with some large send/receive and some not.
    • LPARs with Virtual Ethernet network interfaces (ibmveth driver)
    • LPARs have the "options ibmveth old_large_send=1" line in a configuration file, such as /etc/modprobe.d/ibmveth.conf
    • LPARs have GSO and TSO enabled using ethtool, applied automatically at boot as described in the How to make NIC ethtool settings persistent article
    • LPARs do not need GRO or LRO enabled via ethtool; the large receive functionality is implemented in the firmware and driver, separately from the large receive ethtool options
    • The maximum and ideal Maximum Transmission Unit (MTU) between LPARs is 64 KB
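The persistent driver configuration named in the list above amounts to a one-line module option file. A sketch of its contents (the path /etc/modprobe.d/ibmveth.conf is the example file name used above; any file in /etc/modprobe.d/ works):

```
# /etc/modprobe.d/ibmveth.conf
# Load the ibmveth driver with the legacy large-send path enabled.
options ibmveth old_large_send=1
```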

For each connection in the Linux LPARs, enter the following commands:

# rmmod ibmveth                        # unload the ibmveth driver
# modprobe ibmveth old_large_send=1    # reload the driver with large send enabled
# ethtool -K eth0 tso on               # turn on large send (TSO) on the interface

To show large_send and large_receive packet numbers, use the ethtool -S command.
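For example, a sketch that filters the driver statistics for those counters (the exact counter names depend on the ibmveth driver version, and eth0 is a placeholder interface name):

```shell
#!/bin/sh
# Filter ibmveth statistics for large-send / large-receive counters.
# "eth0" is a placeholder; counter names vary by driver version.
IFACE=${IFACE:-eth0}
ethtool -S "$IFACE" 2>/dev/null | grep -i large \
    || echo "no 'large' counters reported for $IFACE"
```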

For more information, see the "Taking advantage of networking large-send large-receive" article, originally published on the Linux on Power Community Wiki (the IBM developerWorks wiki has since been retired).


Platform Large Send Offload (PLSO)

Platform Large Send Offload (PLSO) is available on POWER servers starting with server firmware 840.10 for POWER8 and server firmware 940.02 for POWER9. At a high level, PLSO is a mechanism that observes an LPAR's communication over the virtual Ethernet (ibmveth) and intervenes in the following circumstances to optimize throughput.

The first point of intervention is when the hypervisor detects that both communicating LPARs are able to use large send and large receive but the client LPAR has an MTU of 1500. The hypervisor then uses jumbo frames between the servers for most, though not all, of the communication. Such a connection can achieve around 80% of the throughput of a connection in which the LPARs use MTU=9000 natively.

The second point of intervention is when the communication partner resides on the same server. In this case the Ethernet traffic bypasses the VIOS and most of the Ethernet switch stack and is sent directly to the communication partner. This bypass can improve throughput by up to 200% compared with a normal connection.

For more information on PLSO and configuring the ibmveth module, see https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102502 and the "Network Configuration for HANA Workloads" PDF.