Chapter 29. Configuring ethtool settings in NetworkManager connection profiles

NetworkManager can configure certain network driver and hardware settings persistently. Unlike settings that you apply with the ethtool utility, settings stored in a connection profile are not lost after a reboot.

You can set the following ethtool settings in NetworkManager connection profiles:

Offload features
Network interface controllers can use the TCP offload engine (TOE) to offload the processing of certain operations to the network controller. This improves the network throughput.
Interrupt coalesce settings
By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load and maximizes the throughput.
Ring buffers
These buffers store incoming and outgoing network packets. You can increase the ring buffer sizes to reduce a high packet drop rate.
Channel settings

A network interface manages its number of channels together with its hardware settings and network driver. All devices associated with a network interface communicate with each other through interrupt requests (IRQs). Each device queue holds pending IRQs and communicates over a data line known as a channel. Queue types are associated with specific channel types. These channel types include:

  • rx for receiving queues
  • tx for transmit queues
  • other for link interrupts or single root input/output virtualization (SR-IOV) coordination
  • combined for hardware capacity-based multipurpose channels
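
You can inspect the current and supported values of each of these categories on a device before you change them in a connection profile. The following commands are a minimal sketch, assuming the example device name enp1s0; the exact output depends on the network card and the driver. ethtool -k (--show-features) lists offload features, ethtool -c (--show-coalesce) lists interrupt coalesce settings, ethtool -g (--show-ring) lists ring buffer sizes, and ethtool -l (--show-channels) lists channel settings:

    # ethtool -k enp1s0
    # ethtool -c enp1s0
    # ethtool -g enp1s0
    # ethtool -l enp1s0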

29.1. Configuring an ethtool offload feature by using nmcli

You can use NetworkManager to enable and disable ethtool offload features in a connection profile.

Procedure

  1. For example, to enable the RX offload feature and disable TX offload in the enp1s0 connection profile, enter:

    # nmcli con modify enp1s0 ethtool.feature-rx on ethtool.feature-tx off

    This command explicitly enables RX offload and disables TX offload.

  2. To remove the setting of an offload feature that you previously enabled or disabled, set the feature’s parameter to a null value. For example, to remove the configuration for TX offload, enter:

    # nmcli con modify enp1s0 ethtool.feature-tx ""
  3. Reactivate the network profile:

    # nmcli connection up enp1s0

Verification

  • Use the ethtool -k command to display the current offload features of a network device:

    # ethtool -k network_device
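
    If the output is long, you can filter it. As a sketch, assuming the device name enp1s0 and that the feature-rx and feature-tx aliases correspond to the checksum offloads that ethtool reports as rx-checksumming and tx-checksumming:

    # ethtool -k enp1s0 | grep checksumming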

Additional resources

  • nm-settings-nmcli(5) man page

29.2. Configuring an ethtool offload feature by using the network RHEL system role

You can use the network RHEL system role to configure ethtool features of a NetworkManager connection.

Important

When you run a play that uses the network RHEL system role, the role overrides an existing connection profile with the same name if the setting values do not match the values specified in the play. To prevent resetting these values to their defaults, always specify the whole configuration of the network connection profile in the play, even if parts of the configuration, for example the IP configuration, already exist.

Prerequisites

  • You have prepared the control node and the managed nodes.
  • You are logged in to the control node as a user who can run playbooks on the managed nodes.
  • The account you use to connect to the managed nodes has sudo permissions on them.

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Configure the network
      hosts: managed-node-01.example.com
      tasks:
        - name: Configure an Ethernet connection with ethtool features
          ansible.builtin.include_role:
            name: rhel-system-roles.network
          vars:
            network_connections:
              - name: enp1s0
                type: ethernet
                autoconnect: yes
                ip:
                  address:
                    - 198.51.100.20/24
                    - 2001:db8:1::1/64
                  gateway4: 198.51.100.254
                  gateway6: 2001:db8:1::fffe
                  dns:
                    - 198.51.100.200
                    - 2001:db8:1::ffbb
                  dns_search:
                    - example.com
                ethtool:
                  features:
                    gro: "no"
                    gso: "yes"
                    tx_sctp_segmentation: "no"
                state: up

    This playbook creates the enp1s0 connection profile with the following settings, or updates it if the profile already exists:

    • A static IPv4 address - 198.51.100.20 with a /24 subnet mask
    • A static IPv6 address - 2001:db8:1::1 with a /64 subnet mask
    • An IPv4 default gateway - 198.51.100.254
    • An IPv6 default gateway - 2001:db8:1::fffe
    • An IPv4 DNS server - 198.51.100.200
    • An IPv6 DNS server - 2001:db8:1::ffbb
    • A DNS search domain - example.com
    • ethtool features:

      • Generic receive offload (GRO): disabled
      • Generic segmentation offload (GSO): enabled
      • TX stream control transmission protocol (SCTP) segmentation: disabled
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
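
    After the play completes, you can check the resulting feature state from the control node, for example with an Ansible ad hoc command. This is a sketch that assumes the managed node and device names used in the playbook:

    $ ansible managed-node-01.example.com -m ansible.builtin.command -a 'ethtool -k enp1s0'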

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.network/README.md file
  • /usr/share/doc/rhel-system-roles/network/ directory

29.3. Configuring ethtool coalesce settings by using nmcli

You can use NetworkManager to set ethtool coalesce settings in connection profiles.

Procedure

  1. For example, to set the maximum number of received packets to delay an RX interrupt to 128 in the enp1s0 connection profile, enter:

    # nmcli connection modify enp1s0 ethtool.coalesce-rx-frames 128
  2. To remove a coalesce setting, set it to a null value. For example, to remove the ethtool.coalesce-rx-frames setting, enter:

    # nmcli connection modify enp1s0 ethtool.coalesce-rx-frames ""
  3. Reactivate the network profile:

    # nmcli connection up enp1s0

Verification

  • Use the ethtool -c command to display the current coalesce settings of a network device:

    # ethtool -c network_device
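
    To check only the value that you changed, you can filter the output. A minimal sketch, assuming the device name enp1s0; the exact parameter names depend on the driver:

    # ethtool -c enp1s0 | grep '^rx-frames:'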

Additional resources

  • nm-settings-nmcli(5) man page

29.4. Configuring ethtool coalesce settings by using the network RHEL system role

You can use the network RHEL system role to configure ethtool coalesce settings of a NetworkManager connection.

Important

When you run a play that uses the network RHEL system role, the role overrides an existing connection profile with the same name if the setting values do not match the values specified in the play. To prevent resetting these values to their defaults, always specify the whole configuration of the network connection profile in the play, even if parts of the configuration, for example the IP configuration, already exist.

Prerequisites

  • You have prepared the control node and the managed nodes.
  • You are logged in to the control node as a user who can run playbooks on the managed nodes.
  • The account you use to connect to the managed nodes has sudo permissions on them.

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Configure the network
      hosts: managed-node-01.example.com
      tasks:
        - name: Configure an Ethernet connection with ethtool coalesce settings
          ansible.builtin.include_role:
            name: rhel-system-roles.network
          vars:
            network_connections:
              - name: enp1s0
                type: ethernet
                autoconnect: yes
                ip:
                  address:
                    - 198.51.100.20/24
                    - 2001:db8:1::1/64
                  gateway4: 198.51.100.254
                  gateway6: 2001:db8:1::fffe
                  dns:
                    - 198.51.100.200
                    - 2001:db8:1::ffbb
                  dns_search:
                    - example.com
                ethtool:
                  coalesce:
                    rx_frames: 128
                    tx_frames: 128
                state: up

    This playbook creates the enp1s0 connection profile with the following settings, or updates it if the profile already exists:

    • A static IPv4 address - 198.51.100.20 with a /24 subnet mask
    • A static IPv6 address - 2001:db8:1::1 with a /64 subnet mask
    • An IPv4 default gateway - 198.51.100.254
    • An IPv6 default gateway - 2001:db8:1::fffe
    • An IPv4 DNS server - 198.51.100.200
    • An IPv6 DNS server - 2001:db8:1::ffbb
    • A DNS search domain - example.com
    • ethtool coalesce settings:

      • RX frames: 128
      • TX frames: 128
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
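
    After the play completes, you can check the coalesce settings from the control node, for example with an Ansible ad hoc command. This is a sketch that assumes the managed node and device names used in the playbook:

    $ ansible managed-node-01.example.com -m ansible.builtin.command -a 'ethtool -c enp1s0'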

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.network/README.md file
  • /usr/share/doc/rhel-system-roles/network/ directory

29.5. Increasing the ring buffer size to reduce a high packet drop rate by using nmcli

Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.

Receive ring buffers are shared between the device driver and the network interface controller (NIC). The card assigns a transmit (TX) and a receive (RX) ring buffer. As the name implies, the ring buffer is a circular buffer where an overflow overwrites existing data. There are two ways to move data from the NIC to the kernel: hardware interrupts and software interrupts, also called SoftIRQs.

The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket.

The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.

Procedure

  1. Display the packet drop statistics of the interface:

    # ethtool -S enp1s0
        ...
        rx_queue_0_drops: 97326
        rx_queue_1_drops: 63783
        ...

    Note that the output of the command depends on the network card and the driver.

    High values in discard or drop counters indicate that the available buffer fills up faster than the kernel can process the packets. Increasing the ring buffers can help to avoid such loss.

  2. Display the maximum ring buffer sizes:

    # ethtool -g enp1s0
     Ring parameters for enp1s0:
     Pre-set maximums:
     RX:             4096
     RX Mini:        0
     RX Jumbo:       16320
     TX:             4096
     Current hardware settings:
     RX:             255
     RX Mini:        0
     RX Jumbo:       0
     TX:             255

    If the values in the Pre-set maximums section are higher than in the Current hardware settings section, you can change the settings in the next steps.

  3. Identify the NetworkManager connection profile that uses the interface:

    # nmcli connection show
    NAME                UUID                                  TYPE      DEVICE
    Example-Connection  a5eb6490-cc20-3668-81f8-0314a27f3f75  ethernet  enp1s0
  4. Update the connection profile, and increase the ring buffers:

    • To increase the RX ring buffer, enter:

      # nmcli connection modify Example-Connection ethtool.ring-rx 4096
    • To increase the TX ring buffer, enter:

      # nmcli connection modify Example-Connection ethtool.ring-tx 4096
  5. Reactivate the NetworkManager connection:

    # nmcli connection up Example-Connection
    Important

    Depending on the driver that your NIC uses, changing the ring buffer settings can briefly interrupt the network connection.
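
Verification

  • To confirm that the new sizes are active and to check whether the drop counters keep increasing, display the ring parameters and the statistics again. A minimal sketch, assuming the interface name enp1s0; the counter names depend on the driver:

    # ethtool -g enp1s0
    # ethtool -S enp1s0 | grep -i drop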

29.6. Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role

Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.

Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a transmit (TX) and a receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from the NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs.

The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket.

The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.

Important

When you run a play that uses the network RHEL system role, the role overrides an existing connection profile with the same name if the setting values do not match the values specified in the play. To prevent resetting these values to their defaults, always specify the whole configuration of the network connection profile in the play, even if parts of the configuration, for example the IP configuration, already exist.

Prerequisites

  • You have prepared the control node and the managed nodes.
  • You are logged in to the control node as a user who can run playbooks on the managed nodes.
  • The account you use to connect to the managed nodes has sudo permissions on them.
  • You know the maximum ring buffer sizes that the device supports.

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Configure the network
      hosts: managed-node-01.example.com
      tasks:
        - name: Configure an Ethernet connection with increased ring buffer sizes
          ansible.builtin.include_role:
            name: rhel-system-roles.network
          vars:
            network_connections:
              - name: enp1s0
                type: ethernet
                autoconnect: yes
                ip:
                  address:
                    - 198.51.100.20/24
                    - 2001:db8:1::1/64
                  gateway4: 198.51.100.254
                  gateway6: 2001:db8:1::fffe
                  dns:
                    - 198.51.100.200
                    - 2001:db8:1::ffbb
                  dns_search:
                    - example.com
                ethtool:
                  ring:
                    rx: 4096
                    tx: 4096
                state: up

    This playbook creates the enp1s0 connection profile with the following settings, or updates it if the profile already exists:

    • A static IPv4 address - 198.51.100.20 with a /24 subnet mask
    • A static IPv6 address - 2001:db8:1::1 with a /64 subnet mask
    • An IPv4 default gateway - 198.51.100.254
    • An IPv6 default gateway - 2001:db8:1::fffe
    • An IPv4 DNS server - 198.51.100.200
    • An IPv6 DNS server - 2001:db8:1::ffbb
    • A DNS search domain - example.com
    • Maximum number of ring buffer entries:

      • Receive (RX): 4096
      • Transmit (TX): 4096
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
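
    After the play completes, you can check the ring buffer sizes from the control node, for example with an Ansible ad hoc command. This is a sketch that assumes the managed node and device names used in the playbook:

    $ ansible managed-node-01.example.com -m ansible.builtin.command -a 'ethtool -g enp1s0'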

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.network/README.md file
  • /usr/share/doc/rhel-system-roles/network/ directory

29.7. Configuring ethtool channel settings by using nmcli

By using NetworkManager, you can manage network devices and connections, including the channel settings of a network interface, persistently in connection profiles. Channels are the IRQ-based data lines over which the device queues communicate, and you can use the ethtool utility to display the channel configuration that the hardware supports and currently uses.

Procedure

  1. Display the channels associated with a network device:

    # ethtool --show-channels enp1s0
    Channel parameters for enp1s0:
    Pre-set maximums:
    RX:        4
    TX:        3
    Other:     10
    Combined:  63

    Current hardware settings:
    RX:        1
    TX:        1
    Other:     1
    Combined:  1
  2. Update the channel settings of a network interface:

    # nmcli connection modify enp1s0 ethtool.channels-rx 4 ethtool.channels-tx 3 ethtool.channels-other 9 ethtool.channels-combined 50
  3. Reactivate the network profile:

    # nmcli connection up enp1s0

Verification

  • Check the updated channel settings associated with the network device:

    # ethtool --show-channels enp1s0
    Channel parameters for enp1s0:
    Pre-set maximums:
    RX:        4
    TX:        3
    Other:     10
    Combined:  63

    Current hardware settings:
    RX:        4
    TX:        3
    Other:     9
    Combined:  50
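
    To compare the hardware state with the values stored in the connection profile, you can also query the profile directly. A minimal sketch, assuming the profile name enp1s0:

    # nmcli -g ethtool.channels-rx,ethtool.channels-tx,ethtool.channels-other,ethtool.channels-combined connection show enp1s0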

Additional resources

  • nm-settings-nmcli(5) man page