Chapter 8. Configure Network Teaming

8.1. Understanding Network Teaming

The combining or aggregating of network links to provide a logical link with higher throughput, or to provide redundancy, is known by many names, for example channel bonding, Ethernet bonding, port trunking, channel teaming, NIC teaming, or link aggregation. This concept as originally implemented in the Linux kernel is widely referred to as bonding. The term Network Teaming has been chosen to refer to this new implementation of the concept. The existing bonding driver is unaffected; Network Teaming is offered as an alternative and does not replace bonding in Red Hat Enterprise Linux 7.

Note

The Mode 4 Link Aggregation Control Protocol (LACP) teaming mode requires configuring the switch to aggregate the links. For more details, see https://www.kernel.org/doc/Documentation/networking/bonding.txt
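As an illustration only, a minimal teamd configuration that selects the lacp runner might look like the following sketch. The device name team0 and the port names em1 and em2 are placeholders, not part of any default configuration:

    {
        "device": "team0",
        "runner": {
            "name": "lacp",
            "active": true,
            "fast_rate": true
        },
        "link_watch": { "name": "ethtool" },
        "ports": { "em1": {}, "em2": {} }
    }

Even with such a configuration in place, the links will not aggregate until the corresponding switch ports are configured for LACP.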
Network Teaming, or Team, is designed to implement the concept in a different way by providing a small kernel driver to implement the fast handling of packet flows, and various user-space applications to do everything else in user space. The driver has an Application Programming Interface (API), referred to as Team Netlink API, which implements Netlink communications. User-space applications can use this API to communicate with the driver. A library, referred to as libteam, has been provided to do user-space wrapping of Team Netlink communications and RT Netlink messages. An application daemon, teamd, which uses the libteam library, is also available. One instance of teamd can control one instance of the Team driver. The daemon implements the load-balancing and active-backup logic, such as round-robin, by using additional code referred to as runners. By separating the code in this way, the Network Teaming implementation presents an easily extensible and scalable solution for load-balancing and redundancy requirements. For example, custom runners can be relatively easily written to implement new logic through teamd, and even teamd is optional; users can write their own application to use libteam.
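To make the role of runners concrete, here is a sketch of a teamd configuration in the JSON format that teamd reads; the interface names em1 and em2 are assumed for illustration:

    {
        "device": "team0",
        "runner": { "name": "activebackup" },
        "link_watch": { "name": "ethtool" },
        "ports": { "em1": {}, "em2": {} }
    }

The runner name selects which user-space logic teamd applies; replacing "activebackup" with, for example, "roundrobin" or "loadbalance" changes the teaming behavior without touching the kernel driver.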
The teamdctl utility is available to control a running instance of teamd using D-Bus. teamdctl provides a D-Bus wrapper around the teamd D-Bus API. By default, teamd listens and communicates using Unix Domain Sockets but still monitors D-Bus. This ensures that teamd can be used in environments where D-Bus is not present or not yet loaded; for example, when booting over teamd links, D-Bus would not yet be loaded. The teamdctl utility can be used at run time to read the configuration, check the state of link-watchers, check and change the state of ports, add and remove ports, and change ports between active and backup states.
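For illustration, assuming a running team device named team0 and a spare interface em3 (both names are placeholders), typical teamdctl invocations look like this:

    # View the runtime state of the team and its ports
    teamdctl team0 state

    # Dump the configuration that teamd is actually using
    teamdctl team0 config dump actual

    # Add a port at run time, then remove it again
    teamdctl team0 port add em3
    teamdctl team0 port remove em3

    # With the activebackup runner, choose which port is active
    teamdctl team0 state item set runner.active_port em3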
Team Netlink API communicates with user-space applications using Netlink messages. The libteam user-space library does not directly interact with the API, but uses libnl or teamnl to interact with the driver API.
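As a brief sketch, the teamnl utility can be used to query a running Team driver instance directly; team0 is again a placeholder device name:

    # List the ports attached to the team device
    teamnl team0 ports

    # List the options exposed by the Team driver instance
    teamnl team0 options

    # Read a single option, for example the current mode
    teamnl team0 getoption mode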
To sum up, the instances of the Team driver, running in the kernel, are not configured or controlled directly. All configuration is done with the aid of user-space applications, such as the teamd application. The application then directs the kernel driver part accordingly.
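A minimal sketch of that workflow, assuming a configuration file at the hypothetical path /etc/teamd/team0.conf, is:

    # Start teamd as a daemon: -g prints debug messages,
    # -f names the configuration file, -d daemonizes
    teamd -g -f /etc/teamd/team0.conf -d

    # Tear the instance for team device team0 down again
    teamd -t team0 -k

teamd reads the configuration, creates the team device through the Team Netlink API, and from then on drives the kernel part; the administrator never configures the driver instance directly.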

Note

In the context of network teaming, the term port is also known as slave. Port is the preferred term when using teamd directly, while slave is used in NetworkManager to refer to the interfaces that make up a team.
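As an illustration of this terminology, the following nmcli commands (connection names and interface names are placeholders) create a team and attach two slave interfaces to it:

    # Create the team master connection, selecting a runner in JSON
    nmcli con add type team con-name team0 ifname team0 \
        config '{"runner": {"name": "activebackup"}}'

    # Attach two interfaces as slaves of the team
    nmcli con add type team-slave con-name team0-port1 ifname em1 master team0
    nmcli con add type team-slave con-name team0-port2 ifname em2 master team0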