Support Policies for RHEL High Availability Clusters - Cluster Interconnect Network Interfaces


Applicable Environments

  • Red Hat Enterprise Linux (RHEL) 6 or 7 with the High Availability Add-On


This guide lays out Red Hat's policies related to network interconnect interfaces used by RHEL High Availability clusters. Users of RHEL High Availability clusters should adhere to these policies in order to be eligible for support from Red Hat with the appropriate product support subscriptions.


Which network interface to use as cluster-interconnect: When a choice exists between multiple eligible network interfaces, Red Hat does not specifically require the use of any particular one to serve as the interconnect between cluster members.

  • It is not required that the cluster use a "dedicated" interconnect or heartbeat network - although it may be recommended in some environments that warrant having less contention for network resources or bandwidth, or that need additional security in cluster communications.
  • A cluster can be configured to communicate over a network interface that is shared for some other purpose.

Network bonding: Bonded interfaces are supported for use as the cluster's interconnect, subject to limitations on the bonding mode. The following table describes the RHEL release and update level required for each bonding mode to be supported.

  • Mode 0 (balance-rr): RHEL 6 Update 4 and later, RHEL 7
  • Mode 1 (active-backup): All RHEL releases
  • Mode 2 (balance-xor): RHEL 6 Update 4 and later, RHEL 7
  • Mode 3 (broadcast): Unsupported
  • Mode 4 (802.3ad, LACP): RHEL 6 Update 6 and later, RHEL 7 Update 1 and later
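
As an illustration, a bonded interface in a supported mode such as active-backup can be defined with ifcfg files. The device names and addresses below are assumptions for the example, not required values:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative example)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (one of the bond's slave interfaces)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```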

Network teaming: Red Hat provides support for clusters utilizing "teamed" devices - technology that became available with RHEL 7 - as the cluster's interconnect network, or for other application or resource networks within a cluster. Red Hat does not restrict its support to any particular runner type: all runners provided by teamd are allowed within a High Availability cluster. However, it is the responsibility of the organization deploying the cluster to ensure the network configuration is stable and can deliver the cluster's traffic as needed.

As support for team devices did not exist at an operating system level in RHEL 6, Red Hat does not support use of team devices in RHEL 6 High Availability clusters.
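For reference, a RHEL 7 team device can be described in an ifcfg file with a runner selected through TEAM_CONFIG; the device name, runner choice, and address below are illustrative assumptions:

```ini
# /etc/sysconfig/network-scripts/ifcfg-team0  (illustrative; RHEL 7 only)
DEVICE=team0
DEVICETYPE=Team
TEAM_CONFIG='{"runner": {"name": "activebackup"}}'
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
```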

Redundant Ring Protocol (RRP): Red Hat supports the configuration and usage of redundant interconnects in the cluster subject to the following conditions:

  • Supported releases:
  • Applicable interconnects: Clusters utilizing RRP are subject to the typical policies for network interfaces in RHEL clusters.
  • rrp_mode: Red Hat does not support the usage of the rrp_mode: active setting. Clusters utilizing RRP must use rrp_mode: passive to receive support from Red Hat.
  • udp transport with broadcast messaging: Red Hat does not support the usage of broadcast for messaging in the udp transport protocol in clusters configured with RRP.
  • Ethernet only: RRP is supported on Ethernet networks only. IPoIB (IP over Infiniband) is not supported as an interconnect for RRP clusters.
  • RRP with DLM: Red Hat does not support usage of DLM in RRP-configured clusters. See the DLM support policies for more information.
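The conditions above can be sketched as a corosync totem configuration using the supported passive mode with two rings; the bind network addresses shown are assumptions for the example:

```
# Fragment of /etc/corosync/corosync.conf (illustrative addresses)
totem {
    version: 2
    rrp_mode: passive        # rrp_mode: active is not supported
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
    }
    interface {
        ringnumber: 1
        bindnetaddr: 192.168.2.0
    }
}
```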

Cluster interconnect network latency: See the specific policy guide explaining the requirements and limitations for latency on the network used by a cluster for its interconnect.

Supported network configuration/management systems:

  • RHEL 7: Red Hat does not limit which methods can be used to administer or manage the network interface that serves as a RHEL High Availability cluster interconnect - such as NetworkManager, ifcfg files, or manual configuration on the command line - as long as the other conditions laid out in these policies are met.

  • RHEL 6: NetworkManager is not supported for use with cman, and thus with RHEL 6 High Availability clusters. Other methods like ifcfg files, the network init service, or the ip or ifconfig utilities can be used to manage a cluster node's network interfaces.
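On a RHEL 6 cluster node, an interface can be kept out of NetworkManager's control with the NM_CONTROLLED directive in its ifcfg file; the device name and address below are illustrative assumptions:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0  (illustrative RHEL 6 cluster node)
DEVICE=eth0
NM_CONTROLLED=no    # prevent NetworkManager from managing this interface
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
```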