Support Policies for RHEL High Availability Clusters - Cluster Interconnect Network Interfaces



Overview

Applicable Environments

  • Red Hat Enterprise Linux (RHEL) with the High Availability Add-On

Useful References and Guides

Introduction

This guide lays out Red Hat's policies governing the network interfaces used as the interconnect in RHEL High Availability clusters. Users of RHEL High Availability clusters should adhere to these policies, along with holding the appropriate product support subscriptions, in order to be eligible for support from Red Hat.

Policies

Which network interface to use as cluster-interconnect: When a choice exists between multiple eligible network interfaces, Red Hat does not specifically require the use of any particular one to serve as the interconnect between cluster members.

  • A "dedicated" interconnect or heartbeat network is not required, although one may be recommended in environments that warrant less contention for network resources or bandwidth, or that need additional security for cluster communications.
  • A cluster can be configured to communicate over a network interface that is shared for some other purpose.
  • If using RRP (RHEL 6, RHEL 7) or knet (RHEL 8) for multiple membership rings, each membership ring must be on a different network because of a kernel limitation outlined in the following article: How to connect two network interfaces on the same subnet? Instead of using multiple rings on the same network, use bonding or teaming over a single membership ring (see the sketch below).
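
As an illustration of the multiple-ring layout on RHEL 8, the following is a minimal sketch of creating a cluster with two knet links, each on its own network. The cluster name, node names, and addresses are all hypothetical:

    # Each addr= value adds a knet link; the two links sit on different
    # subnets (192.168.1.0/24 and 192.168.2.0/24) to satisfy the policy above.
    pcs cluster setup my_cluster \
        node1.example.com addr=192.168.1.10 addr=192.168.2.10 \
        node2.example.com addr=192.168.1.11 addr=192.168.2.11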

Network bonding: Bonded interfaces are supported for use as the cluster's interconnect, subject to limitations on the bonding mode. The following table describes the minimum RHEL release and update level required for each bonding mode to be supported.

  • Mode 0 (balance-rr): RHEL 8; RHEL 7; RHEL 6 Update 4 and later
  • Mode 1 (active-backup): All RHEL releases
  • Mode 2 (balance-xor): RHEL 8; RHEL 7; RHEL 6 Update 4 and later
  • Mode 3 (broadcast): Unsupported
  • Mode 4 (802.3ad, LACP): RHEL 8; RHEL 7 Update 1 and later; RHEL 6 Update 6 and later
  • Mode 5 (balance-tlb): Unsupported
  • Mode 6 (balance-alb): Unsupported
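
For example, mode 1 (active-backup), which is supported on all RHEL releases, could be set up with NetworkManager roughly as follows. This is a sketch only; the connection and interface names (bond0, eth1, eth2) are hypothetical:

    # Create an active-backup bond and attach two hypothetical ports to it
    nmcli connection add type bond con-name bond0 ifname bond0 \
        bond.options "mode=active-backup,miimon=100"
    nmcli connection add type ethernet slave-type bond con-name bond0-port1 \
        ifname eth1 master bond0
    nmcli connection add type ethernet slave-type bond con-name bond0-port2 \
        ifname eth2 master bond0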

Network teaming: Red Hat provides support for clusters utilizing "teamed" devices - a technology that became available with RHEL 7 and remains in RHEL 8 - as the cluster's interconnect network, or for other application or resource networks within a cluster. Red Hat does not restrict its support to any particular runner type: all runners provided by teamd are allowed for use within a High Availability cluster. However, it is the responsibility of the organization deploying the cluster to ensure the network configuration is stable and can deliver the cluster's traffic as needed.

As support for team devices did not exist at an operating system level in RHEL 6, Red Hat does not support use of team devices in RHEL 6 High Availability clusters.
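
On RHEL 7 and RHEL 8, a teamed device for the interconnect might be created along these lines. This is a hedged sketch; the team0/eth1/eth2 names are hypothetical, the activebackup runner is just one choice, and any teamd runner is acceptable under this policy:

    # Create a team device with the activebackup runner and add two ports
    nmcli connection add type team con-name team0 ifname team0 \
        team.config '{"runner": {"name": "activebackup"}}'
    nmcli connection add type ethernet slave-type team con-name team0-port1 \
        ifname eth1 master team0
    nmcli connection add type ethernet slave-type team con-name team0-port2 \
        ifname eth2 master team0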


Multiple Rings on Deployments Spanning Multiple Sites (stretch cluster): Using multiple corosync rings with knet, or two rings with RRP, is supported on a cluster that spans multiple sites. Note that any limitation or requirement that applies to a single ring also applies to multiple rings. For more information, see the following:


Redundant Ring Protocol (RRP): Red Hat supports the configuration and usage of redundant interconnects in the cluster subject to the following conditions:

  • Supported releases: RRP applies to RHEL 6 and RHEL 7 clusters; RHEL 8 clusters use knet for multiple links instead (see above).
  • Applicable interconnects: Clusters utilizing RRP are subject to the typical policies for network interfaces in RHEL clusters.
  • rrp_mode: Red Hat does not support the usage of the rrp_mode: active setting. Clusters utilizing RRP must use rrp_mode: passive to receive support from Red Hat.
  • udp transport with broadcast messaging: Red Hat does not support the usage of broadcast for messaging in the udp transport protocol in clusters configured with RRP.
  • Ethernet only: RRP is supported on Ethernet networks only. IPoIB (IP over InfiniBand) is not supported as an interconnect for RRP clusters.
  • RRP with DLM: Red Hat does not support usage of DLM in RRP-configured clusters. See the DLM support policies for more information.
  • Mixed IP families: Red Hat does not support mixing IP families, either within a ring (e.g., one node's ring0_addr using IPv4 and another node's ring0_addr using IPv6) or between rings (e.g., all ring0_addr values using IPv4 and all ring1_addr values using IPv6). All rings on all cluster node members must use the same IP family (all IPv4 or all IPv6).
  • RRP requires that the corosync rings are on different networks because of a kernel limitation outlined in the following solution: How to connect two network interfaces on the same subnet? If the network interfaces must share the same network, we recommend using bonding or teaming on a single corosync ring instead of RRP.
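
Bringing these conditions together, a minimal totem/nodelist sketch for a two-node RHEL 7 cluster using RRP might look like the following. The cluster name and addresses are hypothetical; both rings use IPv4, sit on different networks, and rrp_mode is passive:

    totem {
        version: 2
        cluster_name: my_cluster
        transport: udpu
        rrp_mode: passive              # rrp_mode: active is not supported
    }
    nodelist {
        node {
            ring0_addr: 192.168.1.10   # ring 0 on network A (IPv4)
            ring1_addr: 192.168.2.10   # ring 1 on network B (same IP family)
            nodeid: 1
        }
        node {
            ring0_addr: 192.168.1.11
            ring1_addr: 192.168.2.11
            nodeid: 2
        }
    }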

Cluster interconnect network latency: See the specific policy guide explaining the requirements and limitations for latency on the network used by a cluster for its interconnect.


DHCP used for IP assignment for network interfaces used by corosync: Using DHCP on a network interface used by one or more corosync rings is unsupported except in supported cloud environments that require dynamic (DHCP) IP addresses. In all other cases, static IP assignment should be used. For more information, see: Is the use of DHCP for the network interface supporting corosync traffic supported in RHEL High Availability Clustering?
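
For instance, a static IPv4 address could be assigned to a hypothetical interconnect interface with NetworkManager as follows (the connection name and address are illustrative only):

    # Switch the connection to a manual (static) address for corosync traffic
    nmcli connection modify eth1 ipv4.method manual ipv4.addresses 192.168.1.10/24
    nmcli connection up eth1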


Supported network configuration/management systems:

  • RHEL 7 and 8: Red Hat does not limit which methods can be used to administer or manage the network interface that serves as a RHEL High Availability cluster interconnect - such as NetworkManager, ifcfg files, or manual configuration by command-line - as long as the other conditions laid out in these policies are met.

  • RHEL 6: NetworkManager is not supported for use with cman, and thus with RHEL 6 High Availability clusters. Other methods like ifcfg files, the network init service, or the ip or ifconfig utilities can be used to manage a cluster node's network interfaces.
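
As a sketch of the RHEL 6 approach, an ifcfg file for a hypothetical interconnect interface might look like the following; NM_CONTROLLED=no keeps NetworkManager away from the device, and the device name and address are illustrative:

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none            # static addressing, no DHCP
    ONBOOT=yes
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    NM_CONTROLLED=no          # do not let NetworkManager manage this device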


Using crossover cables with corosync (heartbeat network) for cluster communication:
