- Red Hat Enterprise Linux (RHEL) with the High Availability Add-On
This guide lays out policies for the size and membership of RHEL High Availability clusters. Users of RHEL HA clusters should adhere to these policies in order to be eligible for support from Red Hat with the appropriate product support subscriptions.
Maximum cluster size: Red Hat supports up to 32 members in the latest releases of RHEL 7 and 8, up from the previous limit of 16 members in older releases. These limits apply to the following RHEL releases:
- RHEL 8
  - RHEL 8.1 and later: Support for up to 32 nodes
  - RHEL 8.0: Support for up to 16 nodes
- RHEL 7
  - RHEL 7.7 and later: Support for up to 32 nodes
  - RHEL 7.6 and earlier: Support for up to 16 nodes
- Support for 32 nodes does not apply to Resilient Storage clusters utilizing clvmd, gfs2, dlm, cmirror, etc. The supported limit for Resilient Storage clusters remains 16 nodes across all RHEL releases.
- Members running `pacemaker_remote` do not count toward these node limits. See the `pacemaker_remote` nodes section below for more details.
- This limit does not extend across multiple distinct clusters interacting with each other. For example, clusters coordinating resource management via `booth` do not have a total size limit of 32 across all clusters combined, but rather a limit of 32 nodes per cluster (in accordance with the release matrix above).
- Clusters whose members are IBM z Systems z/VM guests are supported with between 2 and 4 cluster nodes.
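As a quick check against these limits, the current membership can be listed from any cluster node. A minimal sketch, assuming the `pcs` CLI from the High Availability Add-On and corosync are installed:

```shell
# Show the nodes known to pacemaker (run on any cluster member).
pcs status nodes

# Alternatively, count the configured corosync members directly by
# counting ring0_addr entries in the nodelist.
corosync-cmapctl nodelist.node | grep -c ring0_addr
```

If the count is at (or approaching) the supported limit for your release, additional members should be added as `pacemaker_remote` nodes rather than full cluster members.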
Minimum cluster size: Red Hat provides support for clusters with only 1 member in the latest releases of RHEL 8, down from the previous requirement of at least 2 members in older releases. These requirements apply to the following RHEL releases:
- RHEL 8.2 and later: Support for 1 or more nodes
DLM and GFS2 filesystems are not supported on single-node clusters (as they require fencing). See *Configuring GFS2 file systems* (Red Hat Enterprise Linux 8), section 1.2, "GFS2 support considerations", for more information.
Exception: A single-node cluster mounting gfs2 filesystems (which use DLM) is supported for the purposes of a secondary-site Disaster Recovery (DR) node. This exception is for DR purposes only, not for transferring the main cluster workload to the secondary site.
For example, copying data off the filesystem mounted on the secondary site (while the primary site is offline) is supported. However, migrating a workload from the primary site directly to a single-node-cluster secondary site is unsupported. If the full workload needs to be migrated to the secondary site, then the secondary site is required to be the same size as the primary site.
It is recommended to mount the gfs2 filesystem with the mount option `errors=panic` so that the single-node cluster panics when a gfs2 withdraw occurs, since a single-node cluster cannot fence itself when it encounters filesystem errors.
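A sketch of configuring such a DR mount as a cluster resource with `pcs`, using the standard `ocf:heartbeat:Filesystem` agent; the resource name, device path, and mount point below are hypothetical placeholders:

```shell
# Hypothetical DR gfs2 mount: replace the resource name, device, and
# directory with values for your environment. The errors=panic option
# makes the single node panic on a gfs2 withdraw, standing in for fencing.
pcs resource create dr_gfs2 ocf:heartbeat:Filesystem \
    device="/dev/mapper/vg_dr-lv_gfs2" \
    directory="/mnt/dr" \
    fstype="gfs2" \
    options="noatime,errors=panic"
```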
- RHEL 8.1 and earlier: Support for 2 or more nodes
- Red Hat will provide assistance with clusters that are temporarily degraded to a single-node membership, as long as the cluster was not designed with the expectation that it would operate with a single node for long periods of time, and efforts are underway to return the cluster to a full membership of at least two nodes.
- For issues arising from a cluster running in a degraded single-node membership over a long period of time, Red Hat's assistance may focus primarily on returning the cluster to a full membership of two or more nodes. In environments operating long term with a single node, Red Hat may require that additional members be reintroduced before assisting with concerns in that environment.
pacemaker_remote nodes: Red Hat does not place an upper limit on the number of "remote" nodes that can operate in a cluster. However, if scaling to large volumes of remote nodes (many hundreds, or thousands), Red Hat recommends contacting Red Hat Support for guidance, and thoroughly testing behaviors and failure scenarios at peak loads. The `pacemaker_remote` nodes and the cluster node members are required to run the same version of RHEL (e.g. RHEL 7 cluster nodes and RHEL 7 remote nodes).
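A sketch of adding a remote node on RHEL 8, where `pcs` provides a dedicated subcommand; the hostname below is a hypothetical placeholder:

```shell
# RHEL 8: add a pacemaker_remote node (hypothetical hostname). The host
# must be running the pacemaker-remote service and share the cluster's
# authentication key.
pcs cluster node add-remote remote1.example.com

# RHEL 7 equivalent: create an ocf:pacemaker:remote resource manually.
# pcs resource create remote1 ocf:pacemaker:remote server=remote1.example.com
```

Because remote nodes do not participate in corosync membership or quorum, they do not count toward the 16/32-node limits described above.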