Support Policies for RHEL High Availability Clusters - Membership and cluster size

Overview

Applicable Environments

  • Red Hat Enterprise Linux (RHEL) with the High Availability Add-On

Introduction

This guide lays out Red Hat's policies on the size and membership of RHEL High Availability clusters. Users of RHEL HA clusters must adhere to these policies in order to be eligible for support from Red Hat under the appropriate product support subscriptions.

Policies

Maximum cluster size: Red Hat supports up to 32 members in the latest releases of RHEL 7 and RHEL 8, up from the previous limit of 16 members in older releases. These limits apply to the following RHEL releases:

  • RHEL 8
    • RHEL 8.1 and later: Support for up to 32 nodes
    • RHEL 8.0: Support for up to 16 nodes
  • RHEL 7
    • RHEL 7.7 and later: Support for up to 32 nodes
    • RHEL 7.6 and earlier: Support for up to 16 nodes
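
As a quick sanity check against these limits, the current membership count can be compared with the supported maximum. This is a minimal sketch only: the node list below is sample data in the format printed by `crm_node -l` (node ID, name, state); on a live cluster you would capture that command's output instead.

```shell
# Sample membership data in `crm_node -l` format (id name state).
# On a live cluster: nodes=$(crm_node -l)
nodes="1 node1 member
2 node2 member
3 node3 member"

count=$(printf '%s\n' "$nodes" | wc -l)
max=32   # supported maximum on RHEL 7.7+ / RHEL 8.1+ (16 on older releases)

if [ "$count" -le "$max" ]; then
    echo "cluster size $count is within the supported limit of $max"
else
    echo "cluster size $count exceeds the supported limit of $max"
fi
```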


Minimum cluster size: Red Hat supports clusters with as few as 1 member in the latest releases of RHEL 8, down from the previous minimum of 2 members in older releases. These requirements apply to the following RHEL releases:

  • RHEL 8.2 and later: Support for 1 or more nodes

    • DLM and GFS2 file systems are not supported on single-node clusters, as they require fencing. See the following document for more information: Configuring GFS2 file systems Red Hat Enterprise Linux 8 | 1.2. GFS2 support considerations.

      • Exception: A single-node cluster mounting GFS2 file systems (which use DLM) is supported for the purpose of a secondary-site Disaster Recovery (DR) node. This exception is for DR purposes only, not for transferring the main cluster workload to the secondary site.

        For example, copying data off a file system mounted at the secondary site (while the primary site is offline) is supported. However, migrating a workload from the primary site directly to a single-node secondary-site cluster is unsupported. If the full workload must be migrated to the secondary site, then the secondary site is required to be the same size as the primary site.

        It is recommended to mount the GFS2 file system with the mount option errors=panic so that the single-node cluster panics when a GFS2 withdraw occurs, since a single-node cluster cannot fence itself when it encounters file system errors.

  • RHEL 8.1 and earlier: Support for 2 or more nodes

    • Red Hat will provide assistance with clusters that are temporarily running in a degraded membership of only a single node, as long as the cluster was not designed with the expectation that it would operate with a single node for long periods of time, and efforts are underway to return the cluster to a full membership of at least two nodes.
    • For issues arising from a cluster running in a degraded single-node membership over a long period of time, Red Hat's assistance may focus primarily on returning the cluster to a full membership of two or more nodes. In environments operating long-term with a single node, Red Hat may require the reintroduction of additional members before assisting with concerns in that environment.
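
As a hedged illustration of the errors=panic recommendation above, the mount option can be set on a GFS2 file system resource via pcs. This is a sketch only: the resource name, device path, mount point, and group name below are placeholders, not values from this document.

```shell
# Placeholder names: fs_gfs2, /dev/vg_cluster/lv_gfs2, /mnt/gfs2, dr_group.
# errors=panic makes a GFS2 withdraw panic the node instead of leaving the
# file system in an error state that a single-node cluster cannot fence.
pcs resource create fs_gfs2 ocf:heartbeat:Filesystem \
    device=/dev/vg_cluster/lv_gfs2 directory=/mnt/gfs2 fstype=gfs2 \
    options="errors=panic" --group dr_group
```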

pacemaker_remote nodes: Red Hat does not place an upper limit on the number of "remote" nodes that can operate in a cluster. However, when scaling to large numbers of remote nodes (many hundreds or thousands), Red Hat recommends contacting Red Hat Support for guidance and thoroughly testing behaviors and failure scenarios at peak load. The pacemaker_remote nodes and the cluster node members are required to run the same version of RHEL (e.g., RHEL 7 cluster nodes and RHEL 7 remote nodes).
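
The version-matching requirement can be sketched as a simple check. This is illustrative only: the release strings below are sample data, and on a live cluster you would read /etc/redhat-release on each cluster and remote node (for example, over ssh).

```shell
# Sample release strings; on real nodes read them from /etc/redhat-release.
cluster_node_release="Red Hat Enterprise Linux release 8.6 (Ootpa)"
remote_node_release="Red Hat Enterprise Linux release 8.6 (Ootpa)"

# Extract the major version number that follows the word "release".
major() { printf '%s\n' "$1" | grep -o 'release [0-9]*' | awk '{print $2}'; }

if [ "$(major "$cluster_node_release")" = "$(major "$remote_node_release")" ]; then
    echo "cluster and remote nodes run the same RHEL major version: supported"
else
    echo "RHEL major version mismatch between cluster and remote nodes: unsupported"
fi
```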
