Chapter 13. High availability and clusters

In Red Hat Enterprise Linux 8, pcs fully supports the Corosync 3 cluster engine and the Kronosnet (knet) network abstraction layer for cluster communication. When planning an upgrade to a RHEL 8 cluster from an existing RHEL 7 cluster, some of the considerations you must take into account are as follows:

  • Application versions: What version of the highly available application will the RHEL 8 cluster require?
  • Application process order: What may need to change in the start and stop processes of the application?
  • Cluster infrastructure: Since pcs supports multiple network connections in RHEL 8, does the number of NICs known to the cluster change?
  • Needed packages: Do you need to install all of the same packages on the new cluster?

Because of these and other considerations for running a Pacemaker cluster in RHEL 8, it is not possible to perform an in-place upgrade of a RHEL 7 cluster to RHEL 8; you must instead configure a new cluster in RHEL 8. You cannot run a cluster that includes nodes running both RHEL 7 and RHEL 8.

Additionally, you should plan for the following before performing an upgrade:

  • Final cutover: What is the process to stop the application running on the old cluster and start it on the new cluster to reduce application downtime?
  • Testing: Is it possible to test your upgrade strategy ahead of time in a development/test environment?

The major differences in cluster creation and administration between RHEL 7 and RHEL 8 are listed in the following sections.

13.1. New formats for pcs cluster setup, pcs cluster node add and pcs cluster node remove commands

In Red Hat Enterprise Linux 8, pcs fully supports the use of node names, which are now required and replace node addresses as the node identifier. Node addresses are now optional.

  • In the pcs host auth command, node addresses default to node names.
  • In the pcs cluster setup and pcs cluster node add commands, node addresses default to the node addresses specified in the pcs host auth command.

With these changes, the formats for the commands to set up a cluster, add a node to a cluster, and remove a node from a cluster have changed. For information about these new command formats, see the help display for the pcs cluster setup, pcs cluster node add and pcs cluster node remove commands.
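
For illustration, a minimal sequence using the new formats might look like the following sketch; the node names, addresses, and cluster name are hypothetical placeholders, and the addr= values are optional:

    # Authenticate the nodes by name; addresses default to the node names
    pcs host auth node1.example.com node2.example.com -u hacluster

    # Create the cluster; per-node addresses for the knet links are optional
    pcs cluster setup my_cluster node1.example.com addr=192.168.122.11 node2.example.com addr=192.168.122.12

    # Add a node to, or remove a node from, the cluster by node name
    pcs cluster node add node3.example.com addr=192.168.122.13
    pcs cluster node remove node3.example.com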

13.2. Master resources renamed to promotable clone resources

Red Hat Enterprise Linux (RHEL) 8 supports Pacemaker 2.0, in which a master/slave resource is no longer a separate type of resource but a standard clone resource with a promotable meta-attribute set to true. The following changes have been implemented in support of this update:

  • It is no longer possible to create master resources with the pcs command. Instead, you create promotable clone resources; related keywords and commands have been changed from master to promotable, as shown in the example after this list.
  • All existing master resources are displayed as promotable clone resources.
  • When managing a RHEL 7 cluster in the Web UI, master resources are still called master, as RHEL 7 clusters do not support promotable clones.
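
As an illustration of the RHEL 8 syntax, the following sketch creates a promotable clone resource; the resource names and the ocf:pacemaker:Stateful agent are placeholders for your own resources:

    # Create a resource and its promotable clone in a single step
    pcs resource create my_stateful ocf:pacemaker:Stateful promotable

    # Alternatively, make an existing resource (here called my_resource) promotable
    pcs resource promotable my_resource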

13.3. New commands for authenticating nodes in a cluster

Red Hat Enterprise Linux (RHEL) 8 incorporates the following changes to the commands used to authenticate nodes in a cluster.

  • The new command for authentication is pcs host auth. This command allows users to specify host names, addresses and pcsd ports.
  • The pcs cluster auth command authenticates only the nodes in a local cluster and does not accept a node list.
  • It is now possible to specify an address for each node. pcs and pcsd then communicate with each node using the specified address. These addresses can be different from the ones that corosync uses internally.
  • The pcs pcsd clear-auth command has been replaced by the pcs pcsd deauth and pcs host deauth commands. The new commands allow users to deauthenticate a single host as well as all hosts.
  • Previously, node authentication was bidirectional, and running the pcs cluster auth command caused all specified nodes to be authenticated against each other. The pcs host auth command, however, authenticates only the local host against the specified nodes, giving you better control over which nodes are authenticated against which other nodes. During cluster setup, and when adding a node to a cluster, pcs automatically synchronizes tokens across the cluster, so all cluster nodes are still authenticated as before and can communicate with each other.

Note that these changes are not backward compatible. Nodes that were authenticated on a RHEL 7 system will need to be authenticated again.
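
For example, a hedged sketch of authenticating against a new RHEL 8 cluster with the new commands; the host names, addresses, and port here are hypothetical placeholders (2224 is the default pcsd port):

    # Authenticate the local host against the named hosts, optionally giving
    # a per-host address and pcsd port
    pcs host auth node1.example.com addr=192.168.122.11 node2.example.com addr=192.168.122.12:2224 -u hacluster

    # Deauthenticate a single host, or all known hosts
    pcs host deauth node2.example.com
    pcs host deauth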

13.4. LVM volumes in a Red Hat High Availability active/passive cluster

When configuring LVM volumes as resources in a Red Hat High Availability active/passive cluster in RHEL 8, you configure the volumes as an LVM-activate resource. In RHEL 7, you configured the volumes as an LVM resource. For an example of a cluster configuration procedure that includes configuring an LVM volume as a resource in an active/passive cluster in RHEL 8, see Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster.
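
As a sketch of the difference, assuming a hypothetical volume group my_vg and resource group apachegroup:

    # RHEL 7 form (for comparison): the LVM resource agent with exclusive activation
    #   pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true --group apachegroup

    # RHEL 8 form: the LVM-activate resource agent
    pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup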

13.5. Shared LVM volumes in a Red Hat High Availability active/active cluster

In RHEL 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage devices in an active/active cluster. This requires that you configure the logical volumes on which you mount a GFS2 file system as shared logical volumes.

Additionally, this requires that you use the LVM-activate resource agent to manage an LVM volume and that you use the lvmlockd resource agent to manage the lvmlockd daemon.
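
The following is a sketch of the pattern, with hypothetical resource, group, device, and volume names; the full procedure referenced below covers the complete steps, including fencing and starting locking on every node:

    # Create a "locking" group with the dlm and lvmlockd resource agents, then clone it
    pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence --group locking
    pcs resource create lvmlockd ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence --group locking
    pcs resource clone locking interleave=true

    # Create a shared volume group and a shared logical volume for the GFS2 file system
    vgcreate --shared shared_vg /dev/vdb
    lvcreate --activate sy -L 10G -n shared_lv shared_vg

    # Manage activation of the shared volume with the LVM-activate resource agent
    pcs resource create sharedlv ocf:heartbeat:LVM-activate lvname=shared_lv vgname=shared_vg activation_mode=shared vg_access_mode=lvmlockd --group shared_vg_group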

For a full procedure for configuring a RHEL 8 Pacemaker cluster that includes GFS2 file systems using shared logical volumes, see Configuring a GFS2 file system in a cluster.

13.6. GFS2 file systems in a RHEL 8 Pacemaker cluster

In RHEL 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage devices in an active/active cluster, as described in Section 13.5, “Shared LVM volumes in a Red Hat High Availability active/active cluster”.

To use GFS2 file systems that were created on a RHEL 7 system in a RHEL 8 cluster, you must configure the logical volumes on which they are mounted as shared logical volumes in a RHEL 8 system, and you must start locking for the volume group. For an example of the procedure that configures existing RHEL 7 logical volumes as shared logical volumes for use in a RHEL 8 Pacemaker cluster, see Migrating a GFS2 file system from RHEL7 to RHEL8.
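
As a rough sketch only (upgrade_gfs_vg is a hypothetical volume group name, and the full migration procedure includes additional prerequisites and options):

    # Change the volume group's lock type to dlm so that lvmlockd can manage it
    vgchange --lock-type dlm upgrade_gfs_vg

    # Start locking for the volume group
    vgchange --lockstart upgrade_gfs_vg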