Chapter 25. Cluster quorum
A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split-brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. The service must be loaded into all nodes or none; if it is loaded into a subset of cluster nodes, the results will be unpredictable. For information on the configuration and operation of the votequorum service, see the votequorum(5) man page.
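The per-node votes and the quorum options described in this chapter are stored in the quorum section of /etc/corosync/corosync.conf, which pcs manages for you. For reference, a minimal sketch of that section; the option shown is illustrative rather than a recommendation:

quorum {
    provider: corosync_votequorum
    # Quorum options such as those in Table 25.1 appear here, for example:
    wait_for_all: 1
}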
25.1. Configuring quorum options
There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. Table 25.1, "Quorum Options", summarizes these options.
Table 25.1. Quorum Options
| Option | Description |
|---|---|
| auto_tie_breaker | When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the node that has the lowest nodeid, will remain quorate; the other nodes will be inquorate. This option is principally used for clusters with an even number of nodes and is incompatible with quorum devices. |
| wait_for_all | When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time. This option is automatically enabled in two-node clusters that do not use a quorum device. |
| last_man_standing | When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. wait_for_all must also be enabled when you use this option. |
| last_man_standing_window | The time, in milliseconds, to wait before recalculating expected_votes and quorum after a cluster loses nodes. |
For further information about configuring and using these options, see the votequorum(5) man page.
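For example, assuming a version of pcs in which the pcs cluster setup command accepts quorum options after the quorum keyword (the cluster and node names here are placeholders), you could enable the automatic tie breaker at creation time:

[root@node1:~]# pcs cluster setup mycluster node1 node2 quorum auto_tie_breaker=1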
25.2. Modifying quorum options
You can modify general quorum options for your cluster with the pcs quorum update command. Executing this command requires that the cluster be stopped. For information on the quorum options, see the votequorum(5) man page.
The format of the pcs quorum update command is as follows.

pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[time-in-ms]] [wait_for_all=[0|1]]
The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running.
[root@node1:~]# pcs quorum update wait_for_all=1
Checking corosync is not running on nodes...
Error: node1: corosync is running
Error: node2: corosync is running

[root@node1:~]# pcs cluster stop --all
node2: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (corosync)...
node2: Stopping Cluster (corosync)...

[root@node1:~]# pcs quorum update wait_for_all=1
Checking corosync is not running on nodes...
node2: corosync is not running
node1: corosync is not running
Sending updated corosync.conf to nodes...
node1: Succeeded
node2: Succeeded

[root@node1:~]# pcs quorum config
Options:
  wait_for_all: 1
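Note that pcs quorum update only changes the configuration; the cluster remains stopped. To resume operation, you can start all cluster nodes again:

[root@node1:~]# pcs cluster start --all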
25.3. Displaying quorum configuration and status
Once a cluster is running, you can enter the following cluster quorum commands.
The following command shows the quorum configuration.
pcs quorum [config]
The following command shows the quorum runtime status.
pcs quorum status
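For example, on a running two-node cluster with the wait_for_all option set as in the previous section, the configuration command might report the following (output illustrative):

[root@node1:~]# pcs quorum config
Options:
  wait_for_all: 1

A full example of pcs quorum status output appears in the quorum device procedure later in this chapter.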
25.4. Running inquorate clusters
If you take nodes out of a cluster for a long period of time and the loss of those nodes would cause quorum loss, you can change the value of the expected_votes parameter for the live cluster with the pcs quorum expected-votes command. This allows the cluster to continue operation when it does not have quorum.
Changing the expected votes in a live cluster should be done with extreme caution. If less than 50% of the cluster is running because you have manually changed the expected votes, then the other nodes in the cluster could be started separately and run cluster services, causing data corruption and other unexpected results. If you change this value, you should ensure that the wait_for_all parameter is enabled.
The following command sets the expected votes in the live cluster to the specified value. This affects the live cluster only and does not change the configuration file; the value of expected_votes is reset to the value in the configuration file in the event of a reload.
pcs quorum expected-votes votes
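For example, suppose three nodes of a five-node cluster are taken down for extended maintenance. The two remaining nodes hold only 2 of the 5 expected votes, so the cluster is inquorate. Lowering the expected votes to 3 reduces the quorum to 2, allowing the remaining nodes to continue (the value here is illustrative):

[root@node1:~]# pcs quorum expected-votes 3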
In a situation in which you know that the cluster is inquorate but you want the cluster to proceed with resource management, you can use the pcs quorum unblock command to prevent the cluster from waiting for all nodes when establishing quorum.
This command should be used with extreme caution. Before issuing this command, it is imperative that you ensure that nodes that are not currently in the cluster are switched off and have no access to shared resources.
# pcs quorum unblock
25.5. Quorum devices
You can allow a cluster to sustain more node failures than standard quorum rules allow by configuring a separate quorum device that acts as a third-party arbitration device for the cluster. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation.
You must take the following into account when configuring a quorum device.
- It is recommended that a quorum device be run on a different physical network at the same site as the cluster that uses the quorum device. Ideally, the quorum device host should be in a separate rack from the main cluster, or at least on a separate PSU, and not on the same network segment as the corosync ring or rings.
- You cannot use more than one quorum device in a cluster at the same time.
- Although you cannot use more than one quorum device in a cluster at the same time, a single quorum device may be used by several clusters at the same time. Each cluster using that quorum device can use different algorithms and quorum options, as those are stored on the cluster nodes themselves. For example, a single quorum device can be used by one cluster with an ffsplit (fifty/fifty split) algorithm and by a second cluster with an lms (last man standing) algorithm.
- A quorum device should not be run on an existing cluster node.
25.5.1. Installing quorum device packages
Configuring a quorum device for a cluster requires that you install the following packages:
1. Install corosync-qdevice on the nodes of an existing cluster.

[root@node1:~]# yum install corosync-qdevice
[root@node2:~]# yum install corosync-qdevice

2. Install pcs and corosync-qnetd on the quorum device host.

[root@qdevice:~]# yum install pcs corosync-qnetd

3. Start the pcsd service and enable pcsd at system start on the quorum device host.

[root@qdevice:~]# systemctl start pcsd.service
[root@qdevice:~]# systemctl enable pcsd.service
25.5.2. Configuring a quorum device
The following procedure configures a quorum device and adds it to the cluster. In this example:
- The node used for a quorum device is qdevice.
- The quorum device model is net, which is currently the only supported model. The net model supports the following algorithms:
  - ffsplit: fifty-fifty split. This provides exactly one vote to the partition with the highest number of active nodes.
  - lms: last-man-standing. If the node is the only one left in the cluster that can see the qnetd server, then it returns a vote.

    Warning: The LMS algorithm allows the cluster to remain quorate even with only one remaining node, but it also means that the voting power of the quorum device is great, since it is the same as number_of_nodes - 1. Losing connection with the quorum device means losing number_of_nodes - 1 votes, which means that only a cluster with all nodes active can remain quorate (by overvoting the quorum device); any other cluster becomes inquorate. A worked example follows this list.

  For more detailed information on the implementation of these algorithms, see the corosync-qdevice(8) man page.
- The cluster nodes are node1 and node2.
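To make the vote arithmetic in the warning concrete: in a four-node cluster using the lms algorithm, the quorum device carries 4 - 1 = 3 votes, for a total of 7 votes and a quorum of 4. If the connection to the quorum device is lost, its 3 votes are lost with it, and the 4 remaining node votes meet the quorum of 4 only while every node is active; losing any node as well makes the cluster inquorate.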
To configure the quorum device and add it to the cluster, complete the following steps.
1. On the node that you will use to host your quorum device, configure the quorum device with the following command. This command configures and starts the quorum device model net and configures the device to start on boot.

[root@qdevice:~]# pcs qdevice setup model net --enable --start
Quorum device 'net' initialized
quorum device enabled
Starting quorum device...
quorum device started

After configuring the quorum device, you can check its status. This should show that the corosync-qnetd daemon is running and that, at this point, there are no clients connected to it. The --full command option provides detailed output.

[root@qdevice:~]# pcs qdevice status net --full
QNetd address:              *:5403
TLS:                        Supported (client certificate required)
Connected clients:          0
Connected clusters:         0
Maximum send/receive size:  32768/32768 bytes

2. Enable the ports on the firewall needed by the pcsd daemon and the net quorum device by enabling the high-availability service on firewalld with the following commands.

[root@qdevice:~]# firewall-cmd --permanent --add-service=high-availability
[root@qdevice:~]# firewall-cmd --add-service=high-availability
3. From one of the nodes in the existing cluster, authenticate user hacluster on the node that is hosting the quorum device. This allows pcs on the cluster to connect to pcs on the qdevice host, but does not allow pcs on the qdevice host to connect to pcs on the cluster.

[root@node1:~]# pcs host auth qdevice
Username: hacluster
Password:
qdevice: Authorized

4. Add the quorum device to the cluster.
Before adding the quorum device, you can check the current configuration and status for the quorum device for later comparison. The output for these commands indicates that the cluster is not yet using a quorum device, and the Qdevice membership status for each node is NR (Not Registered).

[root@node1:~]# pcs quorum config
Options:

[root@node1:~]# pcs quorum status
Quorum information
------------------
Date:             Wed Jun 29 13:15:36 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          1/8272
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           1
Flags:            2Node Quorate

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1         NR node1 (local)
         2          1         NR node2

The following command adds the quorum device that you have previously created to the cluster. You cannot use more than one quorum device in a cluster at the same time. However, one quorum device can be used by several clusters at the same time. This example command configures the quorum device to use the ffsplit algorithm. For information on the configuration options for the quorum device, see the corosync-qdevice(8) man page.

[root@node1:~]# pcs quorum device add model net host=qdevice algorithm=ffsplit
Setting up qdevice certificates on nodes...
node2: Succeeded
node1: Succeeded
Enabling corosync-qdevice...
node1: corosync-qdevice enabled
node2: corosync-qdevice enabled
Sending updated corosync.conf to nodes...
node1: Succeeded
node2: Succeeded
Corosync configuration reloaded
Starting corosync-qdevice...
node1: corosync-qdevice started
node2: corosync-qdevice started

5. Check the configuration status of the quorum device.
From the cluster side, you can execute the following commands to see how the configuration has changed.

The pcs quorum config command shows the quorum device that has been configured.

[root@node1:~]# pcs quorum config
Options:
Device:
  Model: net
    algorithm: ffsplit
    host: qdevice

The pcs quorum status command shows the quorum runtime status, indicating that the quorum device is in use. The meanings of the Qdevice membership information status values for each cluster node are as follows:

- A/NA — The quorum device is alive or not alive, indicating whether there is a heartbeat between qdevice and corosync. This should always indicate that the quorum device is alive.
- V/NV — V is set when the quorum device has given a vote to a node. In this example, both nodes are set to V since they can communicate with each other. If the cluster were to split into two single-node clusters, one of the nodes would be set to V and the other node would be set to NV.
- MW/NMW — The quorum device master_wins flag is set or not set. By default the flag is not set and the value is NMW (Not Master Wins). For information on the master_wins flag, see the votequorum_qdevice_master_wins(3) man page.

[root@node1:~]# pcs quorum status
Quorum information
------------------
Date:             Wed Jun 29 13:17:02 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          1/8272
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1    A,V,NMW node1 (local)
         2          1    A,V,NMW node2
         0          1            Qdevice

The pcs quorum device status command shows the quorum device runtime status.

[root@node1:~]# pcs quorum device status
Qdevice information
-------------------
Model:                  Net
Node ID:                1
Configured node list:
    0   Node ID = 1
    1   Node ID = 2
Membership node list:   1, 2

Qdevice-net information
----------------------
Cluster name:           mycluster
QNetd host:             qdevice:5403
Algorithm:              ffsplit
Tie-breaker:            Node with lowest node ID
State:                  Connected

From the quorum device side, you can execute the following status command, which shows the status of the corosync-qnetd daemon.

[root@qdevice:~]# pcs qdevice status net --full
QNetd address:                  *:5403
TLS:                            Supported (client certificate required)
Connected clients:              2
Connected clusters:             1
Maximum send/receive size:      32768/32768 bytes
Cluster "mycluster":
    Algorithm:          ffsplit
    Tie-breaker:        Node with lowest node ID
    Node ID 2:
        Client address:         ::ffff:192.168.122.122:50028
        HB interval:            8000ms
        Configured node list:   1, 2
        Ring ID:                1.2050
        Membership node list:   1, 2
        TLS active:             Yes (client certificate verified)
        Vote:                   ACK (ACK)
    Node ID 1:
        Client address:         ::ffff:192.168.122.121:48786
        HB interval:            8000ms
        Configured node list:   1, 2
        Ring ID:                1.2050
        Membership node list:   1, 2
        TLS active:             Yes (client certificate verified)
        Vote:                   ACK (ACK)
25.5.3. Managing the quorum device service
PCS provides the ability to manage the quorum device service on the local host (corosync-qnetd), as shown in the following example commands. Note that these commands affect only the corosync-qnetd service.

[root@qdevice:~]# pcs qdevice start net
[root@qdevice:~]# pcs qdevice stop net
[root@qdevice:~]# pcs qdevice enable net
[root@qdevice:~]# pcs qdevice disable net
[root@qdevice:~]# pcs qdevice kill net
25.5.4. Managing the quorum device settings in a cluster
The following sections describe the PCS commands that you can use to manage the quorum device settings in a cluster.
25.5.4.1. Changing quorum device settings
You can change the settings of a quorum device with the pcs quorum device update command.
To change the host option of quorum device model net, use the pcs quorum device remove and the pcs quorum device add commands to set up the configuration properly, unless the old and the new host are the same machine.
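For example, a sketch of moving the quorum device to a new host, reusing the commands shown elsewhere in this section (the host name qdevice-new and the algorithm are placeholders):

[root@node1:~]# pcs quorum device remove
[root@node1:~]# pcs quorum device add model net host=qdevice-new algorithm=ffsplit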
The following command changes the quorum device algorithm to lms.
[root@node1:~]# pcs quorum device update model algorithm=lms
Sending updated corosync.conf to nodes...
node1: Succeeded
node2: Succeeded
Corosync configuration reloaded
Reloading qdevice configuration on nodes...
node1: corosync-qdevice stopped
node2: corosync-qdevice stopped
node1: corosync-qdevice started
node2: corosync-qdevice started
25.5.4.2. Removing a quorum device
Use the following command to remove a quorum device configured on a cluster node.
[root@node1:~]# pcs quorum device remove
Sending updated corosync.conf to nodes...
node1: Succeeded
node2: Succeeded
Corosync configuration reloaded
Disabling corosync-qdevice...
node1: corosync-qdevice disabled
node2: corosync-qdevice disabled
Stopping corosync-qdevice...
node1: corosync-qdevice stopped
node2: corosync-qdevice stopped
Removing qdevice certificates from nodes...
node1: Succeeded
node2: Succeeded
After you have removed a quorum device, you should see the following error message when displaying the quorum device status.
[root@node1:~]# pcs quorum device status
Error: Unable to get quorum status: corosync-qdevice-tool: Can't connect to QDevice socket (is QDevice running?): No such file or directory
25.5.4.3. Destroying a quorum device
To disable and stop a quorum device on the quorum device host and delete all of its configuration files, use the following command.
[root@qdevice:~]# pcs qdevice destroy net
Stopping quorum device...
quorum device stopped
quorum device disabled
Quorum device 'net' configuration files removed