Appendix B. Configuration Example Using pcs Commands
This appendix provides a step-by-step procedure for configuring a two-node Red Hat Enterprise Linux High Availability Add-On cluster using the pcs command, in Red Hat Enterprise Linux release 6.6 and later. It also describes how to configure an Apache HTTP server in this cluster.
Configuring the cluster described in this appendix requires that your system include the following components:
- 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com.
- Network switches for the private network, required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
- A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of
B.1. Initial System Setup
This section describes the initial setup of the system that you will use to create the cluster.
B.1.1. Installing the Cluster Software
Use the following procedure to install the cluster software.
- Ensure that the pcs, pacemaker, cman, and fence-agents packages are installed on each node in the cluster.
# yum install -y pcs pacemaker cman fence-agents
- After installation, to prevent corosync from starting without cman, execute the following commands on all nodes in the cluster.
# chkconfig corosync off
# chkconfig cman off
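To confirm the installation step succeeded on a node before moving on, you can query the package database. The following is a minimal sketch, assuming a RHEL 6 node where rpm is available; the check_installed helper name is illustrative, not part of the product:

```shell
# Report any of the required cluster packages that are not installed.
# $1 is the query command (normally "rpm -q"); the remaining arguments
# are the package names to check.
check_installed() {
    query=$1; shift
    missing=""
    for pkg in "$@"; do
        # rpm -q exits non-zero when the package is not installed
        $query "$pkg" >/dev/null 2>&1 || missing="$missing $pkg"
    done
    # print the missing packages, if any (leading space trimmed)
    echo "${missing# }"
}

# Typical use on a cluster node:
#   check_installed "rpm -q" pcs pacemaker cman fence-agents
```

Passing the query command as a parameter keeps the helper easy to exercise on a machine without rpm, and an empty result means all packages are present.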
B.1.2. Creating and Starting the Cluster
This section provides the procedure for creating the initial cluster, on which you will configure the cluster resources.
- In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster, which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node.
# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
- Before the cluster can be configured, the pcsd daemon must be started. This daemon works with the pcs command to manage configuration across the nodes in the cluster. On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start.
# service pcsd start
# chkconfig pcsd on
- Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs. The following command authenticates user hacluster on z1.example.com for both of the nodes in the example two-node cluster, z1.example.com and z2.example.com.
# pcs cluster auth z1.example.com z2.example.com
Username: hacluster
Password:
z1.example.com: Authorized
z2.example.com: Authorized
- Execute the following command from z1.example.com to create the two-node cluster my_cluster that consists of nodes z1.example.com and z2.example.com. This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster.
# pcs cluster setup --start --name my_cluster \
z1.example.com z2.example.com
z1.example.com: Succeeded
z1.example.com: Starting Cluster...
z2.example.com: Succeeded
z2.example.com: Starting Cluster...
- Optionally, you can enable the cluster services to run on each node in the cluster when the node is booted.
Note: For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node.
# pcs cluster enable --all
You can display the current status of the cluster with the pcs cluster status command. Because there may be a slight delay before the cluster is up and running when you start the cluster services with the --start option of the pcs cluster setup command, you should ensure that the cluster is up and running before performing any subsequent actions on the cluster and its configuration.
# pcs cluster status
Cluster Status:
 Last updated: Thu Jul 25 13:01:26 2013
 Last change: Thu Jul 25 13:04:45 2013 via crmd on z2.example.com
 Stack: corosync
 Current DC: z2.example.com (2) - partition with quorum
 Version: 1.1.10-5.el7-9abe687
 2 Nodes configured
 0 Resources configured
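Because of the startup delay mentioned above, scripts that continue configuring the cluster may want to poll rather than sleep for a fixed interval. The following is a minimal POSIX-shell sketch; the function name, attempt count, and delay are illustrative values, not anything mandated by pcs:

```shell
# Poll a probe command until it succeeds or the attempts run out.
# $1: probe command (e.g. "pcs cluster status")
# $2: maximum number of attempts
# $3: seconds to sleep between attempts
wait_for_cluster() {
    probe=$1; tries=$2; delay=$3
    i=0
    while [ "$i" -lt "$tries" ]; do
        # pcs cluster status exits non-zero until the cluster is up
        if $probe >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Typical use before configuring resources:
#   wait_for_cluster "pcs cluster status" 30 2 || echo "cluster did not come up"
```

Relying on the probe's exit status keeps the loop independent of the exact text that pcs cluster status prints.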