Appendix B. Configuration Example Using pcs Commands

This appendix provides a step-by-step procedure for configuring a two-node Red Hat Enterprise Linux High Availability Add-On cluster with the pcs command in Red Hat Enterprise Linux release 6.6 and later. It also describes how to configure an Apache HTTP server in this cluster.
Configuring the cluster described in this appendix requires that your system include the following components:
  • 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com.
  • Network switches for the private network, required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
  • A power fencing device for each node of the cluster. This example uses two ports of an APC power switch with a host name of zapc.example.com.

B.1. Initial System Setup

This section describes the initial setup of the system that you will use to create the cluster.

B.1.1. Installing the Cluster Software

Use the following procedure to install the cluster software.
  1. Ensure that pcs, pacemaker, cman, and fence-agents are installed.
    # yum install -y pcs pacemaker cman fence-agents
    
  2. After installation, to prevent corosync from starting without cman, execute the following commands on all nodes in the cluster.
    # chkconfig corosync off
    # chkconfig cman off
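
To confirm the result on each node before you continue, you can query the installed packages and the boot configuration of the services you just disabled. The following commands are a minimal verification sketch, not part of the documented procedure; the package versions reported will vary by system.
# rpm -q pcs pacemaker cman fence-agents
# chkconfig --list corosync
# chkconfig --list cman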

B.1.2. Creating and Starting the Cluster

This section provides the procedure for creating the initial cluster, on which you will configure the cluster resources.
  1. In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster, which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node.
    # passwd hacluster
    Changing password for user hacluster.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
    
  2. Before the cluster can be configured, the pcsd daemon must be started. This daemon works with the pcs command to manage configuration across the nodes in the cluster.
    On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start.
    # service pcsd start
    # chkconfig pcsd on
  3. Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs.
    The following command authenticates user hacluster on z1.example.com for both of the nodes in the example two-node cluster, z1.example.com and z2.example.com.
    [root@z1 ~]# pcs cluster auth z1.example.com z2.example.com
    Username: hacluster
    Password:
    z1.example.com: Authorized
    z2.example.com: Authorized
    
  4. Execute the following command from z1.example.com to create the two-node cluster my_cluster that consists of nodes z1.example.com and z2.example.com. This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster.
    [root@z1 ~]# pcs cluster setup --start --name my_cluster \
    z1.example.com z2.example.com
    z1.example.com: Succeeded
    z1.example.com: Starting Cluster...
    z2.example.com: Succeeded
    z2.example.com: Starting Cluster...
    
  5. Optionally, you can enable the cluster services to run on each node in the cluster when the node is booted.

    Note

    For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node.
    # pcs cluster enable --all
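
Whether or not you enable the services at boot, you can check what the setup created on each node. The commands below are a verification sketch, not part of the documented procedure; they assume the default cman-based stack on Red Hat Enterprise Linux 6, where pcs cluster setup writes the cluster definition to /etc/cluster/cluster.conf on every node and pcs cluster enable turns on the pacemaker and cman init scripts.
[root@z1 ~]# cat /etc/cluster/cluster.conf
[root@z1 ~]# chkconfig --list pacemaker
[root@z1 ~]# chkconfig --list cman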
You can display the current status of the cluster with the pcs cluster status command. When you start the cluster services with the --start option of the pcs cluster setup command, there may be a slight delay before the cluster is up and running, so verify that the cluster is up and running before you perform any subsequent actions on the cluster and its configuration.
[root@z1 ~]# pcs cluster status
Cluster Status:
 Last updated: Thu Jul 25 13:01:26 2013
 Last change: Thu Jul 25 13:04:45 2013 via crmd on z2.example.com
 Stack: corosync
 Current DC: z2.example.com (2) - partition with quorum
 Version: 1.1.10-5.el7-9abe687
 2 Nodes configured
 0 Resources configured
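
If you drive this setup from a script rather than interactively, you may prefer to wait for the cluster to come up instead of re-running the status command by hand. The following loop is a minimal sketch of that idea, not part of the documented procedure; it retries pcs cluster status for up to one minute and then prints the final status.
# Poll for up to 60 seconds until pcs cluster status succeeds.
for i in $(seq 1 12); do
    pcs cluster status > /dev/null 2>&1 && break
    sleep 5
done
pcs cluster status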