B.3. Configuring an Apache HTTP Server in a Red Hat High Availability Cluster with the pcs Command
This procedure configures an Apache HTTP server in a two-node Red Hat High Availability cluster, using the pcs command to configure the cluster resources. In this example case, clients access Apache through a floating IP address. The web server runs on one of the two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.
This use case requires that your system include the following components:
- A 2-node Red Hat High Availability cluster with power fencing configured for each node. This procedure uses the cluster example provided in Section B.1.2, “Creating and Starting the Cluster”.
- A public virtual IP address, required for Apache.
- Shared storage for the nodes in the cluster, using iSCSI or Fibre Channel.
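Before starting, it can be worth confirming that both nodes see the shared device and that the cluster created in Section B.1.2 is running. This is an optional check, not part of the documented procedure, and it assumes the shared storage appears as /dev/sdb with the partition /dev/sdb1 on both nodes.
# lsblk /dev/sdb          # run on each node; the shared disk and its sdb1 partition should be listed
# pcs cluster status      # both nodes should be reported as online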
Configuring the cluster for this use case requires that you perform the following procedures:
- Configure an ext4 file system mounted on the LVM logical volume my_lv, as described in Section B.3.1, “Configuring an LVM Volume with an ext4 File System”.
- Configure a web server, as described in Section B.3.2, “Web Server Configuration”.
- Ensure that only the cluster is capable of activating the volume group that contains my_lv, and that the volume group will not be activated outside of the cluster on startup, as described in Section B.3.3, “Exclusive Activation of a Volume Group in a Cluster”.
B.3.1. Configuring an LVM Volume with an ext4 File System
This procedure creates an LVM logical volume and an ext4 file system on that volume. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created. Because the /dev/sdb1 partition is shared storage, you perform this procedure on one node only.
- Create an LVM physical volume on partition /dev/sdb1.
# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
- Create the volume group my_vg that consists of the physical volume /dev/sdb1.
# vgcreate my_vg /dev/sdb1
  Volume group "my_vg" successfully created
- Create a logical volume using the volume group my_vg.
# lvcreate -L450 -n my_lv my_vg
  Rounding up size to full physical extent 452.00 MiB
  Logical volume "my_lv" created
You can use the lvs command to display the logical volume.
# lvs
  LV      VG      Attr       LSize    Pool Origin Data%  Move Log Copy%  Convert
  my_lv   my_vg   -wi-a----  452.00m
  ...
- Create an ext4 file system on the logical volume my_lv.
# mkfs.ext4 /dev/my_vg/my_lv
mke2fs 1.42.7 (21-Jan-2013)
Filesystem label=
OS type: Linux
...
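As an optional check that is not part of the documented procedure, you can confirm that the logical volume now carries an ext4 file system before moving on to the web server configuration.
# blkid /dev/my_vg/my_lv      # the TYPE field in the output should report ext4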
B.3.2. Web Server Configuration
- Ensure that the Apache HTTP server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of Apache. On each node, execute the following command.
# yum install -y httpd wget
- In order for the Apache resource agent to get the status of Apache, ensure that the following text is present in the /etc/httpd/conf/httpd.conf file on each node in the cluster, and ensure that it has not been commented out. If this text is not already present, add the text to the end of the file.
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
- Create a web page for Apache to serve up. On one node in the cluster, mount the file system you created in Section B.3.1, “Configuring an LVM Volume with an ext4 File System”, create the file index.html on that file system, then unmount the file system.
# mount /dev/my_vg/my_lv /var/www/
# mkdir /var/www/html
# mkdir /var/www/cgi-bin
# mkdir /var/www/error
# restorecon -R /var/www
# cat <<-END >/var/www/html/index.html
<html>
<body>Hello</body>
</html>
END
# umount /var/www
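If you want to confirm the content and the status handler before handing control to the cluster, one optional check, not part of the documented procedure, is to mount the file system again on one node, start httpd temporarily, and fetch both pages locally with the wget tool installed earlier.
# mount /dev/my_vg/my_lv /var/www/
# systemctl start httpd
# wget -q -O - http://127.0.0.1/                   # should print the Hello page
# wget -q -O - http://127.0.0.1/server-status      # should print the Apache status report
# systemctl stop httpd
# umount /var/www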
B.3.3. Exclusive Activation of a Volume Group in a Cluster
This procedure ensures that only the cluster is capable of activating the volume group that contains my_lv. Note that the lvmetad daemon must be disabled when using Pacemaker. You can check whether the daemon is disabled and whether any lvmetad processes are running by executing the following commands.
# grep use_lvmetad /etc/lvm/lvm.conf
use_lvmetad = 0
# ps -ef | grep -i [l]vm
root     23843 15478  0 11:31 pts/0    00:00:00 grep --color=auto -i lvm
If necessary, set use_lvmetad = 0 in the /etc/lvm/lvm.conf file and stop any running lvmetad processes.
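One way to do this, sketched here on the assumption that the standard Red Hat Enterprise Linux 7 lvm2 unit names are in use, is to change the setting and then stop and disable the lvmetad units on each node.
# sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf    # or edit the file by hand
# systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
# systemctl disable lvm2-lvmetad.service lvm2-lvmetad.socket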
The following procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to automatically activate on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd.
- Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example.
# vgs --noheadings -o vg_name
  my_vg
  rhel_home
  rhel_root
- Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows:
volume_list = [ "rhel_root", "rhel_home" ]
Note: If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [].
- Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete.
# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
- Reboot the node.
Note: If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node.
- When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node, enter the following command.
# pcs cluster start
Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on each of the nodes with the following command.
# pcs cluster start --all
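Once the node is back up, an optional check that is not part of the documented procedure is to confirm that the cluster-managed volume group is no longer activated automatically. In the lvs attribute string, the fifth character is a when a logical volume is active, so my_lv should no longer show it.
# lvs -o lv_name,lv_attr my_vg     # the fifth Attr character should not be a, since my_lv is not activated locally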
B.3.4. Creating the Resources and Resource Groups with the pcs Command
This procedure creates four cluster resources, administered as members of a resource group named apachegroup. The resources to create are as follows, listed in the order in which they will start.
- An LVM resource named my_lvm that uses the LVM volume group you created in Section B.3.1, “Configuring an LVM Volume with an ext4 File System”.
- A Filesystem resource named my_fs that uses the file system device /dev/my_vg/my_lv you created in Section B.3.1, “Configuring an LVM Volume with an ext4 File System”.
- An IPaddr2 resource, which is a floating IP address for the apachegroup resource group. The IP address must not be one already associated with a physical node. If the IPaddr2 resource's NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP addresses used by the cluster nodes; otherwise, the NIC device to assign the floating IP address cannot be properly detected. (A sketch following this list shows how the interface can be named explicitly.)
- An apache resource named Website that uses the index.html file and the Apache configuration you defined in Section B.3.2, “Web Server Configuration”.
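If the floating address cannot share a network with the nodes' static addresses, the IPaddr2 agent lets you name the interface explicitly through its nic parameter. The following command is only an illustration of that option, with eth1 as a placeholder interface name; it is not part of the example configuration built in this procedure.
# pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 cidr_netmask=24 nic=eth1 --group apachegroup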
The following procedure creates the resource group apachegroup and the resources that the group contains. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only.
- The following command creates the LVM resource my_lvm. This command specifies the exclusive=true parameter to ensure that only the cluster is capable of activating the LVM logical volume. Because the resource group apachegroup does not yet exist, this command creates the resource group.
# pcs resource create my_lvm LVM volgrpname=my_vg \
exclusive=true --group apachegroup
When you create a resource, the resource is started automatically. You can use the following command to confirm that the resource was created and has started.
# pcs resource show
 Resource Group: apachegroup
     my_lvm     (ocf::heartbeat:LVM):   Started
You can manually stop and start an individual resource with the pcs resource disable and pcs resource enable commands.
- The following commands create the remaining resources for the configuration, adding them to the existing resource group apachegroup.
# pcs resource create my_fs Filesystem \
device="/dev/my_vg/my_lv" directory="/var/www" fstype="ext4" --group apachegroup
# pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 \
cidr_netmask=24 --group apachegroup
# pcs resource create Website apache \
statusurl="http://127.0.0.1/server-status" --group apachegroup
- After creating the resources and the resource group that contains them, you can check the status of the cluster. Note that all four resources are running on the same node.
# pcs status
Cluster name: my_cluster
Last updated: Wed Jul 31 16:38:51 2013
Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com
Stack: corosync
Current DC: z2.example.com (2) - partition with quorum
Version: 1.1.10-5.el7-9abe687
2 Nodes configured
6 Resources configured

Online: [ z1.example.com z2.example.com ]

Full list of resources:

 myapc  (stonith:fence_apc_snmp):       Started z1.example.com
 Resource Group: apachegroup
     my_lvm     (ocf::heartbeat:LVM):   Started z1.example.com
     my_fs      (ocf::heartbeat:Filesystem):    Started z1.example.com
     VirtualIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
     Website    (ocf::heartbeat:apache):        Started z1.example.com
Note that if you have not configured a fencing device for your cluster, as described in Section B.2, “Fencing Configuration”, by default the resources do not start.
- Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello".
Hello
If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. For information on the pcs resource debug-start command, see the High Availability Add-On Reference manual.
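If you prefer a command-line check to a browser, you can confirm that the floating address has been configured on the node that is running the group and fetch the page with wget. These are optional checks using the example address from this procedure.
# ip addr show | grep 198.51.100.3      # run on the node reported as running VirtualIP
# wget -q -O - http://198.51.100.3      # should print the Hello page from any host that can reach the address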
B.3.5. Testing the Resource Configuration
In the cluster status display shown in Section B.3.4, “Creating the Resources and Resource Groups with the pcs Command”, all of the resources are running on node z1.example.com. You can test whether the resource group fails over to node z2.example.com by using the following procedure to put the first node in standby mode, after which the node will no longer be able to host resources.
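To see for yourself that the web site remains reachable while the group moves, you can leave a simple polling loop running on a client for the duration of the test. This is only an illustrative check and assumes the example floating address 198.51.100.3 is reachable from that client.
# while true; do wget -q -O - http://198.51.100.3 || echo "request failed"; sleep 1; done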
- The following command puts node z1.example.com in standby mode.
# pcs cluster standby z1.example.com
- After putting node z1 in standby mode, check the cluster status. Note that the resources should now all be running on z2.
# pcs status
Cluster name: my_cluster
Last updated: Wed Jul 31 17:16:17 2013
Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com
Stack: corosync
Current DC: z2.example.com (2) - partition with quorum
Version: 1.1.10-5.el7-9abe687
2 Nodes configured
6 Resources configured

Node z1.example.com (1): standby
Online: [ z2.example.com ]

Full list of resources:

 myapc  (stonith:fence_apc_snmp):       Started z1.example.com
 Resource Group: apachegroup
     my_lvm     (ocf::heartbeat:LVM):   Started z2.example.com
     my_fs      (ocf::heartbeat:Filesystem):    Started z2.example.com
     VirtualIP  (ocf::heartbeat:IPaddr2):       Started z2.example.com
     Website    (ocf::heartbeat:apache):        Started z2.example.com
The web site at the defined IP address should still display, without interruption.
- To remove node z1.example.com from standby mode, enter the following command.
# pcs cluster unstandby z1.example.com
Note: Removing a node from standby mode does not in itself cause the resources to fail back over to that node. For information on controlling which node resources can run on, see the chapter on configuring cluster resources in the Red Hat High Availability Add-On Reference.
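As an example of controlling placement, one way to move the group back to z1.example.com after it leaves standby mode is the pcs resource move command, which works by adding a location constraint; this is only a sketch of the approach, and the Red Hat High Availability Add-On Reference covers the details and the cleanup of the constraint it creates.
# pcs resource move apachegroup z1.example.com
# pcs constraint list      # shows the location constraint that the move created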