Chapter 5. Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster
The following procedure configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster using pcs to configure cluster resources. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.
Figure 5.1, “Apache in a Red Hat High Availability Two-Node Cluster” shows a high-level overview of the cluster. The cluster is a two-node Red Hat High Availability cluster configured with a network power switch and with shared storage. The cluster nodes are connected to a public network, for client access to the Apache HTTP server through a virtual IP. The Apache server runs on either Node 1 or Node 2, each of which has access to the storage on which the Apache data is kept. In this illustration, the web server is running on Node 1 while Node 2 is available to run the server if Node 1 becomes inoperative.
Figure 5.1. Apache in a Red Hat High Availability Two-Node Cluster
This use case requires that your system include the following components:
- A two-node Red Hat High Availability cluster with power fencing configured for each node. A private network is recommended but not required. This procedure uses the cluster example provided in Creating a Red Hat High-Availability cluster with Pacemaker.
- A public virtual IP address, required for Apache.
- Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or other shared network block device.
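Before you begin, you can optionally confirm these prerequisites from the cluster nodes. The following commands are a minimal sketch of such a check, not part of the documented procedure; the shared device name /dev/sdb is an assumed example and may differ in your environment. Run the lsblk check on both nodes to confirm that each node sees the shared device.
# pcs status
# pcs stonith status
# lsblk /dev/sdb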
The cluster is configured with an Apache resource group, which contains the cluster components that the web server requires: an LVM resource, a file system resource, an IP address resource, and a web server resource. This resource group can fail over from one node of the cluster to the other, allowing either node to run the web server. Before creating the resource group for this cluster, you must perform the following procedures:
- Configure an LVM logical volume with an ext4 file system on that volume.
- Configure a web server.
After performing these steps, you create the resource group and the resources it contains.
5.1. Configuring an LVM volume with an ext4 file system in a Pacemaker cluster
This use case requires that you create an LVM logical volume on storage that is shared between the nodes of the cluster.
LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only.
The following procedure creates an LVM logical volume and then creates an ext4 file system on that volume for use in a Pacemaker cluster. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created.
On both nodes of the cluster, perform the following steps to set the value for the LVM system ID to the value of the uname identifier for the system. The LVM system ID will be used to ensure that only the cluster is capable of activating the volume group.
Set the system_id_source configuration option in the /etc/lvm/lvm.conf configuration file to uname.
# Configuration option global/system_id_source.
system_id_source = "uname"
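You can optionally confirm the effective setting with the lvmconfig command; this verification step is a suggestion rather than part of the documented procedure, and the exact output formatting may vary with your LVM version.
# lvmconfig global/system_id_source
system_id_source="uname"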
Verify that the LVM system ID on the node matches the uname for the node.
# lvm systemid
  system ID: z1.example.com
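The system ID reported by LVM should match the node name that uname reports; for example, on the first node you would expect output similar to the following.
# uname -n
z1.example.com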
Create the LVM volume and create an ext4 file system on that volume. Since the /dev/sdb1 partition is storage that is shared, you perform this part of the procedure on one node only.
Create an LVM physical volume on partition /dev/sdb1.
# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
Create the volume group my_vg that consists of the physical volume /dev/sdb1.
# vgcreate my_vg /dev/sdb1
  Volume group "my_vg" successfully created
Verify that the new volume group has the system ID of the node on which you are running and from which you created the volume group.
# vgs -o+systemid
  VG     #PV #LV #SN Attr   VSize  VFree  System ID
  my_vg    1   0   0 wz--n- <1.82t <1.82t z1.example.com
Create a logical volume using the volume group my_vg.
# lvcreate -L450 -n my_lv my_vg
  Rounding up size to full physical extent 452.00 MiB
  Logical volume "my_lv" created
You can use the lvs command to display the logical volume.
# lvs
  LV      VG      Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert
  my_lv   my_vg   -wi-a---- 452.00m
  ...
Create an ext4 file system on the logical volume my_lv.
# mkfs.ext4 /dev/my_vg/my_lv
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 462848 1k blocks and 115824 inodes
...
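If you want to confirm the new file system before continuing, you can optionally query it with blkid; this check is not part of the documented procedure, and the UUID shown will differ on your system.
# blkid /dev/my_vg/my_lv
/dev/my_vg/my_lv: UUID="<uuid>" TYPE="ext4"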
5.2. Configuring an Apache HTTP Server
The following procedure configures an Apache HTTP Server.
Ensure that the Apache HTTP Server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of the Apache HTTP Server.
On each node, execute the following command.
yum install -y httpd wget
If you are running the firewalld daemon, on each node in the cluster enable the ports that are required by the Red Hat High Availability Add-On.
# firewall-cmd --permanent --add-service=high-availability
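Because the --permanent option changes only the stored firewall configuration, the rule does not take effect in the running firewall until it is reloaded. The reload and verification commands below are a suggested addition rather than part of the documented procedure. If clients reach the web server through the node firewall, you may also need to allow the http service; whether that is required is an assumption about your environment.
# firewall-cmd --reload
# firewall-cmd --list-services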
In order for the Apache resource agent to get the status of the Apache HTTP Server, ensure that the following text is present in the /etc/httpd/conf/httpd.conf file on each node in the cluster, and ensure that it has not been commented out. If this text is not already present, add the text to the end of the file.
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
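After editing the file, you can optionally check the Apache configuration syntax on each node; this verification step is a suggestion and not part of the documented procedure.
# httpd -t
Syntax OK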
When you use the apache resource agent to manage Apache, it does not use systemd. Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache.
Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster.
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
Replace the line you removed with the following three lines.
/usr/bin/test -f /var/run/httpd.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /var/run/httpd.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true
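To confirm that the edited file still parses cleanly, you can optionally run logrotate in debug mode, which reports what it would do without rotating any logs; this check is a suggestion and not part of the documented procedure.
# logrotate -d /etc/logrotate.d/httpd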
Create a web page for Apache to serve up. On one node in the cluster, mount the file system you created in Configuring an LVM volume with an ext4 file system, create the file index.html on that file system, and then unmount the file system.
# mount /dev/my_vg/my_lv /var/www/
# mkdir /var/www/html
# restorecon -R /var/www
# cat <<-END >/var/www/html/index.html
<html>
<body>Hello</body>
</html>
END
# umount /var/www
5.3. Creating the resources and resource groups
This use case requires that you create four cluster resources. To ensure these resources all run on the same node, they are configured as part of the resource group apachegroup. The resources to create are as follows, listed in the order in which they will start.
- An LVM-activate resource named my_lvm that uses the LVM volume group you created in Configuring an LVM volume with an ext4 file system.
- A Filesystem resource named my_fs, that uses the file system device /dev/my_vg/my_lv you created in Configuring an LVM volume with an ext4 file system.
- An IPaddr2 resource, which is a floating IP address for the apachegroup resource group. The IP address must not be one already associated with a physical node. If the IPaddr2 resource’s NIC device is not specified, the floating IP must reside on the same network as one of the node’s statically assigned IP addresses, otherwise the NIC device to assign the floating IP address cannot be properly detected.
- An apache resource named Website that uses the index.html file and the Apache configuration you defined in Configuring an Apache HTTP server.
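If you want to review the parameters that each of these resource agents accepts before creating the resources, you can display them with the pcs resource describe command, as in the following sketch.
# pcs resource describe ocf:heartbeat:LVM-activate
# pcs resource describe ocf:heartbeat:Filesystem
# pcs resource describe ocf:heartbeat:IPaddr2
# pcs resource describe ocf:heartbeat:apache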
The following procedure creates the resource group apachegroup and the resources that the group contains. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only.
The following command creates the LVM-activate resource my_lvm. Because the resource group apachegroup does not yet exist, this command creates the resource group.
Note
Do not configure more than one LVM-activate resource that uses the same LVM volume group in an active/passive HA configuration, as this could cause data corruption. Additionally, do not configure an LVM-activate resource as a clone resource in an active/passive HA configuration.
[root@z1 ~]# pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg \
vg_access_mode=system_id --group apachegroup
When you create a resource, the resource is started automatically. You can use the following command to confirm that the resource was created and has started.
[root@z1 ~]# pcs resource status
 Resource Group: apachegroup
     my_lvm     (ocf::heartbeat:LVM-activate):  Started
You can manually stop and start an individual resource with the pcs resource disable and pcs resource enable commands.
The following commands create the remaining resources for the configuration, adding them to the existing resource group apachegroup.
[root@z1 ~]# pcs resource create my_fs Filesystem \
device="/dev/my_vg/my_lv" directory="/var/www" fstype="ext4" \
--group apachegroup

[root@z1 ~]# pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 \
cidr_netmask=24 --group apachegroup

[root@z1 ~]# pcs resource create Website apache \
statusurl="http://127.0.0.1/server-status" --group apachegroup
After creating the resources and the resource group that contains them, you can check the status of the cluster. Note that all four resources are running on the same node.
[root@z1 ~]# pcs status
Cluster name: my_cluster
Last updated: Wed Jul 31 16:38:51 2013
Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com
Stack: corosync
Current DC: z2.example.com (2) - partition with quorum
Version: 1.1.10-5.el7-9abe687
2 Nodes configured
6 Resources configured

Online: [ z1.example.com z2.example.com ]

Full list of resources:

 myapc  (stonith:fence_apc_snmp):       Started z1.example.com
 Resource Group: apachegroup
     my_lvm     (ocf::heartbeat:LVM-activate):  Started z1.example.com
     my_fs      (ocf::heartbeat:Filesystem):    Started z1.example.com
     VirtualIP  (ocf::heartbeat:IPaddr2):       Started z1.example.com
     Website    (ocf::heartbeat:apache):        Started z1.example.com
Note that if you have not configured a fencing device for your cluster, by default the resources do not start.
Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello".
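For example, from a client on the public network you could fetch the page with curl or wget; the address 198.51.100.3 is the example floating IP address used in this procedure, and the exact output depends on the index.html file you created.
$ curl http://198.51.100.3/
<html>
<body>Hello</body>
</html>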
If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration.
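For example, to test the web server resource by itself you could run the following on the node where the rest of the group is active, substituting the name of whichever resource you want to examine.
# pcs resource debug-start Website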
5.4. Testing the resource configuration
In the cluster status display shown in Creating the resources and resource groups, all of the resources are running on node z1.example.com. You can test whether the resource group fails over to node z2.example.com by using the following procedure to put the first node in standby mode, after which the node will no longer be able to host resources.
The following command puts node z1.example.com in standby mode.
[root@z1 ~]# pcs node standby z1.example.com
After putting node z1.example.com in standby mode, check the cluster status. Note that the resources should now all be running on z2.example.com.
[root@z1 ~]# pcs status
Cluster name: my_cluster
Last updated: Wed Jul 31 17:16:17 2013
Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com
Stack: corosync
Current DC: z2.example.com (2) - partition with quorum
Version: 1.1.10-5.el7-9abe687
2 Nodes configured
6 Resources configured

Node z1.example.com (1): standby
Online: [ z2.example.com ]

Full list of resources:

 myapc  (stonith:fence_apc_snmp):       Started z1.example.com
 Resource Group: apachegroup
     my_lvm     (ocf::heartbeat:LVM-activate):  Started z2.example.com
     my_fs      (ocf::heartbeat:Filesystem):    Started z2.example.com
     VirtualIP  (ocf::heartbeat:IPaddr2):       Started z2.example.com
     Website    (ocf::heartbeat:apache):        Started z2.example.com
The web site at the defined IP address should still display, without interruption.
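One way to check this is to poll the floating IP address from a client in a loop while the failover takes place; the following sketch assumes curl is available on the client and prints the HTTP status code once per second.
$ while true; do curl -s -o /dev/null -w "%{http_code}\n" http://198.51.100.3/; sleep 1; done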
To remove z1.example.com from standby mode, enter the following command.
[root@z1 ~]# pcs node unstandby z1.example.com
Note
Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information on the resource-stickiness meta attribute, see Configuring a resource to prefer its current node.
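As a sketch of how stickiness is typically applied, the following command sets the resource-stickiness meta attribute on the resource group; the value 100 is an arbitrary example rather than a recommendation for this configuration.
# pcs resource meta apachegroup resource-stickiness=100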