Chapter 6. Configuring a GFS2 File System in a Pacemaker Cluster
The following procedure is an outline of the steps required to set up a Pacemaker cluster that includes a GFS2 file system.
- On each node in the cluster, install the High Availability and Resilient Storage packages.

  # yum groupinstall 'High Availability' 'Resilient Storage'

- Create the Pacemaker cluster and configure fencing for the cluster. For information on configuring a Pacemaker cluster, see Configuring the Red Hat High Availability Add-On with Pacemaker.
- On each node in the cluster, enable the clvmd service. If you will be using cluster-mirrored volumes, also enable the cmirrord service.

  # chkconfig clvmd on
  # chkconfig cmirrord on

  After you enable these daemons, starting and stopping Pacemaker or the cluster through the normal means (pcs cluster start, pcs cluster stop, service pacemaker start, or service pacemaker stop) starts and stops the clvmd and cmirrord daemons as needed.

- On one node in the cluster, perform the following steps:
- Set the global Pacemaker parameter no-quorum-policy to freeze.

  Note

  By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option but, unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mount itself can be correctly stopped. Any attempts to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.

  To address this situation, set no-quorum-policy=freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.

  # pcs property set no-quorum-policy=freeze

  A way to confirm this setting afterwards is sketched after this procedure.

- After ensuring that the locking type is set to 3 in the /etc/lvm/lvm.conf file to support clustered locking (a check for this is also sketched after this procedure), create the clustered LV and format the volume with a GFS2 file system. Ensure that you create enough journals for each of the nodes in your cluster.

  # pvcreate /dev/vdb
  # vgcreate -Ay -cy cluster_vg /dev/vdb
  # lvcreate -L5G -n cluster_lv cluster_vg
  # mkfs.gfs2 -j2 -p lock_dlm -t rhel7-demo:gfs2-demo /dev/cluster_vg/cluster_lv
clusterfsresource.You should not add the file system to the/etc/fstabfile because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration withoptions=options. Run thepcs resource describe Filesystemcommand for full configuration options.This cluster resource creation command specifies thenoatimemount option.#
pcs resource create clusterfs Filesystem device="/dev/cluster_vg/cluster_lv" directory="/var/mountpoint" fstype="gfs2" "options=noatime" op monitor interval=10s on-fail=fence clone interleave=true - Verify that GFS2 is mounted as expected.
- Verify that GFS2 is mounted as expected.

  # mount | grep /mnt/gfs2-demo
  /dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2-demo type gfs2 (rw,noatime,seclabel)
- (Optional) Reboot all cluster nodes to verify GFS2 persistence and recovery. A post-reboot check is sketched after this procedure.
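
Checking the no-quorum-policy setting: as a minimal sketch, one way to confirm that the property set in the procedure above took effect is to list the cluster properties and filter for it. The output shown here is illustrative; the exact formatting depends on your pcs version.

  # pcs property list | grep no-quorum-policy
  no-quorum-policy: freeze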
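
Checking the LVM locking type: the procedure above assumes that locking_type is set to 3 in /etc/lvm/lvm.conf. The following sketch shows one way to verify it and, if needed, enable clustered locking with the lvmconf helper shipped with the lvm2 packages; the sample output assumes the value has already been set.

  # grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf
      locking_type = 3
  # lvmconf --enable-cluster    # rewrites lvm.conf with locking_type = 3 if it is not already set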
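
Checking the clusterfs resource: a rough sketch of how you might confirm the resource after creating it. It assumes the clone received the default clusterfs-clone name; adjust to whatever your cluster actually reports.

  # pcs status resources            # the clusterfs-clone set should report Started on every node
  # pcs resource show clusterfs     # review the configured device, directory, fstype, and options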
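
Checking after a reboot: a minimal sketch for the optional reboot step. It assumes cluster services are not enabled to start at boot; if they are, skip the pcs cluster start command. Once the clusterfs resource starts, the GFS2 file system should be mounted again without any entry in /etc/fstab.

  # pcs cluster start --all     # start cluster services on every node after the reboot
  # pcs status                  # wait until all nodes are online and resources report Started
  # mount | grep gfs2           # the GFS2 mount should reappear once the resource starts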
