Chapter 6. Configuring a GFS2 File System in a Pacemaker Cluster
The following procedure is an outline of the steps required to set up a Pacemaker cluster that includes a GFS2 file system.
- On each node in the cluster, install the High Availability and Resilient Storage packages.
# yum groupinstall 'High Availability' 'Resilient Storage'
- Create the Pacemaker cluster and configure fencing for the cluster. For information on configuring a Pacemaker cluster, see Configuring the Red Hat High Availability Add-On with Pacemaker.
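As a sketch, cluster creation and fencing on RHEL 7 might look like the following; the node names, cluster name, fence agent choice (`fence_ipmilan`), and its address and credentials are placeholder assumptions for your environment, not part of the original procedure:

```shell
# Authenticate the cluster nodes to pcsd (hypothetical node names).
pcs cluster auth z1.example.com z2.example.com -u hacluster -p password
# Create and start a two-node cluster.
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com
# Configure a fence device; agent and credentials are environment-specific.
pcs stonith create myfence fence_ipmilan pcmk_host_list="z1.example.com z2.example.com" ipaddr="fence.example.com" login="admin" passwd="password"
```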
- On each node in the cluster, enable the clvmd service. If you will be using cluster-mirrored volumes, also enable the cmirrord service.
# chkconfig clvmd on
# chkconfig cmirrord on
After you enable these daemons, when starting and stopping Pacemaker or the cluster through normal means using pcs cluster start, pcs cluster stop, service pacemaker start, or service pacemaker stop, the clvmd and cmirrord daemons will be started and stopped as needed.
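To confirm that both services are enabled for the expected runlevels, you can query chkconfig (a quick check, not part of the original procedure):

```shell
# List the runlevel configuration for each daemon.
chkconfig --list clvmd
chkconfig --list cmirrord
```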
- On one node in the cluster, perform the following steps:
- Set the global Pacemaker parameter no-quorum-policy to freeze.
Note
By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mount itself can be stopped correctly. Any attempt to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.
To address this situation, set no-quorum-policy=freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.
# pcs property set no-quorum-policy=freeze
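You can verify the setting afterwards; pcs property list prints the cluster properties that have been explicitly set:

```shell
# Confirm that no-quorum-policy is now set to freeze.
pcs property list | grep no-quorum-policy
```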
- After ensuring that the locking type is set to 3 in the /etc/lvm/lvm.conf file to support clustered locking, create the clustered logical volume and format the volume with a GFS2 file system. Ensure that you create enough journals for each of the nodes in your cluster.
# vgcreate -Ay -cy cluster_vg /dev/vdb
# lvcreate -L5G -n cluster_lv cluster_vg
# mkfs.gfs2 -j2 -p lock_dlm -t rhel7-demo:gfs2-demo /dev/cluster_vg/cluster_lv
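Two related points, shown as a sketch. The clustered locking type mentioned above can be set with lvmconf rather than by editing the file by hand, and because -j2 creates only two journals, gfs2_jadd can add journals to the mounted file system if the cluster later grows (device path taken from the example above):

```shell
# Set locking_type = 3 in /etc/lvm/lvm.conf, then verify.
lvmconf --enable-cluster
grep -E '^\s*locking_type' /etc/lvm/lvm.conf

# Add one more journal if a node is added later (file system must be mounted).
gfs2_jadd -j1 /dev/cluster_vg/cluster_lv
```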
- Configure a clusterfs resource.
You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command for full configuration options.
This cluster resource creation command specifies the noatime mount option.
# pcs resource create clusterfs Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2-demo" fstype="gfs2" "options=noatime" op monitor interval=10s on-fail=fence clone interleave=true
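After creating the resource, you can check that the clone has started on every node; clusterfs-clone is the identifier pcs assigns to the clone by default, and the output format varies with the pcs version:

```shell
# Show the state of all resources, then the clone's configuration.
pcs status resources
pcs resource show clusterfs-clone
```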
- Verify that GFS2 is mounted as expected.
# mount | grep /mnt/gfs2-demo
/dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2-demo type gfs2 (rw,noatime,seclabel)
- (Optional) Reboot all cluster nodes to verify GFS2 persistence and recovery.