4.2. Configuring a Clustered LVM Volume with a GFS2 File System
This use case requires that you create a clustered LVM logical volume on storage that is shared between the nodes of the cluster. This section describes how to create a clustered LVM logical volume with a GFS2 file system on that volume. In this example, the shared partition /dev/vdb is used to store the LVM physical volume from which the LVM logical volume will be created.
LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only.
Before starting this procedure, install the lvm2-cluster and gfs2-utils packages, which are part of the Resilient Storage channel, on both nodes of the cluster.

yum install lvm2-cluster gfs2-utils

Since the /dev/vdb partition is storage that is shared, you perform this procedure on one node only.
- Set the global Pacemaker parameter no-quorum-policy to freeze. This prevents the entire cluster from being fenced every time quorum is lost. For further information on setting this policy, see Global File System 2.

pcs property set no-quorum-policy=freeze
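To confirm that the policy took effect, you can query the property afterwards. This is a quick sanity check using the same pcs tooling shown in this procedure; on newer pcs releases the equivalent subcommand is pcs property config.

```shell
# Display the current value of the no-quorum-policy cluster property
pcs property show no-quorum-policy
```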
- Set up a dlm resource. This is a required dependency for the clvmd service and the GFS2 file system.

pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
- Set up clvmd as a cluster resource.

pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Note that the ocf:heartbeat:clvm resource agent, as part of the start procedure, sets the locking_type parameter in the /etc/lvm/lvm.conf file to 3 and disables the lvmetad daemon.
- Set up the clvmd and dlm dependency and startup order. The clvmd resource must start after the dlm resource and must run on the same node as the dlm resource.

[root@z1 ~]# pcs constraint order start dlm-clone then clvmd-clone
Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@z1 ~]# pcs constraint colocation add clvmd-clone with dlm-clone
- Verify that the dlm and clvmd resources are running on all nodes.

[root@z1 ~]# pcs status
...
Full list of resources:
...
 Clone Set: dlm-clone [dlm]
     Started: [ z1 z2 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ z1 z2 ]
- Create the clustered logical volume.

[root@z1 ~]# pvcreate /dev/vdb
[root@z1 ~]# vgcreate -Ay -cy cluster_vg /dev/vdb
[root@z1 ~]# lvcreate -L4G -n cluster_lv cluster_vg
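If you want to confirm that the volume group was created as a clustered volume group (the effect of the -cy option above), vgs can report the volume group attributes. This is an optional check, using the cluster_vg name from this example; a c in the sixth position of the Attr column indicates a clustered volume group.

```shell
# Report the volume group attributes; a 'c' in the sixth
# character of the Attr column marks a clustered volume group
vgs -o vg_name,vg_attr cluster_vg
```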
- Optionally, to verify that the volume was created successfully, you can use the lvs command to display the logical volume.

[root@z1 ~]# lvs
  LV         VG         Attr       LSize ...
  cluster_lv cluster_vg -wi-ao---- 4.00g ...
- Format the volume with a GFS2 file system. In this example, my_cluster is the cluster name. This example specifies -j 2 to indicate two journals because the number of journals you configure must equal the number of nodes in the cluster.

[root@z1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t my_cluster:samba /dev/cluster_vg/cluster_lv
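If you later add a node to the cluster, the file system will need an additional journal. One way to do this is with the gfs2_jadd utility against the mounted file system; the mount point here is the /mnt/gfs2share directory configured later in this procedure, and the command is run on one node only.

```shell
# Add one journal to the mounted GFS2 file system
# to accommodate a newly added cluster node
gfs2_jadd -j 1 /mnt/gfs2share
```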
- Create a Filesystem resource, which configures Pacemaker to mount and manage the file system. This example creates a Filesystem resource named fs, and mounts the file system on /mnt/gfs2share on both nodes of the cluster.

pcs resource create fs ocf:heartbeat:Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2share" fstype="gfs2" --clone
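The Filesystem resource agent also accepts an options parameter for mount options. As a variant of the command above (a sketch, not part of the original example), you could pass noatime, which is often recommended for GFS2 to reduce unnecessary write traffic:

```shell
# Same resource creation, additionally passing noatime as a mount option
pcs resource create fs ocf:heartbeat:Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2share" fstype="gfs2" options="noatime" --clone
```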
- Configure a dependency and a startup order for the GFS2 file system and the clvmd service. GFS2 must start after clvmd and must run on the same node as clvmd.

[root@z1 ~]# pcs constraint order start clvmd-clone then fs-clone
Adding clvmd-clone fs-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@z1 ~]# pcs constraint colocation add fs-clone with clvmd-clone
- Verify that the GFS2 file system is mounted as expected.

[root@z1 ~]# mount | grep /mnt/gfs2share
/dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2share type gfs2 (rw,noatime,seclabel)
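After completing the procedure, you can review the ordering and colocation constraints created above in one place; this is an optional check using the same pcs tooling.

```shell
# List all configured ordering and colocation constraints
pcs constraint
```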