2.3. Exclusive Activation of a Volume Group in a Cluster
This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to activate automatically on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd.

Perform the following procedure on each node in the cluster.
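Before you begin on a given node, you can check whether a volume_list entry is already set. This pre-check is a convenience suggested here, not part of the documented procedure:

grep -n "volume_list" /etc/lvm/lvm.conf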
- Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also disables and stops any lvmetad processes immediately.

lvmconf --enable-halvm --services --startstopservices
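If you want to confirm the result, you can inspect the two settings directly. This verification step is an addition to the procedure and should report locking_type = 1 and use_lvmetad = 0:

grep -E "^[[:space:]]*(locking_type|use_lvmetad)[[:space:]]*=" /etc/lvm/lvm.conf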
- Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example.

vgs --noheadings -o vg_name
  my_vg
  rhel_home
  rhel_root
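If a node carries several local volume groups, you can generate the candidate entries for the next step by filtering out the cluster-managed group. This one-liner is a sketch that assumes the cluster volume group is named my_vg, as in the example:

vgs --noheadings -o vg_name | grep -vw my_vg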
- Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows. Note that the volume group you have just defined for the cluster (my_vg in this example) is not in this list.

volume_list = [ "rhel_root", "rhel_home" ]
Note: If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [].
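To confirm the value that LVM actually parses after you edit the file, you can dump the active configuration. lvm dumpconfig is a standard LVM command, though this check is not part of the documented procedure:

lvm dumpconfig activation/volume_list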
- Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete.

dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
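If you want to verify that the rebuilt image carries the updated configuration, lsinitrd (shipped with dracut) can print a file from inside the image. This check is an addition to the procedure:

lsinitrd -f etc/lvm/lvm.conf /boot/initramfs-$(uname -r).img | grep volume_list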
- Reboot the node.
Note: If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node.
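The kernel check described in the note can be expressed as a short shell sequence. This sketch simply restates the note's logic:

uname -r    # note the release before the reboot
# ... reboot the node ...
uname -r    # if this release differs from the one noted above:
dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)    # rebuild for the running kernel
# then reboot the node again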
- When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node, enter the following command.

pcs cluster start

Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on each of the nodes with the following command.

pcs cluster start --all
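On a single node, the check-and-start logic above can be combined into one shell snippet. This is a sketch that assumes pcs cluster status exits with a nonzero status in the error case described above:

if ! pcs cluster status >/dev/null 2>&1; then
    pcs cluster start
fi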