How to extend a logical volume in a cluster that is using "HA LVM"?

Environment

  • Red Hat Enterprise Linux Server 5, 6, 7, 8 and 9 (with the High Availability and Resilient Storage Add-Ons)
  • HA-LVM tagging and system_id variant

Issue

  • How to extend a logical volume in a cluster that is using High Availability LVM (HA-LVM)?

Resolution

Assuming that HA-LVM is configured according to the instructions in What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?, follow the steps below:

  1. If there were any changes to storage, first follow the article How to rescan the SCSI bus to add or remove a SCSI device without rebooting the computer on all cluster nodes. This ensures that there are no discrepancies in how the nodes in the cluster see the storage. All shared storage should be visible in the same way across the whole cluster.
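
     The rescan article above covers the details; a minimal sketch of the idea, to be run on every cluster node, is shown below (the host adapter paths and the sdX device name are placeholders):

     ```shell
     # Trigger a bus rescan on each SCSI host adapter so newly presented
     # LUNs become visible (requires root).
     for host in /sys/class/scsi_host/host*; do
         echo "- - -" > "${host}/scan"
     done

     # If an existing LUN was grown rather than a new one added, rescan
     # the size of that device instead (sdX is a placeholder):
     # echo 1 > /sys/block/sdX/device/rescan
     ```

     Running this on all nodes keeps the storage view consistent across the cluster before any LVM changes are made.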

  2. To prevent the cluster from interfering while manual changes are made to a cluster-managed logical volume and filesystem, follow the steps below:

    • For Pacemaker Clusters (RHEL 6, 7, 8, 9):

      • Put the cluster into maintenance mode. This safety step instructs the cluster to stop monitoring resources while the logical volume and filesystem are being extended:

        # pcs property set maintenance-mode=true
        
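      Optionally, confirm that maintenance mode took effect before touching the LV. A sketch, noting that the exact property subcommand varies by pcs version:

      ```shell
      # While maintenance mode is active, pcs status prints a
      # "Resource management is DISABLED" banner near the top.
      pcs status | head

      # The property itself can also be inspected with one of:
      pcs property show maintenance-mode     # older pcs releases
      # pcs property config maintenance-mode # newer pcs releases
      ```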
    • For older rgmanager cluster setups (RHEL 5, 6):

      • Freeze the service corresponding to the logical volume with the command clusvcadm -Z. This prevents rgmanager from relocating or recovering the service while the logical volume and filesystem are being extended:

        # clusvcadm -Z <service>
        
  3. After the cluster is prepared for maintenance, proceed with the extension by following the instructions in How to extend a logical volume and its filesystem online in Red Hat Enterprise Linux?. These commands must be executed on the node where the LVM or LVM-activate resource is in the Started state:

    • For example, in the following cluster, the LV resize and filesystem extension should be carried out on ha-node2:

      # pcs status
      Cluster name: testcluster
      ...
      Node List:
      * Online: [ ha-node1 ha-node2 ]
      ...
      Full List of Resources:
      * Resource Group: testgroup:
        * my_lvm  (ocf::heartbeat:LVM-activate):   Started ha-node2
        * my_fs   (ocf::heartbeat:Filesystem):     Started ha-node2
      
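    • The linked article covers the extension in detail; as a hedged sketch, the commands on the active node (ha-node2 in the example above) would look like the following, where shared_vg/shared_lv, the size, and the mount point are placeholders:

      ```shell
      # Grow the logical volume by 10 GiB (placeholder names and size):
      lvextend -L +10G /dev/shared_vg/shared_lv

      # Then grow the filesystem on top of it:
      xfs_growfs /mnt/shared                   # XFS (takes the mount point)
      # resize2fs /dev/shared_vg/shared_lv     # ext4 (takes the LV path)

      # Or combine both steps in one command:
      # lvextend -r -L +10G /dev/shared_vg/shared_lv
      ```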
  4. After you have successfully extended the LV and filesystem, take the cluster out of maintenance using the steps below:

    • For Pacemaker clusters:

      • Take the cluster out of maintenance mode so that it resumes normal monitoring of its resources:

        # pcs property unset maintenance-mode
        
    • For rgmanager clusters:

      • Unfreeze the previously frozen service with the command clusvcadm -U as follows:

        # clusvcadm -U <service>
        

Possible errors that could be encountered on RHEL 8 and RHEL 9 clusters with /etc/lvm/devices/system.devices

  • The lvmdevices command set is available in RHEL 8.5 and later. It is used to limit LVM access to only the devices listed in the /etc/lvm/devices/system.devices file. If you are using an LVM devices file and use_devicesfile is enabled in lvm.conf, also add the shared device to the devices file on the other node(s) of the cluster.

  • The devices file is enabled by default in RHEL 9, while in RHEL 8.5+ it is disabled by default. When it is enabled, ensure that the shared devices are also added to the devices file on the passive node; otherwise you will run into the issue detailed in the following article:
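    A sketch of adding the shared device on the passive node, where /dev/mapper/mpathb is a placeholder for the shared multipath device:

    ```shell
    # Add the shared device to /etc/lvm/devices/system.devices on the
    # passive node so LVM there can see the extended volume:
    lvmdevices --adddev /dev/mapper/mpathb

    # List the entries in the devices file to confirm it was added:
    lvmdevices
    ```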

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
