LVM-activate resource fails to start properly after storage expansion

Issue

This issue can occur after expanding the cluster's LVM storage, both in "active/passive" clusters that use an LVM-based Filesystem resource and in "active/active" GFS2 clusters.
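
For illustration only, an expansion that can lead to this state might be carried out on one node along these lines; the volume group vg_shared, logical volume shared_lv, and device /dev/sdd are placeholder names, not values taken from this article:

  pvcreate /dev/sdd                              # initialize the new disk as a physical volume
  vgextend vg_shared /dev/sdd                    # add it to the shared volume group
  lvextend -L +50G /dev/vg_shared/shared_lv      # grow the logical volume

With use_devicesfile = 1, commands like these typically also record the new device in the local /etc/lvm/devices/system.devices, but only on the node where they were run.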

On an "active/passive" cluster, the issue would appear after running a pcs resource move or when the resource automatically fails over to the other node. The following error is seen:

Failed Resource Actions:
  * my_lvm start on fastvm-rhel-9-1-65 returned 'error' (Volume group [vg_shared] doesn't exist, or not visible on this node!) at Sat Nov 19 16:45:24 2022 after 234ms
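
To confirm the symptom on the node where the start failed, the shared VG and that node's LVM devices file can be checked roughly as follows; this is a diagnostic sketch with the placeholder name vg_shared, not the resolution from this article:

  vgs vg_shared                            # check whether the shared VG is visible on this node
  lvmdevices                               # list the devices currently allowed by the devices file
  cat /etc/lvm/devices/system.devices      # compare with the node where the storage was expanded

If the newly added physical volume is missing from the devices file on this node, LVM cannot assemble the complete VG there.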

On an "active/active" cluster, the issue would appear immediately as all active cluster nodes activate the shared LVM VG at the same time. The cloned LVM-activate resource on all other nodes would go into a stopped state and show the vg with missing devices, similiar to this :

Failed Resource Actions:
  * my_lv start on fastvm-rhel-9-1-65 returned 'error' (Volume group [data_vg] has devices missing.  Consider majority_pvs=true) at Tue Jul 15 01:47:05 2025 after 236ms
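
On the active/active side, a similar check can be run on each node reporting the failure; data_vg is the name from the example output above, and the commands are generic diagnostics rather than a fix:

  pcs status --full                        # the cloned LVM-activate copies show the failed starts
  pvs -o pv_name,vg_name                   # check whether the newly added PV is listed on this node
  lvmdevices --check                       # validate the devices file against the devices LVM can see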

Environment

  • Red Hat Enterprise Linux 9 with High-Availability Add-On
  • RHEL 8.5 and later with use_devicesfile = 1
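
One way to confirm that a node is using the LVM devices file (the use_devicesfile = 1 case above) is to query the effective setting and look for the file itself, for example:

  lvmconfig --typeconfig full devices/use_devicesfile   # effective value, including the built-in default
  ls -l /etc/lvm/devices/system.devices                 # the devices file consulted when the setting is 1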
