lvm resource fails to start with "<node> owns <vg> and is still a cluster member" after a reboot in RHEL 6

Solution Unverified - Updated

Issue

  • If I freeze my services and reboot all nodes, lvm resources fail to start back up with "nodeX owns VG and is still a cluster member"
  • During scheduled maintenance, a two-node cluster was rebooted, but the cluster services failed to start automatically and the following error messages were observed:
Dec 19 02:57:19 node1 rgmanager[8944]: [lvm]   node2 owns myVG and is still a cluster member
Dec 19 02:57:25 node1 rgmanager[4367]: Services Initialized
Dec 19 02:57:26 node1 rgmanager[9285]: [lvm] Starting volume group, myVG
Dec 19 02:57:26 node1 rgmanager[9322]: [lvm]   node2 owns myVG and is still a cluster member
Dec 19 02:57:26 node1 rgmanager[9344]: [lvm] Someone else owns this volume group
Dec 19 02:57:26 node1 rgmanager[4367]: start on lvm "myVG" returned 1 (generic error)
Dec 19 02:57:26 node1 rgmanager[4367]: #68: Failed to start service:myService; return value: 1
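With HA-LVM tagging, the node that owns a volume group records its hostname as a tag on the VG, and the error above indicates that tag still names another node. A minimal diagnostic sketch, using the VG name `myVG` and hostname `node2` from the log above (substitute your own names, and only remove a tag after confirming the named node does not actually have the VG active):

```shell
# Show which node's hostname tag is currently set on the volume group
vgs -o vg_name,vg_tags myVG

# If the tag belongs to a node that no longer owns the service (for
# example, because the service was frozen or the node shut down
# uncleanly), remove the stale tag so rgmanager can start the resource
vgchange --deltag node2 myVG
```

Removing the tag by hand is the manual equivalent of the cleanup the resource agent performs on a clean service stop; proceed with care on a live cluster.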

Environment

  • Red Hat Enterprise Linux (RHEL) 6 with the High Availability Add On
  • rgmanager
  • HA-LVM with tagging
    • locking_type=1 in /etc/lvm/lvm.conf
    • <lvm/> resource in /etc/cluster/cluster.conf
    • lv_name blank or unspecified in <lvm/> resource
  • Services frozen with clusvcadm -Z before a reboot or shutdown of rgmanager, or nodes shut down "uncleanly" without properly stopping the service containing the <lvm/> resource
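For reference, a minimal sketch of the configuration described above. The service and VG names follow the log messages; the resource name and the volume_list entries are illustrative assumptions, not taken from this article.

```xml
<!-- /etc/cluster/cluster.conf: an <lvm/> resource with lv_name left
     unspecified, so the whole volume group is tagged and activated -->
<service name="myService">
    <lvm name="myVG_res" vg_name="myVG"/>
</service>
```

The matching /etc/lvm/lvm.conf settings for HA-LVM with tagging would look roughly like:

```shell
# Standard file-based locking (not clustered locking)
locking_type = 1
# Allow activation of the root VG plus any VG tagged with this host's name;
# "rootvg" and "node1" are placeholders for your actual VG and hostname
volume_list = [ "rootvg", "@node1" ]
```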
