Cluster service containing lvm resource with lv_name fails status check with error "WARNING: <vg>/<lv> should not be active" after updating rgmanager in RHEL 5 Update 9 or 10
Issue
- After updating to RHEL 5 Update 10 and restarting my services, those with an lvm resource start just fine, but fail the next status check an hour later.
- After being up for almost an hour, our cluster gives the following messages:
Oct 2 13:35:31 node1 clurgmgrd: [7302]: <notice> Getting status
Oct 2 13:35:32 node1 clurgmgrd: [7302]: <err> WARNING: vg/lv should not be active
Oct 2 13:35:32 node1 clurgmgrd: [7302]: <err> WARNING: node1 does not own vg/lv
Oct 2 13:35:32 node1 clurgmgrd: [7302]: <err> WARNING: Attempting shutdown of vg/lv
Oct 2 13:35:32 node1 clurgmgrd: [7302]: <notice> Making resilient : lvchange -an vg/lv
Oct 2 13:35:32 node1 clurgmgrd: [7302]: <notice> Resilient command: lvchange -an vg/lv --config devices{filter=["a|/dev/mpath/mpathbp1|","a|/dev/mpath/mpathcp1|","a|/dev/mpath/mpathdp1|","a|/dev/mpath/mpathe|","r|.*|"]}
Oct 2 13:35:32 node1 clurgmgrd: [7302]: <err> lv_exec_resilient failed
Oct 2 13:35:32 node1 clurgmgrd: [7302]: <err> lv_activate_resilient stop failed on vg/lv
Oct 2 13:35:32 node1 clurgmgrd[7302]: <notice> status on lvm "myLV" returned 1 (generic error)
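The "does not own" warning comes from the lvm.sh resource agent's ownership check, which in a tagged HA-LVM configuration expects a tag matching the local node name on the volume group or logical volume. As a quick sanity check of which tag, if any, is currently present (vg and lv below are the placeholder names from the log above, not real names), the standard LVM reporting commands can be used:

    vgs -o +vg_tags vg
    lvs -o +lv_tags vg/lv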
Environment
- Red Hat Enterprise Linux (RHEL) 5 Update 10 with the High Availability Add On
- rgmanager-2.0.52-47.el5 or rgmanager-2.0.52-37.el5_9.5
- HA-LVM <lvm> resource with lv_name specified
- Using tagging (locking_type = 1 and volume_list contains a nodename-based tag in /etc/lvm/lvm.conf)
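For reference, a minimal sketch of the tagging-based HA-LVM configuration listed above. The node name node1 and the root volume group VolGroup00 are assumptions for illustration only; vg, lv, and the resource name myLV follow the placeholders in the log output.

    # /etc/lvm/lvm.conf
    global {
        locking_type = 1
    }
    activation {
        # Allow the root VG everywhere; tagged VGs activate only on the node whose name matches the tag
        volume_list = [ "VolGroup00", "@node1" ]
    }

    # /etc/cluster/cluster.conf (service excerpt)
    <service autostart="1" name="myService">
        <lvm name="myLV" vg_name="vg" lv_name="lv"/>
    </service>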