Cluster service with lvm resources using HA-LVM tagging fails a status check with "[lvm] WARNING: <vg> should not be active" after a node rejoins the cluster in RHEL 6
Issue
- Immediately after a node rejoins the cluster, another node that is running a service with an lvm resource fails a status check:
Jun 19 16:29:19 rgmanager [lvm] Getting status
Jun 19 16:29:21 rgmanager [lvm] WARNING: myVG should not be active
Jun 19 16:29:22 rgmanager [lvm] WARNING: node1 does not own myVG
Jun 19 16:29:22 rgmanager [lvm] WARNING: Attempting shutdown of myVG
Jun 19 16:29:22 rgmanager status on lvm "lvm-myVG" returned 1 (generic error)
- When rgmanager is initializing resources on a node that is starting up, it can be seen stripping LVM tags from a volume that is part of an active service on another node:
Jun 19 16:19:09 rgmanager [lvm] Stripping tag, node1
- Shortly after a cluster node joins the cluster and the rgmanager service starts, cluster services already running on other nodes are failed over.
Environment
- Red Hat Enterprise Linux (RHEL) 6 with the High Availability Add-On
resource-agents prior to release 3.9.2-21.el6_4.3 or resource-agents-3.9.2-40.el6
- Entries in /etc/hosts for cluster node names have canonical names preceding short node names:
X.X.X.X node1hb.example.com node1hb
OR the entry for this host has various names that are not directly related (such as a short name vs. an FQDN):
X.X.X.X node1 apphost1 node1-hb
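A minimal sketch of the /etc/hosts patterns described above, using placeholder addresses and names (node1 is assumed to be the short cluster node name):

```
# Problematic: the canonical (FQDN) name precedes the short node name
192.0.2.10   node1hb.example.com node1hb

# Problematic: unrelated aliases mixed into one entry
192.0.2.10   node1 apphost1 node1-hb
```

One layout that avoids the name mismatch is to list the short name used in cluster.conf first, e.g. `192.0.2.10 node1 node1.example.com`; this is an illustration, not an official recommendation from this article.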
- HA-LVM using tags:
volume_list in /etc/lvm/lvm.conf references the nodes' short names
clusternode name in /etc/cluster/cluster.conf references the nodes' short names
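As a hedged illustration of the tagged HA-LVM layout described above (the VG name rootVG and node name node1 are placeholders), the relevant configuration might look like:

```
# /etc/lvm/lvm.conf (activation section):
# allow the root VG always, and any VG tagged with this node's short name
activation {
    volume_list = [ "rootVG", "@node1" ]
}
```

with the matching short node name in /etc/cluster/cluster.conf:

```
<clusternode name="node1" nodeid="1"/>
```

If the name in volume_list, the clusternode name, and the name the node resolves for itself disagree, the lvm agent's ownership check can conclude that the VG "should not be active" on that node.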