clvmd - Error locking on node : Volume group for uuid not found
Hi there,
I continuously have problems with my logical volume. When I do some operations on my GFS lvol I see:
Error locking on node xxxxx: Volume group for uuid not found
I know that restarting clvmd will fix it, sometimes together with a server reboot, but this situation happens all the time (lvextend, vgchange, etc.).
This is my stop script (removing a node from the cluster):
/etc/init.d/rgmanager stop
/etc/init.d/gfs stop
vgchange -aln <- this one causes these messages again
/etc/init.d/clvmd stop
fence_tool leave
sleep 2
cman_tool leave -w
killall ccsd
Has anyone met this problem? What can be done to fix it permanently? Is it related to metadata in LVM? Or maybe it is related to configuration, or rather a lack of configuration?
Thanks in advance for some info, BR
Piotr
Responses
Hello,
What version of lvm2-cluster are you running? You can check with
# rpm -q lvm2-cluster
Many times the error you listed can be attributed to a physical volume not being seen on all nodes in the cluster. In versions prior to lvm2-cluster-2.02.56-7.el5, after making a new device available to cluster nodes (such as a new LUN or partition), you would need to run
# clvmd -R
in order to make all cluster nodes add it to their in-memory device cache. Without doing this, certain LVM commands that require access to the new device (such as vgchange) would eventually be unable to find the specified UUID, and thus may produce this error.
So, if you are running a version of lvm2-cluster prior to that which is specified above, make sure you run 'clvmd -R' after adding any new devices, and see if that resolves your error.
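For example, here is a minimal sketch of that workflow after presenting a new LUN, assuming a hypothetical multipath device /dev/mapper/mpath1, volume group myvg, and logical volume mylv (adjust all of these for your environment), run from one node once the device is visible to the OS on every node:
# pvcreate /dev/mapper/mpath1
# clvmd -R    <- refreshes the device cache on all cluster nodes
# vgextend myvg /dev/mapper/mpath1
# lvextend -L +10G /dev/myvg/mylv
The key point is running 'clvmd -R' before any command that needs the new device cluster-wide.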
This error could also mean that a device was not presented to all nodes in the cluster; however, since you say that restarting clvmd fixes the issue, that doesn't seem likely.
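If you want to rule that out, one quick check (node names here are placeholders) is to compare the physical volumes and UUIDs each node can see, for example:
# for n in node1 node2 node3; do ssh $n pvs -o pv_name,vg_name,pv_uuid; done
If a PV or its UUID is missing on one node, the problem is device presentation (SAN zoning, multipath, partition tables) rather than LVM itself.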
Let us know if that helps.
Regards,
John Ruemker, RHCA
Red Hat Technical Account Manager
Online User Groups Moderator
Hi,
In general, rolling upgrades (where you take one node out, update it, bring it back in, move to the next node, and repeat) are possible with RHEL cluster. That said, I encourage you to open a support ticket so that we may review your configuration and layout to ensure there would be no issues with this plan.
Taking the entire cluster down during an outage window is always the safest way to update. However, if this is not feasible, we can help you find the best way to get around the need for an outage.
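As a rough per-node sketch only (this assumes the standard RHEL 5 init scripts and example package names, so please confirm the exact order and packages with support for your configuration):
# /etc/init.d/rgmanager stop
# /etc/init.d/gfs stop
# /etc/init.d/clvmd stop
# /etc/init.d/cman stop
# yum update lvm2-cluster cman rgmanager
# /etc/init.d/cman start
# /etc/init.d/clvmd start
# /etc/init.d/gfs start
# /etc/init.d/rgmanager start
Then verify the node has rejoined (cman_tool nodes, clustat) before moving on to the next one.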
Regards,
John Ruemker, RHCA
Red Hat Technical Account Manager
Online User Groups Moderator