clvmd - Error locking on node : Volume group for uuid not found


Hi there,


I continually have problems with my logical volumes. When I perform operations on my GFS lvol (lvextend, vgchange, etc.) I see:

Error locking on node xxxxx: Volume group for uuid not found

I know that restarting clvmd will fix it, sometimes only together with a server reboot, but this situation happens all the time.

For my stop script (removing node from cluster):

/etc/init.d/rgmanager stop
/etc/init.d/gfs stop
vgchange -aln                  <- this one triggers the message again
/etc/init.d/clvmd stop
fence_tool leave
sleep 2
cman_tool leave -w
killall ccsd

Has anyone met this problem? What can be done to fix it permanently? Is it related to LVM metadata? Or maybe to the configuration, or rather a lack of configuration?

Thanks in advance for some info, BR




What version of lvm2-cluster are you running?  You can check with


  # rpm -q lvm2-cluster


Many times the error you listed can be attributed to a physical volume not being seen on all nodes in the cluster.  In versions prior to lvm2-cluster-2.02.56-7.el5, after making a new device available to cluster nodes (such as a new LUN or partition), you would need to run


   # clvmd -R


in order to make all cluster nodes add it to their in-memory device cache. Without this step, LVM commands that need access to the new device (such as vgchange) eventually fail to find the specified UUID and can produce exactly this error.


So, if you are running a version of lvm2-cluster prior to that which is specified above, make sure you run 'clvmd -R' after adding any new devices, and see if that resolves your error.


This error could also mean that a device was not presented to all nodes in the cluster; however, since you say that restarting clvmd fixes the issue, that doesn't seem likely.
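If you do want to rule out a visibility mismatch, one rough way (a sketch, not an official procedure) is to collect each node's PV UUID list with `pvs --noheadings -o pv_uuid` (over ssh, for example) and diff the sorted lists. The node names and UUID values below are made-up sample data standing in for real captures:

```shell
#!/bin/bash
# Sketch: spot PV UUIDs that one node sees but another does not.
# In a real cluster you would capture each node's view with something like:
#   ssh "$node" pvs --noheadings -o pv_uuid
# Here two captured listings are simulated with sample data.

node1_view="aaaa-1111
bbbb-2222"
node2_view="aaaa-1111"

# comm -23 prints lines present in the first (sorted) input
# but absent from the second -- i.e. PVs node2 cannot see.
missing=$(comm -23 <(printf '%s\n' "$node1_view" | sort) \
                   <(printf '%s\n' "$node2_view" | sort))

if [ -n "$missing" ]; then
    echo "PV UUIDs missing on node2: $missing"
fi
```

If this reports anything, the missing device (LUN, partition, multipath map) needs to be presented and scanned on that node before clvmd can lock the volume group there.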


Let us know if that helps.



John Ruemker, RHCA

Red Hat Technical Account Manager

Online User Groups Moderator



Thank you for the answer. In fact I have a really old version of the cluster components.

  # rpm -q lvm2-cluster
  lvm2-cluster-2.02.26-1.el5     (RHEL 5.1)


The problem is that after adding new nodes, or any other operation on a device, clvmd had to be restarted on each node in the cluster every time. I know that in this version clvmd -R is not effective. In fact, after the restart the UUIDs are seen properly for a while, but after some time they get mismatched again.


I think it's high time for an upgrade. Is it possible to upgrade the systems in the cluster one by one (by excluding one node, upgrading it, and including it in the cluster again)? This would be not only an upgrade of the cluster/storage package group but also of the system (kernel, etc.). The desired version is 5.5. Is it possible to do it that way? I found some docs on RHN, but they only mention upgrading the dedicated clustering/storage packages, not upgrading the whole system. What's more, can I run different versions of cman and the other cluster/storage packages in one cluster?


Best regards,

Piotr Wieczorek


In general, a rolling upgrade (where you take one node out, update it, bring it back in, then move to the next node and repeat) is possible with RHEL Cluster. That said, I encourage you to open a support ticket so that we may review your configuration and layout to ensure there would be no issues with this plan.


Taking the entire cluster down during an outage window is always the safest way to update. However, if this is not feasible, then we can help you find the best way to get around the need for an outage.



John Ruemker, RHCA

Red Hat Technical Account Manager

Online User Groups Moderator

I met the same problem; the issue was that global_filter in lvm.conf filtered out the device.
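For reference, that filtering lives in /etc/lvm/lvm.conf. A minimal illustration (the device patterns below are examples only, not a recommendation): if the shared device does not match an accept pattern, LVM on that node never sees the PV, and activation fails with exactly this "Volume group for uuid not found" error.

```
# /etc/lvm/lvm.conf (excerpt) -- example patterns only
devices {
    # Accept the multipath devices and a local system partition,
    # reject everything else.  If your shared LUN does not match
    # an "a|...|" accept pattern here, this node cannot see its PV.
    global_filter = [ "a|^/dev/mapper/mpath.*|", "a|^/dev/sda2$|", "r|.*|" ]
}
```

Checking the filter on every node (they can differ per host) is a quick first step when only some nodes report the locking error.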

Hi, I am facing an error during LVM creation: "Error locking on node 40a800f4: Volume group for uuid not found: 6hxpHbh8rWMQERdPYju9haRmtDGfSP9xq3T3UvmtZkrZUkgSK8FiTt0aAMlYdmXV". Failed to activate new LV.