- When a cluster node crashes and the NFS client mount service fails over to another cluster node, files remain locked (most likely at the NFS level) and the application running in this package fails to start correctly. After removing and recreating the application's lock file, the issue no longer occurs.
- Red Hat Enterprise Linux 5 (with the High Availability Add-On, the Resilient Storage Add-On, or a third-party cluster product)
- An NFS version 3 server exporting one or more shares to two or more cluster nodes.
- An active/passive cluster service containing an NFS client mount (a netfs resource in Red Hat High Availability).
- The client crashes while one or more files are locked on the NFS mount.
- The NFS client mount resource fails over to the backup node.
- The backup node is unable to lock the files because of the locks still held by the failed node.
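On NFSv3, lock state is tracked outside the NFS protocol by `rpc.statd` (NSM), which keeps one file per monitored peer under `/var/lib/nfs/statd/sm/`; entries for a crashed node can linger and leave its locks held. A minimal sketch of how to spot such leftover entries, using a throwaway directory in place of the real `/var/lib/nfs/statd/sm` so it is safe to run anywhere (`node1.example.com` and the floating IP are placeholders):

```shell
# Stand-in for /var/lib/nfs/statd/sm on the NFS server (assumption:
# default nfs-utils paths; verify the location on your release).
smdir=$(mktemp -d)

# rpc.statd drops one file per peer it monitors for lock recovery.
touch "$smdir/node1.example.com"

# After a crash, an entry for the dead node lingers here while its
# locks remain held; listing the directory shows the affected hosts.
ls "$smdir"

# On a real cluster, the usual recovery step is to force an NSM
# reboot notification from the service's floating address, e.g.:
#   sm-notify -f -v <floating-ip>
# (do not run blindly; see sm-notify(8) for your nfs-utils version)

rm -rf "$smdir"
```

`sm-notify -f` forces the notification even if statd believes it has already been sent, which is why it is the typical choice after a failover rather than waiting for the normal reboot path.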