Cluster service fails to start due to fs or clusterfs error 'Cannot mount <device> on <mountpoint>, the device or mount point is already in use!' in a RHEL 6 High Availability cluster with rgmanager

Issue

  • When my service fails and tries to recover on node 2, the clusterfs resource won't start:
  Feb 12 00:03:31 node2 rgmanager[6623]: Taking over service service:nfs_export from down member node1.example.com
  Feb 12 00:03:31 node2 rgmanager[16712]: [clusterfs] Cannot mount /dev/dm-13 on /global/archive, the device or mount point is already in use!
  Feb 12 00:03:40 node2 rgmanager[6623]: start on clusterfs "gfs2_archive" returned 2 (invalid argument(s))
  Feb 12 00:03:40 node2 rgmanager[6623]: #68: Failed to start service:nfs_export; return value: 1
  • An fs resource fails to start with the same "mount point is already in use!" error (see the diagnostic checks below).
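
This error is raised when the resource agent finds the device or mount point already present in the node's mount table before it attempts the mount. A minimal diagnostic sketch for the affected node follows; the device and mount point are taken from the log excerpt above, so substitute your own values:

  # Check whether the device or the mount point already appears in the
  # mount table on the node that failed to start the resource:
  grep -E '/dev/dm-13|/global/archive' /proc/mounts

  # If the path is mounted outside of rgmanager's control, identify any
  # processes keeping it busy before unmounting it manually:
  fuser -vm /global/archive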

Environment

  • Red Hat Enterprise Linux (RHEL) 6 with the Resilient Storage Add On
  • rgmanager
  • One or more fs or clusterfs resources in a service in /etc/cluster/cluster.conf
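
For reference, a service containing a clusterfs resource is declared along these lines in /etc/cluster/cluster.conf. This is a sketch only: the resource and service names are taken from the log excerpt above, and the attribute values are illustrative, not a recommended configuration:

  <rm>
    <resources>
      <clusterfs name="gfs2_archive" device="/dev/dm-13"
                 mountpoint="/global/archive" fstype="gfs2"
                 force_unmount="1"/>
    </resources>
    <service name="nfs_export" autostart="1" recovery="relocate">
      <clusterfs ref="gfs2_archive"/>
    </service>
  </rm>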
