If your GFS2 file system hangs and does not return commands run against it, requiring that you reboot all nodes in the cluster before using it, check for the following issues.
You may have had a failed fence. GFS2 file systems freeze to ensure data integrity in the event of a failed fence. Check the message logs for any failed fences at the time of the hang, and ensure that fencing is configured correctly.
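As a minimal sketch of this check, the following scans a log excerpt for fencing-related failures. The log line below is a made-up sample and the message text is an assumption; real fence-failure messages vary by fence agent, and on a live cluster you would search /var/log/messages or the systemd journal around the time of the hang.

```shell
# Sketch only: scan a log excerpt for failed fence events.
# The sample message below is illustrative, not a guaranteed format.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
Jan 10 12:00:01 node1 pacemaker-controld[1235]: notice: Peer node2 was not terminated (reboot) by node1: failed
Jan 10 12:00:05 node1 kernel: dlm: closing connection to node 2
EOF
# Look for fencing-related failures around the time of the hang.
matches=$(grep -iE 'terminated|fence' "$logfile")
echo "$matches"
rm -f "$logfile"
```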
The GFS2 file system may have withdrawn. Check through the message logs for the word withdraw and check for any messages and call traces from GFS2 indicating that the file system has been withdrawn. A withdraw is indicative of file system corruption, a storage failure, or a bug. At the earliest time when it is convenient to unmount the file system, perform the following procedure:
1. Reboot the node on which the withdraw occurred.

2. Stop the file system resource to unmount the GFS2 file system on all nodes.

   pcs resource disable --wait=100 mydata_fs

3. Capture the metadata with the gfs2_edit savemeta command. Ensure that there is sufficient space for the file, which in some cases may be large. In this example, the metadata is saved to a file in the /root directory.

   gfs2_edit savemeta /dev/vg_mydata/mydata /root/gfs2metadata.gz

4. Update the gfs2-utils package.

   sudo yum update gfs2-utils

5. On one node, run the fsck.gfs2 command on the file system to ensure file system integrity and repair any damage.

   fsck.gfs2 -y /dev/vg_mydata/mydata > /tmp/fsck.out

6. After the fsck.gfs2 command has completed, re-enable the file system resource to return it to service:

   pcs resource enable --wait=100 mydata_fs

7. Open a support ticket with Red Hat Support. Inform them that you experienced a GFS2 withdraw, and provide logs and the debugging information generated by the gfs2_edit savemeta command.
In some instances of a GFS2 withdraw, commands that attempt to access the file system or its block device can hang. In these cases, a hard reboot is required to restart the cluster.
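The withdraw log check and the free-space check described above can be sketched as follows. This is illustrative only: the sample log line, the destination directory, and the 1 GiB threshold are all assumptions, not fixed values; adapt them to your cluster.

```shell
# Sketch only: helpers for the checks described above.
# The sample log line, destination directory, and size threshold
# are assumptions for illustration.

# 1) Search a log excerpt for a GFS2 withdraw message.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
Jan 10 12:00:01 node1 kernel: GFS2: fsid=mycluster:mydata.0: about to withdraw this file system
EOF
withdraw_hits=$(grep -ci 'withdraw' "$logfile")
echo "withdraw messages found: $withdraw_hits"
rm -f "$logfile"

# 2) Before running gfs2_edit savemeta, confirm the destination has
#    enough free space (the metadata file can be large).
dest=/tmp                     # assumed destination directory
need_kb=$((1024 * 1024))      # assumed requirement: 1 GiB, in KiB
avail_kb=$(df -Pk "$dest" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$need_kb" ]; then
    echo "enough space in $dest for the savemeta output"
else
    echo "free more space in $dest before running gfs2_edit savemeta"
fi
```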