GFS2 file system is corrupted!


Hello,

I have a Dell PowerVault 3800 Series SAN and Dell PowerEdge R530 servers installed with Red Hat Enterprise Linux 7.2.

We have connected the SAN storage to the Linux systems and configured PCS clustering between the two PowerEdge servers in order to access the shared drives on the SAN storage.
There are 4 LUNs (i.e. /ndasdb, /backup, /process, /cdr) assigned to the Red Hat systems.
Recently we had to restart our server, and when the system came back up, two of the mounted drives could not come up.

There were some errors in the pcs status regarding these two drives, i.e. backup and cdr.
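
For reference, this is roughly how we map the mount points to their /dev/dm-* nodes (we can post the actual output if it helps):-
lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT
lvs -o lv_name,vg_name,lv_dm_path
multipath -ll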

We tried to repair the file system with this command:-
For cdr:-
fsck.gfs2 -y /dev/dm-8
The system master directory seems to be destroyed.
Okay to rebuild it? (y/n)y
Trying to rebuild the master directory.
Error 22 building jindex

When we tried to repair the backup mount point:-
fsck.gfs2 /dev/dm-7
Initializing fsck
The gfs2 system per_node directory inode is missing, so we might not be
able to rebuild missing journals this run.
The gfs2 system rindex inode is missing. Okay to rebuild it? (y/n) y
Validating resource group index.
Level 1 resource group check: Checking if all rgrp and rindex values are good.
Segmentation fault (core dumped)
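
In case it matters, we believe the GFS2 resources were stopped and the file systems unmounted on both nodes before fsck was run, and because of the segmentation fault we can also post our gfs2-utils version; roughly the following (clone name as in the pcs status further down):-
pcs resource disable GFS2group-clone
mount | grep gfs2
rpm -q gfs2-utils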

When we create the resource for these particular drives, it is created without any error:-

pcs resource create backup-fs_res Filesystem device="/dev/vg_backup/lv_backup" directory="/backup" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence --group GFS2group
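
For reference, the resource parameters can be listed with the command below (this is the pcs syntax shipped with RHEL 7):-
pcs resource show backup-fs_res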

But after that, when we check the pcs status, it shows:-

[root@Af/]# pcs status
Cluster name: ClusterAfricell
Stack: corosync
Current DC: AfricellSrv2 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Thu Dec 14 12:47:09 2017 Last change: Thu Dec 14 12:44:13 2017 by root via cibadmin on AfricellSrv1

2 nodes and 11 resources configured

Online: [ Srv1 ASrv2 ]

Full list of resources:

scsi-stonith-device (stonith:fence_scsi): Started Srv2
Clone Set: GFS2group-clone [GFS2group]
Stopped: [ Srv1 Srv2 ]

Failed Actions:
* backup-fs_res_start_0 on Srv2 'not installed' (5): call=125, status=complete, exitreason='Couldn't find device [/dev/vg_backup/lv_backup]. Expected /dev/??? to exist',
last-rc-change='Thu Dec 14 12:41:13 2017', queued=0ms, exec=36ms
* backup-fs_res_start_0 on Srv1 'not installed' (5): call=99, status=complete, exitreason='Couldn't find device [/dev/vg_backup/lv_backup]. Expected /dev/??? to exist',
last-rc-change='Thu Dec 14 12:44:13 2017', queued=0ms, exec=38ms

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
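
Since the failed actions say that /dev/vg_backup/lv_backup cannot be found, we can run roughly the following on both nodes and post the output if that helps (device path taken from the resource definition above):-
pvs
vgs -o vg_name,vg_attr,vg_size
lvs -o lv_name,vg_name,lv_attr,lv_dm_path
lvscan
ls -l /dev/vg_backup/ /dev/mapper/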
