Brick is offline and fails to start for gluster_shared_storage volume

Solution Verified

Issue

  • Brick is offline and fails to start for the gluster_shared_storage volume, even after a force start:
Status of volume: gluster_shared_storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterA.:/var/lib/glusterd/ss
_brick                                      N/A       N/A        N       N/A  
NFS Server on localhost                     2049      0          Y       32143
Self-heal Daemon on localhost               N/A       N/A        Y       32320
NFS Server on gfsnode3.win.wol              2049      0          Y       16787
Self-heal Daemon on gfsnode3.win.wol        N/A       N/A        Y       16937

Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks
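As a quick triage step, the offline brick can be spotted by scanning the Online column of the status output. The helper below is a hypothetical sketch (not part of the Gluster CLI); it assumes the column layout shown above, where long brick paths wrap onto a continuation line, so it matches the second-to-last field rather than a fixed column position.

```shell
# Hypothetical helper: flag any line of `gluster volume status <vol>`
# whose Online column (second-to-last field) reads "N".
offline_bricks() {
    awk '$(NF-1) == "N" { print "OFFLINE:", $0 }'
}

# Usage on a Gluster node (assumed invocation):
#   gluster volume status gluster_shared_storage | offline_bricks
```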
  • Brick logs for the gluster_shared_storage volume on all nodes show the following errors:
[2023-01-16 01:59:27.903925] E [MSGID: 1] [index.c:2392:init] 0-gluster_shared_storage-index: Failed to find parent dir (/var/lib/glusterd/ss_brick/.glusterfs) of index basepath /var/lib/glusterd/ss_brick/.glusterfs/indices. [No such file or directory]
[2023-01-16 01:59:27.903952] E [MSGID: 1] [xlator.c:629:xlator_init] 0-gluster_shared_storage-index: Initialization of volume 'gluster_shared_storage-index' failed, review your volfile again
[2023-01-16 01:59:27.903961] E [MSGID: 1] [graph.c:362:glusterfs_graph_init] 0-gluster_shared_storage-index: initializing translator failed
[2023-01-16 01:59:27.903971] E [MSGID: 1] [graph.c:725:glusterfs_graph_activate] 0-graph: init failed
[2023-01-16 01:59:27.904022] I [io-stats.] 0-gluster_shared_storage-io-stats: io-stats translator unloaded
[2023-01-16 01:59:27.904615] W [glusterfsd.c:1583:cleanup_and_exit] (-->/usr/sbin/glusterfsd(mgmt_getspec_cbk+0xxxxx) [0x56200xxxxx] -->/usr/sbin/glusterfsd(glusterfs_process_volfp+0xxxx) [0x5620xxxxxx] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x562009xxxxxxx] ) 0-: received signum (-1), shutting down
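The first error above indicates that the brick's internal metadata directory (<brick>/.glusterfs) is missing, which makes the index translator fail to initialize and the brick process exit. A hypothetical pre-restart check (the function name and messages are illustrative, not from Gluster) is to verify that directory exists on the brick path before attempting another force start:

```shell
# Hypothetical check: confirm the brick's .glusterfs/indices directory
# exists, since the index translator aborts when it is absent.
check_brick_meta() {
    brick="$1"
    if [ -d "$brick/.glusterfs/indices" ]; then
        echo "OK: $brick/.glusterfs/indices present"
    else
        echo "MISSING: $brick/.glusterfs (brick data lost or not mounted?)"
    fi
}

# Usage with the brick path from the log above:
#   check_brick_meta /var/lib/glusterd/ss_brick
```

A MISSING result typically means the underlying filesystem was wiped or is not mounted at the brick path, which should be investigated before restarting the volume.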

Environment

  • Red Hat Gluster Storage 3.5.x
