Why are the messages 'Resource temporarily unavailable' or 'iobufs not found' showing up in Gluster brick logs?

Issue

  • When reviewing the brick logs at /var/log/glusterfs/bricks/mount-path.log, the following error messages are observed (a thread-limit check for this symptom is sketched after this list):

    [2020-08-06 12:30:40.668645] I [glusterfsd-mgmt.c:977:glusterfs_handle_attach] 0-glusterfs: got attach for /var/lib/glusterd/vols/testvol/testvol.var-lib-heketi-mounts-vg_XXXXX-brick_XXXXX
    [2020-08-06 12:30:40.698926] W [MSGID: 138009] [index.c:2440:init] testvol-index: Failed to create worker thread, aborting [Resource temporarily unavailable]
    [2020-08-06 12:30:40.698951] E [MSGID: 101019] [xlator.c:504:xlator_init] testvol-index: Initialization of volume 'testvol' failed, review your volfile again
    [2020-08-06 12:30:40.698960] E [MSGID: 101066] [graph.c:367:glusterfs_graph_init] 0-testvol-index: initializing translator failed
    [2020-08-06 12:30:40.698968] W [graph.c:1250:glusterfs_graph_attach] 0-glusterfs: failed to initialize graph for xlator /var/lib/heketi/mounts/vg_XXXXX/brick_XXXXX/brick
    [2020-08-06 12:30:40.698989] E [rpcsvc.c:628:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
    
  • Alternatively, iobuf-related error messages are also a key symptom of this problem (see the statedump sketch after this list):

    [2020-08-10 11:23:10.228735] E [rpcsvc.c:1573:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0xe72f7f, Program: GlusterFS 3.3, ProgVers: 330, Proc: 13) to rpc-transport (tcp-testvol-server)
    [2020-08-10 11:23:10.228803] E [server.c:195:server_submit_reply] (-->/usr/lib64/glusterfs/3.12.2/xlator/debug/io-stats.so(+0x15a7a) [0x7f6ad8d68a7a] -->/usr/lib64/glusterfs/3.12.2/xlator/protocol/server.so(+0x21183) [0x7f6ad88cb183] -->/usr/lib64/glusterfs/3.12.2/xlator/protocol/server.so(+0x9326) [0x7f6ad88b3326] ) 0-: Reply submission failed
    [2020-08-10 11:23:10.228856] E [iobuf.c:123:__iobuf_arena_destroy_iobufs] (-->/lib64/libglusterfs.so.0(__iobuf_arena_alloc+0x16e) [0x7f6aedfa7d2e] -->/lib64/libglusterfs.so.0(__iobuf_arena_destroy+0x2a) [0x7f6aedfa7b2a] -->/lib64/libglusterfs.so.0(__iobuf_arena_destroy_iobufs+0x19c) [0x7f6aedfa7afc] ) 0-testvol-server: iobufs not found
    
  • In the glusterd log file, available at /var/log/glusterfs/glusterd.log, there are traces of bricks being disconnected:

    [2020-08-06 12:30:40.667838] I [glusterd-utils.c:5671:attach_brick] 0-management: add brick /var/lib/heketi/mounts/vg_XXXXX/brick_XXXXX/brick to existing process for /var/lib/heketi/mounts/vg_XXXXX/brick_XXXXX/brick
    [2020-08-06 12:30:41.853075] I [MSGID: 106005] [glusterd-handler.c:6171:__glusterd_brick_rpc_notify] 0-management: Brick 10.71.0.20:/var/lib/heketi/mounts/vg_XXXXX/brick_XXXXX/brick has disconnected from glusterd.
    [2020-08-06 12:30:41.855717] I [MSGID: 106144] [glusterd-pmap.c:383:pmap_registry_remove] 0-pmap: removing brick /var/lib/heketi/mounts/vg_XXXXX/brick_XXXXX/brick on port 49153
    
  • In the output of the command gluster v status VOLNAME, the brick above shows as 'N/A' (a remediation sketch follows this list).

  • How to solve this issue?
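
The 'Failed to create worker thread ... [Resource temporarily unavailable]' message means thread creation in the brick process returned EAGAIN, which usually points at a task or thread limit being hit, especially when brick multiplexing packs many bricks into a single glusterfsd process. A minimal diagnostic sketch, assuming the brick runs as glusterfsd and PID is a placeholder for its process ID:

    # Locate the brick process and count its threads (nlwp = number of threads)
    pgrep -a glusterfsd
    ps -o nlwp,pid,cmd -p PID

    # Compare against the per-process task limit
    grep 'Max processes' /proc/PID/limits

    # On systemd hosts, the service's TasksMax can also cap thread creation
    systemctl show glusterd --property=TasksMax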
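
The 'iobufs not found' trace concerns the brick process's iobuf pool (pre-allocated I/O buffers). The state of that pool can be inspected with a statedump; a minimal sketch, assuming VOLNAME is the affected volume and that dumps land in the default /var/run/gluster directory:

    # Trigger a statedump of the volume's brick processes
    gluster volume statedump VOLNAME

    # Statedumps typically include an iobuf section with arena and page usage
    grep -i -A 10 'iobuf' /var/run/gluster/*.dump.*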
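
When a brick shows as 'N/A' in gluster v status, a common first step is to confirm whether brick multiplexing is enabled (the 'got attach' line above comes from a multiplexed setup, where one glusterfsd process hosts several bricks) and then respawn the missing brick process. A minimal remediation sketch, assuming VOLNAME; note that 'start ... force' only starts bricks that are currently down:

    # Check whether brick multiplexing is enabled cluster-wide
    gluster volume get all cluster.brick-multiplex

    # Identify bricks reported as N/A
    gluster volume status VOLNAME

    # Respawn brick processes that are not running
    gluster volume start VOLNAME force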

Environment

  • Red Hat Gluster Storage versions 3.x up to 3.5
  • Red Hat OpenShift Container Storage versions 3.x up to 3.11.5
