Why are Gluster bricks failing to start with the error 'Too many open files'?

  • In the output of the gluster v status command, multiple bricks are down and their status is N/A:

    Status of volume: vol_ba3db22a379d1cb58515337e1eedd4e3
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    fced2cd326324331e97152da8c40/brick          N/A       N/A        N       N/A
    76d88a4749ac73d78a4cf0fbeb5/brick           49157     0          Y       550
    251e012be0251b17ea8c9938b6a/brick           N/A       N/A        N       N/A
    Self-heal Daemon on localhost               N/A       N/A        Y       2762
    Self-heal Daemon on             N/A       N/A        Y       2144
    Self-heal Daemon on             N/A       N/A        Y       2795
  • Executing the command gluster v start VOLNAME force does not help: some bricks recover, but other bricks go down instead.

  • Reviewing the brick logs, available at /var/log/glusterfs/bricks, shows many Too many open files error messages (see the diagnostic sketch after this list):

     W [MSGID: 113075] [posix-helpers.c:2113:posix_fs_health_check] 0-vol_ba3db22a379d1cb58515337e1eedd4e3-posix: open_for_write() on /var/lib/heketi/mounts/vg_63eca5a563dbd4f735d93fac0766d0f2/brick_fd83f251e012be0251b17ea8c9938b6a/brick/.glusterfs/health_check returned ret is -1 error is Too many open files [Too many open files]
  • Why is this issue occurring, and how can it be solved?
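
The error string corresponds to errno EMFILE, which the kernel returns when a process exhausts its per-process open file descriptor limit. As a first diagnostic step, the current descriptor count of a brick process can be compared against its limit. The sketch below uses standard Linux /proc tooling; the PID 550 and the volume name are taken from the sample output above:

    # List brick PIDs as reported by gluster (Pid column of the status output)
    gluster volume status vol_ba3db22a379d1cb58515337e1eedd4e3

    # Show the descriptor limit the brick process is actually running with
    grep "Max open files" /proc/550/limits

    # Count the descriptors the process currently holds
    ls /proc/550/fd | wc -l

    # Identify which bricks are logging the error
    grep -rl "Too many open files" /var/log/glusterfs/bricks/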
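
If the count is at or near the limit, descriptor exhaustion is confirmed. One common remediation, sketched below as an assumption rather than as this article's prescribed resolution, is to raise the limit for glusterd through a systemd drop-in; brick processes are forked by glusterd and inherit its resource limits:

    # Assumed drop-in path and example value; size LimitNOFILE to your brick count
    mkdir -p /etc/systemd/system/glusterd.service.d
    cat > /etc/systemd/system/glusterd.service.d/99-nofile.conf <<'EOF'
    [Service]
    LimitNOFILE=65536
    EOF

    systemctl daemon-reload
    systemctl restart glusterd

    # Already-running bricks keep their old limit; respawned bricks (for example
    # via gluster v start VOLNAME force, as above) pick up the new one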


Environment

  • Red Hat Gluster Storage version 3.x
  • Red Hat OpenShift Container Storage version 3.x
