Why are Gluster bricks failing to start with the error 'Too many open files'?
Issue
- In the output of the `gluster v status` command, several bricks are down and their status shows `N/A`:

  ```
  Status of volume: vol_ba3db22a379d1cb58515337e1eedd4e3
  Gluster process                             TCP Port  RDMA Port  Online  Pid
  ------------------------------------------------------------------------------
  Brick 192.168.1.1:/var/lib/heketi/mounts/vg_
  39ccc70fabf2e144a5b0909eacffef29/brick_099f
  fced2cd326324331e97152da8c40/brick          N/A       N/A        N       N/A
  Brick 192.168.1.2:/var/lib/heketi/mounts/vg_7
  40c54c786f55628a7b177dec970c5fe/brick_a9f05
  76d88a4749ac73d78a4cf0fbeb5/brick           49157     0          Y       550
  Brick 192.168.1.3:/var/lib/heketi/mounts/vg_6
  3eca5a563dbd4f735d93fac0766d0f2/brick_fd83f
  251e012be0251b17ea8c9938b6a/brick           N/A       N/A        N       N/A
  Self-heal Daemon on localhost               N/A       N/A        Y       2762
  Self-heal Daemon on 192.168.1.3             N/A       N/A        Y       2144
  Self-heal Daemon on 192.168.1.1             N/A       N/A        Y       2795
  ```
- Executing the command `gluster v start VOLNAME force` does not help: some bricks recover, but other bricks go down instead.
- Reviewing the brick logs, available at `/var/log/glusterfs/bricks`, there are many `Too many open files` error messages:

  ```
  W [MSGID: 113075] [posix-helpers.c:2113:posix_fs_health_check] 0-vol_ba3db22a379d1cb58515337e1eedd4e3-posix: open_for_write() on /var/lib/heketi/mounts/vg_63eca5a563dbd4f735d93fac0766d0f2/brick_fd83f251e012be0251b17ea8c9938b6a/brick/.glusterfs/health_check returned ret is -1 error is Too many open files [Too many open files]
  ```
Why does this issue occur, and how can it be resolved?
Environment
- Red Hat Gluster Storage version 3.x
- Red Hat OpenShift Container Storage version 3.x