Why are Bricks Failing to Start With 'Port is already in use' Error?

Issue

  • After stopping and starting a Gluster volume, a brick fails to start. The brick log at /var/log/glusterfs/bricks/mount-path.log shows the error Port is already in use. Below is a snippet of the errors observed.

    • The brick process starts at this point. From the starting message, the port used is 49155:

      [2020-07-03 13:21:37.219415] I [MSGID: 100030] [glusterfsd.c:2858:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.0 (args: /usr/sbin/glusterfsd -s server --volfile-id Volume_Name -p /var/run/gluster/vols/Volume_Name/brick-name.pid -S /var/run/gluster/1e685dbf8c862bf6.socket --brick-name <brick-name> -l <log-file> --xlator-option *-posix.glusterd-uuid=8ddcc851-f613-44ef-83e3-eac341719594 --process-name brick --brick-port 49155 --xlator-option Vol_Name-server.listen-port=49155)
      
    • However, immediately after, the start fails because this port is already used:

      [2020-07-03 13:21:37.237208] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
      [2020-07-03 13:21:37.237307] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
      [2020-07-03 13:21:38.226871] I    [rpcsvc.c:2695:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
      [2020-07-03 13:21:38.227222] E [socket.c:972:__socket_server_bind] 0-tcp.Volume_Name-server: binding to  failed: Address already in use  
      [2020-07-03 13:21:38.227243] E [socket.c:974:__socket_server_bind] 0-tcp.Volume_Name-server: Port is already in use                      
      [2020-07-03 13:21:38.227259] E [socket.c:3778:socket_listen] 0-tcp.Volume_Name-server: __socket_server_bind failed;closing socket 11
      [2020-07-03 13:21:38.227281] W  [rpcsvc.c:1997:rpcsvc_create_listener] 0-rpc-service: listening on transport failed     
      
  • As a consequence, the bricks that follow the failed brick in alphabetical order will not start either.

  • Why does this issue occur, and how can the volume be started?
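As a first diagnostic step, it can help to identify which process is already bound to the port the brick is trying to use. The sketch below is a minimal example, not part of the original article: the port number 49155 is taken from the log snippet above, so substitute the port reported in your own brick log.

```shell
#!/bin/sh
# Port reported in the brick log ("--brick-port 49155"); adjust as needed.
PORT=49155

# List any listening TCP socket bound to that port. The -p column showing
# the owning process requires root privileges.
ss -tlnp | grep ":$PORT " || echo "no listener found on port $PORT"

# Cross-check against the ports glusterd has recorded for each brick.
gluster volume status
```

If another glusterfsd (or an unrelated service) shows up as the listener, that conflict explains the bind failure reported in the log.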

Environment

Red Hat Gluster Storage 3.x
