19.9. Displaying Volume Status

You can display status information about a specific volume, a specific brick, or all volumes, as needed. Status information can be used to understand the current state of the bricks, the NFS server processes, and the self-heal daemon, as well as the overall file system, and to monitor and debug a volume. You can view the status of a volume along with the following details (a combined example follows the list):
  • detail - Displays additional information about the bricks.
  • clients - Displays the list of clients connected to the volume.
  • mem - Displays the memory usage and memory pool details of the bricks.
  • inode - Displays the inode tables of the volume.
  • fd - Displays the open file descriptor tables of the volume.
  • callpool - Displays the pending calls of the volume.
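These keywords can also be combined with a specific brick using the syntax shown later in this section. As a sketch (the brick path below is only illustrative and should match one of your bricks), the following restricts the additional brick information to a single brick:
# gluster volume status test-volume Server1:/rhgs/brick0/rep1 detail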
Setting Timeout Period

When you request information about a specific volume, the command may time out in the CLI if the originator glusterd takes longer than 120 seconds, the default timeout, to aggregate the results from all the other glusterd processes and report back to the CLI.

You can use the --timeout option to ensure that the command does not time out after the default 120 seconds.
For example,
# gluster volume status --timeout=500 VOLNAME inode
It is recommended to use the --timeout option when obtaining information about inodes, clients, or brick details, as these requests frequently time out.
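Similarly, if listing the clients of a large volume times out, you can raise the timeout for that request as well; the value of 300 seconds below is only an illustrative choice:
# gluster volume status --timeout=300 test-volume clients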
Display information about a specific volume using the following command:
# gluster volume status [--timeout=value_in_seconds] [all|VOLNAME [nfs | shd | BRICKNAME]] [detail | clients | mem | inode | fd | callpool]
For example, to display information about test-volume:
# gluster volume status test-volume
Status of volume: test-volume
Gluster process                        Port    Online   Pid
------------------------------------------------------------
Brick Server1:/rhgs/brick0/rep1        24010   Y       18474
Brick Server1:/rhgs/brick0/rep2        24011   Y       18479
NFS Server on localhost                38467   Y       18486
Self-heal Daemon on localhost          N/A     Y       18491
The self-heal daemon status will be displayed only for replicated volumes.
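Using the nfs and shd keywords from the syntax above, you can also restrict the output to the NFS server or the self-heal daemon alone. The following commands are a sketch; the output follows the same Port, Online, and Pid layout shown above:
# gluster volume status test-volume nfs
# gluster volume status test-volume shd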
Display information about all volumes using the command:
# gluster volume status all
For example:
# gluster volume status all
Status of volume: test
Gluster process                       Port    Online   Pid
-----------------------------------------------------------
Brick Server1:/rhgs/brick0/test       24009   Y       29197
NFS Server on localhost               38467   Y       18486

Status of volume: test-volume
Gluster process                       Port    Online   Pid
------------------------------------------------------------
Brick Server1:/rhgs/brick0/rep1       24010   Y       18474
Brick Server1:/rhgs/brick0/rep2       24011   Y       18479
NFS Server on localhost               38467   Y       18486
Self-heal Daemon on localhost         N/A     Y       18491
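When checking many volumes at once, it can be useful to filter for bricks that are not online. The following pipeline is a sketch that assumes the column layout shown above, where the fourth field of a Brick line is the Online flag; no output means all bricks reported Y:
# gluster volume status all | awk '/^Brick/ && $4 == "N"'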
Display additional information about the bricks using the command:
# gluster volume status VOLNAME detail
For example, to display additional information about the bricks of test-volume:
# gluster volume status test-volume detail
Status of volume: test-volume
------------------------------------------------------------------------------
Brick                : Brick Server1:/rhgs/test
Port                 : 24012
Online               : Y
Pid                  : 18649
File System          : ext4
Device               : /dev/sda1
Mount Options        : rw,relatime,user_xattr,acl,commit=600,barrier=1,data=ordered
Inode Size           : 256
Disk Space Free      : 22.1GB
Total Disk Space     : 46.5GB
Inode Count          : 3055616
Free Inodes          : 2577164
Detailed information is not available for NFS and the self-heal daemon.
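To review only the capacity-related fields from the detail output, you can filter for the disk space and inode lines. The grep pattern below is a sketch based on the field names in the sample output above:
# gluster volume status test-volume detail | grep -E '^Brick|Disk Space Free|Total Disk Space|Inode Count|Free Inodes'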
Display the list of clients accessing the volumes using the command:
# gluster volume status VOLNAME clients
For example, to display the list of clients connected to test-volume:
# gluster volume status test-volume clients
Brick : Server1:/rhgs/brick0/1
Clients connected : 2
Hostname          Bytes Read   BytesWritten
--------          ---------    ------------
127.0.0.1:1013    776          676
127.0.0.1:1012    50440        51200
Client information is not available for the self-heal daemon.
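To see only how many clients are connected to each brick, without the per-client byte counters, you can filter the output. The pattern below is a sketch based on the labels in the sample above:
# gluster volume status test-volume clients | grep -E '^Brick|Clients connected'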
Display the memory usage and memory pool details of the bricks on a volume using the command:
# gluster volume status VOLNAME mem
For example, to display the memory usage and memory pool details for the bricks on test-volume:
# gluster volume status test-volume mem
Memory status for volume : test-volume
----------------------------------------------
Brick : Server1:/rhgs/brick0/1
Mallinfo
--------
Arena    : 434176
Ordblks  : 2
Smblks   : 0
Hblks    : 12
Hblkhd   : 40861696
Usmblks  : 0
Fsmblks  : 0
Uordblks : 332416
Fordblks : 101760
Keepcost : 100400

Mempool Stats
-------------
Name                               HotCount ColdCount PaddedSizeof AllocCount MaxAlloc
----                               -------- --------- ------------ ---------- --------
test-volume-server:fd_t                0     16384           92         57        5
test-volume-server:dentry_t           59       965           84         59       59
test-volume-server:inode_t            60       964          148         60       60
test-volume-server:rpcsvc_request_t    0       525         6372        351        2
glusterfs:struct saved_frame           0      4096          124          2        2
glusterfs:struct rpc_req               0      4096         2236          2        2
glusterfs:rpcsvc_request_t             1       524         6372          2        1
glusterfs:call_stub_t                  0      1024         1220        288        1
glusterfs:call_stack_t                 0      8192         2084        290        2
glusterfs:call_frame_t                 0     16384          172       1728        6
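The mem output can be long on volumes with many bricks. If you are mainly interested in the memory pool statistics, you can filter for the Mempool Stats table of each brick; this sketch assumes the section header and table length shown in the sample above:
# gluster volume status test-volume mem | grep -A 12 'Mempool Stats'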
Display the inode tables of the volume using the command:
# gluster volume status VOLNAME inode
For example, to display the inode tables of test-volume:
# gluster volume status test-volume inode
inode tables for volume test-volume
----------------------------------------------
Brick : Server1:/rhgs/brick0/1
Active inodes:
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
6f3fe173-e07a-4209-abb6-484091d75499                  1              9         2
370d35d7-657e-44dc-bac4-d6dd800ec3d3                  1              1         2

LRU inodes:
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
80f98abe-cdcf-4c1d-b917-ae564cf55763                  1              0         1
3a58973d-d549-4ea6-9977-9aa218f233de                  1              0         1
2ce0197d-87a9-451b-9094-9baa38121155                  1              0         2
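Because the inode tables can be very large on a busy volume, this is one of the requests that commonly times out (see Setting Timeout Period above). A practical invocation is to raise the timeout and page the output; the value of 300 seconds is only illustrative:
# gluster volume status --timeout=300 test-volume inode | less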
Display the open file descriptor tables of the volume using the command:
# gluster volume status VOLNAME fd
For example, to display the open file descriptor tables of test-volume:
# gluster volume status test-volume fd

FD tables for volume test-volume
----------------------------------------------
Brick : Server1:/rhgs/brick0/1
Connection 1:
RefCount = 0  MaxFDs = 128  FirstFree = 4
FD Entry            PID                 RefCount            Flags
--------            ---                 --------            -----
0                   26311               1                   2
1                   26310               3                   2
2                   26310               1                   2
3                   26311               3                   2

Connection 2:
RefCount = 0  MaxFDs = 128  FirstFree = 0
No open fds

Connection 3:
RefCount = 0  MaxFDs = 128  FirstFree = 0
No open fds
FD information is not available for NFS and the self-heal daemon.
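For a quick per-connection summary without the individual FD entries, you can filter for the brick, connection, and fd-table header lines. This pipeline is a sketch based on the labels in the sample output above:
# gluster volume status test-volume fd | grep -E '^Brick|^Connection|MaxFDs|No open fds'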
Display the pending calls of the volume using the command:
# gluster volume status VOLNAME callpool
Note that each call has a call stack containing call frames.
For example, to display the pending calls of test-volume:
# gluster volume status test-volume callpool

Pending calls for volume test-volume
----------------------------------------------
Brick : Server1:/rhgs/brick0/1
Pending calls: 2
Call Stack1
 UID    : 0
 GID    : 0
 PID    : 26338
 Unique : 192138
 Frames : 7
 Frame 1
  Ref Count   = 1
  Translator  = test-volume-server
  Completed   = No
 Frame 2
  Ref Count   = 0
  Translator  = test-volume-posix
  Completed   = No
  Parent      = test-volume-access-control
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 3
  Ref Count   = 1
  Translator  = test-volume-access-control
  Completed   = No
  Parent      = test-volume-locks
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 4
  Ref Count   = 1
  Translator  = test-volume-locks
  Completed   = No
  Parent      = test-volume-io-threads
  Wind From   = iot_fsync_wrapper
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 5
  Ref Count   = 1
  Translator  = test-volume-io-threads
  Completed   = No
  Parent      = test-volume-marker
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 6
  Ref Count   = 1
  Translator  = test-volume-marker
  Completed   = No
  Parent      = /export/1
  Wind From   = io_stats_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 7
  Ref Count   = 1
  Translator  = /export/1
  Completed   = No
  Parent      = test-volume-server
  Wind From   = server_fsync_resume
  Wind To     = bound_xl->fops->fsync
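The callpool output grows by one block per frame, so on a loaded volume it is often enough to check how many calls are pending on each brick before inspecting the individual stacks. The filter below is a sketch based on the labels in the sample output above:
# gluster volume status test-volume callpool | grep -E '^Brick|Pending calls'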