19.3. gstatus Command
19.3.1. gstatus Command
The gstatus command provides an overview of the health of a Red Hat Gluster Storage trusted storage pool for distributed, replicated, distributed-replicated, dispersed, and distributed-dispersed volumes.
The gstatus command provides an easy-to-use, high-level view of the health of a trusted storage pool with a single command. It gathers information about the status of the Red Hat Gluster Storage nodes, volumes, and bricks by executing the glusterFS commands. The checks are performed across the trusted storage pool and the status is displayed. This data can be analyzed to add further checks and to incorporate deployment best practices and free-space triggers.
Prerequisite:
- Python 2.6 or above
19.3.2. Executing the gstatus command
The gstatus command can be invoked in several ways. The table below shows the optional switches that can be used with gstatus.
```
# gstatus -h
Usage: gstatus [options]
```
Table 19.1. gstatus Command Options
| Option | Description |
| --- | --- |
| `--version` | Displays the program's version number and exits. |
| `-h`, `--help` | Displays the help message and exits. |
| `-s`, `--state` | Displays the high-level health of the Red Hat Gluster Storage trusted storage pool. |
| `-v`, `--volume` | Displays volume information for all volumes by default. Specify a volume name to display the information for a specific volume. |
| `-b`, `--backlog` | Probes the self-heal state. |
| `-a`, `--all` | Displays the detailed status of volume health. (This output is an aggregation of `-s` and `-v`.) |
| `-l`, `--layout` | Displays the brick layout when used in combination with `-v` or `-a`. |
| `-o OUTPUT_MODE`, `--output-mode=OUTPUT_MODE` | Produces output in one of the following formats: `json`, `keyvalue`, or `console` (default). |
| `-D`, `--debug` | Enables debug mode. |
| `-w`, `--without-progress` | Disables progress updates during data gathering. |
| `-u UNITS`, `--units=UNITS` | Displays capacity units in decimal or binary format (GB vs. GiB). |
| `-t TIMEOUT`, `--timeout=TIMEOUT` | Specifies the command timeout value in seconds. |
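The `-u` flag changes only how capacity is reported, not the underlying byte count. As a quick illustration of the difference between binary (GiB) and decimal (GB) units, here is a small sketch (not part of gstatus itself):

```python
# Illustrates the binary (GiB) vs. decimal (GB) units that the
# gstatus -u/--units flag switches between; not part of gstatus itself.

GIB = 2 ** 30  # binary gibibyte (bytes)
GB = 10 ** 9   # decimal gigabyte (bytes)

def gib_to_gb(gib):
    """Convert a capacity reported in GiB to decimal GB."""
    return gib * GIB / GB

# The 36.00 GiB of raw brick capacity from Example 1 below is
# roughly 38.65 GB when reported in decimal units.
print(round(gib_to_gb(36), 2))
```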
Table 19.2. Commonly used gstatus Commands
| Command | Description |
| --- | --- |
| `gstatus -s` | An overview of the trusted storage pool. |
| `gstatus -a` | View the detailed status of volume health. |
| `gstatus -vl VOLNAME` | View the volume details, including the brick layout. |
| `gstatus -o keyvalue` | View the summary output for Nagios and Logstash. |
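The `keyvalue` output mode is intended for consumption by monitoring tools such as Nagios. As a hypothetical sketch of how such output could be turned into a check, the parser below assumes a simple comma-separated `key=value` line; the sample keys used here are illustrative assumptions, not documented gstatus field names, so inspect real output before relying on them:

```python
# Hypothetical consumer of `gstatus -o keyvalue` output for a
# Nagios-style check. The key names in the sample line below are
# assumptions for illustration only.

def parse_keyvalue(text):
    """Parse 'key=value,key=value' style output into a dict."""
    result = {}
    for pair in text.strip().split(","):
        if "=" in pair:
            key, _, value = pair.partition("=")
            result[key.strip()] = value.strip()
    return result

# Illustrative sample line; real gstatus key names may differ.
sample = "status=UNHEALTHY(4),node_count=4,nodes_active=3,volume_count=1"
fields = parse_keyvalue(sample)

# Map pool health to a Nagios-style exit code: 0 = OK, 2 = CRITICAL.
exit_code = 0 if fields["status"].startswith("HEALTHY") else 2
print(exit_code)
```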
gstatus provides a header section, which gives a high-level view of the state of the Red Hat Gluster Storage trusted storage pool. The Status field within the header reports one of two states:
- Healthy: All nodes, volumes, and bricks are online.
- Unhealthy(n): Problems have been detected, where n denotes the total number of issues found.
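A script that consumes the console output can recover the issue count from the Status field. A minimal sketch (the regular expression is an assumption of mine, not part of gstatus):

```python
import re

def issue_count(status):
    """Return the number of detected issues from a gstatus Status value:
    0 for HEALTHY, n for UNHEALTHY(n)."""
    match = re.match(r"UNHEALTHY\((\d+)\)", status)
    return int(match.group(1)) if match else 0

print(issue_count("HEALTHY"))       # 0
print(issue_count("UNHEALTHY(4)"))  # 4
```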
The following examples show gstatus command output for both healthy and unhealthy Red Hat Gluster Storage environments.
Example 19.1. Example 1: Trusted Storage Pool is in a healthy state; all nodes, volumes and bricks are online
```
# gstatus -a

     Product: RHGS Server v3.2.0    Capacity:  36.00 GiB(raw bricks)
      Status: HEALTHY                           7.00 GiB(raw used)
   Glusterfs: 3.7.1                            18.00 GiB(usable from volumes)
  OverCommit: No                    Snapshots:  0

   Nodes       :  4/ 4              Volumes:  1 Up
   Self Heal   :  4/ 4                        0 Up(Degraded)
   Bricks      :  4/ 4                        0 Up(Partial)
   Connections :  5 / 20                      0 Down

Volume Information
   splunk      UP - 4/4 bricks up - Distributed-Replicate
               Capacity: (18% used) 3.00 GiB/18.00 GiB (used/total)
               Snapshots: 0
               Self Heal:  4/ 4
               Tasks Active: None
               Protocols: glusterfs:on  NFS:on  SMB:off
               Gluster Connectivty: 5 hosts, 20 tcp connections

Status Messages
  - Cluster is HEALTHY, all_bricks checks successful
```
Example 19.2. Example 2: A node is down within the trusted pool
```
# gstatus -al

     Product: RHGS Server v3.1.1    Capacity:  27.00 GiB(raw bricks)
      Status: UNHEALTHY(4)                      5.00 GiB(raw used)
   Glusterfs: 3.7.1                            18.00 GiB(usable from volumes)
  OverCommit: No                    Snapshots:  0

   Nodes       :  3/ 4              Volumes:  0 Up
   Self Heal   :  3/ 4                        1 Up(Degraded)
   Bricks      :  3/ 4                        0 Up(Partial)
   Connections :  5/ 20                       0 Down

Volume Information
   splunk      UP(DEGRADED) - 3/4 bricks up - Distributed-Replicate
               Capacity: (18% used) 3.00 GiB/18.00 GiB (used/total)
               Snapshots: 0
               Self Heal:  3/ 4
               Tasks Active: None
               Protocols: glusterfs:on  NFS:on  SMB:off
               Gluster Connectivty: 5 hosts, 20 tcp connections

   splunk---------- +
                    |
                Distribute (dht)
                    |
                    +-- Repl Set 0 (afr)
                    |     |
                    |     +--splunk-rhs1:/rhgs/brick1/splunk(UP) 2.00 GiB/9.00 GiB
                    |     |
                    |     +--splunk-rhs2:/rhgs/brick1/splunk(UP) 2.00 GiB/9.00 GiB
                    |
                    +-- Repl Set 1 (afr)
                          |
                          +--splunk-rhs3:/rhgs/brick1/splunk(DOWN) 0.00 KiB/0.00 KiB
                          |
                          +--splunk-rhs4:/rhgs/brick1/splunk(UP) 2.00 GiB/9.00 GiB

Status Messages
  - Cluster is UNHEALTHY
  - One of the nodes in the cluster is down
  - Brick splunk-rhs3:/rhgs/brick1/splunk in volume 'splunk' is down/unavailable
  - INFO -> Not all bricks are online, so capacity provided is NOT accurate
```
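A monitoring script can also recover the `X/ Y` availability counters from header lines such as `Nodes : 3/ 4` in the example above. A minimal sketch, assuming the line format shown in these examples (spacing in real output may vary):

```python
import re

def parse_counter(line):
    """Extract the (online, expected) counts from a gstatus header line,
    e.g. 'Nodes       :  3/ 4' -> (3, 4)."""
    match = re.search(r":\s*(\d+)\s*/\s*(\d+)", line)
    if not match:
        raise ValueError("no X/Y counter in line: %r" % line)
    return int(match.group(1)), int(match.group(2))

online, expected = parse_counter("   Nodes       :  3/ 4")
print(online, expected, online < expected)  # 3 4 True -> a node is down
```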
In Example 2, the brick layout is displayed because the -l option is used. The brick layout mode shows the brick and node relationships, providing a simple means of checking that the replication relationships for bricks across nodes are as intended.
Table 19.3. Field Descriptions of the gstatus command output
| Field | Description |
| --- | --- |
| Volume State | Up: The volume is started and available, and all the bricks are up. |
| | Up (Degraded): This state is specific to replicated volumes, where at least one brick is down within a replica set. Data is still 100% available through the remaining replicas, but the reduced resilience of the volume to further failures within the same replica set flags it as Degraded. |
| | Up (Partial): Although some bricks in the volume are online, others are down to the point where areas of the file system are missing. For a distributed volume, this state is seen if any brick is down, whereas for a replicated volume a complete replica set must be down before the volume state transitions to Partial. |
| | Down: Bricks are down, or the volume has not yet been started. |
| Capacity Information | This information is derived from the brick information taken from the volume status detail command. |
| Over-commit Status | The physical file system used by a brick can be re-used by multiple volumes; this field indicates whether a brick is used by multiple volumes. Reusing a brick exposes the system to capacity conflicts across different volumes when the quota feature is not in use, and is not recommended. |
| Connections | Displays a count of connections made to the trusted pool and to each of the volumes. |
| Nodes / Self Heal / Bricks X/Y | Indicates that X of the Y total/expected components within the trusted pool are online. In Example 2, 3/4 is displayed against each of these fields, indicating that one node is down and, with it, one brick and one self-heal daemon are unavailable. |
| Tasks Active | Active background tasks, such as rebalance or remove-brick, are displayed here against individual volumes. |
| Protocols | Displays which protocols have been enabled for the volume. |
| Snapshots | Displays a count of the snapshots taken for the volume. The snapshot count for each volume is shown in the volume information section, and the header shows the total across the trusted storage pool. |
| Status Messages | After the information is gathered, any errors detected are reported in the Status Messages section, as shown in Example 2. |