Chapter 2. Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment
This chapter lists the various operations that can be performed on a Red Hat Gluster Storage pod (gluster pod):
To list the pods, execute the following command:
# oc get pods -n <storage_project_name>
For example:
# oc get pods -n storage-project
NAME                              READY   STATUS    RESTARTS   AGE
storage-project-router-1-v89qc    1/1     Running   0          1d
glusterfs-dc-node1.example.com    1/1     Running   0          1d
glusterfs-dc-node2.example.com    1/1     Running   1          1d
glusterfs-dc-node3.example.com    1/1     Running   0          1d
heketi-1-k1u14                    1/1     Running   0          23m
Following are the gluster pods from the above example:
glusterfs-dc-node1.example.com
glusterfs-dc-node2.example.com
glusterfs-dc-node3.example.com
Note: The topology.json file provides the details of the nodes in a given Trusted Storage Pool (TSP). In the above example, all three Red Hat Gluster Storage nodes are from the same TSP.
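As a sketch of how the node list can be read out of topology.json: the snippet below builds a minimal, illustrative topology.json in the heketi format (the path /tmp/topology.json and the hostnames are examples, not values from a real cluster) and prints the manage hostname of every node in every cluster:

```shell
# Illustrative topology.json in the heketi format (path, hostnames, and IPs are examples).
cat > /tmp/topology.json <<'EOF'
{"clusters": [{"nodes": [
  {"node": {"hostnames": {"manage": ["node1.example.com"], "storage": ["192.168.121.168"]}}},
  {"node": {"hostnames": {"manage": ["node2.example.com"], "storage": ["192.168.121.172"]}}},
  {"node": {"hostnames": {"manage": ["node3.example.com"], "storage": ["192.168.121.233"]}}}
]}]}
EOF

# Print the manage hostname of every node in every cluster.
python3 - <<'EOF'
import json
with open('/tmp/topology.json') as f:
    topology = json.load(f)
for cluster in topology['clusters']:
    for node in cluster['nodes']:
        print(node['node']['hostnames']['manage'][0])
EOF
```

A real topology.json also carries per-node device lists; only the hostname fields are shown here.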
To enter the gluster pod shell, execute the following command:
# oc rsh <gluster_pod_name> -n <storage_project_name>
For example:
# oc rsh glusterfs-dc-node1.example.com -n storage-project
sh-4.2#
To get the peer status, execute the following command:
# gluster peer status
For example:
# gluster peer status
Number of Peers: 2

Hostname: node2.example.com
Uuid: 9f3f84d2-ef8e-4d6e-aa2c-5e0370a99620
State: Peer in Cluster (Connected)
Other names:
node1.example.com

Hostname: node3.example.com
Uuid: 38621acd-eb76-4bd8-8162-9c2374affbbd
State: Peer in Cluster (Connected)
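The peer state can also be checked mechanically. The following sketch parses a captured copy of `gluster peer status` output (embedded here as sample data rather than read from a live cluster) and compares the declared peer count against the number of connected peers:

```shell
# Sample `gluster peer status` output, captured as a string for illustration.
peer_status='Number of Peers: 2

Hostname: node2.example.com
Uuid: 9f3f84d2-ef8e-4d6e-aa2c-5e0370a99620
State: Peer in Cluster (Connected)

Hostname: node3.example.com
Uuid: 38621acd-eb76-4bd8-8162-9c2374affbbd
State: Peer in Cluster (Connected)'

# Compare the declared peer count against the number of connected peers.
expected=$(printf '%s\n' "$peer_status" | awk -F': ' '/^Number of Peers/ {print $2}')
connected=$(printf '%s\n' "$peer_status" | grep -c 'State: Peer in Cluster (Connected)')
if [ "$expected" -eq "$connected" ]; then
    echo "all peers connected"
else
    echo "WARNING: $((expected - connected)) peer(s) not connected"
fi
```

On a live pod the same logic applies with `peer_status=$(gluster peer status)`.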
To list the gluster volumes on the Trusted Storage Pool, execute the following command:
# gluster volume info
For example:
# gluster volume info

Volume Name: heketidbstorage
Type: Distributed-Replicate
Volume ID: 2fa53b28-121d-4842-9d2f-dce1b0458fda
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.121.172:/var/lib/heketi/mounts/vg_1be433737b71419dc9b395e221255fb3/brick_c67fb97f74649d990c5743090e0c9176/brick
Brick2: 192.168.121.233:/var/lib/heketi/mounts/vg_0013ee200cdefaeb6dfedd28e50fd261/brick_6ebf1ee62a8e9e7a0f88e4551d4b2386/brick
Brick3: 192.168.121.168:/var/lib/heketi/mounts/vg_e4b32535c55c88f9190da7b7efd1fcab/brick_df5db97aa002d572a0fec6bcf2101aad/brick
Brick4: 192.168.121.233:/var/lib/heketi/mounts/vg_0013ee200cdefaeb6dfedd28e50fd261/brick_acc82e56236df912e9a1948f594415a7/brick
Brick5: 192.168.121.168:/var/lib/heketi/mounts/vg_e4b32535c55c88f9190da7b7efd1fcab/brick_65dceb1f749ec417533ddeae9535e8be/brick
Brick6: 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_f258450fc6f025f99952a6edea203859/brick
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: vol_9e86c0493f6b1be648c9deee1dc226a6
Type: Distributed-Replicate
Volume ID: 940177c3-d866-4e5e-9aa0-fc9be94fc0f4
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.121.168:/var/lib/heketi/mounts/vg_3fa141bf2d09d30b899f2f260c494376/brick_9fb4a5206bdd8ac70170d00f304f99a5/brick
Brick2: 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_dae2422d518915241f74fd90b426a379/brick
Brick3: 192.168.121.233:/var/lib/heketi/mounts/vg_5c6428c439eb6686c5e4cee56532bacf/brick_b3768ba8e80863724c9ec42446ea4812/brick
Brick4: 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_0a13958525c6343c4a7951acec199da0/brick
Brick5: 192.168.121.168:/var/lib/heketi/mounts/vg_17fbc98d84df86756e7826326fb33aa4/brick_af42af87ad87ab4f01e8ca153abbbee9/brick
Brick6: 192.168.121.233:/var/lib/heketi/mounts/vg_5c6428c439eb6686c5e4cee56532bacf/brick_ef41e04ca648efaf04178e64d25dbdcb/brick
Options Reconfigured:
performance.readdir-ahead: on
To get the volume status, execute the following command:
# gluster volume status <volname>
For example:
# gluster volume status vol_9e86c0493f6b1be648c9deee1dc226a6
Status of volume: vol_9e86c0493f6b1be648c9deee1dc226a6
Gluster process                                         TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.121.168:/var/lib/heketi/mounts/vg_3fa141bf2d09d30b899f2f260c494376/brick_9fb4a5206bdd8ac70170d00f304f99a5/brick   49154   0   Y   3462
Brick 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_dae2422d518915241f74fd90b426a379/brick   49154   0   Y   115939
Brick 192.168.121.233:/var/lib/heketi/mounts/vg_5c6428c439eb6686c5e4cee56532bacf/brick_b3768ba8e80863724c9ec42446ea4812/brick   49154   0   Y   116134
Brick 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_0a13958525c6343c4a7951acec199da0/brick   49155   0   Y   115958
Brick 192.168.121.168:/var/lib/heketi/mounts/vg_17fbc98d84df86756e7826326fb33aa4/brick_af42af87ad87ab4f01e8ca153abbbee9/brick   49155   0   Y   3481
Brick 192.168.121.233:/var/lib/heketi/mounts/vg_5c6428c439eb6686c5e4cee56532bacf/brick_ef41e04ca648efaf04178e64d25dbdcb/brick   49155   0   Y   116153
NFS Server on localhost                                 2049      0          Y       116173
Self-heal Daemon on localhost                           N/A       N/A        Y       116181
NFS Server on node1.example.com                         2049      0          Y       3501
Self-heal Daemon on node1.example.com                   N/A       N/A        Y       3509
NFS Server on 192.168.121.172                           2049      0          Y       115978
Self-heal Daemon on 192.168.121.172                     N/A       N/A        Y       115986

Task Status of Volume vol_9e86c0493f6b1be648c9deee1dc226a6
------------------------------------------------------------------------------
There are no active volume tasks
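The Online column of the status output can likewise be checked mechanically. This sketch scans sample brick lines (abbreviated paths, illustrative data; note that in real output long brick paths may wrap across lines, so the field positions below hold only for this single-line sample) and counts bricks that are not online:

```shell
# Sample brick lines from `gluster volume status` (paths abbreviated, data illustrative).
status='Brick 192.168.121.168:/var/lib/heketi/mounts/vg_3fa1.../brick 49154 0 Y 3462
Brick 192.168.121.172:/var/lib/heketi/mounts/vg_7ad9.../brick 49154 0 Y 115939
Brick 192.168.121.233:/var/lib/heketi/mounts/vg_5c64.../brick 49154 0 Y 116134'

# Field 5 is the Online column in this sample layout; count bricks where it is not "Y".
offline=$(printf '%s\n' "$status" | awk '/^Brick/ && $5 != "Y" {n++} END {print n+0}')
echo "offline bricks: $offline"
```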
To use the snapshot feature, load the snapshot module using the following command on one of the nodes:
# modprobe dm_snapshot
Important: Restrictions for using Snapshot
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy the old versions of files into the required location.
- Reverting the volume to a snapshot state is not supported and should never be done as it might damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
- Taking consistent snapshots of gluster-block based PVs is not possible.
To take a snapshot of a gluster volume, execute the following command:
# gluster snapshot create <snapname> <volname>
For example:
# gluster snapshot create snap1 vol_9e86c0493f6b1be648c9deee1dc226a6
snapshot create: success: Snap snap1_GMT-2016.07.29-13.05.46 created successfully
To list the snapshots, execute the following command:
# gluster snapshot list
For example:
# gluster snapshot list
snap1_GMT-2016.07.29-13.05.46
snap2_GMT-2016.07.29-13.06.13
snap3_GMT-2016.07.29-13.06.18
snap4_GMT-2016.07.29-13.06.22
snap5_GMT-2016.07.29-13.06.26
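Because each snapshot name embeds its GMT creation timestamp, the list can be sorted by age with standard tools. A small sketch over sample names taken from the listing above (deliberately shuffled to show the sort):

```shell
# Snapshot names as printed by `gluster snapshot list` (sample data, deliberately unsorted).
snaps='snap1_GMT-2016.07.29-13.05.46
snap3_GMT-2016.07.29-13.06.18
snap2_GMT-2016.07.29-13.06.13'

# Sort by the embedded GMT timestamp (the field after the underscore) and report the oldest.
oldest=$(printf '%s\n' "$snaps" | sort -t_ -k2 | head -n1)
echo "oldest snapshot: $oldest"
```

The same pipeline fed from a live `gluster snapshot list` would identify the oldest snapshot, for example as a candidate for deletion.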
To delete a snapshot, execute the following command:
# gluster snap delete <snapname>
For example:
# gluster snap delete snap1_GMT-2016.07.29-13.05.46
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1_GMT-2016.07.29-13.05.46: snap removed successfully
For more information about managing snapshots, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#chap-Managing_Snapshots.
You can set up Red Hat OpenShift Container Storage volumes for geo-replication to a non-Red Hat OpenShift Container Storage remote site. Geo-replication uses a master–slave model, in which the Red Hat OpenShift Container Storage volume acts as the master volume. To set up geo-replication, you must run the geo-replication commands on gluster pods. To enter the gluster pod shell, execute the following command:
# oc rsh <gluster_pod_name> -n <storage_project_name>
For more information about setting up geo-replication, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_geo-replication.
Brick multiplexing is a feature that allows multiple bricks to be included in one process. This reduces resource consumption and allows you to run more bricks than before with the same memory consumption.
Brick multiplexing is enabled by default as of Container-Native Storage 3.6. If you want to turn it off, execute the following command:
# gluster volume set all cluster.brick-multiplex off
The auto_unmount option in glusterfs libfuse, when enabled, ensures that the file system is unmounted at FUSE server termination by running a separate monitor process that performs the unmount. The GlusterFS plugin in OpenShift enables the auto_unmount option for gluster mounts.
2.1. Maintenance on nodes
2.1.1. Necessary steps to be followed before maintenance
Remove the glusterfs label, or the equivalent label that is used as the selector for the glusterfs daemonset, and wait for the pod to terminate.

Run the following command to get the node selector.

# oc get ds
For example:
# oc get ds
NAME                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
glusterfs-storage   3         3         3       3            3           glusterfs=storage-host   12d
Remove the glusterfs label using the following command.
# oc label node <storage_node1> glusterfs-
For example:
# oc label node <storage_node1> glusterfs-
node/<storage_node1> labeled
Wait for the glusterfs pod to terminate. Verify using the following command.
# oc get pods -l glusterfs
For example:
# oc get pods -l glusterfs
NAME                               READY   STATUS        RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running       0          7m
glusterfs-storage-4tc9c            1/1     Terminating   0          5m
glusterfs-storage-htrfg            1/1     Running       0          1d
glusterfs-storage-z75bc            1/1     Running       0          1d
heketi-storage-1-shgrr             1/1     Running       0          1d
Make the node unschedulable using the following command.
# oc adm manage-node --schedulable=false <storage_node1>
For example:
# oc adm manage-node --schedulable=false <storage_node1>
NAME            STATUS                     ROLES     AGE   VERSION
storage_node1   Ready,SchedulingDisabled   compute   12d   v1.11.0+d4cacc0
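Before draining, it is worth confirming that the STATUS column actually reports SchedulingDisabled. A sketch over the sample output above (embedded here as a string, not read from a live cluster):

```shell
# Sample `oc get node`-style output after cordoning the node (illustrative data).
node_status='NAME            STATUS                     ROLES     AGE   VERSION
storage_node1   Ready,SchedulingDisabled   compute   12d   v1.11.0+d4cacc0'

# Confirm the STATUS column contains SchedulingDisabled before draining.
status_col=$(printf '%s\n' "$node_status" | awk 'NR==2 {print $2}')
case "$status_col" in
  *SchedulingDisabled*) echo "node is cordoned" ;;
  *)                    echo "node is still schedulable" ;;
esac
```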
Drain the node using the following command.
# oc adm drain --ignore-daemonsets <storage_node1>
Note: Perform the maintenance and reboot, if required.
2.1.2. Necessary steps to be followed after maintenance
Make the node schedulable using the following command.
# oc adm manage-node --schedulable=true <storage_node1>
For example:
# oc adm manage-node --schedulable=true <storage_node1>
NAME    STATUS   ROLES     AGE   VERSION
node1   Ready    compute   12d   v1.11.0+d4cacc0
Add the glusterfs label, or the equivalent label that is used as the selector for the glusterfs daemonset, and wait for the pod to be ready.

Run the following command to get the node selector.

# oc get ds
For example:
# oc get ds
NAME                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
glusterfs-storage   3         3         3       3            3           glusterfs=storage-host   12d
Label the glusterfs node using the above node selector and the following command.
# oc label node <storage_node1> glusterfs=storage-host
For example:
# oc label node <storage_node1> glusterfs=storage-host
node/<storage_node1> labeled
Wait for the pod to reach the Ready state.
# oc get pods
For example:
# oc get pods
NAME                               READY   STATUS    RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running   0          3m
glusterfs-storage-4tc9c            0/1     Running   0          50s
glusterfs-storage-htrfg            1/1     Running   0          1d
glusterfs-storage-z75bc            1/1     Running   0          1d
heketi-storage-1-shgrr             1/1     Running   0          1d
Wait for the pod to be in the 1/1 Ready state.
For example:
# oc get pods
NAME                               READY   STATUS    RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running   0          3m
glusterfs-storage-4tc9c            1/1     Running   0          58s
glusterfs-storage-htrfg            1/1     Running   0          1d
glusterfs-storage-z75bc            1/1     Running   0          1d
heketi-storage-1-shgrr             1/1     Running   0          1d
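Rather than eyeballing the READY column, a sketch like the following can count pods that are not yet fully ready (the `oc get pods` output is embedded here as illustrative sample data; a 0/1 pod is running but has not yet passed its readiness check):

```shell
# Sample `oc get pods` output (illustrative data; one pod still starting up).
pods='NAME                               READY   STATUS    RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running   0          3m
glusterfs-storage-4tc9c            0/1     Running   0          50s'

# Count pods whose READY column is not fully ready (e.g. 0/1 instead of 1/1).
not_ready=$(printf '%s\n' "$pods" | awk 'NR>1 {split($2,a,"/"); if (a[1]!=a[2]) n++} END {print n+0}')
echo "pods not ready: $not_ready"
```

Polling until the count reaches zero gives a scriptable version of "wait for the pod to be in the 1/1 Ready state".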
Wait for the heal to complete. Use oc rsh to obtain a shell in a glusterfs pod, monitor the heal using the following command, and wait for Number of entries to be zero (0).
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done

For example:

# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done
Brick 10.70.46.210:/var/lib/heketi/mounts/vg_64e90b4b94174f19802a8026f652f6d7/brick_564f7725cef192f0fd2ba1422ecbf590/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.243:/var/lib/heketi/mounts/vg_4fadbf84bbc67873543472655e9660ec/brick_9c9c8c64c48d24c91948bc810219c945/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.224:/var/lib/heketi/mounts/vg_9fbaf0c06495e66f5087a51ad64e54c3/brick_75e40df81383a03b1778399dc342e794/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.224:/var/lib/heketi/mounts/vg_9fbaf0c06495e66f5087a51ad64e54c3/brick_e0058f65155769142cec81798962b9a7/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.210:/var/lib/heketi/mounts/vg_64e90b4b94174f19802a8026f652f6d7/brick_3cf035275dc93e0437fdfaea509a3a44/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.243:/var/lib/heketi/mounts/vg_4fadbf84bbc67873543472655e9660ec/brick_2cfd11ce587e622fe800dfaec101e463/brick
Status: Connected
Number of entries: 0
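To decide mechanically that healing is complete, the Number of entries lines can be summed; the heal is done when the total is zero. A sketch over sample heal-info output (abbreviated brick paths, illustrative data rather than live cluster output):

```shell
# Sample `gluster volume heal <vol> info` output (paths abbreviated, data illustrative).
heal_info='Brick 10.70.46.210:/var/lib/heketi/mounts/vg_64e9.../brick
Status: Connected
Number of entries: 0

Brick 10.70.46.243:/var/lib/heketi/mounts/vg_4fad.../brick
Status: Connected
Number of entries: 0'

# Sum the pending heal entries across all bricks; 0 means healing is complete.
pending=$(printf '%s\n' "$heal_info" | awk -F': ' '/^Number of entries/ {sum += $2} END {print sum+0}')
echo "pending heal entries: $pending"
```

On a live pod, feeding the heal loop's output through the same awk filter in a polling loop gives an automated wait-for-heal check.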