6.4. Upgrading the Red Hat Gluster Storage Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and not the gluster pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configurations of the new DaemonSet.
For example:
# oc delete ds glusterfs --cascade=false
daemonset "glusterfs" deleted
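As an optional sanity check (an assumption, not part of the original procedure), confirm that the DaemonSet is gone while the gluster pods keep running:
# oc get ds
# oc get pods | grep glusterfs
The first command should no longer list the glusterfs DaemonSet, and the second should still show all of the gluster pods in the Running state.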
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
- If CNS 3.9 is deployed via cns-deploy, then execute the following command to delete the old glusterfs template.
# oc delete templates glusterfs
For example:
# oc delete templates glusterfs
template "glusterfs" deleted
- If CNS 3.9 is deployed via Ansible, then execute the following command to edit the old glusterfs template.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
- For OCP 3.10:
# oc edit template glusterfs
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7:v3.10
- For OCP 3.9:
# oc edit template glusterfs
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7
- displayName: GlusterFS container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
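If you want to confirm that the edit took effect (an optional check, not part of the original procedure), print the template and inspect the image parameters:
# oc get template glusterfs -o yaml | grep -A 2 'name: IMAGE'
The IMAGE_NAME value (and, on OCP 3.9, the IMAGE_VERSION value) should match the values shown above.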
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in the next step. If CNS 3.9 was deployed via Ansible, then the label is glusterfs=storage-host.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
If CNS 3.9 was deployed via Ansible, then:
# oc label nodes <node name> glusterfs=storage-host
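To confirm that the labels were applied (an optional check, not part of the original procedure), list the nodes by label selector; all of the storage nodes should appear:
# oc get nodes -l storagenode=glusterfs
If CNS 3.9 was deployed via Ansible, use the Ansible label instead:
# oc get nodes -l glusterfs=storage-host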
- Execute the following command to register the new gluster template. This step is not applicable if CNS 3.9 was deployed via Ansible:
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
For example:
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example:
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
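Optionally (not part of the original procedure), verify that the new DaemonSet exists and that its pod template points at the new image. This assumes the DaemonSet is named glusterfs, as in the example above:
# oc get ds
# oc describe ds glusterfs | grep -i image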
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old gluster pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example:
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made (a convenience loop covering all volumes is sketched after this step):
- Run the following command to access the shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to obtain the volume names:
# gluster volume list
- Run the following command on each volume to check the self-heal status:
# gluster volume heal <volname> info
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod.
# oc get pods -w
NAME                             READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3                  1/1       Terminating   0          3d
…
# oc get pods -w
NAME                             READY     STATUS              RESTARTS   AGE
glusterfs-pqfs6                  0/1       ContainerCreating   0          1s
…
# oc get pods -w
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-pqfs6                  1/1       Running   0          2m
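The per-volume heal check from the note above can be wrapped in a small loop. This is a convenience sketch, not part of the original procedure; run it inside the gluster pod (via oc rsh) and wait until every replicated volume reports zero entries before deleting the next pod. Volumes with no replica or disperse component may report that the heal operation does not apply to them.
# for vol in $(gluster volume list); do echo "== $vol =="; gluster volume heal $vol info | grep 'Number of entries'; done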
- Execute the following command to verify that the pods are running:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-j241c                  1/1       Running   0          4m
glusterfs-pqfs6                  1/1       Running   0          7m
glusterfs-wrn6n                  1/1       Running   0          12m
heketi-1-zpw4d                   1/1       Running   0          4h
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
For example:
# oc rsh glusterfs-storage-6zdhn glusterd --version
glusterfs 3.12.2
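To check every gluster pod in one pass (a convenience sketch, not part of the original procedure), you can loop over the pod names reported by oc; this assumes the gluster pod names contain the string "glusterfs", as in the examples above:
# for pod in $(oc get pods -o name | grep glusterfs); do echo "== $pod =="; oc rsh $pod glusterd --version; done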
Note
- If the setup also has a registry deployed, the templates have to be modified accordingly to upgrade the pods in the registry namespace. The same steps used for the gluster pod upgrade can be followed for the glusterfs registry pod upgrade by making the necessary changes to the parameters.
- Edit the multipath.conf file as shown below and then restart multipathd. This has to be executed on all the OpenShift nodes.
# cat >> /etc/multipath.conf <<EOF
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "alua"
                no_path_retry 120
        }
}
EOF

# systemctl restart multipathd
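Optionally (an assumption, not part of the original procedure), confirm on each node that the new section was appended and that multipathd restarted cleanly:
# grep -A 3 'LIO-ORG' /etc/multipath.conf
# systemctl is-active multipathd
The grep should show the device stanza added above, and the service should report active.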
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
Note
If the setup has a registry configured using glusterfs, then the glusterfs registry pods should also be upgraded before setting cluster.op-version.
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster volume set all cluster.op-version 31302
- If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-provisioner-dc
- If CNS 3.9 is deployed via cns-deploy, then execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- If CNS 3.9 is deployed via Ansible, depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
- For OCP 3.10:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- For OCP 3.9:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
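Once the deployment configuration is created, you can optionally confirm (not part of the original procedure) that the new provisioner pod comes up; the grep pattern assumes the pod name contains "glusterblock":
# oc get pods | grep glusterblock
The glusterblock-provisioner pod should reach the Running state with READY 1/1.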
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat OpenShift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed. For example:
# gluster volume list
heketidbstorage
vol_194049d2565d2a4ad78ef0483e04711e
...
...
- Restart all the volumes. This step is only required if the volume set operation was performed along with the previous step (a convenience loop is sketched below):
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
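Stopping and starting every volume by hand can be tedious; the following loop is a convenience sketch, not part of the original procedure. Run it inside a gluster pod, and be aware that it briefly takes each volume (including heketidbstorage) offline, so plan for the I/O interruption:
# for vol in $(gluster volume list); do gluster --mode=script volume stop $vol; gluster volume start $vol; done
The --mode=script global option suppresses the interactive confirmation prompt that gluster volume stop otherwise issues.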
- From Container-Native Storage 3.6, support for S3 compatible Object Store in Red Hat OpenShift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/container-native_storage/3.9/html-single/container-native_storage_for_openshift_container_platform/#S3_Object_Store.
