Chapter 8. Upgrading Your Red Hat OpenShift Container Storage in Independent Mode

This chapter describes the procedures to follow to upgrade your independent mode environment.

Note

The new registry name, registry.redhat.io, is used throughout this guide.

However, if you have not yet migrated to the new registry, replace all occurrences of registry.redhat.io with registry.access.redhat.com wherever applicable.
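If you maintain local copies of the template files, the substitution can be scripted. A minimal sketch, assuming a local file (the name template.yaml here is a stand-in, not a file shipped by the product):

```shell
# Stand-in template file; in practice this would be one of your local
# template or deployment files that references the new registry.
cat > template.yaml <<'EOF'
image: registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.6
EOF
# Rewrite references to the new registry back to the old one.
sed -i 's|registry\.redhat\.io|registry.access.redhat.com|g' template.yaml
cat template.yaml
```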

Note

Follow the same upgrade procedure to upgrade your environment from Red Hat OpenShift Container Storage in independent mode 3.11.0 or above to Red Hat OpenShift Container Storage in independent mode 3.11.5. Ensure that the correct image and version numbers are configured before you start the upgrade process.

The valid images for Red Hat OpenShift Container Storage 3.11.5 are:

  • registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.6
  • registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.6
  • registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.6
  • registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11.6

8.1. Prerequisites

Ensure the following prerequisites are met:

8.2. Upgrading nodes and pods in glusterfs group

Follow the steps in the sections ahead to upgrade your independent mode setup.

8.2.1. Upgrading the Red Hat Gluster Storage Cluster

To upgrade the Red Hat Gluster Storage cluster, see In-Service Software Upgrade.

8.2.2. Upgrading/Migration of Heketi in RHGS node

Note

If Heketi is in an OpenShift node, skip this section and see Section 8.2.4.1, “Upgrading Heketi in OpenShift node” instead.

Important
  • In OCS 3.11, upgrading Heketi in the RHGS node is not supported. Hence, you must migrate Heketi to a new heketi pod.
  • Migrate to the supported heketi deployment now, as there might not be a migration path in future versions.
  • Ensure that the cns-deploy RPM is installed on the master node. It provides the template files necessary to set up the heketi pod.

    # subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
    # yum install cns-deploy
  1. Use the newly created containerized Red Hat Gluster Storage project on the master node:

    # oc project <project-name>

    For example:

    # oc project gluster
  2. Execute the following command on the master node to create the service account:

    # oc create -f /usr/share/heketi/templates/heketi-service-account.yaml
    serviceaccount/heketi-service-account created
  3. Execute the following command on the master node to install the heketi template:

    # oc create -f /usr/share/heketi/templates/heketi-template.yaml
    template.template.openshift.io/heketi created
  4. Verify that the templates are created:

    # oc get templates
    
    NAME            DESCRIPTION                          PARAMETERS    OBJECTS
    heketi          Heketi service deployment template   5 (3 blank)   3
  5. Execute the following command on the master node to grant the heketi Service Account the necessary privileges:

    # oc policy add-role-to-user edit system:serviceaccount:gluster:heketi-service-account
    role "edit" added: "system:serviceaccount:gluster:heketi-service-account"
    # oc adm policy add-scc-to-user privileged -z heketi-service-account
    scc "privileged" added to: ["system:serviceaccount:gluster:heketi-service-account"]
  6. On the RHGS node, where heketi is running, execute the following commands:

    1. Create the heketidbstorage volume:

      # heketi-cli volume create --size=2 --name=heketidbstorage
    2. Mount the volume:

      # mount  -t glusterfs 192.168.11.192:heketidbstorage /mnt/

      where 192.168.11.192 is the IP address of one of the RHGS nodes.

    3. Stop the heketi service:

      # systemctl stop heketi
    4. Disable the heketi service:

      # systemctl disable heketi
    5. Copy the heketi db to the heketidbstorage volume:

      # cp /var/lib/heketi/heketi.db /mnt/
    6. Unmount the volume:

      # umount /mnt
    7. Copy the following files from the heketi node to the master node:

      # scp /etc/heketi/heketi.json topology.json /etc/heketi/heketi_key OCP_master_node:/root/

      where OCP_master_node is the hostname of the master node.

  7. On the master node, set the environment variables for the following three files that were copied from the heketi node. Add the following lines to the ~/.bashrc file and run the bash command to apply and save the changes:

    export SSH_KEYFILE=heketi_key
    export TOPOLOGY=topology.json
    export HEKETI_CONFIG=heketi.json
    Note

    If you have changed the value for "keyfile" in /etc/heketi/heketi.json to a different value, change the value here accordingly.
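The exports above can be appended and applied in one pass. A sketch using a stand-in rc file (substitute ~/.bashrc in practice):

```shell
# Append the variables to a stand-in rc file (use ~/.bashrc in practice).
cat >> ./bashrc.example <<'EOF'
export SSH_KEYFILE=heketi_key
export TOPOLOGY=topology.json
export HEKETI_CONFIG=heketi.json
EOF
# Source the file so the variables take effect in the current shell.
. ./bashrc.example
echo "$HEKETI_CONFIG"
```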

  8. Execute the following command to create a secret to hold the configuration file:

    # oc create secret generic heketi-config-secret --from-file=${SSH_KEYFILE} --from-file=${HEKETI_CONFIG} --from-file=${TOPOLOGY}
    
    secret/heketi-config-secret created
  9. Execute the following command to label the secret:

    # oc label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
    
    secret/heketi-config-secret labeled
  10. Get the IP addresses of all the glusterfs nodes from the heketi-gluster-endpoints.yaml file. For example:

    # cat heketi-gluster-endpoints.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: heketi-storage-endpoints
    subsets:
    - addresses:
      - ip: 192.168.11.208
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.11.176
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.11.192
      ports:
      - port: 1

    In the above example, 192.168.11.208, 192.168.11.176, 192.168.11.192 are the glusterfs nodes.
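If you prefer to generate the endpoints file rather than write it by hand, the layout above can be scripted. A sketch using the example IPs from this section (adjust NODES for your cluster):

```shell
# Glusterfs node IPs (example values from this section).
NODES="192.168.11.208 192.168.11.176 192.168.11.192"
{
  printf 'apiVersion: v1\nkind: Endpoints\nmetadata:\n  name: heketi-storage-endpoints\nsubsets:\n'
  # Emit one subset entry per node, matching the example layout above.
  for ip in $NODES; do
    printf -- '- addresses:\n  - ip: %s\n  ports:\n  - port: 1\n' "$ip"
  done
} > heketi-gluster-endpoints.yaml
cat heketi-gluster-endpoints.yaml
```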

  11. Execute the following command to create the endpoints:

    # oc create -f ./heketi-gluster-endpoints.yaml
  12. Create a service definition file as shown in the following example, and then execute the command to create the service:

    # cat heketi-gluster-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: heketi-storage-endpoints
    spec:
      ports:
      - port: 1

    # oc create -f ./heketi-gluster-service.yaml
  13. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -
    service/heketi created
    route.route.openshift.io/heketi created
    deploymentconfig.apps.openshift.io/heketi created
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  14. To verify that Heketi has migrated, execute the following command on the master node:

    # oc rsh po/<heketi-pod-name>

    For example:

    # oc rsh po/heketi-1-p65c6
  15. Execute the following command to check the cluster IDs:

    # heketi-cli cluster list

    From the output, verify that the cluster ID matches that of the old cluster.

8.2.3. Upgrading if the existing version was deployed using cns-deploy

8.2.3.1. Upgrading Heketi in OpenShift node

The following commands must be executed on the client machine.

  1. Execute the following command to update the heketi client and cns-deploy packages:

    # yum update cns-deploy -y
    # yum update heketi-client -y
  2. Back up the Heketi database file:

    # heketi-cli db dump > heketi-db-dump-$(date -I).json
  3. Execute the following command to get the current HEKETI_ADMIN_KEY.

    The OCS administrator can set any phrase as the user key, as long as it is not already in use by their infrastructure. It is not used by any of the default OCS installed resources.

    # oc get secret <heketi-admin-secret-name> -o jsonpath='{.data.key}'|base64 -d;echo

    Where <heketi-admin-secret-name> is the name of the heketi admin secret created by the user.
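The secret stores the key base64-encoded, and the trailing base64 -d in the command above recovers the plain-text value. A minimal sketch of that decoding step (adminkey is a made-up example value, not a real key):

```shell
# Encode a made-up admin key the way Kubernetes stores secret data,
# then decode it as the oc command above does.
encoded=$(printf '%s' 'adminkey' | base64)
printf '%s' "$encoded" | base64 -d; echo
```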

  4. Execute the following command to delete the heketi template.

    # oc delete templates heketi
  5. Execute the following command to install the heketi template.

    # oc create -f /usr/share/heketi/templates/heketi-template.yaml
    template "heketi" created
    • Execute the following command to grant the heketi Service Account the necessary privileges.

      # oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
      # oc adm policy add-scc-to-user privileged -z heketi-service-account

      For example,

      # oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
      # oc adm policy add-scc-to-user privileged -z heketi-service-account
      Note

      The service account used in the heketi pod needs to be privileged because the Heketi/rhgs-volmanager pod mounts the heketidbstorage Gluster volume as a "glusterfs" volume type and not as a PersistentVolume (PV).
      As per the security-context-constraints regulations in OpenShift, the ability to mount volumes that are not of type configMap, downwardAPI, emptyDir, hostPath, nfs, persistentVolumeClaim, or secret is granted only to accounts with the privileged Security Context Constraint (SCC).

  6. Execute the following command to generate a new heketi configuration file.

    # sed -e "s/\${HEKETI_EXECUTOR}/ssh/" -e "s#\${HEKETI_FSTAB}#/etc/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
    • The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#Block_Storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
    • Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.

      Note

      JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
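The substitution in step 6 and the strict-JSON requirement can be exercised together. A sketch on a stand-in template fragment (the real template lives at /usr/share/heketi/templates/heketi.json.template; python3 is assumed to be available for the validity check):

```shell
# Stand-in fragment with two of the template variables.
cat > heketi.json.template <<'EOF'
{ "executor": "${HEKETI_EXECUTOR}", "sshexec": { "port": "${SSH_PORT}" } }
EOF
# Substitute the variables, as in step 6 above.
sed -e "s/\${HEKETI_EXECUTOR}/ssh/" -e "s/\${SSH_PORT}/22/" heketi.json.template > heketi.json
# Strict-JSON check: a trailing comma or unquoted value would fail here.
python3 -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"
```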

  7. Execute the following command to create a secret to hold the configuration file.

    # oc create secret generic heketi-config-secret --from-file=private_key=${SSH_KEYFILE} --from-file=./heketi.json
    Note

    If the heketi-config-secret secret already exists, delete it and then run the preceding command again.

  8. Execute the following command to delete the deployment configuration, service, and route for heketi:

    # oc delete deploymentconfig,service,route heketi
  9. Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, and HEKETI_EXECUTOR parameters.

    # oc edit template heketi
    parameters:
      - description: Set secret for those creating volumes as type user
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user admin
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: ssh
      - description: Set the fstab path, file that is populated with bricks that heketi creates
        displayName: heketi fstab path
        name: HEKETI_FSTAB
        value: /etc/fstab
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-storage
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.6
      - description: A unique name to identify this heketi service, useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
Note

If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

  1. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -
    
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  2. Execute the following command to verify that the containers are running:

    # oc get pods

    For example:

    # oc get pods
      NAME                             READY     STATUS    RESTARTS   AGE
      glusterfs-0h68l                  1/1       Running   0          3d
      glusterfs-0vcf3                  1/1       Running   0          3d
      glusterfs-gr9gh                  1/1       Running   0          3d
      heketi-1-zpw4d                   1/1       Running   0          3h
      storage-project-router-2-db2wl   1/1       Running   0          4d

8.2.3.2. Upgrading Gluster Block

Execute the following steps to upgrade gluster block.

Note

The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL 7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:

# uname -r

Reboot the node for the latest kernel update to take effect.
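The check above can be scripted. A sketch that compares the running kernel with the version recommended in the note (assumes GNU sort, whose -V option orders version strings):

```shell
# Required kernel version, taken from the note above.
required="3.10.0-862.14.4.el7.x86_64"
current=$(uname -r)
# sort -V orders version strings; if the required version sorts first
# (or equal), the running kernel is at least the required version.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $current is recent enough"
else
  echo "kernel $current is older than $required; update and reboot"
fi
```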

  1. Execute the following command to upgrade the gluster block:

    # yum update gluster-block
  2. Enable and start the gluster block service:

    # systemctl enable gluster-blockd
    # systemctl start gluster-blockd
  3. To use gluster block, add the following two parameters to the glusterfs section in the heketi configuration file at /etc/heketi/heketi.json:

    auto_create_block_hosting_volume
    block_hosting_volume_size

    Where:

    auto_create_block_hosting_volume: Creates block hosting volumes automatically if none are found or if the existing volumes are exhausted. To enable this, set the value to true.

    block_hosting_volume_size: New block hosting volumes are created with the size (in GB) specified here. This is considered only if auto_create_block_hosting_volume is set to true. The recommended size is 500.

    For example:

    .....
      .....
      "glusterfs" : {
          "executor" : "ssh",
    
          "db" : "/var/lib/heketi/heketi.db",
    
          "sshexec" : {
          "rebalance_on_expansion": true,
          "keyfile" : "/etc/heketi/private_key"
          },
    
          "auto_create_block_hosting_volume": true,
    
          "block_hosting_volume_size": 500
        },
      .....
    .....
  4. Restart the Heketi service:

    # systemctl restart heketi
    Note

    This step is not applicable if heketi is running as a pod in the Openshift cluster.

  5. If a gluster-block-provisioner pod already exists, delete it by executing the following commands:

    # oc delete dc <gluster-block-dc>

    For example:

    # oc delete dc glusterblock-provisioner-dc
  6. Delete the following resources from the old pod:

    If you have glusterfs pods:

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-provisioner
    serviceaccount "glusterblock-provisioner" deleted
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner

    If you have registry pods:

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-provisioner
    serviceaccount "glusterblock-provisioner" deleted
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
  7. Execute the following commands to deploy the gluster-block provisioner:

    # sed -e 's/\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner

    For example:

    # sed -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner

8.2.4. Upgrading if the existing version was deployed using Ansible

8.2.4.1. Upgrading Heketi in OpenShift node

The following commands must be executed on the client machine.

  1. Execute the following command to update the heketi client:

    # yum update heketi-client -y
  2. Back up the Heketi database file:

    # heketi-cli db dump > heketi-db-dump-$(date -I).json
  3. Execute the following command to get the current HEKETI_ADMIN_KEY:

    The OCS administrator can set any phrase as the user key, as long as it is not already in use by their infrastructure. It is not used by any of the default OCS installed resources.

    # oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
  4. Execute the following command to delete the heketi template.

     # oc delete templates heketi
  5. Execute the following command to install the heketi template.

    # oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
    template "heketi" created
  6. Execute the following step to edit the template:

    # oc get templates
      NAME                      DESCRIPTION                          PARAMETERS    OBJECTS
      glusterblock-provisioner  glusterblock provisioner template    3 (2 blank)   4
      glusterfs                 GlusterFS DaemonSet template         5 (1 blank)   1
      heketi                    Heketi service deployment template   7 (3 blank)   3

If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME, and HEKETI_LVM_WRAPPER parameters as shown in the example below.

Note

The value of the HEKETI_LVM_WRAPPER parameter points to the wrapper command for LVM. In independent mode setups, the wrapper is not required; change the value to an empty string as shown below.

# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type user
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user admin
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: ssh
- description: Set the fstab path, file that is populated with bricks that heketi creates
  displayName: heketi fstab path
  name: HEKETI_FSTAB
  value: /etc/fstab
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.6
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container.
  name: HEKETI_LVM_WRAPPER
  displayName: Wrapper for executing LVM commands
  value: ""

If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.

parameters:
  - description: Set secret for those creating volumes as type user
    displayName: Heketi User Secret
    name: HEKETI_USER_KEY
    value: <heketiuserkey>
  - description: Set secret for administration of the Heketi service as user admin
    displayName: Heketi Administrator Secret
    name: HEKETI_ADMIN_KEY
    value: <adminkey>
  - description: Set the executor type, kubernetes or ssh
    displayName: heketi executor type
    name: HEKETI_EXECUTOR
    value: ssh
  - description: Set the fstab path, file that is populated with bricks that heketi creates
    displayName: heketi fstab path
    name: HEKETI_FSTAB
    value: /etc/fstab
  - description: Set the hostname for the route URL
    displayName: heketi route name
    name: HEKETI_ROUTE
    value: heketi-storage
  - displayName: heketi container image name
    name: IMAGE_NAME
    required: true
    value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.6
  - description: A unique name to identify this heketi service, useful for running multiple heketi instances
    displayName: GlusterFS cluster name
    name: CLUSTER_NAME
    value: storage
  - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container
    name: HEKETI_LVM_WRAPPER
    displayName: Wrapper for executing LVM commands
    value: ""
Note

If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

  1. Execute the following command to delete the deployment configuration, service, and route for heketi:

    # oc delete deploymentconfig,service,route heketi-storage
  2. Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  3. Execute the following command to verify that the containers are running:

    # oc get pods

    For example:

    # oc get pods
      NAME                              READY  STATUS  RESTARTS AGE
      glusterfs-registry-0h68l           1/1   Running   0      3d
      glusterfs-registry-0vcf3           1/1   Running   0      3d
      glusterfs-registry-gr9gh           1/1   Running   0      3d
      heketi-registry-1-zpw4d            1/1   Running   0      3h
      storage-project-router-2-db2wl     1/1   Running   0      4d

8.2.4.2. Upgrading Gluster Block if Deployed by Using Ansible

Execute the following steps to upgrade gluster block.

Note

The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL 7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:

# uname -r

Reboot the node for the latest kernel update to take effect.

  1. Execute the following command to upgrade the gluster block:

    # yum update gluster-block
  2. Enable and start the gluster block service:

    # systemctl enable gluster-blockd
    # systemctl start gluster-blockd
  3. Execute the following command to update the heketi client:

    # yum update heketi-client -y
  4. Restart the Heketi service:

    # systemctl restart heketi
    Note

    This step is not applicable if heketi is running as a pod in the Openshift cluster.

  5. Execute the following command to delete the old glusterblock provisioner template.

     # oc delete templates glusterblock-provisioner
  6. Register the new glusterblock provisioner template: copy the template from Templates on GitHub into a file such as new-block-prov.yaml, and then execute the following command. For example:

    # oc create -f new-block-prov.yaml
    template.template.openshift.io/glusterblock-provisioner created
  7. If a gluster-block-provisioner pod already exists, delete it by executing the following commands.

    For glusterfs namespace:

    # oc delete dc glusterblock-storage-provisioner-dc

    For glusterfs-registry namespace:

    # oc delete dc glusterblock-registry-provisioner-dc
  8. Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE parameters.

    # oc get templates
    NAME                      DESCRIPTION                          PARAMETERS    OBJECTS
    glusterblock-provisioner  glusterblock provisioner template    3 (2 blank)   4
    glusterfs                 GlusterFS DaemonSet template         5 (1 blank)   1
    heketi                    Heketi service deployment template   7 (3 blank)   3

    If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows. For example:

    # oc edit template glusterblock-provisioner
    - displayName: glusterblock provisioner container image name
      name: IMAGE_NAME
      required: true
      value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
    - displayName: glusterblock provisioner container image version
      name: IMAGE_VERSION
      required: true
      value: v3.11.6
    - description: The namespace in which these resources are being created
      displayName: glusterblock provisioner namespace
      name: NAMESPACE
      required: true
      value: glusterfs
    - description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage

    If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows. For example:

    # oc edit template glusterblock-provisioner
    - displayName: glusterblock provisioner container image name
      name: IMAGE_NAME
      required: true
      value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.6
    - description: The namespace in which these resources are being created
      displayName: glusterblock provisioner namespace
      name: NAMESPACE
      required: true
      value: glusterfs
    - description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
  9. Delete the following resources from the old pod.

    If you have glusterfs pods:

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-storage-provisioner
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-storage-provisioner

    If you have registry pods:

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-registry-provisioner
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
  10. Before running oc process, determine the correct provisioner name. If more than one gluster block provisioner is running in your cluster, each name must differ from all other provisioner names.
    For example,

    • If there are two or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where <namespace> is replaced by the namespace that the provisioner is deployed in.
    • If there is only one provisioner, installed prior to 3.11.5, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
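The per-namespace naming rule can be expressed as a one-liner; app-storage is the example namespace used later in this section:

```shell
# Derive the per-namespace provisioner name described above.
namespace="app-storage"
provisioner="gluster.org/glusterblock-${namespace}"
echo "$provisioner"
```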
  11. After editing the template, execute the following command to create the deployment configuration:

    # oc process -p PROVISIONER_NAME=<provisioner-name> glusterblock-provisioner -o yaml | oc create -f -

    For example:

     # oc process -p PROVISIONER_NAME=gluster.org/glusterblock-app-storage glusterblock-provisioner -o yaml | oc create -f -
     clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
     serviceaccount/glusterblock-storage-provisioner created
     clusterrolebinding.authorization.openshift.io/glusterblock-storage-provisioner created
     deploymentconfig.apps.openshift.io/glusterblock-storage-provisioner-dc created
  12. All storage classes that use gluster block volume provisioning must exactly match one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:

    # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>

    Example:

    # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep app-storage
    glusterfs-storage-block   gluster.org/glusterblock-app-storage   app-storage

    Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated; if it already matches, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
    For every storage class in this list, do the following:

    # oc get sc  -o yaml <storageclass>  > storageclass-to-edit.yaml
    # oc delete sc  <storageclass>
    # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -

    Example:

    # oc get sc  -o yaml gluster-storage-block  > storageclass-to-edit.yaml
    # oc delete sc  gluster-storage-block
    # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-app-storage,' storageclass-to-edit.yaml | oc create -f -
    storageclass.storage.k8s.io/glusterfs-registry-block created
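The sed expression anchors the match with $, so a provisioner name that already carries a namespace suffix is left untouched. A sketch on stand-in manifest fragments (file names are illustrative):

```shell
# Two stand-in storage class fragments: one with the bare provisioner
# name, one already suffixed with a namespace.
printf 'provisioner: gluster.org/glusterblock\n' > bare.yaml
printf 'provisioner: gluster.org/glusterblock-app-storage\n' > suffixed.yaml
# The $ anchor means only the bare name is rewritten.
sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-app-storage,' bare.yaml > bare.out
sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-app-storage,' suffixed.yaml > suffixed.out
cat bare.out suffixed.out
```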

8.2.5. Enabling S3 Compatible Object store

Support for S3 compatible Object Store is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#S3_Object_Store.

8.3. Upgrading nodes and pods in glusterfs registry group

Follow the steps in the following sections to upgrade your gluster nodes and heketi pods in the glusterfs registry namespace.

8.3.1. Upgrading the Red Hat Gluster Storage Registry Cluster

To upgrade the Red Hat Gluster Storage cluster, see In-Service Software Upgrade.

8.3.1.1. Upgrading Heketi Registry pod

Note

If Heketi is not in an OpenShift node, you have to migrate Heketi from the RHGS node to an OpenShift node. For more information on how to migrate, refer to Section 8.2.2, “Upgrading/Migration of Heketi in RHGS node”.

To upgrade the Heketi registry pods, perform the following steps:

The following commands must be executed on the client machine.

  1. Execute the following command to update the heketi client:

    # yum update heketi-client -y
  2. Back up the Heketi registry database file:

    # heketi-cli db dump > heketi-db-dump-$(date -I).json
  3. Execute the following command to get the current HEKETI_ADMIN_KEY:

    The OCS administrator can choose to set any phrase as the user key, as long as it is not used elsewhere by their infrastructure. It is not used by any of the OCS default installed resources.

    # oc get secret heketi-registry-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
  4. Execute the following command to delete the heketi template.

    # oc delete templates heketi
  5. Execute the following command to install the heketi template.

    # oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
    template "heketi" created
    • Execute the following command to list the installed templates:

      # oc get templates
      NAME			     DESCRIPTION		           PARAMETERS	OBJECTS
      glusterblock-  glusterblock provisioner  3 (2 blank)	4
      provisioner    template
      heketi         Heketi service deployment 7 (3 blank)	3
      template

If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the following example:

Note

The value of the HEKETI_LVM_WRAPPER parameter points to the wrapper command for LVM. In independent mode setups the wrapper is not required; change the value to an empty string as shown below.

# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: heketiuserkey
- description: Set secret for administration of the Heketi service as
  user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: adminkey
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: ssh
- description: Set the fstab path, file that is populated with bricks
  that heketi creates
  displayName: heketi fstab path
  name: HEKETI_FSTAB
  value: /etc/fstab
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-registry
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.6
- description: A unique name to identify this heketi service, useful
  for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container
  name: HEKETI_LVM_WRAPPER
  displayName: Wrapper for executing LVM commands
  value: ""
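
Whether the installed template falls into this case or the IMAGE_NAME-only case below can be determined from its parameter list. A hedged sketch, run here against a minimal sample file standing in for the real export (on a live cluster, the export would come from oc get template heketi -o json):

```shell
# On a live cluster: oc get template heketi -o json > heketi-template.json
# Here a minimal sample stands in for that export.
cat > heketi-template.json <<'EOF'
{"parameters": [
  {"name": "IMAGE_NAME"},
  {"name": "IMAGE_VERSION"},
  {"name": "HEKETI_ROUTE"}
]}
EOF

# If IMAGE_VERSION is a separate parameter, edit both IMAGE_NAME and
# IMAGE_VERSION; otherwise the image tag is embedded in IMAGE_NAME.
if grep -q '"name": "IMAGE_VERSION"' heketi-template.json; then
    echo "separate IMAGE_NAME and IMAGE_VERSION parameters"
else
    echo "image tag embedded in IMAGE_NAME"
fi
```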

If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the following example:

parameters:
- description: Set secret for those creating volumes as type user
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: heketiuserkey
- description: Set secret for administration of the Heketi service as user admin
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: adminkey
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: ssh
- description: Set the fstab path, file that is populated with bricks that heketi creates
  displayName: heketi fstab path
  name: HEKETI_FSTAB
  value: /etc/fstab
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-registry
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.6
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container
  name: HEKETI_LVM_WRAPPER
  displayName: Wrapper for executing LVM commands
  value: ""
Note

If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

  1. Execute the following command to delete the deployment configuration, service, and route for heketi:

    # oc delete deploymentconfig,service,route heketi-registry
  2. Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -
    service "heketi-registry" created
    route "heketi-registry" created
    deploymentconfig "heketi-registry" created
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  3. Execute the following command to verify that the containers are running:

    # oc get pods

    For example:

    # oc get pods
    NAME                              READY  STATUS  RESTARTS AGE
    heketi-registry-1-zpw4d            1/1   Running   0      3h
    glusterblock-registry-provisioner- 1/1   Running   0      21h
    dc-1-c59rn
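
Beyond oc get pods, the upgraded Heketi endpoint itself can be probed: Heketi serves a /hello route and answers heketi-cli queries. The sketch below defaults to only printing the commands (DRY_RUN=true), because the route hostname and admin key shown are deployment-specific placeholders:

```shell
# Sketch: post-upgrade checks for the heketi registry service.
# HEKETI_URL and the admin secret are placeholders; DRY_RUN=true prints the
# commands instead of running them against a live cluster.
DRY_RUN=${DRY_RUN:-true}
run() { if [ "$DRY_RUN" = true ]; then echo "+ $*"; else "$@"; fi; }

HEKETI_URL=http://heketi-registry.example.com

run curl -s "$HEKETI_URL/hello"        # Heketi replies: Hello from Heketi
run heketi-cli --server "$HEKETI_URL" --user admin --secret adminkey cluster list
```

Set DRY_RUN=false only on a host that can reach the route and holds the real admin key retrieved earlier from the heketi-registry-admin-secret.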

8.3.2. Upgrading glusterblock-provisioner Pod

To upgrade the glusterblock-provisioner pods, perform the following steps:

  1. Execute the following command to delete the old glusterblock provisioner template.

    # oc delete templates glusterblock-provisioner
  2. Execute the following command to register the new glusterblock provisioner template. Copy the template from Templates on GitHub and paste it into new-block-prov.yaml. For example:

    # oc create -f new-block-prov.yaml
    template.template.openshift.io/glusterblock-provisioner created
  3. If a glusterblock-provisioner pod already exists, then delete it by executing the following commands:

    # oc delete dc <gluster-block-registry-dc>

    For example:

    # oc delete dc glusterblock-registry-provisioner-dc
  4. Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.

    # oc get templates
    NAME           DESCRIPTION            PARAMETERS  OBJECTS
    glusterblock-  glusterblock           3 (2 blank)   4
    provisioner    provisioner template
    heketi         Heketi service         7 (3 blank)   3
    deployment template

    If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows:

    # oc edit template glusterblock-provisioner
      - displayName: glusterblock provisioner container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
      - displayName: glusterblock provisioner container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.6
      - description: The namespace in which these resources are being created
        displayName: glusterblock provisioner namespace
        name: NAMESPACE
        required: true
        value: glusterfs-registry
      - description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: registry

    If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows:

    # oc edit template glusterblock-provisioner
      - displayName: glusterblock provisioner container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.6
      - description: The namespace in which these resources are being created
        displayName: glusterblock provisioner namespace
        name: NAMESPACE
        required: true
        value: glusterfs-registry
      - description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: registry
  5. Delete the following resources from the old pod:

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-registry-provisioner
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
  6. Before running oc process, determine the correct provisioner name. If more than one gluster block provisioner is running in your cluster, the name must differ from those of all other provisioners.
    For example,

    • If there are 2 or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where <namespace> is replaced by the namespace that the provisioner is deployed in.
    • If there is only one provisioner, installed prior to 3.11.5, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
  7. After editing the template, execute the following command to create the deployment configuration:

    # oc process -p PROVISIONER_NAME=<provisioner-name> glusterblock-provisioner -o yaml | oc create -f -

    For example:

     # oc process -p PROVISIONER_NAME=gluster.org/glusterblock-infra-storage glusterblock-provisioner -o yaml | oc create -f -
      clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
      serviceaccount/glusterblock-registry-provisioner created
      clusterrolebinding.authorization.openshift.io/glusterblock-registry-provisioner created
      deploymentconfig.apps.openshift.io/glusterblock-registry-provisioner-dc created
  8. All storage classes that use gluster block volume provisioning must match exactly one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:

    # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>

    Example:

    # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep infra-storage
      glusterfs-registry-block   gluster.org/glusterblock               infra-storage

    Check each storage class's provisioner name; if it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
    For every storage class in this list do the following:

    # oc get sc  -o yaml <storageclass>  > storageclass-to-edit.yaml
    # oc delete sc  <storageclass>
    # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -

    Example:

    # oc get sc -o yaml glusterfs-registry-block > storageclass-to-edit.yaml
    # oc delete sc glusterfs-registry-block
    storageclass.storage.k8s.io "glusterfs-registry-block" deleted
    # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-infra-storage,' storageclass-to-edit.yaml | oc create -f -
    storageclass.storage.k8s.io/glusterfs-registry-block created
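
The check in step 8 can be scripted: from the NAME/PROV/RSNS listing, only the rows whose provisioner is still the bare gluster.org/glusterblock for the namespace in question need the rename. A sketch against a sample listing (on a live cluster the input would come from the oc get sc command shown above, and the namespace is an example):

```shell
NAMESPACE=infra-storage

# Sample listing standing in for the "oc get sc -o custom-columns=..." output.
cat > sc-listing.txt <<'EOF'
glusterfs-registry-block   gluster.org/glusterblock                 infra-storage
other-block                gluster.org/glusterblock-infra-storage   infra-storage
EOF

# Print only the storage classes that still use the bare provisioner name
# for this namespace and therefore need the namespace suffix added.
awk -v ns="$NAMESPACE" '$2 == "gluster.org/glusterblock" && $3 == ns {print $1}' \
    sc-listing.txt > sc-to-update.txt
cat sc-to-update.txt
```

Each name printed would then go through the export, delete, sed, and recreate cycle shown above.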

8.3.3. Upgrading Gluster Block

To upgrade the gluster block, perform the following steps:

  1. Execute the following command to upgrade the gluster block:

    # yum update gluster-block
    • Enable and start the gluster block service:

      # systemctl enable gluster-blockd
      # systemctl start gluster-blockd

8.4. Upgrading the client on Red Hat OpenShift Container Platform nodes

Execute the following commands on each of the nodes:

  1. To drain the pod, execute the following command on the master node (or any node with cluster-admin access):

    # oc adm drain <node_name> --ignore-daemonsets
  2. To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access):

    # oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
  3. Execute the following command on the node to upgrade the client on the node:

    # yum update glusterfs-client
  4. To enable the node for pod scheduling, execute the following command on the master node (or any node with cluster-admin access):

    # oc adm manage-node --schedulable=true <node_name>
    • Create and add the following content to the multipath.conf file:

      Note

      Make sure that the changes to multipath.conf and reloading of multipathd are done only after all the server nodes are upgraded.

      # cat >> /etc/multipath.conf <<EOF
      # LIO iSCSI
      devices {
        device {
          vendor "LIO-ORG"
          user_friendly_names "yes" # names like mpatha
          path_grouping_policy "failover" # one path per group
          hardware_handler "1 alua"
          path_selector "round-robin 0"
          failback immediate
          path_checker "tur"
          prio "alua"
          no_path_retry 120
        }
      }
      EOF
  5. Execute the following commands to start multipath daemon and [re]load the multipath configuration:

    # systemctl start multipathd
    # systemctl reload multipathd
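
Across a large cluster, the drain, update, and re-enable cycle in Section 8.4 is often scripted per node. A hedged sketch with example node names follows; it defaults to DRY_RUN=true and only prints the commands, since the real run needs cluster-admin access and SSH reachability to each node:

```shell
# Sketch: per-node client upgrade loop. Node names are examples; set
# DRY_RUN=false only on a cluster where these commands actually apply.
DRY_RUN=${DRY_RUN:-true}
run() { if [ "$DRY_RUN" = true ]; then echo "+ $*"; else "$@"; fi; }

NODES="node1.example.com node2.example.com"
for node in $NODES; do
    run oc adm drain "$node" --ignore-daemonsets
    run ssh "$node" yum update -y glusterfs-client
    run oc adm manage-node --schedulable=true "$node"
done
# The multipath.conf change and "systemctl reload multipathd" follow only
# after all server nodes are upgraded, as noted above.
```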