Chapter 4. Updating OpenShift Container Storage in external mode

Use the following procedures to update your OpenShift Container Storage cluster deployed in external mode.

Important

Upgrading Red Hat OpenShift Container Storage Operator does not upgrade the external Red Hat Ceph Storage cluster. It only upgrades the Red Hat OpenShift Container Storage Services running on the OpenShift Container Platform.

To upgrade the external Red Hat Ceph Storage cluster, contact your Red Hat Ceph Storage administrator.

4.1. Creating a new object store user to interact with the Ceph Object Store Administrative API

For clusters that operate in external mode and consume object storage, when updating from a previous OpenShift Container Storage version to 4.8, you must create a new object store user. This user interacts with the Ceph Object Store Administrative API to ensure all operations work as expected.

Prerequisites

  • Update the OpenShift Container Platform cluster to the latest stable release of 4.8.z, see Updating Clusters.
  • Under Block and File in the Status card, confirm that the Storage Cluster has a green tick mark.
  • Under Object in the Status card, confirm that both Object Service and Data Resiliency are in Ready state (green tick).

Procedure

  1. Run the following command on the Red Hat Ceph Storage cluster:

    radosgw-admin user create --uid rgw-admin-ops-user --display-name "Rook RGW Admin Ops user" --caps "buckets=*;users=*;usage=read;metadata=read;zone=read"
  2. From the output, get the access_key and secret_key.
  3. Run the following command on the OpenShift Container Platform cluster:

      oc -n openshift-storage \
      create \
      secret \
      generic \
      --type="kubernetes.io/rook" \
      "rgw-admin-ops-user" \
      --from-literal=accessKey="$RGW_ADMIN_OPS_USER_ACCESS_KEY" \
      --from-literal=secretKey="$RGW_ADMIN_OPS_USER_SECRET_KEY"

    Where RGW_ADMIN_OPS_USER_ACCESS_KEY and RGW_ADMIN_OPS_USER_SECRET_KEY are environment variables that contain the user’s access key and secret key. One way to populate them is sketched after this procedure.

  4. Proceed with the upgrade.
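
The keys can be captured in whatever way fits your environment. The following is a minimal sketch, assuming the jq utility is available on the Red Hat Ceph Storage client node, that reads the keys of the new user back into the variables used in step 3:

    # On the external Red Hat Ceph Storage cluster: read the keys of the user
    # created in step 1 into environment variables (requires jq).
    RGW_ADMIN_OPS_USER_ACCESS_KEY=$(radosgw-admin user info --uid rgw-admin-ops-user | jq -r '.keys[0].access_key')
    RGW_ADMIN_OPS_USER_SECRET_KEY=$(radosgw-admin user info --uid rgw-admin-ops-user | jq -r '.keys[0].secret_key')

If the oc command is run from a different host, copy the key values and export the variables there before creating the secret. After step 3, you can confirm that the secret exists by running oc -n openshift-storage get secret rgw-admin-ops-user.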

4.2. Enabling automatic updates for OpenShift Container Storage operator in external mode

Use this procedure to enable automatic update approval for the OpenShift Container Storage operator in OpenShift Container Platform.

Note

Updating OpenShift Container Storage will not update the external Red Hat Ceph Storage cluster.

Prerequisites

  • Red Hat Ceph Storage version 4.2z1 or later is required for the external cluster. For more information, see this knowledge base article on Red Hat Ceph Storage releases and corresponding Ceph package versions. One way to check the installed version is sketched after this list.
  • Update the OpenShift Container Platform cluster to the latest stable release of version 4.8.X, see Updating Clusters.
  • Switch the Red Hat OpenShift Container Storage channel from stable-4.7 to stable-4.8. For details about channels, see OpenShift Container Storage upgrade channels and releases.

    Note

    You are required to switch channels only when you are updating minor versions (for example, updating from 4.7 to 4.8) and not when updating between batch updates of 4.8 (for example, updating from 4.8.0 to 4.8.1).

  • Ensure that all OpenShift Container Storage Pods, including the operator pods, are in Running state in the openshift-storage namespace.

    To view the state of the pods, click Workloads → Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop down list.

  • Under Block and File in the Status card, confirm that the Storage Cluster has a green tick mark.
  • Under Object in the Status card, confirm that both Object Service and Data Resiliency are in Ready state (green tick).
  • Ensure that you have sufficient time to complete the OpenShift Container Storage update process.
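
To confirm the version of the external cluster before updating, your Red Hat Ceph Storage administrator can run the following minimal check on a node that has access to the cluster, and compare the reported versions against the knowledge base article referenced above:

    # On the external Red Hat Ceph Storage cluster: report the Ceph versions of
    # all running daemons.
    ceph versions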

Procedure

  1. Log in to OpenShift Web Console.
  2. Click Operators → Installed Operators.
  3. Select the openshift-storage project.
  4. Click on the OpenShift Container Storage operator name.
  5. Click the Subscription tab and click the link under Approval.
  6. Select Automatic (default) and click Save. A CLI alternative to this step is sketched after this procedure.
  7. Perform one of the following depending on the Upgrade Status:

    • Upgrade Status shows requires approval.

      Note

      The Upgrade Status shows requires approval if the new OpenShift Container Storage version is already detected in the channel and the approval strategy was changed from Manual to Automatic at the time of the update.

      1. Click the Install Plan link.
      2. On the InstallPlan Details page, click Preview Install Plan.
      3. Review the install plan and click Approve.
      4. Wait for the Status to change from Unknown to Created.
      5. Click Operators → Installed Operators.
      6. Select the openshift-storage project.
      7. Wait for the Status to change to Up to date.
    • Upgrade Status does not show requires approval:

      1. Wait for the update to initiate. This may take up to 20 minutes.
      2. Click Operators → Installed Operators.
      3. Select the openshift-storage project.
      4. Wait for the Status to change to Up to date.
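
If you prefer the CLI, the approval strategy change from steps 5 and 6 can also be applied by patching the operator subscription. This is a minimal sketch; the subscription name (shown here as ocs-operator) can differ in your cluster, so list the subscriptions first:

    # On the OpenShift Container Platform cluster: find the subscription name,
    # then switch its install plan approval to Automatic.
    oc get subscription -n openshift-storage
    oc patch subscription ocs-operator -n openshift-storage \
      --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'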

Verification steps

  1. On the OpenShift Web Console, navigate to Storage → Overview → Object tab.

    • In the Status card, verify that both Object Service and Data Resiliency are in Ready state (green tick).
  2. On the OpenShift Web Console, navigate to Storage → Overview → Block and File tab.

    • In the Status card, verify that the Storage Cluster has a green tick mark.
  3. Click Operators → Installed Operators → OpenShift Container Storage Operator. Under Storage Cluster, verify that the cluster service status is Ready.

    Note

    Once updated from OpenShift Container Storage version 4.7 to 4.8, the Version field here will still display 4.7. This is because the ocs-operator does not update the string represented in this field.

  4. Ensure that all OpenShift Container Storage Pods, including the operator pods, are in Running state in the openshift-storage namespace.

    To view the state of the pods, click Workloads → Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop down list. Alternatively, use the CLI commands sketched after these verification steps.

  5. If verification steps fail, contact Red Hat Support.
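
The pod and operator states from the verification steps above can also be checked from the CLI. A minimal sketch:

    # On the OpenShift Container Platform cluster: all pods should be Running or
    # Completed, and the ocs-operator ClusterServiceVersion should report the new
    # version with PHASE Succeeded.
    oc get pods -n openshift-storage
    oc get csv -n openshift-storage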

Additional Resources

If you face any issues while updating OpenShift Container Storage, see the Commonly required logs for troubleshooting section in the Troubleshooting guide.
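
For example, collecting debugging information with must-gather before contacting support usually shortens troubleshooting. The following is a minimal sketch, assuming the image tag matches your OpenShift Container Storage version:

    # Collect OpenShift Container Storage logs and cluster state into <directory-name>.
    oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8:v4.8 --dest-dir=<directory-name>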

4.3. Manually updating OpenShift Container Storage operator in external mode

Use this procedure to update OpenShift Container Storage operator by providing manual approval to the install plan.

Note

Updating OpenShift Container Storage will not update the external Red Hat Ceph Storage cluster.

Prerequisites

  • Red Hat Ceph Storage version 4.2z1 or later is required for the external cluster. For more information, see this knowledge base article on Red Hat Ceph Storage releases and corresponding Ceph package versions.
  • Update the OpenShift Container Platform cluster to the latest stable release of version 4.8.X, see Updating Clusters.
  • Switch the Red Hat OpenShift Container Storage channel from stable-4.7 to stable-4.8. For details about channels, see OpenShift Container Storage upgrade channels and releases.

    Note

    You are required to switch channels only when you are updating minor versions (for example, updating from 4.7 to 4.8) and not when updating between batch updates of 4.8 (for example, updating from 4.8.0 to 4.8.1).

  • Ensure that all OpenShift Container Storage Pods, including the operator pods, are in Running state in the openshift-storage namespace.

    To view the state of the pods, click Workloads → Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop down list.

  • Under Block and File in the Status card, confirm that the Storage Cluster has a green tick mark.
  • Under Object in the Status card, confirm that both Object Service and Data Resiliency are in Ready state (green tick).
  • Ensure that you have sufficient time to complete the OpenShift Container Storage update process.

Procedure

  1. Log in to OpenShift Web Console.
  2. Click Operators → Installed Operators.
  3. Select the openshift-storage project.
  4. Click the OpenShift Container Storage operator name.
  5. Click the Subscription tab and click the link under Approval.
  6. Select Manual and click Save.
  7. Wait for the Upgrade Status to change to Upgrading.
  8. If the Upgrade Status shows requires approval, click on requires approval.
  9. On the InstallPlan Details page, click Preview Install Plan.
  10. Review the install plan and click Approve. A CLI alternative for approving the install plan is sketched after this procedure.
  11. Wait for the Status to change from Unknown to Created.
  12. Click Operators → Installed Operators.
  13. Select the openshift-storage project.
  14. Wait for the Status to change to Up to date.
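
The manual approval in steps 8 through 11 can also be performed from the CLI by approving the pending InstallPlan. This is a minimal sketch; replace the placeholder with the InstallPlan name reported by the first command:

    # On the OpenShift Container Platform cluster: list install plans, then
    # approve the pending plan for the new OpenShift Container Storage version.
    oc get installplan -n openshift-storage
    oc patch installplan <install-plan-name> -n openshift-storage \
      --type merge -p '{"spec":{"approved":true}}'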

Verification steps

  1. On the OpenShift Web Console, navigate to Storage → Overview → Object tab.

    • In the Status card, verify that both Object Service and Data Resiliency are in Ready state (green tick).
  2. On the OpenShift Web Console, navigate to Storage → Overview → Block and File tab.

    • In the Status card, verify that the Storage Cluster has a green tick mark.
  3. Click Operators → Installed Operators → OpenShift Container Storage Operator. Under Storage Cluster, verify that the cluster service status is Ready.
  4. Ensure that all OpenShift Container Storage Pods, including the operator pods, are in Running state in the openshift-storage namespace.

    To view the state of the pods, click Workloads → Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop down list.

    Note

    Once updated from OpenShift Container Storage version 4.7 to 4.8, the Version field here will still display 4.7. This is because the ocs-operator does not update the string represented in this field.

  5. If verification steps fail, contact Red Hat Support.

Additional Resources

If you face any issues while updating OpenShift Container Storage, see the Commonly required logs for troubleshooting section in the Troubleshooting guide.

4.4. Updating the OpenShift Container Storage external secret

Update the OpenShift Container Storage external secret after updating to the latest version of OpenShift Container Storage.

Note

Updating the external secret is not required for batch updates. For example, when updating from OpenShift Container Storage 4.8.X to 4.8.Y.

Prerequisites

  • Ensure that the OpenShift Container Storage cluster has been updated to the latest version of 4.8.

Procedure

  1. Download the OpenShift Container Storage version of the ceph-external-cluster-details-exporter.py python script.

    # oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
  2. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.

    # python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user=<client_name_used_for_OCS_4.7_install>
    --run-as-user

    The client name used during the OpenShift Container Storage 4.7 deployment. If this option was not used during the deployment of OpenShift Container Storage 4.7, the default client name client.healthchecker is set.

    The updated permissions for the user are set as:

    caps: [mgr] allow command config
    caps: [mon] allow r, allow command quorum_status, allow command version
    caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
  3. On the external Red Hat Ceph Storage cluster, run the previously downloaded python script and save the JSON output that is generated.

    1. Run the previously downloaded python script:

      # python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <client_name_used_for_OCS_4.7_install>  [optional arguments]
      --rbd-data-pool-name
      Is a mandatory parameter used for providing block storage in OpenShift Container Storage.
      --rgw-endpoint
      Is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Container Storage. Provide the endpoint in the following format: <ip_address>:<port>.
      --monitoring-endpoint
      Is optional. It is the IP address of the active ceph-mgr reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
      --monitoring-endpoint-port
      Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
      --run-as-user

      The client name used during the OpenShift Container Storage 4.7 deployment. If this option was not used during the deployment of OpenShift Container Storage 4.7, the default client name client.healthchecker is set.

      Note

      Ensure that all the parameters, including the optional arguments but excluding monitoring-endpoint and monitoring-endpoint-port, are the same as those used during the deployment of OpenShift Container Storage 4.7 in external mode.

    2. Save the JSON output generated after running the script in the previous step.

      Example output:

      [{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.healthchecker", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "ceph-rbd"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}]
  4. Upload the generated JSON file. A CLI alternative to the web console upload is sketched after this procedure.

    1. Log in to the OpenShift Web Console.
    2. Click Workloads → Secrets.
    3. Set project to openshift-storage.
    4. Click rook-ceph-external-cluster-details.
    5. Click Actions (⋮) → Edit Secret.
    6. Click Browse and upload the JSON file.
    7. Click Save.
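
If the web console is not available, the secret can also be updated from the CLI. The following is a minimal sketch; it assumes the JSON output from step 3 was saved as external-cluster-details.json and that the secret stores the JSON under the external_cluster_details key (confirm the key name with oc describe secret rook-ceph-external-cluster-details -n openshift-storage):

    # Rebuild the secret from the saved JSON output and replace the existing object.
    oc -n openshift-storage create secret generic rook-ceph-external-cluster-details \
      --from-file=external_cluster_details=external-cluster-details.json \
      --dry-run=client -o yaml | oc replace -f -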

Verification steps

  1. On the OpenShift Web Console, navigate to Storage → Overview → Object tab.

    • In the Status card, verify that both Object Service and Data Resiliency are in Ready state (green tick).
  2. On the OpenShift Web Console, navigate to Storage → Overview → Block and File tab.

    • In the Status card, verify that the Storage Cluster has a green tick mark.