Chapter 6. Updating the OpenShift Data Foundation external secret
Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation.
Updating the external secret is not required for batch updates within the same version, for example, when updating from OpenShift Data Foundation 4.9.X to 4.9.Y.
Prerequisites
- Update the OpenShift Container Platform cluster to the latest stable release of 4.9.z; see Updating Clusters.
- The OpenShift Container Storage operator has been upgraded to OpenShift Data Foundation version 4.9. See Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation for more information.
- Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage → OpenShift Data Foundation → Storage Systems tab and then click the storage system name. (A command-line spot check is also sketched after this list.)
- On the Overview → Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy.
- Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
- Red Hat Ceph Storage must have a Ceph dashboard installed and configured.
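As an optional alternative to the console checks above, similar health information can be read from the command line. This is only a minimal sketch, assuming the default openshift-storage namespace; resource names and output columns can vary between versions:
# oc get storagecluster -n openshift-storage
# oc get cephcluster -n openshift-storage
A healthy cluster typically reports a Ready phase for the storage cluster and HEALTH_OK for the Ceph cluster.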
Procedure
Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script.
# oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
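Optionally, you can sanity-check that the script downloaded and decoded correctly before running it, for example by letting Python confirm that it parses (this check is not part of the documented procedure):
# python3 -m py_compile ceph-external-cluster-details-exporter.py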
Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.
# python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user=<ocs_client_name>
--run-as-user
- The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
The updated permissions for the user are set as:
caps: [mgr] allow command config
caps: [mon] allow r, allow command quorum_status, allow command version
caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
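As an optional check, you can confirm the updated capabilities from any client node in the external Red Hat Ceph Storage cluster, for example with the following command (shown here for the default client.healthchecker user; substitute the client name you actually use):
# ceph auth get client.healthchecker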
Run the previously downloaded python script and save the JSON output that is generated from the external Red Hat Ceph Storage cluster.
Run the previously downloaded python script:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name> [optional arguments]
--rbd-data-pool-name
- Is a mandatory parameter used for providing block storage in OpenShift Data Foundation.
--rgw-endpoint
- Is optional. Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>.
--monitoring-endpoint
- Is optional. It accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
--monitoring-endpoint-port
- Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
--run-as-user
- The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
Note
Ensure that all the parameters, including the optional arguments, except for --monitoring-endpoint and --monitoring-endpoint-port, are the same as what was used during the deployment of OpenShift Data Foundation in external mode.
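For example, a hypothetical invocation with placeholder values (the pool name, endpoints, port, and output file name below are illustrative only and must match your own deployment) could look like this:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-data-pool --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port 9283 --rgw-endpoint xxx.xxx.xxx.xxx:8080 --run-as-user client.healthchecker > external-cluster-details.json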
Save the JSON output generated after running the script in the previous step.
Example output:
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
Upload the generated JSON file.
- Log in to the OpenShift Web Console.
- Click Workloads → Secrets.
- Set project to openshift-storage.
- Click rook-ceph-external-cluster-details.
- Click Actions (⋮) → Edit Secret.
- Click Browse and upload the JSON file.
- Click Save.
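If you want to double-check the upload from the command line, the secret contents can be decoded afterwards. This is a sketch that assumes the JSON is stored under the external_cluster_details key of the secret, which may differ between versions:
# oc get secret rook-ceph-external-cluster-details -n openshift-storage -o jsonpath='{.data.external_cluster_details}' | base64 --decode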
Verification steps
To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → OpenShift Data Foundation → Storage Systems tab and then click the storage system name.
- On the Overview → Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy.
- Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
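A quick command-line spot check of the same state is also possible; for example, the following reads the reported Ceph health from the CephCluster resource (a sketch only, as the status field layout can vary by version):
# oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].status.ceph.health}'
A value of HEALTH_OK indicates that the external cluster is reporting healthy.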