Chapter 5. Uninstalling OpenShift Data Foundation from external storage system

Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster.

Uninstall Annotations

Annotations on the storage cluster are used to change the behavior of the uninstall process. The following two annotations on the storage cluster define the uninstall behavior:

  • uninstall.ocs.openshift.io/cleanup-policy: delete
  • uninstall.ocs.openshift.io/mode: graceful
Note

The uninstall.ocs.openshift.io/cleanup-policy annotation is not applicable for external mode.

The following table describes the values that can be used with these annotations:

Table 5.1. uninstall.ocs.openshift.io uninstall annotations descriptions

Annotation       Value      Default   Behavior
cleanup-policy   delete     Yes       Rook cleans up the physical drives and the DataDirHostPath.
cleanup-policy   retain     No        Rook does not clean up the physical drives and the DataDirHostPath.
mode             graceful   Yes       Rook and NooBaa pause the uninstall process until the administrator or user removes the PVCs and OBCs.
mode             forced     No        Rook and NooBaa proceed with the uninstall even if PVCs or OBCs provisioned using Rook and NooBaa, respectively, still exist.

You can change the uninstall mode by editing the value of the annotation by using the following command:

$ oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode="forced" --overwrite
storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated
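To review the annotation values that are currently set on the storage cluster, a check similar to the following can be used; the output is the plain map of annotations on the resource:

$ oc get storagecluster ocs-external-storagecluster -n openshift-storage -o jsonpath='{.metadata.annotations}{"\n"}'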

Prerequisites

  • Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. A basic health check is shown after this list.
  • Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) that use the storage classes provided by OpenShift Data Foundation.
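  To check the overall cluster status before you begin, the storage cluster and Ceph cluster resources can be inspected; the resource names below assume a default external mode deployment, and the exact output columns vary by version:

    $ oc get storagecluster -n openshift-storage
    $ oc get cephcluster -n openshift-storage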

Procedure

  1. Delete the volume snapshots that are using OpenShift Data Foundation.

    1. List the volume snapshots from all the namespaces.

      $ oc get volumesnapshot --all-namespaces
    2. From the output of the previous command, identify and delete the volume snapshots that are using OpenShift Data Foundation.

      $ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>
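    If the cluster has many snapshots, listing the volume snapshot class for each snapshot can help identify the ones that are backed by OpenShift Data Foundation; the column layout below is only a suggestion:

      $ oc get volumesnapshot --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,SNAPSHOTCLASS:.spec.volumeSnapshotClassName'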
  2. Delete PVCs and OBCs that are using OpenShift Data Foundation.

    In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted.

    If you want to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to forced and skip this step. Doing so results in orphaned PVCs and OBCs in the system.

    1. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation.

      See Removing monitoring stack from OpenShift Data Foundation.

    2. Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation.

      See Removing OpenShift Container Platform registry from OpenShift Data Foundation.

    3. Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation.

      See Removing the cluster logging operator from OpenShift Data Foundation.

    4. Delete other PVCs and OBCs provisioned using OpenShift Data Foundation.

      • The following sample script identifies the PVCs and OBCs that were provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation.

        #!/bin/bash
        
        RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
        CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
        NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
        RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"
        
        NOOBAA_DB_PVC="noobaa-db"
        NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"
        
        # Find all the OCS StorageClasses
        OCS_STORAGECLASSES=$(oc get storageclasses | grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" | awk '{print $1}')
        
        # List PVCs in each of the StorageClasses
        for SC in $OCS_STORAGECLASSES
        do
                echo "======================================================================"
                echo "$SC StorageClass PVCs and OBCs"
                echo "======================================================================"
                oc get pvc  --all-namespaces --no-headers 2>/dev/null | grep $SC | grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
                oc get obc  --all-namespaces --no-headers 2>/dev/null | grep $SC
                echo
        done
      • Delete the OBCs.

        $ oc delete obc <obc name> -n <project name>
      • Delete the PVCs.

        $ oc delete pvc <pvc name> -n <project-name>

        Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster.
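        One way to check for remaining custom backing stores and bucket classes, assuming they were created in the openshift-storage namespace, is:

        $ oc get backingstores.noobaa.io,bucketclasses.noobaa.io -n openshift-storage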

  3. Delete the StorageSystem object and wait for the removal of the associated resources.

    $ oc delete -n openshift-storage storagesystem --all --wait=true
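    To confirm that the associated resources are removed, a check such as the following can be used; it should return no resources once the deletion finishes:

    $ oc get storagecluster,cephcluster -n openshift-storage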
  4. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project.

    For example:

    $ oc project default
    $ oc delete project openshift-storage --wait=true --timeout=5m

    The project is deleted if the following command returns a NotFound error.

    $ oc get project openshift-storage
    Note

    While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.

  5. Confirm all PVs provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it.

    $ oc get pv
    $ oc delete pv <pv name>
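    On clusters with many PVs, the listing can be narrowed to the volumes provisioned by the OpenShift Data Foundation CSI drivers, assuming the default openshift-storage provisioner names:

    $ oc get pv -o custom-columns='NAME:.metadata.name,STATUS:.status.phase,DRIVER:.spec.csi.driver' | grep openshift-storage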
  6. Remove CustomResourceDefinitions.

    $ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m
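    To verify that the CustomResourceDefinitions are removed, a check such as the following can be used; an empty result indicates that they are gone:

    $ oc get crd | grep -e 'ceph.rook.io' -e 'noobaa.io' -e 'ocs.openshift.io' -e 'odf.openshift.io'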
  7. To ensure that OpenShift Data Foundation is uninstalled completely:

    1. In the OpenShift Container Platform Web Console, click Storage.
    2. Verify that OpenShift Data Foundation no longer appears under Storage.

5.1. Removing monitoring stack from OpenShift Data Foundation

Use this section to clean up the monitoring stack from OpenShift Data Foundation.

The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.

Prerequisites

  • The OpenShift Container Platform monitoring stack should have been configured to use OpenShift Data Foundation PVCs.

Procedure

  1. List the pods and PVCs that are currently running in the openshift-monitoring namespace.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                                             READY   STATUS    RESTARTS   AGE
    pod/alertmanager-main-0                          3/3     Running   0          8d
    pod/alertmanager-main-1                          3/3     Running   0          8d
    pod/alertmanager-main-2                          3/3     Running   0          8d
    pod/cluster-monitoring-operator-84457656d-pkrxm  1/1     Running   0          8d
    pod/grafana-79ccf6689f-2ll28                     2/2     Running   0          8d
    pod/kube-state-metrics-7d86fb966-rvd9w           3/3     Running   0          8d
    pod/node-exporter-25894                          2/2     Running   0          8d
    pod/node-exporter-4dsd7                          2/2     Running   0          8d
    pod/node-exporter-6p4zc                          2/2     Running   0          8d
    pod/node-exporter-jbjvg                          2/2     Running   0          8d
    pod/node-exporter-jj4t5                          2/2     Running   0          6d18h
    pod/node-exporter-k856s                          2/2     Running   0          6d18h
    pod/node-exporter-rf8gn                          2/2     Running   0          8d
    pod/node-exporter-rmb5m                          2/2     Running   0          6d18h
    pod/node-exporter-zj7kx                          2/2     Running   0          8d
    pod/openshift-state-metrics-59dbd4f654-4clng     3/3     Running   0          8d
    pod/prometheus-adapter-5df5865596-k8dzn          1/1     Running   0          7d23h
    pod/prometheus-adapter-5df5865596-n2gj9          1/1     Running   0          7d23h
    pod/prometheus-k8s-0                             6/6     Running   1          8d
    pod/prometheus-k8s-1                             6/6     Running   1          8d
    pod/prometheus-operator-55cfb858c9-c4zd9         1/1     Running   0          6d21h
    pod/telemeter-client-78fc8fc97d-2rgfp            3/3     Running   0          8d
    
    NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
  2. Edit the monitoring configmap.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

    Remove any config sections that reference the OpenShift Data Foundation storage classes, as shown in the following example, and save the file.

    Before editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
        alertmanagerMain:
          volumeClaimTemplate:
            metadata:
              name: my-alertmanager-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-external-storagecluster-ceph-rbd
        prometheusK8s:
          volumeClaimTemplate:
            metadata:
              name: my-prometheus-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-external-storagecluster-ceph-rbd
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-12-02T07:47:29Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "22110"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
    
    
    .
    .
    .

    After editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-11-21T13:07:05Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "404352"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: d12c796a-0c5f-11ea-9832-063cd735b81c
    .
    .
    .

    In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs.
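    To review the resulting configuration without opening an editor, the config.yaml key can be printed directly; this is only a convenience check:

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config -o jsonpath='{.data.config\.yaml}'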

  3. List the pods consuming the PVCs.

    In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using the OpenShift Data Foundation PVCs.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                                               READY   STATUS      RESTARTS AGE
    pod/alertmanager-main-0                            3/3     Terminating   0      10h
    pod/alertmanager-main-1                            3/3     Terminating   0      10h
    pod/alertmanager-main-2                            3/3     Terminating   0      10h
    pod/cluster-monitoring-operator-84cd9df668-zhjfn   1/1     Running       0      18h
    pod/grafana-5db6fd97f8-pmtbf                       2/2     Running       0      10h
    pod/kube-state-metrics-895899678-z2r9q             3/3     Running       0      10h
    pod/node-exporter-4njxv                            2/2     Running       0      18h
    pod/node-exporter-b8ckz                            2/2     Running       0      11h
    pod/node-exporter-c2vp5                            2/2     Running       0      18h
    pod/node-exporter-cq65n                            2/2     Running       0      18h
    pod/node-exporter-f5sm7                            2/2     Running       0      11h
    pod/node-exporter-f852c                            2/2     Running       0      18h
    pod/node-exporter-l9zn7                            2/2     Running       0      11h
    pod/node-exporter-ngbs8                            2/2     Running       0      18h
    pod/node-exporter-rv4v9                            2/2     Running       0      18h
    pod/openshift-state-metrics-77d5f699d8-69q5x       3/3     Running       0      10h
    pod/prometheus-adapter-765465b56-4tbxx             1/1     Running       0      10h
    pod/prometheus-adapter-765465b56-s2qg2             1/1     Running       0      10h
    pod/prometheus-k8s-0                               6/6     Terminating   1      9m47s
    pod/prometheus-k8s-1                               6/6     Terminating   1      9m47s
    pod/prometheus-operator-cbfd89f9-ldnwc             1/1     Running       0      43m
    pod/telemeter-client-7b5ddb4489-2xfpz              3/3     Running       0      10h
    
    NAME                                                      STATUS  VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0   Bound    pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1   Bound    pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2   Bound    pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0        Bound    pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1        Bound    pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
  4. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.

    $ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
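    To confirm which PVCs in the namespace are bound to an OpenShift Data Foundation storage class, before and after the deletion, a listing such as the following can be used:

    $ oc get pvc -n openshift-monitoring -o custom-columns='NAME:.metadata.name,STORAGECLASS:.spec.storageClassName'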

5.2. Removing OpenShift Container Platform registry from OpenShift Data Foundation

Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure alternative storage, see image registry.

The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace.

Prerequisites

  • The image registry should have been configured to use an OpenShift Data Foundation PVC.

Procedure

  1. Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

    $ oc edit configs.imageregistry.operator.openshift.io

    Before editing

    .
    .
    .
    storage:
      pvc:
        claim: registry-cephfs-rwx-pvc
    .
    .
    .

    After editing

    .
    .
    .
    storage:
      emptyDir: {}
    .
    .
    .

    In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.
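    To confirm that the registry no longer references the PVC after the edit, the storage section can be printed directly; this is only a convenience check:

    $ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage}{"\n"}'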

  2. Delete the PVC.

    $ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m

5.3. Removing the cluster logging operator from OpenShift Data Foundation

Use this section to clean up the cluster logging operator from OpenShift Data Foundation.

The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.

Prerequisites

  • The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs.

Procedure

  1. Remove the ClusterLogging instance in the namespace.

    $ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

    The PVCs in the openshift-logging namespace are now safe to delete.
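    To see which PVCs remain in the namespace and which storage class they use, a listing such as the following can be run before deleting them:

    $ oc get pvc -n openshift-logging -o custom-columns='NAME:.metadata.name,STORAGECLASS:.spec.storageClassName'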

  2. Delete the PVCs.

    $ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m
    where <pvc-name> is the name of the PVC.

5.4. Removing external IBM FlashSystem secret

You need to clean up the IBM FlashSystem secret from OpenShift Data Foundation while uninstalling. This secret is created when you configure the external IBM FlashSystem Storage. See Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage.

Procedure

  • Remove the IBM FlashSystem secret by using the following command:

    $ oc delete secret -n openshift-storage ibm-flashsystem-storage
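    To confirm that the secret is removed, the following command should return a NotFound error:

    $ oc get secret ibm-flashsystem-storage -n openshift-storage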