Chapter 3. Uninstalling OpenShift Container Storage

Use the steps in this section to uninstall OpenShift Container Storage instead of using the Uninstall option in the user interface.

Prerequisites

  • Make sure that the OpenShift Container Storage cluster is in a healthy state (see the health check sketch after this list). The deletion might fail if some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage.
  • Delete any applications that consume persistent volume claims (PVCs) or object bucket claims (OBCs) based on the OpenShift Container Storage storage classes, and then delete the PVCs and OBCs themselves.
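
    As a quick pre-check of cluster health, you can query the CephCluster resource. This is a minimal sketch; the status.ceph.health field is assumed from the Rook CephCluster status, and HEALTH_OK is the value expected on a healthy cluster:

    $ oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].status.ceph.health}'
    HEALTH_OK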

Procedure

  1. List the storage classes and note the ones that use the following storage class provisioners:

    • openshift-storage.rbd.csi.ceph.com
    • openshift-storage.cephfs.csi.ceph.com
    • openshift-storage.noobaa.io/obc

      For example

      $ oc get storageclasses
      NAME                         PROVISIONER                             AGE
      gp2 (default)                kubernetes.io/aws-ebs                   23h
      ocs-storagecluster-ceph-rbd  openshift-storage.rbd.csi.ceph.com      22h
      ocs-storagecluster-cephfs    openshift-storage.cephfs.csi.ceph.com   22h
      openshift-storage.noobaa.io  openshift-storage.noobaa.io/obc         22h

  2. Query for PVCs and OBCs that are using the storage class provisioners listed in the previous step.

    $ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rbd")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{" Labels: "}{@.metadata.labels}{"\n"}{end}' --all-namespaces|awk '! ( /Namespace: openshift-storage/ && /app:noobaa/ )'
    $ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-cephfs")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
    $ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="openshift-storage.noobaa.io")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
    Note

    Ignore any NooBaa PVCs in the openshift-storage namespace.
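
    To see the NooBaa PVCs that this note refers to, you can list them by label; the app=noobaa label is assumed from the awk filter in the first query above:

    $ oc get pvc -n openshift-storage -l app=noobaa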

  3. Follow these instructions to ensure that the PVCs listed in the previous step are deleted:

    1. Determine the pod that is consuming the PVC.
    2. Identify the controlling object, such as a Deployment, StatefulSet, DaemonSet, Job, or a custom controller.

      Each object has a metadata field called ownerReferences, which lists its associated objects. The entry with the controller field set to true points to the controlling object, such as a ReplicaSet, StatefulSet, or DaemonSet (see the sketch after these sub-steps).

    3. Confirm with the owner of the project that the object is safe to delete, and then delete it.
    4. Delete the PVCs and OBCs.

      $ oc delete pvc <pvc-name> -n <project-name>
      $ oc delete obc <obc-name> -n <project-name>
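
      The following is a minimal sketch of sub-steps 1 and 2, using placeholder names (<pvc-name>, <pod-name>, <project-name>):

      # Find the pods that mount the PVC; the Mounted By field is reported by oc describe
      $ oc describe pvc <pvc-name> -n <project-name> | grep 'Mounted By'
      # Inspect the pod's ownerReferences; the entry with controller: true
      # identifies the controlling object
      $ oc get pod <pod-name> -n <project-name> -o jsonpath='{.metadata.ownerReferences}'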

      If you have created any PVCs as part of configuring the monitoring stack, the OpenShift Container Platform registry, or the cluster logging operator, perform the cleanup steps in Section 3.1, Section 3.2, and Section 3.3 as required.

  4. List and note the backing local volume objects. If no results are found, skip steps 8 and 9.

    for sc in $(oc get storageclass|grep 'kubernetes.io/no-provisioner' |grep -E $(oc get storagecluster -n openshift-storage -o jsonpath='{ .items[*].spec.monPVCTemplate.spec.storageClassName } { .items[*].spec.storageDeviceSets[*].dataPVCTemplate.spec.storageClassName}' | sed 's/ /|/g')| awk '{ print $1 }');
    do
        echo -n "StorageClass: $sc ";
        oc get storageclass $sc -o jsonpath=" { 'LocalVolume: ' }{ .metadata.labels['local\.storage\.openshift\.io/owner-name'] } { '\n' }";
    done

    Example output

    StorageClass: localfile  LocalVolume: local-file
    StorageClass: localblock  LocalVolume: local-block
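
    As a cross-check, you can also list the LocalVolume resources directly; the local-storage namespace is assumed from the commands in step 8:

    $ oc get localvolume -n local-storage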

  5. Delete the StorageCluster object.

    $ oc delete -n openshift-storage storagecluster --all --wait=true
  6. Delete the namespace and wait until the deletion is complete.

    $ oc delete project openshift-storage --wait=true --timeout=5m
    Note

    You will need to switch to another project if openshift-storage was the active project.

    For example

    $ oc project default

  7. Clean up the storage operator artifacts on each node.

    $ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /var/lib/rook; done

    Ensure you see removed directory '/var/lib/rook' in the output.

    Example output

    Starting pod/ip-10-0-134-65us-east-2computeinternal-debug ...
    To use host binaries, run `chroot /host`
    removed '/var/lib/rook/openshift-storage/log/ocs-deviceset-2-0-gk22s/ceph-volume.log'
    removed directory '/var/lib/rook/openshift-storage/log/ocs-deviceset-2-0-gk22s'
    removed '/var/lib/rook/openshift-storage/log/ceph-osd.2.log'
    removed '/var/lib/rook/openshift-storage/log/ceph-volume.log'
    removed directory '/var/lib/rook/openshift-storage/log'
    removed directory '/var/lib/rook/openshift-storage/crash/posted'
    removed directory '/var/lib/rook/openshift-storage/crash'
    removed '/var/lib/rook/openshift-storage/client.admin.keyring'
    removed '/var/lib/rook/openshift-storage/openshift-storage.config'
    removed directory '/var/lib/rook/openshift-storage'
    removed '/var/lib/rook/osd2/openshift-storage.config'
    removed directory '/var/lib/rook/osd2'
    removed directory '/var/lib/rook'
    
    Removing debug pod ...
    Starting pod/ip-10-0-155-149us-east-2computeinternal-debug ...
    .
    .
    removed directory '/var/lib/rook'
    
    Removing debug pod ...
    Starting pod/ip-10-0-162-89us-east-2computeinternal-debug ...
    .
    .
    removed directory '/var/lib/rook'
    
    Removing debug pod ...

  8. Delete the local volumes created during the deployment, that is, each of the local volumes listed in step 4.

    For each of the local volumes, do the following:

    1. Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass.

      For example

      $ LV=local-block
      $ SC=localblock

    2. List and note the devices to be cleaned up later.

      $ oc get localvolume -n local-storage $LV -o jsonpath='{ .spec.storageClassDevices[*].devicePaths[*] }'

      Example output

      /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol078f5cdde09efc165 /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0defc1d5e2dd07f9e /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0c8e82a3beeb7b7e5

    3. Delete the local volume resource.

      $ oc delete localvolume -n local-storage --wait=true $LV
    4. Delete the remaining PVs and StorageClasses if they exist.

      $ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
      $ oc delete storageclass $SC --wait --timeout=5m
    5. Clean up the artifacts from the storage nodes for that resource.

      $ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

      Example output

      Starting pod/ip-10-0-141-2us-east-2computeinternal-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...
      Starting pod/ip-10-0-144-55us-east-2computeinternal-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...
      Starting pod/ip-10-0-175-34us-east-2computeinternal-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...

  9. Wipe the disks for each of the local volumes listed in step 4 so that they can be reused.

    1. List the storage nodes.

      $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

      Example output

      NAME                                         STATUS   ROLES    AGE     VERSION
      ip-10-0-134-65.us-east-2.compute.internal    Ready    worker   4h45m   v1.17.1
      ip-10-0-155-149.us-east-2.compute.internal   Ready    worker   4h46m   v1.17.1
      ip-10-0-162-89.us-east-2.compute.internal    Ready    worker   4h45m   v1.17.1

    2. Obtain the node console and run the chroot /host command when the prompt appears.

      $ oc debug node/ip-10-0-134-65.us-east-2.compute.internal
      Starting pod/ip-10-0-134-65us-east-2computeinternal-debug ...
      To use host binaries, run `chroot /host`
      Pod IP: 10.0.134.65
      If you don't see a command prompt, try pressing enter.
      sh-4.2# chroot /host
    3. Store the disk paths gathered in step 8(ii) in the DISKS variable within quotes.

      sh-4.3# DISKS="/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol078f5cdde09efc165 /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0defc1d5e2dd07f9e /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0c8e82a3beeb7b7e5"
    4. Run sgdisk --zap-all on all the disks:

      sh-4.3# for disk in $DISKS; do sgdisk --zap-all $disk;done

      Example output

      Problem opening /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol078f5cdde09efc165 for reading! Error is 2.
      The specified file does not exist!
      Problem opening '' for writing! Program will now terminate.
      Warning! MBR not overwritten! Error is 2!
      Problem opening /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0defc1d5e2dd07f9e for reading! Error is 2.
      The specified file does not exist!
      Problem opening '' for writing! Program will now terminate.
      Warning! MBR not overwritten! Error is 2!
      Creating new GPT entries.
      GPT data structures destroyed! You may now partition the disk using fdisk or
      other utilities.

      Note

      Ignore file-not-found warnings as they refer to disks that are on other machines.

    5. Exit the shell and repeat for the other nodes.

      sh-4.3# exit
      exit
      sh-4.2# exit
      exit
      
      Removing debug pod ...
  10. Delete the storage classes with an openshift-storage provisioner listed in step 1.

    $ oc delete storageclass <storageclass-name> --wait=true --timeout=5m

    For example:

    $ oc delete storageclass ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io --wait=true --timeout=5m
  11. Unlabel the storage nodes.

    $ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-
    $ oc label nodes --all topology.rook.io/rack-
    Note

    You can ignore the warnings displayed for unlabeled nodes, such as label <label> not found.
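
    To confirm that the label was removed, you can re-run the node listing from step 9; the "No resources found." output shown here is indicative:

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
    No resources found.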

  12. Remove CustomResourceDefinitions.

    $ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusterinitializations.ocs.openshift.io storageclusters.ocs.openshift.io --wait=true --timeout=5m
  13. To make sure that OpenShift Container Storage is uninstalled, verify that the openshift-storage namespace no longer exists and the storage dashboard no longer appears in the UI.
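
    For example, querying the namespace directly should now fail with a NotFound error:

    $ oc get namespace openshift-storage
    Error from server (NotFound): namespaces "openshift-storage" not found
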
Note

While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in the article https://access.redhat.com/solutions/3881901 to identify objects that are blocking the namespace from being terminated. OpenShift objects such as CephCluster, StorageCluster, NooBaa, and PVCs that have finalizers might cause the namespace to remain in the Terminating state. If a PVC has a finalizer, force delete the associated pod to remove the finalizer.
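
For example, a stuck pod can be force deleted as follows; the pod name is a placeholder:

$ oc delete pod <pod-name> -n openshift-storage --grace-period=0 --force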

3.1. Removing monitoring stack from OpenShift Container Storage

Use this section to clean up the monitoring stack from OpenShift Container Storage.

The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.

Procedure

  1. List the pods and PVCs that are currently running in the openshift-monitoring namespace.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                                             READY   STATUS    RESTARTS   AGE
    pod/alertmanager-main-0                          3/3     Running   0          8d
    pod/alertmanager-main-1                          3/3     Running   0          8d
    pod/alertmanager-main-2                          3/3     Running   0          8d
    pod/cluster-monitoring-operator-84457656d-pkrxm  1/1     Running   0          8d
    pod/grafana-79ccf6689f-2ll28                     2/2     Running   0          8d
    pod/kube-state-metrics-7d86fb966-rvd9w           3/3     Running   0          8d
    pod/node-exporter-25894                          2/2     Running   0          8d
    pod/node-exporter-4dsd7                          2/2     Running   0          8d
    pod/node-exporter-6p4zc                          2/2     Running   0          8d
    pod/node-exporter-jbjvg                          2/2     Running   0          8d
    pod/node-exporter-jj4t5                          2/2     Running   0          6d18h
    pod/node-exporter-k856s                          2/2     Running   0          6d18h
    pod/node-exporter-rf8gn                          2/2     Running   0          8d
    pod/node-exporter-rmb5m                          2/2     Running   0          6d18h
    pod/node-exporter-zj7kx                          2/2     Running   0          8d
    pod/openshift-state-metrics-59dbd4f654-4clng     3/3     Running   0          8d
    pod/prometheus-adapter-5df5865596-k8dzn          1/1     Running   0          7d23h
    pod/prometheus-adapter-5df5865596-n2gj9          1/1     Running   0          7d23h
    pod/prometheus-k8s-0                             6/6     Running   1          8d
    pod/prometheus-k8s-1                             6/6     Running   1          8d
    pod/prometheus-operator-55cfb858c9-c4zd9         1/1     Running   0          6d21h
    pod/telemeter-client-78fc8fc97d-2rgfp            3/3     Running   0          8d
    
    NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
  2. Edit the monitoring configmap.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  3. Remove any config sections that reference the OpenShift Container Storage storage classes, as shown in the following example, and save the file.

    Before editing

    .
    .
    apiVersion: v1
    data:
      config.yaml: |
        alertmanagerMain:
          volumeClaimTemplate:
            metadata:
              name: my-alertmanager-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-storagecluster-ceph-rbd
        prometheusK8s:
          volumeClaimTemplate:
            metadata:
              name: my-prometheus-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-storagecluster-ceph-rbd
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-12-02T07:47:29Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "22110"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
    
    .
    .

    After editing

    .
    .
    apiVersion: v1
    data:
      config.yaml: |
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-11-21T13:07:05Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "404352"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: d12c796a-0c5f-11ea-9832-063cd735b81c
    .
    .

    In this example, the alertmanagerMain and prometheusK8s monitoring components use OpenShift Container Storage PVCs.

  4. List the pods consuming the PVCs.

    In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using the OpenShift Container Storage PVCs.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                                               READY   STATUS      RESTARTS AGE
    pod/alertmanager-main-0                            3/3     Terminating   0      10h
    pod/alertmanager-main-1                            3/3     Terminating   0      10h
    pod/alertmanager-main-2                            3/3     Terminating   0      10h
    pod/cluster-monitoring-operator-84cd9df668-zhjfn   1/1     Running       0      18h
    pod/grafana-5db6fd97f8-pmtbf                       2/2     Running       0      10h
    pod/kube-state-metrics-895899678-z2r9q             3/3     Running       0      10h
    pod/node-exporter-4njxv                            2/2     Running       0      18h
    pod/node-exporter-b8ckz                            2/2     Running       0      11h
    pod/node-exporter-c2vp5                            2/2     Running       0      18h
    pod/node-exporter-cq65n                            2/2     Running       0      18h
    pod/node-exporter-f5sm7                            2/2     Running       0      11h
    pod/node-exporter-f852c                            2/2     Running       0      18h
    pod/node-exporter-l9zn7                            2/2     Running       0      11h
    pod/node-exporter-ngbs8                            2/2     Running       0      18h
    pod/node-exporter-rv4v9                            2/2     Running       0      18h
    pod/openshift-state-metrics-77d5f699d8-69q5x       3/3     Running       0      10h
    pod/prometheus-adapter-765465b56-4tbxx             1/1     Running       0      10h
    pod/prometheus-adapter-765465b56-s2qg2             1/1     Running       0      10h
    pod/prometheus-k8s-0                               6/6     Terminating   1      9m47s
    pod/prometheus-k8s-1                               6/6     Terminating   1      9m47s
    pod/prometheus-operator-cbfd89f9-ldnwc             1/1     Running       0      43m
    pod/telemeter-client-7b5ddb4489-2xfpz              3/3     Running       0      10h
    
    NAME                                                      STATUS  VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0   Bound    pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1   Bound    pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2   Bound    pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0        Bound    pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1        Bound    pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-storagecluster-ceph-rbd   19h
  5. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.

    $ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m

3.2. Removing OpenShift Container Platform registry from OpenShift Container Storage

Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure alternative storage, see: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/registry/architecture-component-imageregistry

The PVCs that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.

Prerequisites

  • The image registry should have been configured to use an OpenShift Container Storage PVC.
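
    To confirm this, you can inspect the registry configuration for the claim name. This is a sketch; the object name cluster is assumed:

    $ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage.pvc.claim}'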

Procedure

  1. Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

    $ oc edit configs.imageregistry.operator.openshift.io
    • For AWS:

      Before editing

      .
      .
      storage:
        pvc:
          claim: registry-cephfs-rwx-pvc
      .
      .

      After editing

      .
      .
      storage:
      .
      .

      In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

    • For VMware:

      Before editing

      .
      .
      storage:
        pvc:
          claim: registry-cephfs-rwx-pvc
      .
      .

      After editing

      .
      .
      storage:
        emptyDir: {}
      .
      .

      In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

  2. Delete the PVC.

    $ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m

3.3. Removing the cluster logging operator from OpenShift Container Storage

Use this section to clean up the cluster logging operator from OpenShift Container Storage.

The PVCs that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.

Prerequisites

  • The cluster logging instance should have been configured to use OpenShift Container Storage PVCs.
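
    To confirm this, you can query the ClusterLogging instance for the storage class of the log store. This is a sketch; the field path is an assumption based on the ClusterLogging custom resource:

    $ oc get clusterlogging instance -n openshift-logging -o jsonpath='{.spec.logStore.elasticsearch.storage.storageClassName}'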

Procedure

  1. Remove the ClusterLogging instance in the namespace.

    $ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

    The PVCs in the openshift-logging namespace are now safe to delete.
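
    To list them before deleting:

    $ oc get pvc -n openshift-logging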

  2. Delete PVCs.

    $ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m