Chapter 4. Verifying OpenShift Data Foundation deployment

To verify that OpenShift Data Foundation is deployed correctly:

4.1. Verifying the state of the pods

Procedure

  1. Click Workloads → Pods from the OpenShift Web Console.
  2. Select openshift-storage from the Project drop-down list.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

    For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 4.1, “Pods corresponding to OpenShift Data Foundation cluster”.

  3. Click the Running and Completed tabs to verify that the following pods are in the Running or Completed state (a command-line alternative is sketched after Table 4.1):

Table 4.1. Pods corresponding to OpenShift Data Foundation cluster

Component | Corresponding pods

OpenShift Data Foundation Operator

  • ocs-operator-* (1 pod on any worker node)
  • ocs-metrics-exporter-* (1 pod on any worker node)
  • odf-operator-controller-manager-* (1 pod on any worker node)
  • odf-console-* (1 pod on any worker node)

Rook-ceph Operator

  • rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway

  • noobaa-operator-* (1 pod on any worker node)
  • noobaa-core-* (1 pod on any storage node)
  • noobaa-db-* (1 pod on any storage node)
  • noobaa-endpoint-* (1 pod on any storage node)

MON

  • rook-ceph-mon-* (5 pods distributed across 3 zones: 2 in each data-center zone and 1 in the arbiter zone)

MGR

  • rook-ceph-mgr-* (2 pods on any storage node)

MDS

  • rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across 2 data-center zones)

RGW

  • rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (2 pods distributed across 2 data-center zones)

CSI

  • cephfs

    • csi-cephfsplugin-* (1 pod on each worker node)
    • csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)
  • rbd

    • csi-rbdplugin-* (1 pod on each worker node)
    • csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)

rook-ceph-crashcollector

  • rook-ceph-crashcollector-* (1 pod on each storage node and 1 pod in the arbiter zone)

OSD

  • rook-ceph-osd-* (1 pod for each device)
  • rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
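
As an alternative to checking each pod in the web console, you can spot-check the same information from the command line. The following is a minimal sketch, assuming an oc client that is logged in to the cluster and the default openshift-storage namespace:

    # List all pods in the storage namespace, with the node each pod
    # is scheduled on, for comparison against Table 4.1.
    $ oc get pods -n openshift-storage -o wide

    # List any pods that are neither Running nor Succeeded (Completed);
    # an empty result means every pod is in an expected state.
    $ oc get pods -n openshift-storage --field-selector=status.phase!=Running,status.phase!=Succeeded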

4.2. Verifying the OpenShift Data Foundation cluster is healthy

Procedure

  1. In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link in the pop-up that appears.
  3. In the Status card of the Block and File tab, verify that Storage Cluster has a green tick.
  4. In the Details card, verify that the cluster information is displayed.
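
If you prefer to verify from the command line, the following is a minimal equivalent sketch, assuming a logged-in oc client and the default openshift-storage namespace:

    # The storage cluster should report a Ready phase.
    $ oc get storagecluster -n openshift-storage

    # The backing Ceph cluster should report HEALTH_OK.
    $ oc get cephcluster -n openshift-storage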

For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.

4.3. Verifying the Multicloud Object Gateway is healthy

Procedure

  1. In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link in the pop-up that appears.

    1. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
    2. In the Details card, verify that the MCG information is displayed.
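
A command-line equivalent, under the same assumptions (logged-in oc client, openshift-storage namespace), is to query the NooBaa custom resource that backs the Multicloud Object Gateway:

    # The NooBaa system should report a Ready phase.
    $ oc get noobaa -n openshift-storage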

For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation.

4.4. Verifying that the OpenShift Data Foundation specific storage classes exist

Procedure

  1. Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
  2. Verify that the following storage classes are created as part of the OpenShift Data Foundation cluster creation (a command-line check follows the list):

    • ocs-storagecluster-ceph-rbd
    • ocs-storagecluster-cephfs
    • openshift-storage.noobaa.io
    • ocs-storagecluster-ceph-rgw
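
The same check can be run from the command line, again assuming a logged-in oc client:

    # List only the storage classes created by OpenShift Data Foundation.
    $ oc get storageclass | grep -e ocs-storagecluster -e openshift-storage

All four storage classes listed above should appear in the output.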