Chapter 4. Verifying OpenShift Container Storage deployment

Use this section to verify that OpenShift Container Storage is deployed correctly.

4.1. Verifying the state of the pods

To verify that the OpenShift Container Storage pods are in a running state, follow this procedure:

Procedure

  1. Log in to OpenShift Web Console.
  2. Click Workloads → Pods from the left pane of the OpenShift Web Console.
  3. Select openshift-storage from the Project drop down list.

    For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 4.1, “Pods corresponding to OpenShift Container Storage cluster”.

  4. Click the Running and Completed tabs to verify that the pods are in Running or Completed state:

    Table 4.1. Pods corresponding to OpenShift Container Storage cluster

    Component | Corresponding pods

    OpenShift Container Storage Operator

    • ocs-operator-* (1 pod on any worker node)
    • ocs-metrics-exporter-*

    Rook-ceph Operator

    rook-ceph-operator-*

    (1 pod on any worker node)

    Multicloud Object Gateway

    • noobaa-operator-* (1 pod on any worker node)
    • noobaa-core-* (1 pod on any storage node)
    • noobaa-db-pg-* (1 pod on any storage node)
    • noobaa-endpoint-* (1 pod on any storage node)

    MON

    rook-ceph-mon-*

    (3 pods distributed across storage nodes)

    MGR

    rook-ceph-mgr-*

    (1 pod on any storage node)

    MDS

    rook-ceph-mds-ocs-storagecluster-cephfilesystem-*

    (2 pods distributed across storage nodes)

    RGW

    rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node)

    CSI

    • cephfs

      • csi-cephfsplugin-* (1 pod on each worker node)
      • csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)
    • rbd

      • csi-rbdplugin-* (1 pod on each worker node)
      • csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)

    rook-ceph-crashcollector

    rook-ceph-crashcollector-*

    (1 pod on each storage node)

    OSD

    • rook-ceph-osd-* (1 pod for each device)
    • rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
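The same check can be scripted from the command line. This is a minimal sketch: the canned `pods` listing (with hypothetical pod name suffixes) stands in for the output of `oc get pods -n openshift-storage --no-headers -o custom-columns=NAME:.metadata.name,STATUS:.status.phase`; on a live cluster, pipe the real command instead.

```shell
# Canned stand-in for:
#   oc get pods -n openshift-storage --no-headers \
#     -o custom-columns=NAME:.metadata.name,STATUS:.status.phase
# The name suffixes below are hypothetical.
pods='ocs-operator-7b5f9c-x2k4q                    Running
rook-ceph-mon-a-6d98f-w8r2l                  Running
rook-ceph-osd-0-5c7d8-p9j3m                  Running
rook-ceph-osd-prepare-ocs-deviceset-0-abc12  Succeeded'

# Flag any pod that is neither Running nor Succeeded. (Pods shown as
# Completed in the console report a phase of Succeeded.)
unhealthy=$(printf '%s\n' "$pods" | awk '$2 != "Running" && $2 != "Succeeded"')

if [ -z "$unhealthy" ]; then
    echo "all openshift-storage pods are healthy"
else
    printf 'pods needing attention:\n%s\n' "$unhealthy"
fi
```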

4.2. Verifying the OpenShift Container Storage cluster is healthy

To verify that the OpenShift Container Storage cluster is healthy, follow this procedure:

Procedure

  1. Click Storage → Overview and click the Block and File tab.
  2. In the Status card, verify that Storage Cluster and Data Resiliency have a green tick mark.
  3. In the Details card, verify that the cluster information is displayed.

For more information on the health of the OpenShift Container Storage clusters using the Block and File dashboard, see Monitoring OpenShift Container Storage.
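The Status card reflects the underlying Ceph health, which can also be read from the CephCluster resource on the command line. A minimal sketch, assuming the `.status.ceph.health` field reported by Rook; a canned value stands in for the live query here:

```shell
# On a live cluster:
#   health=$(oc get cephcluster -n openshift-storage \
#     -o jsonpath='{.items[0].status.ceph.health}')
# A canned value stands in for the cluster response in this sketch.
health="HEALTH_OK"

# Map the Ceph health string to a short verdict.
case "$health" in
    HEALTH_OK)   echo "storage cluster is healthy" ;;
    HEALTH_WARN) echo "storage cluster is degraded; investigate further" ;;
    *)           echo "storage cluster reports: $health" ;;
esac
```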

4.3. Verifying the Multicloud Object Gateway is healthy

To verify that the OpenShift Container Storage Multicloud Object Gateway is healthy, follow this procedure:

Procedure

  1. Click Storage → Overview from the OpenShift Web Console and click the Object tab.
  2. In the Status card, verify that both Object Service and Data Resiliency are in Ready state (green tick).
  3. In the Details card, verify that the Multicloud Object Gateway information is displayed.

For more information on the health of the OpenShift Container Storage cluster using the object service dashboard, see Monitoring OpenShift Container Storage.
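The same readiness check can be sketched from the command line, assuming the NooBaa custom resource exposes a `.status.phase` field (a canned value stands in for the live query here):

```shell
# On a live cluster:
#   phase=$(oc get noobaa noobaa -n openshift-storage \
#     -o jsonpath='{.status.phase}')
# A canned value stands in for the cluster response in this sketch.
phase="Ready"

# The Multicloud Object Gateway is healthy when its phase is Ready.
if [ "$phase" = "Ready" ]; then
    echo "Multicloud Object Gateway is ready"
else
    echo "Multicloud Object Gateway phase: $phase"
fi
```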

4.4. Verifying that the OpenShift Container Storage specific storage classes exist

To verify that the storage classes exist in the cluster, follow this procedure:

Procedure

  1. Click Storage → Storage Classes from the OpenShift Web Console.
  2. Verify that the following storage classes are created with the OpenShift Container Storage cluster creation:

    • ocs-storagecluster-ceph-rbd
    • ocs-storagecluster-cephfs
    • openshift-storage.noobaa.io
    • ocs-storagecluster-ceph-rgw
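The presence of these four storage classes can also be checked from the command line. This sketch compares the expected names against a canned listing; on a live cluster, replace the canned `existing` value with the output of `oc get storageclass --no-headers -o custom-columns=NAME:.metadata.name` (the `gp2` entry below is a hypothetical platform default).

```shell
# Canned stand-in for:
#   oc get storageclass --no-headers -o custom-columns=NAME:.metadata.name
# gp2 is a hypothetical platform-provided storage class.
existing='ocs-storagecluster-ceph-rbd
ocs-storagecluster-cephfs
openshift-storage.noobaa.io
ocs-storagecluster-ceph-rgw
gp2'

# Report each expected OpenShift Container Storage class as found or missing.
for sc in ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs \
          openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw; do
    if printf '%s\n' "$existing" | grep -qx "$sc"; then
        echo "found: $sc"
    else
        echo "MISSING: $sc"
    fi
done
```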

4.5. Verifying the Multus networking

To determine whether Multus is working in the cluster, verify the Multus networking configuration.

Procedure

  1. Based on your network configuration choices, the OpenShift Container Storage operator does one of the following:

    • If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster) is selected for the Public Network Interface, the traffic between the application pods and the OpenShift Container Storage cluster happens on this network. Additionally, the cluster is self-configured to use this network for the replication and rebalancing traffic between OSDs.
    • If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster) are selected for the Public Network Interface and the Cluster Network Interface respectively during the storage cluster installation, then client storage traffic is on the public network and cluster network is for the replication and rebalancing traffic between OSDs.
  2. To verify that the network configuration is correct, follow these steps:

    1. In the OpenShift console, click Installed Operators → Storage Cluster → ocs-storagecluster.
    2. In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic.

      Sample output:

      [..]
      spec:
        [..]
        network:
          provider: multus
          selectors:
            cluster: openshift-storage/ocs-cluster
            public: openshift-storage/ocs-public
        [..]
  3. To verify the network configuration is correct using the command line interface, run the following commands:

    $ oc get storagecluster ocs-storagecluster \
    -n openshift-storage \
    -o=jsonpath='{.spec.network}{"\n"}'

    Sample output:

    {"provider":"multus","selectors":{"cluster":"openshift-storage/ocs-cluster","public":"openshift-storage/ocs-public"}}
  4. Confirm that the OSD pods are using the correct networks:

    1. In the openshift-storage namespace, use one of the OSD pods to verify that the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic.

      Note

      Only the OSD pods connect to both the Multus public and cluster networks if both are created. All other OCS pods connect only to the Multus public network.

      $ oc get -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}'

      Sample output:

      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.129.2.30"
          ],
          "default": true,
          "dns": {}
      },{
          "name": "openshift-storage/ocs-cluster",
          "interface": "net1",
          "ips": [
              "192.168.2.1"
          ],
          "mac": "e2:04:c6:81:52:f1",
          "dns": {}
      },{
          "name": "openshift-storage/ocs-public",
          "interface": "net2",
          "ips": [
              "192.168.1.1"
          ],
          "mac": "ee:a0:b6:a4:07:94",
          "dns": {}
      }]
  5. To confirm that the OSD pods are using the correct networks using the command line interface, run the following command (requires the jq utility):

    $ oc get -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}' | jq -r '.[].name'

    Sample output:

    openshift-sdn
    openshift-storage/ocs-cluster
    openshift-storage/ocs-public
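If jq is not available, a plain grep over the annotation works as well. This sketch uses a canned, abridged copy of the network-status annotation from the sample output above; on a live cluster, substitute the `oc get` command from the previous step for the canned value.

```shell
# Canned (abridged) copy of the k8s.v1.cni.cncf.io/network-status annotation;
# on a live cluster, capture it with the oc get command shown earlier.
annotation='[{"name": "openshift-sdn"},
{"name": "openshift-storage/ocs-cluster"},
{"name": "openshift-storage/ocs-public"}]'

# Check that both expected Multus networks appear in the annotation.
for net in openshift-storage/ocs-cluster openshift-storage/ocs-public; do
    if printf '%s' "$annotation" | grep -q "\"$net\""; then
        echo "attached: $net"
    else
        echo "MISSING: $net"
    fi
done
```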