Chapter 5. Known issues

This section describes known issues in Red Hat OpenShift Container Storage 4.6.

Issue with noobaa-db

The noobaa-core-0 pod does not migrate to another node when the node it is running on goes down. Because migration of the noobaa-core pod is blocked, NooBaa does not work while that node is down.

(BZ#1783961)

PodDisruptionBudget alert continuously shown

The PodDisruptionBudget alert, which is an OpenShift Container Platform alert, is continuously shown for object storage daemons (OSDs). This alert can be ignored. You can choose to silence it by following the instructions in the Managing cluster alerts section of the Red Hat OpenShift Container Platform documentation.

For more information, refer to this Red Hat Knowledgebase article.
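
Before silencing the alert, you can list the pod disruption budgets that exist for the OSDs. This inspection step is not part of the documented workaround and assumes a default installation in the openshift-storage namespace:

  $ oc get poddisruptionbudget -n openshift-storage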

(BZ#1788126)

Restore Snapshot/Clone operations with a size greater than the parent PVC result in an endless loop

Ceph CSI does not support restoring a snapshot or creating a clone with a size greater than that of the parent PVC. Therefore, Restore Snapshot and Clone operations with a greater size result in an endless loop. To work around this issue, delete the pending PVC. Then, to get a larger PVC, complete one of the following, depending on the operation you are using; a sketch of the snapshot-based steps follows the list:

  • If using Snapshots, restore the existing snapshot to create a volume of the same size as the parent PVC, then attach it to a pod and expand the PVC to the required size. For more information, see Volume snapshots.
  • If using Clone, clone the parent PVC to create a volume of the same size as the parent PVC, then attach it to a pod and expand the PVC to the required size. For more information, see Volume cloning.
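
The following is a minimal sketch of the snapshot-based steps. The PVC names, namespace, storage class, and sizes are illustrative placeholders, and the snapshot.storage.k8s.io API group is assumed to be at v1beta1, as in OpenShift Container Platform 4.6:

  # Restore the existing snapshot into a new PVC of the same size as the parent PVC
  $ cat <<EOF | oc create -f -
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: restored-pvc                              # hypothetical name
    namespace: my-namespace                         # hypothetical namespace
  spec:
    storageClassName: ocs-storagecluster-ceph-rbd   # assumed default RBD storage class
    dataSource:
      name: my-snapshot                             # existing VolumeSnapshot of the parent PVC
      kind: VolumeSnapshot
      apiGroup: snapshot.storage.k8s.io
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi                               # same size as the parent PVC
  EOF

  # After attaching the restored PVC to a pod, expand it to the required size
  $ oc patch pvc restored-pvc -n my-namespace --type merge \
      -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'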

(BZ#1870334)

Prometheus listens only on port 9283

The Prometheus service on ceph-mgr in an external cluster is expected to listen on port 9283. Other ports are not supported. Red Hat Ceph Storage administrators must use only port 9283 for the Prometheus exporter.
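
If needed, the port used by the ceph-mgr Prometheus module on the external Red Hat Ceph Storage cluster can be checked and pinned to 9283 from a node with the ceph CLI. This is a sketch only; it assumes the mgr/prometheus/server_port option of the Prometheus mgr module:

  # Check the port currently used by the Prometheus exporter
  $ ceph config get mgr mgr/prometheus/server_port

  # Explicitly set the supported port
  $ ceph config set mgr mgr/prometheus/server_port 9283

  # Reload the module so the new port takes effect
  $ ceph mgr module disable prometheus
  $ ceph mgr module enable prometheus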

(BZ#1890971)

Ceph status is HEALTH_WARN after disk replacement

After disk replacement, the warning 1 daemons have recently crashed is displayed even when all OSD pods are up and running. This warning changes the Ceph status to HEALTH_WARN when it should be HEALTH_OK. To work around this issue, rsh to the ceph-tools pod and silence the warning; the Ceph health then returns to HEALTH_OK.
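
A minimal sketch of this workaround, assuming the toolbox pod carries the app=rook-ceph-tools label and runs in the openshift-storage namespace:

  # Open a shell in the Ceph toolbox pod
  $ oc rsh -n openshift-storage $(oc get pod -n openshift-storage -l app=rook-ceph-tools -o name)

  # Review the crash report that triggers the warning, then archive it
  sh-4.4$ ceph crash ls
  sh-4.4$ ceph crash archive-all

  # Confirm that the cluster is back to HEALTH_OK
  sh-4.4$ ceph health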

(BZ#1896810)

You cannot create a PVC from a volume snapshot in the absence of the VolumeSnapshotClass

Deleting the VolumeSnapshotClass changes the volume snapshot status to Error. Therefore, you cannot restore a PVC from the volume snapshot. To work around this issue, recreate the VolumeSnapshotClass.
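
A sketch of recreating the VolumeSnapshotClass for the RBD provisioner is shown below. The class name, API version, driver, and secret parameters assume a default OpenShift Container Storage 4.6 installation in the openshift-storage namespace; adjust them to match the class that was deleted:

  $ cat <<EOF | oc create -f -
  apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshotClass
  metadata:
    name: ocs-storagecluster-rbdplugin-snapclass    # assumed default class name
  driver: openshift-storage.rbd.csi.ceph.com
  deletionPolicy: Delete
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
  EOF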

(BZ#1902711)

Tainted nodes cannot be discovered by the device discovery wizard for Local Storage based deployments

Local Storage based deployments can now be created using the user interface on OpenShift Container Platform 4.6. During storage cluster creation, nodes are not discovered if the Red Hat OpenShift Container Storage nodes carry the taint node.ocs.openshift.io/storage="true":NoSchedule, because the LocalVolumeSet and LocalVolumeDiscovery custom resources do not have the required toleration. For a workaround, see the Red Hat Knowledgebase Solution.
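
The Knowledgebase Solution adds a toleration for the storage taint to those custom resources. A minimal sketch of what such a toleration looks like in the spec of a LocalVolumeSet or LocalVolumeDiscovery resource (support for the tolerations field by the Local Storage Operator is assumed here):

  spec:
    tolerations:
      - key: "node.ocs.openshift.io/storage"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"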

(BZ#1905373)

Device replacement action cannot be performed via UI for an encrypted OpenShift Container Storage cluster

On an encrypted OpenShift Container Storage cluster, the discovery result CR reports the device backed by a Ceph OSD (Object Storage Daemon) differently from the device reported in the Ceph alerts. Because of this mismatch, clicking the alert presents the user with a Disk not found message, and the console UI cannot enable the disk replacement option for an OCS user. To work around this issue, use the CLI procedure for failed device replacement.
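
The CLI procedure is described in the device replacement documentation. As a rough sketch, it identifies the failed OSD and runs the OSD removal job; the ocs-osd-removal template and its FAILED_OSD_IDS parameter are assumed to match the default OpenShift Container Storage 4.6 tooling:

  # Identify the OSD pod that is not running and note its OSD ID
  $ oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide | grep -v Running

  # Run the OSD removal job for the failed OSD (for example, OSD ID 0)
  $ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=0 | oc create -f -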

(BZ#1906002)