Chapter 3. Bug fixes

This section describes notable bug fixes introduced in Red Hat OpenShift Container Storage 4.5.

mon pods no longer stuck in init state after reboot

Previously, after a node reboot in AWS environments, the Ceph Monitor (mon) pods were stuck in the init state for an extended period. With the release of OpenShift Container Storage 4.5, this issue no longer occurs.

(BZ#1769322)

MDS pods can no longer be scheduled on the same node

Previously, Red Hat Ceph Storage Metadata Server (MDS) pods were not properly distributed across nodes. As a result, both MDS pods could be scheduled on the same node, which negated the high availability that running multiple MDS pods is meant to provide. With this update, a required PodAntiAffinity rule is set on the MDS pods so that they can no longer be scheduled on the same node.

(BZ#1774087)
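
The required anti-affinity described above corresponds to the standard Kubernetes PodAntiAffinity API. The following Go sketch, built on the upstream k8s.io/api types, shows what such a hard constraint looks like; the app=rook-ceph-mds label and the topology key are illustrative assumptions, not necessarily the exact values the operator sets.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A required (hard) anti-affinity rule: the scheduler must not place
	// two pods matching the selector on the same node. The label below is
	// an illustrative assumption for the MDS pods.
	mdsAntiAffinity := corev1.PodAntiAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "rook-ceph-mds"},
			},
			// Spread by node: each node carries a unique
			// kubernetes.io/hostname label.
			TopologyKey: "kubernetes.io/hostname",
		}},
	}
	fmt.Printf("%+v\n", mdsAntiAffinity)
}
```

Because the rule is in the "required" rather than "preferred" list, a second MDS pod stays Pending rather than landing on an occupied node.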

crash-collector runs smoothly on OpenShift Container Platform

Previously, the crash-collector deployment lacked the permissions required to run on OpenShift Container Platform. With this update, the appropriate security context has been added to allow the deployment to access the path it needs on the host.

(BZ#1834939)
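
For context, host path access on OpenShift is typically granted through a pod or container security context, backed by a SecurityContextConstraint that permits it. The Go sketch below, using the upstream k8s.io/api types, shows one hypothetical shape such a grant can take; the image name, the privileged flag, and the paths are assumptions for illustration, not the exact settings this fix applies.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "crash-collector",
			Image: "example.io/ceph:latest", // placeholder image
			SecurityContext: &corev1.SecurityContext{
				// Illustrative only: an elevated security context is one
				// way a container is allowed to touch host paths under an
				// SCC that permits it.
				Privileged: boolPtr(true),
			},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "crash-dir",
				MountPath: "/var/lib/ceph/crash", // hypothetical mount point
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "crash-dir",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{
					Path: "/var/lib/rook", // hypothetical host path
					Type: &hostPathType,
				},
			},
		}},
	}
	fmt.Printf("%+v\n", spec)
}
```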

Node replacement no longer leads to Ceph HEALTH_WARN state

Previously, after node replacement, the Ceph CRUSH map still contained a stale host entry for the removed node under its original rack. If, while replacing a node in a different rack, a node with that same old hostname was added back to the cluster, it received a new rack label from the ocs-operator but was inserted at its old position in the CRUSH map, leaving Ceph in an indefinite HEALTH_WARN state. With this release, the bug has been fixed and node replacement behaves as expected.

(BZ#1842456)
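
The stale entry in question is an empty host bucket left under the old rack in the CRUSH hierarchy. As a rough illustration, the Go sketch below scans a CRUSH-tree-like JSON dump for host buckets with no children; the field names mirror the general shape of `ceph osd tree -f json` output but are an assumption here, and the embedded dump is illustrative sample input, not real cluster data.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// crushNode is a minimal subset of one entry in a CRUSH tree dump; the
// field names are an assumption based on common Ceph releases.
type crushNode struct {
	ID       int    `json:"id"`
	Name     string `json:"name"`
	Type     string `json:"type"`
	Children []int  `json:"children"`
}

func main() {
	// Illustrative sample input: the old host bucket "node-a" lingers
	// under rack1 with no OSDs after the node was removed.
	raw := `{"nodes":[
	  {"id":-1,"name":"default","type":"root","children":[-2]},
	  {"id":-2,"name":"rack1","type":"rack","children":[-3]},
	  {"id":-3,"name":"node-a","type":"host","children":[]}]}`

	var tree struct {
		Nodes []crushNode `json:"nodes"`
	}
	if err := json.Unmarshal([]byte(raw), &tree); err != nil {
		panic(err)
	}
	// A host bucket with no children is the kind of stale entry that
	// caused the HEALTH_WARN state described above.
	for _, n := range tree.Nodes {
		if n.Type == "host" && len(n.Children) == 0 {
			fmt.Printf("stale host bucket: %s (id %d)\n", n.Name, n.ID)
		}
	}
}
```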

No CrashLoopBackOff during upgrade

Previously, if a BackingStore referenced a secret with an empty name, upgrading caused a CrashLoopBackOff error. As of OpenShift Container Storage 4.5, the empty-name case is handled correctly, and upgrades proceed as expected.

(BZ#1823775)
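
The failure mode here is a reconciler dereferencing a secret reference without first checking that its name is non-empty. The Go sketch below shows the general shape of such a guard; the SecretRef type and lookupSecret function are hypothetical stand-ins, not the operator's actual code.

```go
package main

import "fmt"

// SecretRef is a hypothetical stand-in for the object reference a
// BackingStore carries; it mirrors the shape of a Kubernetes reference.
type SecretRef struct {
	Name      string
	Namespace string
}

// lookupSecret sketches the guard the fix adds: reject a reference with
// an empty name instead of dereferencing a secret that cannot exist.
func lookupSecret(ref SecretRef) (string, error) {
	if ref.Name == "" {
		// Before the fix, code paths assumed a non-empty name here and
		// crashed, sending the pod into CrashLoopBackOff during upgrade.
		return "", fmt.Errorf("BackingStore references a secret with an empty name")
	}
	return ref.Namespace + "/" + ref.Name, nil
}

func main() {
	if _, err := lookupSecret(SecretRef{Namespace: "openshift-storage"}); err != nil {
		fmt.Println("handled gracefully:", err)
	}
}
```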

RGW server no longer crashes or leaks memory

Previously, an incorrect code construction in the final "bucket link" step of RADOS Gateway (RGW) bucket creation led to undefined behavior in some instances: the RGW server could crash or, occasionally, leak memory. This bug has been fixed, and the RGW server behaves as expected.

(BZ#1809545)