Chapter 3. Enhancements

This section describes major enhancements introduced in Red Hat OpenShift Container Storage 4.8.

Added a new alert to notify users when one or more OSD requests take a long time to process

This alert notifies OpenShift Container Storage administrators about slow OSD operations, which can indicate extreme load, a slow storage device, or a software bug. Users can check the Ceph status to determine the cause of the slowness.
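For example, assuming the Rook-Ceph toolbox pod is deployed in the openshift-storage namespace (the deployment name rook-ceph-tools is an assumption that may differ in your cluster), the Ceph status can be inspected with commands along these lines:

```shell
# Open a shell in the Rook-Ceph toolbox pod
# (deployment name assumed to be rook-ceph-tools).
oc -n openshift-storage rsh deploy/rook-ceph-tools

# Inside the toolbox, check overall cluster health
# and look for slow or blocked operations.
ceph status
ceph health detail
```

The output of ceph health detail typically names the OSDs reporting slow operations, which helps narrow the cause down to a particular node or device.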

The ClusterObjectStoreState alert is now generated when the RADOS Object Gateway (RGW) is unavailable or unhealthy

Previously, the ClusterObjectStoreState alert was not generated when the RADOS Object Gateway (RGW) was unavailable or unhealthy. With a fix implemented in the OpenShift Container Storage operator, users now see the ClusterObjectStoreState alert in these situations.
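As an illustrative sketch only (not the exact rule shipped with the operator), an alert of this kind has roughly the following PrometheusRule shape; the metric name ocs_rgw_health_status, the threshold, and the rule name are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-objectstore-alerts      # illustrative name
  namespace: openshift-storage
spec:
  groups:
    - name: cluster-object-store
      rules:
        - alert: ClusterObjectStoreState
          # Metric name and threshold are assumptions for illustration.
          expr: ocs_rgw_health_status > 1
          for: 15s
          labels:
            severity: critical
          annotations:
            description: RGW endpoint is unavailable or unhealthy.
```

The actual rule is managed by the operator; this fragment only shows where such an alert lives in the monitoring stack.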

Ability to enable or disable compression in a pool

Starting with OpenShift Container Storage 4.8, you can enable or disable compression in a pool as a day-two operation using the user interface.
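Under the hood, pool compression corresponds to the compression_mode pool parameter on the Ceph pool. A sketch of a Rook CephBlockPool resource with compression enabled follows; the pool name is illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: example-pool               # illustrative name
  namespace: openshift-storage
spec:
  replicated:
    size: 3
  parameters:
    # "aggressive" compresses all data; set to "none" to disable.
    compression_mode: aggressive
```

The user interface toggle changes this same setting, so enabling or disabling compression does not require recreating the pool.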

Added ability to create namespace buckets using the OpenShift Container Platform user interface

Namespace buckets can be added using the OpenShift Container Platform user interface. Namespace buckets provide an aggregated view of existing object buckets in the cloud or in S3-compatible on-premises storage. For more information about adding namespace buckets using the user interface, see Adding namespace bucket using the OpenShift Container Platform user interface.
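The same result can be expressed declaratively through the NooBaa custom resources. The following is an illustrative sketch, not a prescribed procedure: a NamespaceStore pointing at an existing S3 bucket, and a BucketClass that exposes it via a namespace policy. The resource names, target bucket, and secret are assumptions:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: example-namespacestore     # illustrative name
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: existing-bucket  # an existing bucket you own
    secret:
      name: aws-credentials        # assumed credentials secret
      namespace: openshift-storage
---
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: example-namespace-bucketclass
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: example-namespacestore
```

An ObjectBucketClaim referencing this BucketClass then yields a bucket whose reads and writes pass through to the existing target bucket.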

Utilizing all the available devices during initial deployment and scaling up for local storage devices

In deployments using local storage devices in attached mode, the storage cluster now utilizes all locally available storage devices. Similarly, when scaling up by adding capacity, all available storage devices can be added.
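Device discovery for local storage is typically driven by a LocalVolumeSet resource from the Local Storage Operator. The following sketch (names are illustrative) matches all disk-type devices on the selected nodes; because no maximum device count is specified, every matching device is consumed:

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-block                # illustrative name
  namespace: openshift-local-storage
spec:
  storageClassName: localblock
  volumeMode: Block
  deviceInclusionSpec:
    deviceTypes:
      - disk                       # match whole disks on the nodes
```

Newly attached disks that match this specification are picked up automatically, which is what allows scale-up to use all available devices.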

The noout flag is no longer added on the failure domain when an OSD is down for reasons other than node drain

Previously, when an OSD was down due to a disk failure, the noout flag was added on the failure domain. This prevented the OSD from being marked out by the standard Ceph mon_osd_down_out_interval. With this update, when an OSD is down for a reason other than a node drain, such as a disk failure, and the placement groups (PGs) are unhealthy, Rook creates a blocking PodDisruptionBudget on the other failure domains to prevent further node drains on them; the noout flag is not set on the node in this case. If the OSD is down but all PGs are active+clean, the cluster is treated as fully healthy: the default PodDisruptionBudget (with maxUnavailable=1) is added back and the blocking ones are deleted.
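For reference, the default PodDisruptionBudget mentioned above has roughly the following shape (the exact name and labels are managed by Rook and may differ; this is an illustrative sketch of a PDB allowing at most one OSD pod to be unavailable):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: rook-ceph-osd              # name managed by Rook; shown for illustration
  namespace: openshift-storage
spec:
  maxUnavailable: 1                # at most one OSD pod may be evicted at a time
  selector:
    matchLabels:
      app: rook-ceph-osd
```

The blocking PDBs that Rook creates during recovery use the same mechanism with maxUnavailable set to 0 on the other failure domains, so node drains there are held off until the PGs return to active+clean.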