Issued: 2021-08-03
Updated: 2021-08-03
RHBA-2021:3003 - Bug Fix Advisory
Synopsis
Red Hat OpenShift Container Storage 4.8.0 container images bug fix and enhancement update
Type/Severity
Bug Fix Advisory
Topic
Updated images that include numerous bug fixes and enhancements are now available for Red Hat OpenShift Container Storage 4.8.0 on Red Hat Enterprise Linux 8.
Description
Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Container Storage provides highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.
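Because the multicloud data management service exposes an S3 compatible API, any standard S3 client can talk to it. The following is a minimal sketch in Python using boto3; the endpoint URL and credentials below are placeholders, since in a real cluster they come from the route and secret created for your ObjectBucketClaim:

    import boto3

    # Placeholder endpoint and credentials: substitute the S3 route exposed
    # by the multicloud gateway and the keys from your ObjectBucketClaim secret.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3-openshift-storage.apps.example.com",
        aws_access_key_id="ACCESS_KEY_FROM_OBC_SECRET",
        aws_secret_access_key="SECRET_KEY_FROM_OBC_SECRET",
    )

    # Listing buckets is a quick smoke test that the endpoint and
    # credentials are valid; any S3-compatible call works the same way.
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])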
These updated images include numerous bug fixes and enhancements. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Container Storage Release Notes for information on the most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.8/html/4.8_release_notes/index
All Red Hat OpenShift Container Storage users are advised to upgrade to these updated images, which provide numerous bug fixes and enhancements.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
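As a pre-flight check before applying the update, the following sketch (assuming the oc CLI is logged in to the cluster and OpenShift Container Storage is installed in the usual openshift-storage namespace) prints the installed ocs-operator ClusterServiceVersion and its phase, so you can confirm the currently applied errata level:

    import json
    import subprocess

    # Query OLM for the ClusterServiceVersions in the openshift-storage
    # namespace; the ocs-operator CSV name carries the installed version.
    out = subprocess.run(
        ["oc", "get", "csv", "-n", "openshift-storage", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for csv in json.loads(out)["items"]:
        name = csv["metadata"]["name"]
        if name.startswith("ocs-operator"):
            print(name, csv["status"].get("phase", "unknown"))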
Affected Products
- Red Hat OpenShift Data Foundation 4 for RHEL 8 x86_64
- Red Hat OpenShift Data Foundation for IBM Power, little endian 4 for RHEL 8 ppc64le
- Red Hat OpenShift Data Foundation for IBM Z and LinuxONE 4 for RHEL 8 s390x
Fixes
- BZ - 1819483 - [Tracker for BZ #1900111] Ceph MDS won't run in OCS with millions of files
- BZ - 1848278 - [RFE] Enable OCS/OCP to create and consume Ceph thick-provisioned images
- BZ - 1918783 - [RFE] NamespaceStore creation wizard
- BZ - 1923819 - csi-cephfsplugin pods CrashLoopBackOff in fresh 4.6 cluster due to conflict with kube-rbac-proxy
- BZ - 1924946 - [RFE] Add ability to set primary-affinity on OSDs
- BZ - 1924949 - [RFE] Allow setting OSD weight using crush reweight
- BZ - 1929209 - Missing prints in gather-debug.log
- BZ - 1934633 - Error messages in OCS must-gather relating to volumesnapshot -A and dump of obc list (noobaa)
- BZ - 1936388 - RBD PVC with thick provisioning is failing
- BZ - 1936858 - OCS deployment with KMS fails when kv-v2 is used for backend path
- BZ - 1937604 - [Deployment blocker] ocs-operator.v4.8.0-292.ci is in installing phase
- BZ - 1938112 - [RFE] Add Slow Ops alert
- BZ - 1939007 - [Arbiter] [Tracker for BZ #1939766] When a drain and undrain is performed, 4 mons and 1 OSD go into CLBO and Ceph is not accessible
- BZ - 1940312 - [OCS 4.8]Persistent Storage Dashboard throws Alert - Ceph Manager has disappeared from Prometheus target discovery and Object Service dashboard has Unknown Data Resiliency
- BZ - 1943280 - [RFE] Allow the addition of more than 3 OSD at a time during scale up and during initial deployment
- BZ - 1944158 - Unclear error message while creating a storageclass
- BZ - 1944410 - OCS 4.8 builds blocked due to badly-generated CRD YAML
- BZ - 1946595 - ocs-storagecluster phase is "Ready" when flexible scaling and arbiter are both enabled
- BZ - 1947796 - [GSS] Noobaa is creating many buckets on a single obc request from the Quay operator
- BZ - 1948378 - Alert 'ClusterObjectStoreState' is not triggered when RGW interface is unavailable
- BZ - 1950225 - Under StorageCluster.Status, the desired image of noobaaCore points to the rhceph image
- BZ - 1950419 - [RFE] Change PDB Controller behavior for single OSD failure caused by a failed drive
- BZ - 1952344 - OCS 4.8: v4.8.0-359 - storagecluster is in progressing state
- BZ - 1953572 - Encryption key in vault for volumesnapshot does not get deleted when the snapshot is deleted in OCS
- BZ - 1955831 - [GSS][mon] rook-operator scales mons to 4 after healthCheck timeout
- BZ - 1956232 - [RHEL7][RBD] FailedMount error when using restored PVC on app pod
- BZ - 1956256 - [4.8 clone] Upgrade of noobaa DB failed when upgrading OCS 4.6 to 4.7
- BZ - 1957712 - [4.8 clone] Noobaa migrate job is failing when upgrading OCS 4.6.4 to 4.7 on FIPS environment
- BZ - 1958373 - OCS 4.8 deployment fails since 4.8.0-378.ci with storagecluster stuck in Progressing state
- BZ - 1959257 - no image is set for CSI_VOLUME_REPLICATION_IMAGE
- BZ - 1959964 - When a node is being drained, increase the mon failover timeout to prevent unnecessary mon failover
- BZ - 1961517 - rook-ceph-rbd-mirror pods show failed: AdminSocket::bind_and_listen: The UNIX domain socket path
- BZ - 1961647 - [RBD] Deleting RBD provisioner leader pod breaks thick provisioning
- BZ - 1962109 - [RFE] When doing "oc get volumereplications", the output should also show more detail such as replicationState and volumeReplicationClass
- BZ - 1962207 - CSI pods fail to start if the rook operator fails to read config map
- BZ - 1962278 - Include events in Rook for cephcluster
- BZ - 1962751 - ocs operator does not marshal resource requests and limits for ceph crushcollector pods
- BZ - 1962755 - [RFE] [must-gather]Add ceph config for each osd in the cluster
- BZ - 1963134 - [4.8] [External Mode] [vSphere] Deployment failure due to storagecluster stuck in Progressing state
- BZ - 1963191 - [4.8] [External Mode] must-gather does not collect noobaa logs in External mode
- BZ - 1964238 - Update OCP CSI sidecar to 4.8
- BZ - 1964373 - [Tracker for BZ #1947673] [Ceph-mgr] 'rbd trash purge' gets hung on rbds with clones or snapshots and should skip to continue progress
- BZ - 1964467 - Upgrade of external cluster is getting stuck in Pending state when upgrading 4.7 to 4.8
- BZ - 1965290 - must-gather helper pods fail to come up on OCS tainted nodes, ceph collections are empty
- BZ - 1966149 - The OSD removal job name on OCS 4.7 and OCS 4.8 is different
- BZ - 1966661 - Hide node tainting from the console - moving https://issues.redhat.com/browse/KNIP-1602 to 4.10
- BZ - 1966999 - must-gather fails even when storagecluster is present
- BZ - 1967628 - [OCS tracker for OCP bug #1957756]: Device Replacement UI, the status of the disk is "replacement ready" before clicking "start replacement"
- BZ - 1967837 - Update OCP CSI sidecar to 4.8
- BZ - 1967877 - [IBM][ROKS] ocs-operator pod in CrashLoopBackOff a week after successful installation
CVEs
(none)
References
(none)
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.