- Issued: 2023-02-14
- Updated: 2023-02-14
RHBA-2023:0764 - Bug Fix Advisory
Synopsis
Red Hat OpenShift Data Foundation 4.11.5 Bug Fix Update
Type/Severity
Bug Fix Advisory
Topic
Updated images that fix several bugs are now available for Red Hat OpenShift Data Foundation 4.11.5 on Red Hat Enterprise Linux 8 from Red Hat Container registry.
Description
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provides a multicloud data management service with an S3-compatible API.
Bug fix(es):
- Previously, false alarms for storage capacity were raised based on inode metrics. The metrics behind these alerts reported a storage capacity status that could change dynamically, without any intervention, whenever more storage space was required. When a PVC uses CephFS as the storage backend, inode metrics such as `kubelet_volume_stats_inodes_free`, `kubelet_volume_stats_inodes`, and `kubelet_volume_stats_inodes_used` are not correct because CephFS allocates inodes on demand; by design, CephFS is a filesystem with dynamic inode allocation. With this fix, the `kubelet_volume_stats_inodes_free`, `kubelet_volume_stats_inodes`, and `kubelet_volume_stats_inodes_used` metrics are no longer provided for CephFS-backed PVCs. As a result, false storage capacity alarms based on inode metrics are no longer raised. (BZ#2149676)
- Previously, persistent volumes (PVs) that used CephFS did not provide accurate statistics about consumed or free inodes, as the number of free inodes on a CephFS volume is not meaningful because new inodes are created when required. The metrics that suggested the volume was running out of inodes therefore did not provide accurate information. With this fix, Ceph-CSI does not return inode metrics for CephFS, which prevents erroneous alerts about running low on, or out of, inodes. (BZ#2149677)
- Previously, the listing operation could fail, depending on the number of objects in the bucket, due to incorrect mapping of indexes in the Multicloud Object Gateway database (MCG DB). The incorrect mapping caused certain queries to take longer than needed and the corresponding actions to fail as a result. With this fix, the indexes are updated so that the listing queries succeed. (BZ#2149226)
- Previously, when an OSD was restarted after a node restart, the OSD was marked as `down` in Ceph instead of coming back online, because it looked like a stale OSD. This was because, in some environments, the Ceph OSD was not running as PID 1, which resulted in a non-random `nonce` being used to start the OSD. With this fix, the environment variable `CEPH_USE_RANDOM_NONCE` is set on the OSD pods to ensure Ceph is always aware that OpenShift Data Foundation is running in a containerized environment and randomizes the `nonce`. As a result, OSDs start properly after a node restart. (BZ#2150410)
- Previously, the `rook-ceph-osd-prepare` job would sometimes get stuck in the `CrashLoopBackOff` (CLBO) state and never complete. This was because deleting an OSD deployment in an encrypted cluster backed by a CSI-provisioned PVC caused the `rook-ceph-osd-prepare` job for that OSD to get stuck in the `CrashLoopBackOff` state. With this fix, the `rook-ceph-osd-prepare` job removes the stale encrypted device and opens it again, avoiding the CLBO state. As a result, the `rook-ceph-osd-prepare` job runs as expected and the OSD comes up. (BZ#2153675)
All users of Red Hat OpenShift Data Foundation are advised to upgrade to
these updated images which provide these bug fixes.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
Affected Products
- Red Hat OpenShift Data Foundation 4 for RHEL 8 x86_64
- Red Hat OpenShift Data Foundation for IBM Power, little endian 4 for RHEL 8 ppc64le
- Red Hat OpenShift Data Foundation for IBM Z and LinuxONE 4 for RHEL 8 s390x
Fixes
- BZ - 2135631 - Do not use rook master tag in job template [4.11.z]
- BZ - 2142901 - Disable Liveness container in csi pods
- BZ - 2149226 - [Backport to 4.11.z] [GSS] Bucket list operations are failing with 504 Gateway time out
- BZ - 2149676 - [4.11 clone] Alert KubePersistentVolumeInodesFillingUp MON-2802
- BZ - 2149677 - [4.11 clone] CephFS should not report incomplete/incorrect inode info
- BZ - 2151138 - Backingstores enter an AUTH_FAILED state despite being given functioning credentials
- BZ - 2151914 - Update to RHCS 5.3 Ceph container image at ODF-4.11.5
- BZ - 2153675 - [KMS] rook-ceph-osd-prepare pod in CLBO state after deleting rook OSD deployment
- BZ - 2168566 - Include at ODF 4.11 container images the RHEL8 CVE fix on "sudo"
CVEs
(none)
References
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.