Issued: 2020-09-15
Updated: 2020-09-15
RHBA-2020:3754 - Bug Fix Advisory
Synopsis
Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update
Type/Severity
Bug Fix Advisory
Topic
Updated OpenShift Container Storage 4.5 packages that add various enhancements and fix several bugs are now available.
Description
Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Container Storage is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.
These updated packages include numerous bug fixes and enhancements. Users are directed to the Red Hat OpenShift Container Storage 4.5 Release Notes for information on the most significant of these changes.
All Red Hat OpenShift Container Storage users are advised to upgrade to these updated packages.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to the update instructions in the Red Hat Customer Portal.
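As a rough sketch (not part of the official advisory text), verifying the installed OCS operator build and applying a pending update through the Operator Lifecycle Manager might look like the following. It assumes the default `openshift-storage` namespace and a manual-approval subscription; resource names will differ per cluster:

```shell
# Sketch only: check the installed OCS operator version and apply a
# pending errata update via OLM. Assumes the default openshift-storage
# namespace; adjust names to match your cluster.

# Show the installed ClusterServiceVersion (operator build) for OCS
oc get csv -n openshift-storage

# Inspect the Subscription that tracks the OCS update channel
oc get subscription -n openshift-storage

# List InstallPlans; a plan with APPROVED=false is a pending update
oc get installplan -n openshift-storage

# Approve a pending InstallPlan to apply the update
# (substitute the actual InstallPlan name from the listing above)
# oc patch installplan <installplan-name> -n openshift-storage \
#     --type merge -p '{"spec":{"approved":true}}'
```

With automatic approval on the subscription, the InstallPlan step is unnecessary and the operator updates as soon as the new build is published to the channel.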
Affected Products
- Red Hat OpenShift Data Foundation 4 for RHEL 8 x86_64
Fixes
- BZ - 1798571 - lib-bucket-provisioner is labeled as a community operator - a problem for disconnected environments
- BZ - 1801365 - [RFE] (for IBM) expose the Rook ROOK_CSI_KUBELET_DIR_PATH variable in the OCS operator
- BZ - 1806953 - [RFE]Collect rbd and ceph fs outputs as part of must-gather
- BZ - 1809545 - [GSS] rook-ceph-rgw-ocs-storagecluster-cephobjectstore pod segfaults when resharding starts
- BZ - 1810022 - [BAREMETAL] ocs-storagecluster did not get deployed from UI
- BZ - 1811323 - [RFE] OCS independent Mode Multi-tenancy. Add the option to use a non-admin ceph key to connect to a external ceph cluster in rook
- BZ - 1811709 - [NooBaa UI] onRelease causes some dialog boxes to close
- BZ - 1814771 - MCG PVC BackingStore creation dialog doesn't have a minimum size
- BZ - 1816492 - [BAREMETAL] rook-ceph-mgr pod restarted with assert message
- BZ - 1818736 - UI crashed after cluster reaching 85% capacity
- BZ - 1819549 - [GSS] noobaa StatefulSet doesn't inherit cluster-wide proxy settings and fails to create first.bucket on VMware
- BZ - 1820297 - noobaa-operator reports panic on creating an invalid Backingstore: Provider :PVC with incorrect SC name
- BZ - 1820643 - Pod created from cephfs pvc failed to reach Running state
- BZ - 1823403 - Port 9091 has to be made configurable in OCS
- BZ - 1823409 - OSD needs to recognize and support multipath
- BZ - 1823886 - collect oc get subscription as part of ocs must-gather
- BZ - 1824962 - Azure: OCP 4.4: Create Storage Cluster on Azure from OperatorHub uses "null" for storageclass
- BZ - 1825994 - [Azure] RGW pods should not be started in cloud deployments like Azure, GCP
- BZ - 1827317 - [NooBaa] BackingStore state doesn't update when removing access, only updated once data is tried to be written
- BZ - 1832889 - [VMWare] Upgrade from OCS 4.4 to OCS 4.5 does not upgrade osd pods
- BZ - 1833297 - Remove CephClient from the UI of OCS Operator
- BZ - 1833875 - [RFE] OCS must-gather should collect PVC information from all namespaces
- BZ - 1833880 - [RFE] OCS must-gather should collect noobaa obc list in raw output
- BZ - 1834301 - The OCS 4.5 build is not deployable as rook-ceph-detect-version is in loop of creating/terminating
- BZ - 1834327 - Upgrade from OCS 4.4 to OCS 4.5 does not upgrade csi-* pods
- BZ - 1834939 - rook-ceph-crashcollector went into CLBO when localvolume sym link is deleted
- BZ - 1835636 - [OCS YAML + UI] Kubernetes Pool node limit isn't communicated/enforced
- BZ - 1835908 - If OCS node with co-located OSD+operator pod is powered off, time taken for operator pod to force delete the OSD ~45 mins (Bug 1830015#c21)
- BZ - 1836359 - Update OSD requests and limits
- BZ - 1838621 - [NooBaa] S3 commands fail (InternalError)
- BZ - 1839117 - Storagecluster in Progressing state with "noobaInitializing" message in OCS 4.3 installed from OCS 4.4 registry
- BZ - 1840084 - The operator should delete the unused PVCs for mons
- BZ - 1840539 - Node replacement on LSO: Stale leftover hostname entry should also be removed from ceph osd tree post node replacement(ceph crush map)
- BZ - 1840729 - Upgrade of the OSDs taking much longer time, looks like one OSD per 10 mins.
- BZ - 1841322 - [GSS] [RFE] NooBaa Operator unable to function with FIPS enabled
- BZ - 1841461 - [VMWare] [Dynamic] ocs-osd-removal job does not clean up the entry from ceph osd tree during disk replacement
- BZ - 1842233 - [NooBaa Login] User is led to account-based login form instead of SSO
- BZ - 1845872 - The details card under Persistent Storage tab shows mode as 'Converged' for Independent mode
- BZ - 1846085 - Updates don't work on StorageClass which will keep PV expansion disabled for upgraded cluster
- BZ - 1846095 - During OCS upgrade: No wait time enforced between OSD pod respins in case there is change only in ROOK_CEPH_IMAGE and not CEPH_IMAGE
- BZ - 1846219 - [PVPool] Cannot create PVPool via OpenShift console (spec.pvPool.secret)
- BZ - 1846759 - noobaa stores aws access key and secret in log file
- BZ - 1847099 - [NooBaa] S3 sync fails (Bad Gateway)
- BZ - 1847815 - [NooBaa] Backingstore events aren't using correct types
- BZ - 1847875 - CephObjectStoreUser CRs are stuck in a "Created" phase in independent mode
- BZ - 1847943 - aws-s3-provisioner is getting created as part of OCS
- BZ - 1847973 - Add more ceph debug commands
- BZ - 1847980 - [OCS 4.5] ocs-must-gather missing in relatedImages section of OCS operator CSV
- BZ - 1848387 - Independent mode OCS 4.5(v4.5.0-449): CSV is stuck in Installing state and never reached Succeeded state
- BZ - 1848867 - [GCP] While deploying OCS on GCP, the RGW pods should not be deployed
- BZ - 1849105 - Default storageclasses, ocs node labels and ocs node taints are not removed when StorageCluster is deleted
- BZ - 1849717 - Capacity alerts for PVCs created by OCS are not available for all namespaces
- BZ - 1850334 - noobaa-default-backing-store-noobaa-pod stuck in Pending State due to PVC issue in OCS 4.5.0-460.ci build
- BZ - 1850704 - Independent Mode: The ceph-external-cluster-details-exporter.py generates the json output even with incorrect rgw-endpoint IP address
- BZ - 1850944 - OCS 4.5 : [vSphere]: RGW pod is not deployed in vSphere environment
- BZ - 1851328 - [AWS/VSPHERE]: ocs-operator.v4.5.0-463 is in Pending state in Latest OCP nightly builds
- BZ - 1851697 - NooBaa SCC is overriding the anyuid one
- BZ - 1852852 - OCS 4.x: OSD logging is not enabled to write to stderr as with all other daemons
- BZ - 1854651 - Converged Mode: ocs-operator in CrashLoopBackOff with empty labelSelector
- BZ - 1854768 - [NooBaa Independent Mode] NB looks for the wrong CephObjectStoreUser secret
- BZ - 1857304 - [Independent Mode]Make --rgw-endpoint an optional parameter in ceph-external-cluster-details-exporter.py script
- BZ - 1857721 - Backingstore creation should reject "Provider :PVC" with "OBC only SC- ocs-storagecluster-ceph-rgw"
- BZ - 1858385 - POD and PVC stay back in terminating state on deleting a UI-based-Backingstore (Provider:PVC)
- BZ - 1858778 - Collect Events, StorageClass and InstallPlan information as part of OCS must-gather in OCS 4.5
- BZ - 1859516 - OCS 4.5 AWS LSO: StorageCluster creation fails with "Invalid:Integer" & deployment fails even after adding the String to the CPU limits(Pending Pods)
- BZ - 1860418 - OCS 4.5 Uninstall: Deleting StorageCluster leaves Noobaa-db PV in Released state(secret not found)
- BZ - 1860431 - OCS 4.5 Uninstall: /var/lib/rook contents are not cleaned Up on initiating StorageCluster Deletion(expected as per Bug 1849532#c5)
- BZ - 1860891 - Deleting csi-cephfsplugin-provisioner pod during PVC deletion leaves behind PV in Released state
- BZ - 1860939 - [BAREMETAL] When OCP cluster was upgrade ceph status reported 2 daemons have recently crashed
- BZ - 1861364 - Update to 4.4.2 build has not started on the OCP env with OLM fix
- BZ - 1862133 - [Independent Mode]Make MDS pool and filesystem detection optional in ceph-external-cluster-details-exporter.py script
- BZ - 1862449 - [Independent Mode]Remove validation for RGW Endpoint in UI creation of external Storagecluster (linked to Bug 1857304)
- BZ - 1862755 - In a cluster behind a proxy, some backingstores fail to communicate with their target buckets
- BZ - 1866155 - [Clone][Independent Mode]Make MDS pool and filesystem detection optional in OCS Storagecluster creation
- BZ - 1867092 - After 2 OCS node shutdown, both provisioner pods running on a same worker node
- BZ - 1867130 - [External] Provisioning fails for OCS PVCs when the MON leader is down on the RHCS cluster
- BZ - 1867762 - After OCP upgrade, user created backingstores with provider: PVC went to rejected state
- BZ - 1867885 - Fix CVP test failures by removing invalid namespace fields in CSV
- BZ - 1868646 - With pre-configured pv-pools before OCS upgrade, noobaa-operator pod reports panic and is in CLBO post upgrade to 4.5-rc1
- BZ - 1869330 - Deletion of PVC while performing Ceph/OCS pod deletion leaves behind PV in Released state
- BZ - 1871408 - Noobaa falsely reports AWS endpoint as having IO_ERRORS if its name doesn't contain "amazonaws.com"
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.