RHSA-2021:2041 - Security Advisory
Issued: 2021-05-19
Updated: 2021-05-19
Synopsis
Moderate: Red Hat OpenShift Container Storage 4.7.0 security, bug fix, and enhancement update
Type/Severity
Security Advisory: Moderate
Topic
Updated images which include numerous security fixes, bug fixes, and enhancements are now available for Red Hat OpenShift Container Storage 4.7.0 on Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Description
Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Container Storage is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.
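The S3-compatible multicloud service described above is typically consumed by claiming a bucket through an ObjectBucketClaim. A minimal sketch follows; the claim name and bucket prefix are hypothetical, and the storage class assumes a default NooBaa installation in the openshift-storage namespace:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-obc              # hypothetical claim name
  namespace: openshift-storage
spec:
  generateBucketName: example-bucket   # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io
```

Once the claim is bound, the operator creates a ConfigMap and Secret of the same name carrying the bucket endpoint and S3 credentials for the workload to mount.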
Security Fix(es):
- nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
- kubernetes: Incomplete fix for CVE-2019-11250 allows for token leak in logs when logLevel >= 9 (CVE-2020-8565)
- jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
- nodejs-date-and-time: ReDoS in parsing via date.compile (CVE-2020-26289)
- golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)
- golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)
- NooBaa: noobaa-operator leaking RPC AuthToken into log files (CVE-2021-3528)
- nodejs-yargs-parser: prototype pollution vulnerability (CVE-2020-7608)
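The date.compile issue above (CVE-2020-26289) belongs to the catastrophic-backtracking class of ReDoS bugs: a regex with nested quantifiers takes exponentially longer on inputs that almost match. A minimal Python sketch of the class, using an illustrative pattern rather than the actual nodejs-date-and-time one:

```python
import re
import time

# Illustrative vulnerable pattern (NOT the one from nodejs-date-and-time):
# the nested quantifier (\d+)+ forces the engine to try every way of
# partitioning the digits when the overall match ultimately fails.
VULNERABLE = re.compile(r"^(\d+)+$")

def match_time(payload: str) -> float:
    """Return the wall-clock time spent attempting a match."""
    start = time.perf_counter()
    VULNERABLE.match(payload)
    return time.perf_counter() - start

# An "almost matching" input: digits followed by one non-digit.
# A handful of extra digits multiplies the backtracking work.
fast = match_time("1" * 8 + "x")
slow = match_time("1" * 18 + "x")
print(f"8 digits: {fast:.6f}s, 18 digits: {slow:.6f}s")
```

The fix for this class of bug is to rewrite the pattern without nested quantifiers (or bound the input length before matching), which is what upstream releases of affected packages do.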
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
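Two of the fixes above (CVE-2020-8565 and CVE-2021-3528) address credentials leaking into log output. A minimal Python sketch of the defensive pattern, scrubbing secrets before they reach a log sink rather than relying on log level alone; the regex and class names here are hypothetical illustrations, not the actual NooBaa or Kubernetes fix:

```python
import logging
import re

# Hypothetical redaction rule: mask anything following a bearer-token
# or authtoken marker before it is written to a log sink.
TOKEN_RE = re.compile(r"(Bearer\s+|authtoken=)\S+", re.IGNORECASE)

class RedactSecrets(logging.Filter):
    """Logging filter that masks token values in log messages."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = TOKEN_RE.sub(r"\1[REDACTED]", str(record.msg))
        return True  # keep the record, just with secrets masked

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.addFilter(RedactSecrets())
logger.addHandler(handler)

# The token value never reaches the handler's output.
logger.warning("request failed, header was Authorization: Bearer abc123")
```

Filtering at the handler keeps the redaction in one place regardless of how many call sites log request or response details.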
Bug Fix(es):
This update includes various bug fixes and enhancements. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Container Storage Release Notes for information on the most significant of these changes.
All Red Hat OpenShift Container Storage users are advised to upgrade to these updated images.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
Affected Products
- Red Hat OpenShift Data Foundation 4 for RHEL 8 x86_64
- Red Hat OpenShift Data Foundation for IBM Power, little endian 4 for RHEL 8 ppc64le
- Red Hat OpenShift Data Foundation for IBM Z and LinuxONE 4 for RHEL 8 s390x
Fixes
- BZ - 1803849 - [RFE] Include per volume encryption with Vault integration in RHCS 4.1
- BZ - 1814681 - [RFE] use topologySpreadConstraints to evenly spread OSDs across hosts
- BZ - 1840004 - CVE-2020-7608 nodejs-yargs-parser: prototype pollution vulnerability
- BZ - 1850089 - OBC CRD is outdated and leads to missing columns in get queries
- BZ - 1860594 - Toolbox pod should have toleration for OCS tainted nodes
- BZ - 1861104 - OCS podDisruptionBudget prevents successful OCP upgrades
- BZ - 1861878 - [RFE] use appropriate PDB values for OSD
- BZ - 1866301 - [RHOCS Usability Study][Installation] “Create storage cluster” should be a part of the installation flow or need to be emphasized as a crucial step.
- BZ - 1869406 - must-gather should include historical pod logs
- BZ - 1872730 - [RFE][External mode] Re-configure noobaa to use the updated RGW endpoint from the RHCS cluster
- BZ - 1874367 - "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
- BZ - 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
- BZ - 1886112 - log message flood with "Reconciling StorageCluster","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster"
- BZ - 1886416 - Uninstall 4.6: ocs-operator logging regarding noobaa-core PVC needs change
- BZ - 1886638 - CVE-2020-8565 kubernetes: Incomplete fix for CVE-2019-11250 allows for token leak in logs when logLevel >= 9
- BZ - 1888839 - Create public route for ceph-rgw service
- BZ - 1892622 - [GSS] Noobaa management dashboard reporting High number of issues when the cluster is in healthy state
- BZ - 1893611 - Skip ceph commands collection attempt if must-gather helper pod is not created
- BZ - 1893613 - must-gather tries to collect ceph commands in external mode when storagecluster already deleted
- BZ - 1893619 - OCS must-gather: Inspect errors for cephobjectstoreUser and a few ceph commands when storage cluster does not exist
- BZ - 1894412 - [RFE][External] RGW metrics should be made available even if anything else except 9283 is provided as the monitoring-endpoint-port
- BZ - 1896338 - OCS upgrade from 4.6 to 4.7 build failed
- BZ - 1897246 - OCS - ceph historical logs collection
- BZ - 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
- BZ - 1898509 - [Tracker][RHV #1899565] Deployment on RHV/oVirt storage class ovirt-csi-sc failing
- BZ - 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
- BZ - 1898808 - Rook-Ceph crash collector pod should not run on non-ocs node
- BZ - 1900711 - [RFE] Alerting for Namespace buckets and resources
- BZ - 1900722 - Failed to init upgrade process on noobaa-core-0
- BZ - 1900749 - Namespace Resource reported as Healthy when target bucket deleted
- BZ - 1900760 - RPC call for Namespace resource creation allows invalid target bucket names
- BZ - 1901134 - OCS - ceph historical logs collection
- BZ - 1902192 - [RFE][External] RGW metrics should be made available even if anything else except 9283 is provided as the monitoring-endpoint-port
- BZ - 1902685 - Too strict Content-Length header check refuses valid upload requests
- BZ - 1902711 - Tracker for Bug #1903078 Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
- BZ - 1903973 - [Azure][ROKS] Set SSD tuning (tuneFastDeviceClass) as default for OSD devices in Azure/ROKS platform
- BZ - 1903975 - Add "ceph df detail" for ocs must-gather to enable support to debug compression
- BZ - 1904302 - [GSS] ceph_daemon label includes references to a replaced OSD that cause a prometheus ruleset to fail
- BZ - 1904929 - [GSS][RFE] Reduce debug level for logs of Noobaa Endpoint pod
- BZ - 1907318 - Unable to deploy & upgrade to ocs 4.7 - missing postgres image reference
- BZ - 1908414 - [GSS][VMWare][ROKS] rgw pods are not showing up in OCS 4.5 - due to pg_limit issue
- BZ - 1908678 - ocs-osd-removal job failed with "Invalid value" error when using multiple ids
- BZ - 1909268 - OCS 4.7 UI install -All OCS operator pods respin after storagecluster creation
- BZ - 1909488 - [NooBaa CLI] CLI status command looks for wrong DB PV name
- BZ - 1909745 - pv-pool backing store name restriction should be at 43 characters
- BZ - 1910705 - OBCs are stuck in a Pending state
- BZ - 1911131 - Bucket stats in the NB dashboard are incorrect
- BZ - 1911266 - Backingstore phase is ready, modecode is INITIALIZING
- BZ - 1911627 - CVE-2020-26289 nodejs-date-and-time: ReDoS in parsing via date.compile
- BZ - 1911789 - Data deduplication does not work properly
- BZ - 1912421 - [RFE] noobaa cli allow the creation of BackingStores with already existing secrets
- BZ - 1912894 - OCS storagecluster is Progressing state and some noobaa pods missing with latest 4.7 build -4.7.0-223.ci and storagecluster reflected as 4.8.0 instead of 4.7.0
- BZ - 1913149 - make must-gather backward compatibility for version <4.6
- BZ - 1913357 - ocs-operator should show error when flexible scaling and arbiter are both enabled at the same time
- BZ - 1914132 - No metrics available in the Object Service Dashboard in OCS 4.7, logs show "failed to retrieve metrics exporter servicemonitor"
- BZ - 1914159 - When OCS was deployed using arbiter mode, mons go into CLBO state, ceph version = 14.2.11-95
- BZ - 1914215 - must-gather fails to delete the completed state compute-xx-debug pods after successful completion
- BZ - 1915111 - OCS OSD selection algorithm is making some strange choices.
- BZ - 1915261 - Deleted MCG CRs are stuck in a 'Deleting' state
- BZ - 1915445 - Uninstall 4.7: Storagecluster deletion stuck on a partially created KMS enabled OCS cluster + support TLS configuration for KMS
- BZ - 1915644 - update noobaa db label in must-gather to collect db pod in noobaa dir
- BZ - 1915698 - There is missing noobaa-core-0 pod after upgrade from OCS 4.6 to OCS 4.7
- BZ - 1915706 - [Azure][RBD] PV taking longer time ~ 9 minutes to get deleted
- BZ - 1915730 - [ocs-operator] Create public route for ceph-rgw service
- BZ - 1915737 - Improve ocs-operator logging during uninstall to be more verbose, to understand reasons for failures - e.g. for Bug 1915445
- BZ - 1915758 - improve noobaa logging in case of uninstall - logs do not specify clearly the resource on which deletion is stuck
- BZ - 1915807 - Arbiter: OCS Install failed when used label = topology.kubernetes.io/zone instead of deprecated failureDomain label
- BZ - 1915851 - OCS PodDisruptionBudget redesign for OSDs to allow multiple nodes to drain in the same failure domain
- BZ - 1915953 - Must-gather takes hours to complete if the OCS cluster is not fully deployed, delay seen in ceph command collection step
- BZ - 1916850 - Uninstall 4.7- rook: Storagecluster deletion stuck on a partially created KMS enabled OCS cluster(OSD creation failed)
- BZ - 1917253 - Restore-pvc creation fails with error "csi-vol-* has unsupported quota"
- BZ - 1917815 - [IBM Z and Power] OSD pods restarting due to OOM during upgrade test using ocs-ci
- BZ - 1918360 - collect timestamp for must-gather commands and also the total time taken for must-gather to complete
- BZ - 1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
- BZ - 1918925 - noobaa operator pod logs messages for other components - like rook-ceph-mon, csi-pods, new Storageclass, etc
- BZ - 1918938 - ocs-operator has Error logs with "unable to deploy Prometheus rules"
- BZ - 1919967 - MCG RPC calls time out and the system is unresponsive
- BZ - 1920202 - RGW pod did not get created when OCS was deployed using arbiter mode
- BZ - 1920498 - [IBM Z] OSDs are OOM killed and storage cluster goes into error state during ocs-ci tier1 pvc expansion tests
- BZ - 1920507 - Creation of cephblockpool with compression failed on timeout
- BZ - 1921521 - Add support for VAULT_SKIP_VERIFY option in Ceph-CSI
- BZ - 1921540 - RBD PVC creation fails with error "invalid encryption kms configuration: "POD_NAMESPACE" is not set"
- BZ - 1921609 - MongoNetworkError messages in noobaa-core logs
- BZ - 1921625 - 'Not Found: Secret "noobaa-root-master-key" message' in noobaa logs and cli output when kms is configured
- BZ - 1922064 - uninstall on VMware LSO+ arbiter with 4 OSDs in Pending state: Storagecluster deletion stuck, waiting for cephcluster to be deleted
- BZ - 1922108 - OCS 4.7 4.7.0-242.ci and beyond: osd pods are not created
- BZ - 1922113 - noobaa-db pod init container is crashing after OCS upgrade from OCS 4.6 to OCS 4.7
- BZ - 1922119 - PVC snapshot creation failing on OCP4.6-OCS 4.7 cluster
- BZ - 1922421 - [ROKS] OCS deployment stuck at mon pod in pending state
- BZ - 1922954 - [IBM Z] OCS: Failed tests because of osd deviceset restarts
- BZ - 1924185 - Object Service Dashboard shows alerts related to "system-internal-storage-pool" in OCS 4.7
- BZ - 1924211 - 4.7.0-249.ci: RGW pod not deployed, rook logs show - failed to create object store "must be no more than 63 characters"
- BZ - 1924634 - MG terminal logs show `pods "compute-x-debug" not found` even though pods are in Running state
- BZ - 1924784 - RBD PVC creation fails with error "invalid encryption kms configuration: failed to parse kms configuration"
- BZ - 1924792 - RBD PVC creation fails with error "invalid encryption kms configuration: failed to parse kms configuration"
- BZ - 1925055 - OSD pod stuck in Init:CrashLoopBackOff following Node maintenance in OCP upgrade from OCP 4.7 to 4.7 nightly
- BZ - 1925179 - MG fix [continuation from bug 1893619]: Do not attempt creating helper pod if storagecluster/cephcluster already deleted
- BZ - 1925249 - KMS resources should be garbage collected when StorageCluster is deleted
- BZ - 1925533 - [GSS] Unable to install Noobaa in AWS govcloud
- BZ - 1926182 - [RFE] Support disabling reconciliation of monitoring related resources using a dedicated reconcile strategy flag
- BZ - 1926617 - osds are in Init:CrashLoopBackOff with rgw in CrashLoopBackOff on KMS enabled cluster
- BZ - 1926717 - Only one NOOBAA_ROOT_SECRET_PATH key created in vault when the same backend path is used for multiple OCS clusters
- BZ - 1926831 - [IBM][ROKS] Deploy RGW pods only if IBM COS is not available on platform
- BZ - 1927128 - [Tracker for BZ #1937088] When Performed add capacity over arbiter mode cluster ceph health reports PG_AVAILABILITY Reduced data availability: 25 pgs inactive, 25 pgs incomplete
- BZ - 1927138 - must-gather skip collection of ceph in every run
- BZ - 1927186 - Configure pv-pool as backing store if cos creds secret not found in IBM Cloud
- BZ - 1927317 - [Arbiter] Storage Cluster installation did not start because ocs-operator was expecting 8 nodes but found 4
- BZ - 1927330 - Namespacestore-backed OBCs are stuck on Pending
- BZ - 1927338 - Uninstall OCS: Include events for major CRs to know the cause of deletion getting stuck
- BZ - 1927885 - OCS 4.7: ocs operator pod in 1/1 state even when Storagecluster is in Progressing state
- BZ - 1928063 - For FD: rack: actual osd pod distribution and OSD placement in rack under ceph osd tree output do not match
- BZ - 1928451 - MCG CLI command of diagnose doesn't work on windows
- BZ - 1928471 - [Deployment blocker] Ceph OSDs do not register properly in the CRUSH map
- BZ - 1928487 - MCG CLI - noobaa ui command shows wss instead of https
- BZ - 1928642 - [IBM Z] rook-ceph-rgw pods restart continuously with ocs version 4.6.3 due to liveness probe failure
- BZ - 1931191 - Backing/namespacestores are stuck on Creating with credentials errors
- BZ - 1931810 - LSO deployment(flexibleScaling:true): 100% PGS unknown even though ceph osd tree placement is correct(root cause diff from bug 1928471)
- BZ - 1931839 - OSD in state init:CrashLoopBackOff with KMS signed certificates
- BZ - 1932400 - Namespacestore deletion takes 15 minutes
- BZ - 1933607 - Prevent reconcile of labels on all monitoring resources deployed by ocs-operator
- BZ - 1933609 - Prevent reconcile of labels on all monitoring resources deployed by rook
- BZ - 1933736 - Allow shrinking the cluster by removing OSDs
- BZ - 1934000 - Improve error logging for kv-v2 while using encryption with KMS
- BZ - 1934990 - Ceph health ERR post node drain on KMS encryption enabled cluster
- BZ - 1935342 - [RFE] Add OSD flapping alert
- BZ - 1936545 - [Tracker for BZ #1938669] setuid and setgid file bits are not retained after a OCS CephFS CSI restore
- BZ - 1936877 - Include in the OCS Multi-Cloud Object Gateway core container image the RHEL 8 fixes for "nodejs" CVEs
- BZ - 1937070 - Storage cluster cannot be uninstalled when cluster not fully configured
- BZ - 1937100 - [RGW][notification][kafka]: notification fails with error: pubsub endpoint configuration error: unknown schema in: kafka
- BZ - 1937245 - csi-cephfsplugin pods CrashLoopBackoff in fresh 4.6 cluster due to conflict with kube-rbac-proxy
- BZ - 1937768 - OBC with Cache BucketPolicy stuck on pending
- BZ - 1939026 - ServiceUnavailable when calling the CreateBucket operation (reached max retries: 4): Reduce your request rate
- BZ - 1939472 - Failure domain set incorrectly to zone if flexible scaling is enabled but there are >= 3 zones
- BZ - 1939617 - [Arbiter] Mons cannot be failed over in stretch mode
- BZ - 1940440 - noobaa migration pod is deleted on failure and logs are not available for inspection
- BZ - 1940476 - Backingstore deletion hangs
- BZ - 1940957 - Deletion of Rejected NamespaceStore is stuck even when target bucket and bucketclass are deleted
- BZ - 1941647 - OCS deployment fails when no backend path is specified for cluster wide encryption using KMS
- BZ - 1941977 - rook-ceph-osd-X gets stuck in initcontainer expand-encrypted-bluefs
- BZ - 1942344 - No permissions in /etc/passwd leads to fail noobaa-operator
- BZ - 1942350 - No permissions in /etc/passwd leads to fail noobaa-operator
- BZ - 1942519 - MCG should not use KMS to store encryption keys if cluster wide encryption is not enabled using KMS
- BZ - 1943275 - OSD pods re-spun after "add capacity" on cluster with KMS
- BZ - 1943596 - [Tracker for BZ #1944611][Arbiter] When Performed zone(zone=a) Power off and Power On, 3 mon pod(zone=b,c) goes in CLBO after node Power off and 2 Osd(zone=a) goes in CLBO after node Power on
- BZ - 1944980 - Noobaa deployment fails when no KMS backend path is provided during storagecluster creation
- BZ - 1946592 - [Arbiter] When both the rgw pod hosting nodes are down, the rgw service is unavailable
- BZ - 1946837 - OCS 4.7 Arbiter Mode Cluster becomes stuck when entire zone is shutdown
- BZ - 1955328 - Upgrade of noobaa DB failed when upgrading OCS 4.6 to 4.7
- BZ - 1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log files
- BZ - 1957187 - Update to RHCS 4.2z1 Ceph container image at OCS 4.7.0
- BZ - 1957639 - Noobaa migrate job is failing when upgrading OCS 4.6.4 to 4.7 on FIPS environment
CVEs
- CVE-2020-7608
- CVE-2020-7774
- CVE-2020-8565
- CVE-2020-26160
- CVE-2020-26289
- CVE-2020-28362
- CVE-2021-3114
- CVE-2021-3528
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.