RHSA-2024:1383 - Security Advisory
Issued: 2024-03-19
Updated: 2024-03-19
Synopsis
Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update
Type/Severity
Security Advisory: Important
Topic
Updated packages that include numerous enhancements and bug fixes are now available for Red Hat OpenShift Data Foundation 4.15.0 on Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Description
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. It provides highly scalable, production-grade persistent storage for stateful applications running in Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3-compatible API.
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
These updated packages include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes.
All Red Hat OpenShift Data Foundation users are advised to upgrade to these packages that provide these bug fixes and enhancements.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
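In practice, ODF updates are delivered through the Operator Lifecycle Manager. A representative Subscription manifest for the stable-4.15 channel is sketched below; the catalog source, namespace, and approval mode are assumed defaults and should be verified against your environment before use.

```yaml
# Illustrative OLM Subscription for the ODF operator (assumed defaults:
# redhat-operators catalog in openshift-marketplace, openshift-storage namespace).
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.15
  name: odf-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```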
Affected Products
- Red Hat OpenShift Data Foundation 4 for RHEL 9 x86_64
- Red Hat OpenShift Data Foundation for IBM Power, little endian 4 for RHEL 9 ppc64le
- Red Hat OpenShift Data Foundation for IBM Z and LinuxONE 4 for RHEL 9 s390x
- Red Hat OpenShift Data Foundation for RHEL 9 ARM 4 aarch64
Fixes
- BZ - 2005835 - [DR] VRClass resource should not be editable after creation
- BZ - 2022467 - [RFE] enable distributed ephemeral pins on "csi" subvolume group
- BZ - 2126028 - exporter python script does not support IPV6
- BZ - 2130266 - [RFE] Deploy two MGR PODs one active and one standby
- BZ - 2151493 - [OCP Tracker] [RDR] A VR isn't marked secondary and cleanup of primary cluster remains stuck after Failover operation
- BZ - 2165128 - [RDR] DRPolicy creation is restricted if 2 different sub-versions of ODF are installed on the Managed Clusters
- BZ - 2165907 - [GSS] Noobaa-db pod is stuck in CLBO
- BZ - 2196858 - [RFE] Add role capability in rados user in ODF via rook
- BZ - 2207925 - [RDR] When performing cross cluster failover of 2 apps cleanup of app-1 is stuck and failover of app-2 is stuck
- BZ - 2208302 - ocs-metrics-exporter cannot list/watch persistentvolume
- BZ - 2209616 - test_rgw_kafka_notifications test fails with "RGW bucket notification is not working as expected." Error
- BZ - 2210970 - Data Foundation - i18n misses
- BZ - 2213885 - [RDR] [UI] When workloads are deleted via ACM UI, DRPC isn't deleted
- BZ - 2222254 - [RDR] [Tracker for Ceph BZ 2248719] Ceph reports mgr crash with Module 'devicehealth' has failed: disk I/O error
- BZ - 2228785 - [ODF][OBC] Provisioning of Radosgw OBCs with the same bucket name is successful
- BZ - 2229670 - All the Controller Operations should reach the one Controller (active) not multiple Controllers
- BZ - 2231076 - ocs-operator crashes due to a panic cause by Nil pointer dereference
- BZ - 2231860 - [Tracker] pod noobaa-db-pg-0 CrashLoopBackOff
- BZ - 2233010 - [RDR] Replace existing OSDs to use bluestore-rdr store
- BZ - 2234479 - RFE: automation around OSD removal to avoid Data Loss
- BZ - 2236384 - [RDR] When workload is deployed but progression isn't marked as Completed, both Failover & Relocate are allowed
- BZ - 2236400 - [RDR][CEPHFS] When RDR upgrade is performed volSync values are getting removed from ramen-hub configmap
- BZ - 2237427 - [MCG] Patching a replication-policy onto a used bucketclass fails with "invalid input syntax for type json "
- BZ - 2237895 - ocs-storagecluster-cephblockpool contains Raw capacity card
- BZ - 2237903 - Noobaa fails to use the new internal cert after rotation
- BZ - 2237920 - NFS-Ganesha server does not have a health checking mechanism implemented
- BZ - 2239208 - CephFS PVC Creation Fails Provision with size 500M
- BZ - 2239590 - [RDR] DRPC reports wrong Phase/Progression as Deployed/Completed the moment failover is triggered
- BZ - 2239608 - ocs-client-operator fails to reconcile CSI
- BZ - 2240756 - [UI] ODF Topology. Node details of Compact mode. Role merged to one word
- BZ - 2240908 - [RDR] [UI] DR UI allows Relocate and Failover to same peer causing the Failover to get stuck in Wait_For_Fencing progression
- BZ - 2241268 - Enabling ceph mirroring enables bluestore-rdr and OSDs fail to recreate
- BZ - 2241872 - [Tracker][27770] ocs-metrics-exporter is in CrashLoopBackOff when running in Provider/Client mode
- BZ - 2242309 - [RDR] Apps don't get DR protected when drpolicy gets validated after applying it to those apps
- BZ - 2244568 - [noobaa-operator clone] Redundant dependency on Vault library - should be removed
- BZ - 2244569 - [csi-driver clone] Redundant dependency on Vault library - should be removed
- BZ - 2244570 - [mco clone] Redundant dependency on Vault library - should be removed
- BZ - 2245004 - rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a does not come up after node reboot
- BZ - 2246084 - [RDR] [Hub recovery] Failover doesn't complete
- BZ - 2246993 - StorageProfile CRs are mutable upon creation
- BZ - 2247094 - [IBM Z ] - [ODF deployment] - [multipath device] - StorageSystem creation from webUI using local storage devices does not select multipath devices by default
- BZ - 2247313 - rook-ceph-osd-prepare pods in CLBO during Installation
- BZ - 2247518 - [RDR] [Hub recovery] If the primary managed cluster along with active hub goes down, VolumeSynchronizationDelay alert is not fired on Passive Hub even when monitoring label is applied
- BZ - 2247542 - [RDR] [Hub-recovery] Failover didn't succeed for cephfs backed workloads
- BZ - 2247714 - [RDR][Hub Recovery] Fixing DRPC after hub recovery for failover/relocate can lead to data loss
- BZ - 2247731 - [GSS] Noobaa Operator Does not update Secret with FIPS enabled
- BZ - 2247743 - Ensure provider api server allows empty platform and operator version
- BZ - 2247748 - [Tracker][29042] The storage-client CronJob create too many jobs
- BZ - 2248117 - Backup failing to transfer data: snapshot PVC in pending state
- BZ - 2248664 - [RDR] [Hub recovery] A workload which was in failedover state before hub recovery goes to cleaning up on passive hub
- BZ - 2248666 - [RDR] [Hub recovery] DRCluster post secret creation goes to exponential backoff and takes too long to get validated
- BZ - 2248684 - metric ocs_storage_client_operator_version is returning -1
- BZ - 2248832 - Revert The ocs-osd-removal-job should have unique names for each Job
- BZ - 2249678 - the multus network address detection job does not derive placement configs from CephCluster "all" placement
- BZ - 2249844 - [CEPH bug 2249814 tracker] Health Warn after to upgrade to 4.13.5-6 - 1 daemons have recently crashed
- BZ - 2250092 - Unsuccessful retrieval of a data pool isn't properly captured
- BZ - 2250152 - [RDR] [Hub recovery] Sync for all cephfs workloads stopped post hub recovery
- BZ - 2250636 - Do not create Virtualization storage class in Provider mode
- BZ - 2250911 - [UI] [OCPBUGS-25881 Tracker] In Openshift-storage-client namespace, 'RWX' access mode RBD PVC with Volume mode 'Filesystem' is not blocked, it attempt to create and stuck in pending state
- BZ - 2250995 - [Tracker][29079] rook-ceph-exporter pod restarts multiple times on a fresh installed HCI cluster
- BZ - 2251741 - MCG endpoint minCount and maxCount in ocs-storagecluster are ignored
- BZ - 2252035 - Discrepancy shown when viewing storage metrics - (but product can continue to work)
- BZ - 2252756 - [RDR] [Hub recovery] [CephFS] volumes are lost on the secondary managed cluster after hub recovery
- BZ - 2253185 - [GSS] During installation of ODF 4.14 'rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a pod' stuck in CrashLoopBackOff state
- BZ - 2253257 - [RDR] DR cluster operator goes to failed state after adding the ODF cluster to AHCM
- BZ - 2253953 - noobaa-core-0 in CLBO in disconnected deployment mode
- BZ - 2254159 - odf prep script for odf-external doesn't honour the --run-as-user parameter
- BZ - 2254216 - [Provider-Client deployment] storageclient-status-reporter CLBO. storageclass claims stuck in configuring
- BZ - 2254330 - [UI] No Backing store elements create btn, list, other, OCP 4.15, ODF 4.14
- BZ - 2254333 - [UI] ceph blockpool page crushed OCP 4.15, ODF 4.14
- BZ - 2254513 - [Eng. Enhancement] Create a separate go module for ocs-operator APIs
- BZ - 2255036 - External storage system stats on data foundation page shows only zeros when multi storagecluster installed
- BZ - 2255194 - Provide subscription based application details also in the dr dashboard
- BZ - 2255219 - UI re-installation of external mode in multi storagecluster fails because the namespace exists
- BZ - 2255232 - rook-ceph-rgw-ocs-storagecluster-cephobjectstore warning when deployed on control-plane
- BZ - 2255240 - Must gather in 4.15 is not collected in 35 mins
- BZ - 2255241 - Upgrade to 4.15 is blocked
- BZ - 2255310 - odf-operator should update its upgradeable conditions based on the dependents
- BZ - 2255320 - remove incompatible provider api changes for onboarding clients
- BZ - 2255328 - Rook-ceph-exporter pods not available in ODF 4.15
- BZ - 2255332 - [DR] UI needs textual updates (for 4.15 epics)
- BZ - 2255333 - [ODF] UI needs textual updates (for 4.15 epics)
- BZ - 2255340 - Not Found error is shown in Configure modal for CephMonCountLow Alert
- BZ - 2255343 - External Storage cluster(ocs-external-storagecluster) showing in error state
- BZ - 2255411 - Separate out the go.mod for the API package to reduce dependency problems
- BZ - 2255491 - Add 'managedBy' label to rook-ceph-exporter metrics and alerts
- BZ - 2255499 - [4.15 backport][ocs-operator] Backport fusion hci phase 2 related changes from main to 4.15 branch
- BZ - 2255501 - [4.15 backport][ocs-client-operator] Backport fusion hci phase 2 related changes from main to 4.15 branch
- BZ - 2255508 - [4.15 backport][odf-console] Backport fusion hci phase 2 related changes from main to 4.15 branch
- BZ - 2255557 - Noobaa stuck configuring when deploying on IBM Cloud(IPI) with COS-backed backingstore
- BZ - 2255586 - Noobaa backingstore noobaa-default-backing-store-noobaa-noobaa not found
- BZ - 2255890 - [4.15 backport][odf-operator] Backport fusion hci phase 2 related changes from main to 4.15 branch
- BZ - 2256085 - [UI] Styles (CSS) are broken at certain places
- BZ - 2256161 - MDS pod in CrashLoopBackOff due to Liveness probe failure
- BZ - 2256456 - Provide better logging for ocs-metrics-exporter
- BZ - 2256566 - [UI] Setting up the Default StorageClass option as RBD is not working with an external mode cluster.
- BZ - 2256580 - PVC label selector should not recommend labels that don't work, they should be filtered
- BZ - 2256597 - Ceph reports 'MDSs report oversized cache' warning, yet there is no observed alert for high MDS cache usage
- BZ - 2256633 - [UI][MDR] Application showing wrong status of protected PVC but main dashboard showing healthy
- BZ - 2256637 - [Tracker: Bug #2256731] change priority of mds rss perf counter to useful
- BZ - 2256725 - The MDSCacheUsageHigh alert is lacking a call to action
- BZ - 2256777 - [provider] During regression testing RGW pod rook-ceph-rgw-ocs-storagecluster-cephobjectstore on provider went to CLBO state
- BZ - 2257222 - [RDR] Optimise cluster for RDR dialog fails with error msg
- BZ - 2257296 - NFS-Ganesha RPC liveness probe is not properly enabled
- BZ - 2257310 - The MDSCPUUsageHigh alert is lacking a call to action
- BZ - 2257427 - After deletion of onboarding-ticket-key the onboarding ticket generation job did not re-trigger
- BZ - 2257441 - [UI] System raw capacity card is not showing external mode StorageSystem
- BZ - 2257634 - Add Runbooks for ODF alerts - no links, wrong text, sublinks not work
- BZ - 2257674 - rook-ceph operator don't have permissions to create ServiceMonitor on external namespaces
- BZ - 2257694 - multicluster mode ocs-metrics-exporter issues: unable to collect PV data or to get CSI config
- BZ - 2257711 - ocs-metrics-exporter is not updated when ocs-operator is upgraded
- BZ - 2257982 - Noobaa system failed to create using external postgres - db_url link is not correct
- BZ - 2258015 - Client onboarding token generation from UI is not working for upgraded provider clusters
- BZ - 2258021 - OSD cpu overutilization alert is raised during normal fio workloads
- BZ - 2258351 - [RDR] [Hub recovery] [Co-situated] Failover did not progress after site failure
- BZ - 2258357 - Storagecluster is in warning state on IBM Power cluster due to "no active mgr" error
- BZ - 2258560 - [RDR] [Hub recovery] [Co-situated] MCO didn't create VRCs after hub recovery which hinders failover operation
- BZ - 2258591 - Error 'Invalid image health, "", for pool ocs-storagecluster-cephblockpool' in ocs-metrics-exporter logs
- BZ - 2258681 - Backingstore Reconciliation happening over 2000 times an hour
- BZ - 2258744 - odf-cli tool - ceph does not have option to set log-level in release-4.15
- BZ - 2258814 - Missing initial mon count value in "CephMonLowNumber" configure modal
- BZ - 2258937 - [Provider-Client] Additional subvolumegroups created on provider cluster
- BZ - 2258974 - [Tracker for https://github.ibm.com/ProjectAbell/abell-tracking/issues/31423?] Even after DF client installed on Hosted cluster , connection :- Unknown and storage status :- Installing (Upgraded ODF)
- BZ - 2259187 - [RDR] Osd migration has halted in between
- BZ - 2259476 - Missing initial mon count value in "CephMonLowNumber" configure modal
- BZ - 2259632 - ocs-metrics-exporter: ocs_rbd_pv_metadata is missing
- BZ - 2259664 - Static name for cephfs and RBD CSI driver deployed by ocs-client-operator
- BZ - 2259773 - Add translations for ODF 4.15 release
- BZ - 2259852 - Alert "CephMonLowNumber" not triggered for rack,host based failure domains
- BZ - 2260050 - Enabling Replica-1 from UI is not working on LSO backed ODF on IBM Power cluster
- BZ - 2260131 - Health is going in Warning state after patching the storagecluster for replica-1 in ODF4.15 on IBM Power cluster
- BZ - 2260279 - Exporter deployment recreation with each Ceph daemon creation causes unnecessary reconciles
- BZ - 2260340 - CephMonLowNumber alert is raised only when metrics exporter pod is restarted
- BZ - 2260818 - Additional failure domain is shown in Configure Ceph Monitor UI tab
- BZ - 2261936 - Collect csidriver related details in the must-gather
- BZ - 2262052 - default multisite zonegroup creation should contain realm
- BZ - 2262252 - ocs-storagecluster is in progressing state due to noobaa in configuring state
- BZ - 2262376 - OCS Client operator is not listed in the UI due to the non-standalone annotation, making upgrade through UI impossible
- BZ - 2262974 - Noobaa system failed to create using external postgres - externalPgSSLRequired is always set true
- BZ - 2263319 - Replica-1 configuration fails with cephcluster spec invalid value
- BZ - 2263472 - change the sample yaml in the CSV
- BZ - 2263984 - Noobaa system failed to create using external postgres - no noobaa CR is created
- BZ - 2264002 - Log SiteStatuses details and description in GetVolumeReplicationInfo RPC call
- BZ - 2264825 - [UI][MDR] User-1 can delete application created by user-2
- BZ - 2265051 - ocs-storagecluster is in progressing state due to noobaa in configuring state due to tls: failed to verify certificate: x509: certificate signed by unknown authority
- BZ - 2265109 - NooBaaNamespaceBucketErrorState and NooBaaNamespaceResourceErrorState are not triggered when namespacestore's target bucket is deleted
- BZ - 2265124 - [4.15] Move cephFS fencing under a new flag to trigger networkFence
- BZ - 2265514 - "unable to get monitor info from DNS SRV with service name: ceph-mon" error observed while creating fedora app pod
- BZ - 2266564 - The CSV and COMET for all ODF's component operators are missing subscription information
- BZ - 2266583 - Number failure domain value is hardcoded in CephMonLowNumber alert
- BZ - 2267209 - collect logs of the external cluster in the openshift-storage-extended namespace for multi storage cluster
- BZ - 2267712 - ODF client operator has nondescript information on the operator hub
- BZ - 2267857 - noobaa instance is initializing due to panic: reflect.Value.Interface: cannot return value obtained from unexported field or method
- BZ - 2267885 - [RDR] [Hub recovery] [Co-situated] Missing VRCs blocks failover operation for RBD workloads
- BZ - 2268407 - must-gather is not collecting ceph commands output
- BZ - 2268959 - After upgrade, noobaa is in initialization state due to system.read_system() Response Error: Code=UNAUTHORIZED Message=account not found 65ec27f52e541700269b2156
CVEs
- CVE-2021-35937
- CVE-2021-35938
- CVE-2021-35939
- CVE-2023-3462
- CVE-2023-5363
- CVE-2023-5954
- CVE-2023-5981
- CVE-2023-7104
- CVE-2023-24532
- CVE-2023-26159
- CVE-2023-27043
- CVE-2023-28486
- CVE-2023-28487
- CVE-2023-29406
- CVE-2023-29409
- CVE-2023-39318
- CVE-2023-39319
- CVE-2023-39321
- CVE-2023-39322
- CVE-2023-39615
- CVE-2023-42282
- CVE-2023-42465
- CVE-2023-43646
- CVE-2023-43804
- CVE-2023-45803
- CVE-2023-46218
- CVE-2023-48631
- CVE-2023-48795
- CVE-2023-51385
- CVE-2024-0553
- CVE-2024-0567
aarch64
odf4/mcg-core-rhel9@sha256:23becbe8a9d70cda09e2d10fdd3411b943368a3d92535798d595d29b5bbc4f7e
odf4/mcg-rhel9-operator@sha256:d04a5a9748ac34459c21d4d10efcc9318220c7ceeeb15c41c56cae9bfbb44872
odf4/ocs-client-rhel9-operator@sha256:54d229a2b193748da1f66de6ce0c405f591808dd35e26dd0304582d86ba4d708
odf4/ocs-rhel9-operator@sha256:f0eed493b993b35b41819c28e10218d5e3cae8da5a4fc6050a845c24b440b198
odf4/odf-cli-rhel9@sha256:d6ad83e3d8739bc004c3f634674badca5bfb284ef78b336f2e68509687bda0a2
odf4/odf-csi-addons-rhel9-operator@sha256:32f3f0dbf489a6db8f56235780e2dd119c6295b8fa4d5216418daa17203e38c8
odf4/odf-csi-addons-sidecar-rhel9@sha256:7c900a434cff47dae8662ff00e64542b591eb388e5aecdc9fe3746542f278f9c
odf4/odf-multicluster-rhel9-operator@sha256:21f83cb5ff06ee334b44f85fe0f6dbb2e18216f27a493c56a35e319512887697
odf4/odf-must-gather-rhel9@sha256:ffca15ba53f5ba61368883cb03b21e4a3c071fc8f5565ed106e2599f55cdd402
odf4/odf-rhel9-operator@sha256:418e629e0aaf849c95257c6bcf23b3d51e87c5de7c008e7f53cd3acaa1295461
odf4/odr-rhel9-operator@sha256:c10016cc04ebf7d944e6dbab8c15f7a60f277cb19487cf47cb8438d7060694f9
ppc64le
odf4/cephcsi-rhel9@sha256:b4b66a42eb728b46ff977431fd07be976d465680f49516dac3f433cdd12e4dbc
odf4/mcg-core-rhel9@sha256:b0eece2979587d7884f8707a6026e28a4aa7a3d96646b01249a9e351acf6b935
odf4/mcg-operator-bundle@sha256:98fb1b9309d82c663d3118d19fe776de51a277c474a42bf249fdf075f6993bc4
odf4/mcg-rhel9-operator@sha256:94f6dded2ff421275b26d147e35d33e383f06ecb04cd0a618d8b4c5046d0e307
odf4/ocs-client-console-rhel9@sha256:abe77b4860b2c3d88b081aa0bb69c996cb83bc288209aa805de626220cc0ad48
odf4/ocs-client-operator-bundle@sha256:b081d99f0507897a3217db22bca57a948bbefdf27e30fd0506275b1dd908c7f6
odf4/ocs-client-rhel9-operator@sha256:f3b2c47af82b837c2f51cf04d4b7fc973bb3774151d144354d12e9893eadbcef
odf4/ocs-metrics-exporter-rhel9@sha256:c89d0ad159bff3915c121a3b97a3dc207d69cab48a25c73e27b53f2aacc6ebf7
odf4/ocs-operator-bundle@sha256:001f41ba066f39fc00818ec9982d5408415cc41cfa0c35e17a81310994e976a6
odf4/ocs-rhel9-operator@sha256:2ee595d52185127dbcef534dee457e9bdb2e72c7292fdafaec8f730e00762ef9
odf4/odf-cli-rhel9@sha256:8bcf56bad8bfa401441828b8f2c77a88697f83103360f76f534abaabc7b4ed1f
odf4/odf-console-rhel9@sha256:9e12f5478515f1641afb3a4dde983671270afeed59a6372a30c74b3b7a5936a3
odf4/odf-cosi-sidecar-rhel9@sha256:dbf68ee98865ddf8ace7a8f626c7c588f35724bc565ce10ffe23d4c76bf2c7e5
odf4/odf-csi-addons-operator-bundle@sha256:3738a473f48376f6debe80dfed9d4decbfffd0a4ea04f59a9d20b7b6b7039830
odf4/odf-csi-addons-rhel9-operator@sha256:3efe898c6142154200e47f4615acf03b0ce6c30ead6470307a782a9a2578cf53
odf4/odf-csi-addons-sidecar-rhel9@sha256:ca132fd7b3c36d14ba87259812a92f219bc4cceb12ba30a119062b95f86963e9
odf4/odf-multicluster-console-rhel9@sha256:7fcb03d69dbceff7968a78ee658c6c8318253f59262d5e3e8c898a5b45316067
odf4/odf-multicluster-operator-bundle@sha256:e8ff736bac5d26c34a47f72b019597b71eba1235d382b0879f21f38e366d669d
odf4/odf-multicluster-rhel9-operator@sha256:369d1d12a4f8d7fe23f33553e2aa7f5491e1233b988c11b71ed8f2d9d512ad1c
odf4/odf-must-gather-rhel9@sha256:ad4964e6ff46b37a4903b3cc13b216e29d07b4864b607a12db755b45884db656
odf4/odf-operator-bundle@sha256:2df80b908fc7dfd07dae58d641d9ab412f345114bed59cdd59e7c2b9f274b1a3
odf4/odf-rhel9-operator@sha256:4044d62c852f88417de640f4d1c0fdf9e37d9b1c61de54ea379d321d9e04d05d
odf4/odr-cluster-operator-bundle@sha256:ebb47486174b4437bf8ccee7b642fa499e054e204d7c4cabc7f249beda3d0675
odf4/odr-hub-operator-bundle@sha256:95a9dfb1583482c881ded3de02d84776e3392a3d449146affaee05a289d97d45
odf4/odr-rhel9-operator@sha256:78368eb70e39f1977c0efaeff99a13f22dce181d2e8ec1dfb39fa6575c7bd6ef
odf4/rook-ceph-rhel9-operator@sha256:7066ce8a9b54bd3065d1b5a839350e7e7e57538a475873ea840435c111357e91
s390x
odf4/cephcsi-rhel9@sha256:765e623f5b4de11f0482f3abde344df4946665046c9190293a0ae9a5d0d62cd2
odf4/mcg-core-rhel9@sha256:31d951727244d235e62aed00d1121c6f8da4b9d22e895dcfdc814a8545442dac
odf4/mcg-operator-bundle@sha256:3303677bc0ba2cc5e3f55ae313b516f32378b67a20389f69e09e2c3a9738a87d
odf4/mcg-rhel9-operator@sha256:d312e5adf2261eeab1106a72565652637d42e2d49d2294cb1413be62697f18d2
odf4/ocs-client-console-rhel9@sha256:f1b19a6b805ad0ae2bc7dc08f2e7992e9d440bfa858d5ac2667f9311a95d6e12
odf4/ocs-client-operator-bundle@sha256:7fdae4274b87123284a7fb6cc7b56be3e22be629ff834d658a9afd867ef6473b
odf4/ocs-client-rhel9-operator@sha256:d027156f3a8a80a12cfe6bdf207a25a3b1e0e84db9a59853072fdc6ea3766e18
odf4/ocs-metrics-exporter-rhel9@sha256:378212f323bd7dcc080f31b2f75082ec28c3d6104240c6eb2763d6bd2fc94d8e
odf4/ocs-operator-bundle@sha256:e63f59d7d4aeff45ffe3e5d622b0526f568f39b559f21fa87f08a69e1a8452ac
odf4/ocs-rhel9-operator@sha256:4ce02a91070158e2b0e199d6949b66daa9ecdb716dddcbc586c368cd5180dd16
odf4/odf-cli-rhel9@sha256:bb23cf0882f1174a311a05c59646cb1cf30d4a74a297c854c19a26152d47ab47
odf4/odf-console-rhel9@sha256:7dcf6b7101b8ebc902b7582dc2038f261b0d7f4783f0ac6ac8d97c60126801b2
odf4/odf-cosi-sidecar-rhel9@sha256:c7c5f4cbee26d0ef67fcf9294d3773e2e37308ae56da865cc0ad356ed8f5e206
odf4/odf-csi-addons-operator-bundle@sha256:cc9a1591a6dde4b82a87915589458a5741566e29cc80e16f2d96bd5bafc85bbe
odf4/odf-csi-addons-rhel9-operator@sha256:fe0cbb229b959a9fe2f47c08fbe6239a275bd30e6dc00e81e6f5f97ef8211a38
odf4/odf-csi-addons-sidecar-rhel9@sha256:2e339236492b5bf2e881b8e88c40f54b4b0b9dddbe442363b2efb92dfd720d29
odf4/odf-multicluster-console-rhel9@sha256:f294e16791a917b48c09d075f327e97b17cf5307356a3626da618a13952c8cfb
odf4/odf-multicluster-operator-bundle@sha256:b2c64034cfa1ed2865a2f53ee075063a9b2e9c9dc288f09eec1b0e7df4e7b4ef
odf4/odf-multicluster-rhel9-operator@sha256:d09dcf83c8efd61a8409c6677c284705a9e53a42931d2afc432e1676d308dbfe
odf4/odf-must-gather-rhel9@sha256:1a5797806a22bd7e683138c043a45f6d59b19d6ef5d92a07cf47226991512520
odf4/odf-operator-bundle@sha256:5254d6581a1e5c7bbdf2621448d812be6ae5a1931009a3f9aa1046176841f68b
odf4/odf-rhel9-operator@sha256:3776c9d3070eef0f138f92c1d795c5070fd36da9f253d5c40b06f2eda733415b
odf4/odr-cluster-operator-bundle@sha256:d9651310304b6f822609b3b4f3268582cf1d218d2ad0f123e3b09a095c78a1fd
odf4/odr-hub-operator-bundle@sha256:2e95ebb9976f65c2cbc4c9f96d3b262dab13aac1fb7740a1f4962d54e8a88470
odf4/odr-rhel9-operator@sha256:1a9ee6e62764e9331fda9d9b947e011f7bc15c68edeef730780fe4b43e399a08
odf4/rook-ceph-rhel9-operator@sha256:9e0986c18a9642b411e27b3f460c74f089ae4c15b94332999f84bfe690616ff0
x86_64
odf4/cephcsi-rhel9@sha256:5e262fe96badcdebcf0fc40e07acecd607b83a3d48fb90b05bc89ebe790add14
odf4/mcg-core-rhel9@sha256:f98634a0fa2517efb383bf6c2a4809f150408225875d7425e64a339209d72e32
odf4/mcg-operator-bundle@sha256:19c29de2cf31f95e5f363064095602656355879eb19c778a8655083cebd54ce6
odf4/mcg-rhel9-operator@sha256:93ed7e87e7660991843f3b2114ff2abe74f3b509c2b9a0ee32fc8707051203af
odf4/ocs-client-console-rhel9@sha256:1add011155a8f31010d52ab3a441a332272ca8bc0bbe93f21d126ed99b07218a
odf4/ocs-client-operator-bundle@sha256:85b41f9c66184ccc262cb8e00d3fdeb9029ca5c39df5521afb44bac604039bbe
odf4/ocs-client-rhel9-operator@sha256:e40c768e57bc8fc19a4028d8474cc00529b05fea3e1fdc7bd793c0b49df4891b
odf4/ocs-metrics-exporter-rhel9@sha256:f437b8176480efc681950d885ce5a2816e3bc1a8bcb98b8d02fef7c201134c21
odf4/ocs-operator-bundle@sha256:72c56580bab623f3e19418c52a6cc2e062c1f799e30609a4d4e292f525d01ec2
odf4/ocs-rhel9-operator@sha256:b94cc2b0df9566a19418cd9445018c7974bdeb501b9bc26d374f8ecc400ff725
odf4/odf-cli-rhel9@sha256:5f9fc660dfa8507f01495558580e75612e3468fea4222f1a2fd9aac88533cdde
odf4/odf-console-rhel9@sha256:9a90552dd4d8921f3ef1f2a090140ec0d2de3ee41d1b7a6d9963104689618153
odf4/odf-cosi-sidecar-rhel9@sha256:d984ee4c271840b18c3b2624e24db18a7be822950d00eab744a7ccd88998c539
odf4/odf-csi-addons-operator-bundle@sha256:04ccc74bd95015ac4cfa72deca9fdc1b21b968e61640c525ad6e2ae5ce138cb1
odf4/odf-csi-addons-rhel9-operator@sha256:b5ca5ce01ba5ac651a1442498c18b2620bd290d64e902f94810162e172d749ba
odf4/odf-csi-addons-sidecar-rhel9@sha256:033f3718e8a073dc2ebc871a2c623ff684a2a9d73c46f9d300a0ca0426c1233b
odf4/odf-multicluster-console-rhel9@sha256:b088bec0345723d1a1916f77908adff41574b1327428576503d15390cd26430f
odf4/odf-multicluster-operator-bundle@sha256:a49eef2a4eaf6751f2e2addfaecdcf2e8c44d989e5bb617747d12dbee56cc862
odf4/odf-multicluster-rhel9-operator@sha256:078b72a6719351177b317ecdc257534c78fbc7ab23e0918392aea48767cbe9eb
odf4/odf-must-gather-rhel9@sha256:7e8c6afa59ea01c3f827852bcbde72e129713f19bd0c3528d2b42818eedfbc5d
odf4/odf-operator-bundle@sha256:3d7ea702b7c914d1708d3e01570a9d9a49880ad94f8dbe13f852d377e2de5448
odf4/odf-rhel9-operator@sha256:bfbc74cba2f0cf0daf18ff1064b72b975ec6f1d83342837673da6b2fc0cc9473
odf4/odr-cluster-operator-bundle@sha256:3744ce0820bfe62afe92f3d54dfe14e84855e31c74742e625076cb4c64a6f098
odf4/odr-hub-operator-bundle@sha256:287d69fe191022e7681a5d6067128f72323cdcbc710f82e88e2cdd3d29410f9e
odf4/odr-rhel9-operator@sha256:b291ff5f30c8d2771315c2531e48c5484915d3ec5c3e7ca0d29157445f181b60
odf4/rook-ceph-rhel9-operator@sha256:a1f86e792657000ce9ba0faf3ee6ade5e11724146ab86b0b67e50cdab233fd4a
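The container images above are pinned by immutable sha256 digests rather than tags. When copying a reference into mirroring or deployment tooling, it is easy to truncate a digest; a minimal Python sketch for sanity-checking a digest-pinned reference (the helper name is illustrative, not part of any Red Hat tooling):

```python
import re

# A digest-pinned reference looks like: <repository>@sha256:<64 hex characters>
DIGEST_RE = re.compile(r"^(?P<repo>[a-z0-9._/-]+)@sha256:(?P<hex>[0-9a-f]{64})$")

def parse_image_ref(ref: str) -> tuple[str, str]:
    """Split a sha256-pinned image reference into (repository, digest hex).

    Raises ValueError if the reference is malformed, e.g. a tag instead of
    a digest, or a digest that is not exactly 64 lowercase hex characters.
    """
    m = DIGEST_RE.match(ref)
    if m is None:
        raise ValueError(f"not a sha256-pinned image reference: {ref!r}")
    return m.group("repo"), m.group("hex")
```

For example, `parse_image_ref("odf4/odf-cli-rhel9@sha256:5f9fc660...")` (with the full digest) returns the repository and the 64-character digest, while a tag-based reference such as `odf4/odf-cli-rhel9:latest` raises `ValueError`.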
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.