Issued: 2023-11-08
Updated: 2023-11-08
RHSA-2023:6832 - Security Advisory
Synopsis
Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update
Type/Severity
Security Advisory: Important
Topic
Updated packages that include numerous enhancements and bug fixes are now available for Red Hat OpenShift Data Foundation 4.14.0 on Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Description
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. It provides highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3-compatible API.
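As an illustration of the S3-compatible interface (not part of this advisory), the following minimal sketch writes and lists an object through a Multicloud Object Gateway S3 endpoint using the boto3 client; the endpoint URL, bucket name, and credentials shown are placeholders that would normally come from an ObjectBucketClaim or NooBaa account in the target cluster.

    # Minimal sketch, for illustration only: access the ODF multicloud object
    # service through its S3-compatible API with boto3. The endpoint, bucket,
    # and credentials below are placeholders, not values defined by this
    # advisory; in practice they come from an ObjectBucketClaim or NooBaa account.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3-openshift-storage.apps.example.com",  # placeholder route
        aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
        aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
    )

    bucket = "example-bucket"  # placeholder bucket name
    s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"hello from ODF")
    for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
        print(obj["Key"], obj["Size"])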
Security Fix(es):
- HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) (CVE-2023-44487)
A Red Hat Security Bulletin with further details about the Rapid Reset flaw is available in the References section.
- lapack: Out-of-bounds read in *larrv (CVE-2021-4048)
- net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding (CVE-2022-41723)
- Hashicorp/vault: Vault’s LDAP Auth Method Allows for User Enumeration (CVE-2023-3462)
- golang.org/x/net/html: Cross site scripting (CVE-2023-3978)
- golang: net/http, net/textproto: denial of service from excessive memory allocation (CVE-2023-24534)
- golang: html/template: improper sanitization of CSS values (CVE-2023-24539)
- golang: html/template: improper handling of empty HTML attributes (CVE-2023-29400)
- goproxy: Denial of service (DoS) via unspecified vectors (CVE-2023-37788)
- hashicorp: html injection into web ui (CVE-2023-2121)
- golang: net/http, x/net/http2: rapid stream resets can cause excessive work (CVE-2023-39325)
- hashicorp/vault: Google Cloud Secrets Engine Removed Existing IAM Conditions When Creating / Updating Rolesets (CVE-2023-5077)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
These updated packages include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
All Red Hat OpenShift Data Foundation users are advised to upgrade to these packages that provide these bug fixes and enhancements.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
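As an illustration only (the referenced documentation remains the authoritative procedure), the sketch below shows one way to confirm after updating that the OpenShift Data Foundation operators in the openshift-storage namespace report the expected 4.14.0 ClusterServiceVersions; it assumes an OLM-managed installation and the kubernetes Python client with a kubeconfig that can read that namespace.

    # Illustrative sketch, not the official update procedure: list the
    # ClusterServiceVersions in openshift-storage and print their phase so the
    # post-update state can be compared against the expected 4.14.0 releases.
    # Assumes the kubernetes Python client and sufficient read access.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    api = client.CustomObjectsApi()

    csvs = api.list_namespaced_custom_object(
        group="operators.coreos.com",
        version="v1alpha1",
        namespace="openshift-storage",
        plural="clusterserviceversions",
    )
    for csv in csvs.get("items", []):
        name = csv["metadata"]["name"]
        phase = csv.get("status", {}).get("phase", "Unknown")
        print(f"{name}: {phase}")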
Affected Products
- Red Hat OpenShift Data Foundation 4 for RHEL 9 x86_64
- Red Hat OpenShift Data Foundation for IBM Power, little endian 4 for RHEL 9 ppc64le
- Red Hat OpenShift Data Foundation for IBM Z and LinuxONE 4 for RHEL 9 s390x
- Red Hat OpenShift Data Foundation for RHEL 9 ARM 4 aarch64
Fixes
- BZ - 1970939 - The ocs-osd-removal-job should have unique names for each Job
- BZ - 1982721 - [Multus] rbd command hung in toolbox pod on Multus enabled OCS cluster
- BZ - 2023189 - [DR] Rbd image mount failed on pod saying applyFSGroup failed for vol
- BZ - 2024358 - CVE-2021-4048 lapack: Out-of-bounds read in *larrv
- BZ - 2067095 - [RDR] [tracker for BZ 2111364 and BZ 2211290] rbd mirror scheduling is getting stopped for some images
- BZ - 2079232 - ramen-hub-operator pod is getting into CLBO with the fresh install of Openshift DR Hub Operator
- BZ - 2104207 - [Tracker for BZ #2138216] [MetroDR] Monitor crash - ceph_assert(0 == "how did we try and do stretch recovery while we have dead monitor buckets?")
- BZ - 2104254 - [RDR] Last taken snap time is not as per the snapshot schedule status for most of the images
- BZ - 2121514 - [ODF Tracker] RDR - don't leave an incomplete primary snapshot if the peer who is handling snapshot creation dies
- BZ - 2122521 - Disable and block double registration of the same storage IP
- BZ - 2134040 - [RDR][CEPHFS] volsync-dd-io-pvc pvc's are taking a lot of time to come in Bound state
- BZ - 2134115 - [RDR][CEPHFS] After performing Failover volsync-rsync-src are still running on Secondary cluster
- BZ - 2138855 - [RDR] When relocate is performed, Peer Ready isn't marked as False
- BZ - 2142462 - Value of all the Fields (like Raw Capacity etc.) in Storagesystem page shows 0
- BZ - 2150752 - ocs_advanced_usage metric is 0 even when storageclass/PV encryption is enabled
- BZ - 2150996 - [Ceph Tracker BZ #2119217] ODF - DR : osd pods will be killed due slowness to respond on Liveness probes
- BZ - 2154351 - [RDR] [Ceph Fix Tracker BZ #2119217] lastSyncTime for all VR's is several hours behind lastUpdateTime
- BZ - 2158773 - ClusterObjectStoreState reports critical alert
- BZ - 2160034 - [Ceph Tracker BZ #2153673] [RDR] rbd images report up+error, description: failed to refresh remote image on secondary cluster after workload deletion from primary cluster
- BZ - 2165941 - [RDR] ACM UI allows Relocate operation even when Peer Ready is False due to which cleanup of Primary cluster remains stuck for on-going failover operation
- BZ - 2166354 - [RDR][CEPHFS][Tracker] sync/replication is getting stopped for some pvc rsync: connection unexpectedly closed
- BZ - 2169499 - [RDR] Relocate did not happen
- BZ - 2172624 - [RDR] VRG on primary, DRPC and other resources on hub remains stuck when workload is deleted
- BZ - 2175201 - [MDR]: After hub restore Ramen wasn't reconciling and so was not able to initiate failover from UI
- BZ - 2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding
- BZ - 2179348 - [RDR] [UI] Manage policy for ApplicationSets page shows 404 error when DRPolicy is not connected to any app
- BZ - 2180329 - [RDR][tracker for BZ 2215392] RBD images left behind in managed cluster after deleting the application
- BZ - 2182351 - [RFE] Expose mpath device type in localvolumeset creation
- BZ - 2183092 - [GSS] Upload object to Virtual-hosted style noobaa bucket failed
- BZ - 2183444 - [GSS] Modify the schedule to run ReclaimSpaceJob accidentally disabled CronJob.
- BZ - 2184483 - CVE-2023-24534 golang: net/http, net/textproto: denial of service from excessive memory allocation
- BZ - 2184647 - [RFE] [RDR] Add a pre-check to validate PVC label before applying DRPolicy
- BZ - 2185042 - [RFE] Change "Used Capacity" on System Capacity card to "Used Raw Capacity"
- BZ - 2189866 - [GSS] The "access_key: '123' and secret_key: 'abc'," messages are observed from the log of noobaa agent pod
- BZ - 2190382 - [RDR] [Tracker for BZ #2119217] A few components remain stuck upon workload deletion
- BZ - 2192852 - KMSServerConnectionAlert is not cleared when connection to kms is restored
- BZ - 2193109 - Name req validation fail to check start symbol
- BZ - 2196026 - CVE-2023-24539 golang: html/template: improper sanitization of CSS values
- BZ - 2196029 - CVE-2023-29400 golang: html/template: improper handling of empty HTML attributes
- BZ - 2207918 - [RDR] After performing relocate operation some pod stuck in ContainerCreating with msg applyFSGroup failed for vol
- BZ - 2208563 - The ocs-operator uses the `Always` pull policy for images pulled by digest
- BZ - 2209251 - [UI] ODF Topology rook-ceph-operator deployment shows wrong resources
- BZ - 2209258 - [UI] Topology view Resources list is not adjusting dynamically to window size
- BZ - 2209288 - [UI][MDR] After hub recovery, cant initiate failover of applicationSet based apps.
- BZ - 2210047 - [UX] ODF Topology tab appears with lag when navigating to Data Foundation
- BZ - 2210289 - [GSS] Supported action in s3 bucket policy
- BZ - 2211362 - [RDR] ocs metrics exporter pod returns authentication timeout error
- BZ - 2211482 - [RDR] There is a lag in showing correct operation readiness status on ACM console
- BZ - 2211491 - [RDR] After initiating failover, maintenance mode is not enabled
- BZ - 2211564 - [RDR] DR dashboard doesn't show application information during Relocate action
- BZ - 2211643 - [MDR][ACM Tracker] After zone failure(c1+h1 cluster) and hub recovery, apps on c2 cluster are cleaned up as application namespace Manifestwork isn't backed up
- BZ - 2211807 - [RDR] DR dashboard doesn't reflect correct health status of operators on hub cluster when they are unhealthy
- BZ - 2211866 - ocs-client operator sets hardcoded quota per consumer to 1TB
- BZ - 2212773 - OCS Provider Server service comes up on public subnets
- BZ - 2212931 - [RDR] Missing page loader leads to delay in fetching cluster names or DRPolicy related details
- BZ - 2213085 - [Stretch cluster] UI - Add capacity failure should carry proper error message
- BZ - 2213118 - [RDR] Peer Ready value in drpc -o wide doesn't match with the value in it's yaml
- BZ - 2213183 - [Stretch cluster] Add capacity should not show "thin-csi" storage class in storageClass dropdown for LSO stretch cluster
- BZ - 2213550 - [RDR] [UI] Correct the display names of rbd image statuses on BlockPools page
- BZ - 2213552 - [RDR] [UI] Time format and field name for "Data last synced on" is different than CLI format
- BZ - 2214023 - [UI] Node details from ODF Topology incorrect
- BZ - 2214033 - [UI] Resources utilization in ODF Topology do not match to metrics
- BZ - 2214237 - CVE-2023-2121 hashicorp: html injection into web ui
- BZ - 2214288 - noobaa-operator pod shows multiple restarts
- BZ - 2214838 - Failed to restart VMI in cnv - Failed to terminate process Device or resource busy
- BZ - 2215239 - The CephOSDCriticallyFull and CephOSDNearFull alerts are not firing when reaching the ceph OSD full ratios
- BZ - 2215917 - [RFE] collect "rbd status" output for all images
- BZ - 2216707 - Disable topology view for external mode
- BZ - 2217887 - wrong mcg-operator.v4.12.3 installed on 4.14 version
- BZ - 2217904 - MCG endpoint minCount and maxCount in ocs-storagecluster are ignored
- BZ - 2218116 - Avoid wrong detection of disk media type, such as HDD instead of SSD on vSAN
- BZ - 2218309 - odf must-gather does not collect multus NetworkAttachmentDefinitions
- BZ - 2218492 - Pass-through CA certificates to Velero not working
- BZ - 2218593 - Storagecluster starts reporting error state after a few days
- BZ - 2219136 - Values on graph go out of tile
- BZ - 2219355 - [ODF-4.14] noobaa-operator pod CLBO due to "listen unix /var/lib/cosi/cosi.sock: bind: address already in use"
- BZ - 2219395 - [DR][4.14 clone] Pass-through CA certificates to Velero for k8s object protection to function
- BZ - 2219436 - Unable to deploy ODF4.14 on IBM Power(ppc64le) because of missing multiarch image of objectstorage-provisioner-sidecar in noobaa operator pod
- BZ - 2219797 - [IBM Z/MDR]: With ACM 2.8 applying DRpolicy to subscription workload fails
- BZ - 2219843 - [RDR] Fire OCP VolumeSynchronizationDelay alerts when ramen lastGroupSyncTime is delayed
- BZ - 2221473 - [RDR] [Cephfs] Destination pods remain stuck in ContainerCreating forever with reason "Unable to attach or mount volumes", sync doesn't start after failover operation
- BZ - 2221488 - ODF Monitoring is missing some of the metric values 4.14
- BZ - 2221638 - [RDR] Ceph reports no active mgr after failover of cephfs workloads
- BZ - 2221995 - noobaa-operator is in ErrImagePull
- BZ - 2222022 - [4.14]: Wrong version of ocs-storagecluster
- BZ - 2222887 - [RDR] [Tracker for Ceph BZ #2236188] Sync for cephfs workloads has stopped
- BZ - 2223553 - [MDR][Fusion][4.14 clone] PVC remain in pending state after successful failover
- BZ - 2223575 - Pod odf-operator-controller-manager CrashLoopBackOff
- BZ - 2223690 - Add a tool-tips for ceph RBD snapshot graph in DR dashboard
- BZ - 2223692 - Remove date from x-axis label of ceph RBD snapshot graph in DR dashboard
- BZ - 2223702 - Remove link from legend for ceph RBD snapshot graph in DR dashboard
- BZ - 2223705 - Fix the time selection dropdown issue for the ceph RBD snapshot graph in DR dashboard
- BZ - 2223706 - [MCG-4.14] Azure Namespacestore / Backingstore creation fails with InvalidCredentials error
- BZ - 2223976 - Noobaa stuck in configuration - stuck on CCO minting AWS creds
- BZ - 2224245 - CVE-2023-37788 goproxy: Denial of service (DoS) via unspecified vectors.
- BZ - 2224325 - [MDR] Not able to relocate STS based applications
- BZ - 2224493 - Panic when operator is fencing a node where pv is no provisioned by CSI
- BZ - 2225176 - After upgrade of ODF OSD pods are in Init:CrashLoopBackOff
- BZ - 2225223 - [UI] 'Other' PVC appears on Requested capacity card
- BZ - 2225685 - Policy list view table is crashing when try to sort the rows on manage policy modal
- BZ - 2226647 - Pod rook-ceph-operator CrashLoopBackOff
- BZ - 2227017 - [RDR] token-exchange-agent pod in CrashLoopBackOff state
- BZ - 2227607 - After upgrading to 4.14, MCG endpoint minCount and maxCount in ocs-storagecluster are ignored
- BZ - 2227835 - [MCG] RPC method "list_objects" fails with "RPC: object.list_objects() Call failed: failed to WebSocket dial"
- BZ - 2228020 - CVE-2023-3462 Hashicorp/vault: Vault's LDAP Auth Method Allows for User Enumeration
- BZ - 2228108 - [MCG-4.14] Azure log-based deletion sync fails with UnhandledPromiseRejectionWarning error at the noobaa-core logs
- BZ - 2228319 - [RDR] [MDR]ramen-dr-cluster-operator pod in CrashLoopBackOff state
- BZ - 2228375 - Used capacity PVCs per namespace dropdown is not available on External mode deployments
- BZ - 2228689 - CVE-2023-3978 golang.org/x/net/html: Cross site scripting
- BZ - 2228805 - rsh failed for new prometheus-adapter pod which is running in openshift-storage NS
- BZ - 2228816 - [odf-console] The "Provider details" on namespacestore page is not updated
- BZ - 2230050 - [MCG-4.14] Azure log-based deletion sync fails with "TypeError: src_bucket_name.unwrap is not a function"
- BZ - 2230334 - Object Buckets / Object Bucket Claims list filter does not work
- BZ - 2230447 - [ACM Tracker] [MDR] After zone failure(c1+h1 cluster) and hub recovery, openshift-storage namespace is deleted on c2 cluster
- BZ - 2231074 - Upgrade from 4.12.z to 4.13.z fails if StorageCluster is not created
- BZ - 2231116 - [UI] [4.14] Wrong header at Object Service page
- BZ - 2231124 - [MCG-4.14] Azure log-based deletion sync isn't complete
- BZ - 2231709 - [MCG-4.14] Creating an OBC with replication via CLI fails with "no such host" error
- BZ - 2231838 - Modifying title description on manage policy modal
- BZ - 2232464 - CSI pods and customer workloads both have 'priority=0' and race for resources
- BZ - 2232502 - update rook API's
- BZ - 2232552 - [RDR] DRPC Deployment is failing after assigning DRPolicy to the application
- BZ - 2232608 - [UI] [4.14] Wrong measurement unit. Topology. Resources. Observe.
- BZ - 2233027 - [UI] Topology shows rook-ceph-operator on every node
- BZ - 2233036 - [i18n translations] add translations for 4.14
- BZ - 2233410 - ROOK_CSI_ENABLE_NFS is not enabled as part of nfs enable from UI
- BZ - 2233445 - Search box on the manage data policy modal is not working
- BZ - 2233727 - OCS-operator | Need to bump rook to v.1.12.2
- BZ - 2233731 - Default is OVN as of OCP4.12, I guess we need to update the document and the console UI?
- BZ - 2234357 - Restart all ceph daemons when Encryption Compression(any Network.Connections options) change
- BZ - 2234386 - Change the name Snapshot into something like Snapshot replication time.
- BZ - 2234428 - [RDR] If placement is deleted on workload deletion, drpc shouldn't be left behind
- BZ - 2234735 - Revert brownfield changes in OCS operator for 4.14
- BZ - 2234759 - [Greenfield] Add a new option to enable RDR for fresh ODF deployment
- BZ - 2235245 - [MDR]metrics initialization was skipped when DRPC creation failed initially
- BZ - 2235395 - Wrong CIDR gets blocklisted and pod does not get mounted upon fencing when having gateway in between
- BZ - 2235423 - Random bogus error in relocate/failover dialog: "No DRPolicy found"
- BZ - 2235708 - [RDR][CEPHFS] When failover is done for cephfs workload drpc stuck in `Cleaning Up` and sync for workloads are stopped
- BZ - 2236387 - [RDR] Volsync secret name is limited to 62chars and blocks intermittent deployments
- BZ - 2236436 - [RDR] Assigning data policy is allowed even without selecting any pvc label selector
- BZ - 2236444 - No metrics from ceph-exporter
- BZ - 2236445 - [RDR] In Data policies page, Connected applications for a DRPolicy shows incorrect application type
- BZ - 2237213 - [UI] [4.14] TypeError on OBC creation page
- BZ - 2237226 - [RDR][CEPHFS] Make "destinationCopyMethod: LocalDirect" as the default parameter for volsync when using RDR
- BZ - 2238400 - latency metrics Prometheus request is failing
- BZ - 2238682 - [Tracker ACM-7479][RDR] Applications are not getting DR protected
- BZ - 2238720 - [ACM Tracker] [IBMZ / MDR]: Ramen-dr-cluster-operator is not deployed on managed clusters after applying drpolicy
- BZ - 2238895 - revert the ibm-storage-odf version to 1.4
- BZ - 2239033 - "Create StorageClass" page is breaking for ceph-csi and ceph-rbd provisioners
- BZ - 2239093 - [RDR] The note in info. icon next to Snapshots synced graph is incorrect
- BZ - 2239096 - [RDR] Graphs on the DR monitoring dashboard are missing rbd label
- BZ - 2239101 - [RDR] DR monitoring dashboard is inconsistent in replicating data for subscription based workloads
- BZ - 2239140 - [RDR] App-set workload resources remain stuck on deletion
- BZ - 2239580 - [RDR][CEPHFS] Failover/Relocate Operation end up syncing back to the secondary all files.
- BZ - 2239589 - [RDR][Tracker] Few rbd images remain stuck in starting_replay post failover, image_health also reports warning
- BZ - 2239622 - When creating a new pool, we show device type HDD without an option to select SSD
- BZ - 2239776 - [Tracker ACM-7600][RDR] Source pods remain stuck on the primary cluster and sync stops for cephfs workloads
- BZ - 2239802 - [External mode]: Failed to run rbd commands from rook ceph operator pod
- BZ - 2240778 - [ODF 4.12][MCG: DB password showing up in clear text in core and endpoint pod logs]
- BZ - 2241015 - [RDR][Ceph-FS] Relocation does not proceed, progression status stuck at WaitingForResourceRestore
- BZ - 2241185 - [RDR] Ceph reports osd down but pod are in running state
- BZ - 2241980 - CVE-2023-5077 hashicorp/vault: Google Cloud Secrets Engine Removed Existing IAM Conditions When Creating / Updating Rolesets
- BZ - 2242121 - [RDR] A few cephfs pvcs are stuck in pending state over fresh deployment
- BZ - 2242374 - Redundant dependency on Vault library - should be removed
- BZ - 2242803 - CVE-2023-44487 HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
- BZ - 2242854 - Noobaa should fallback to a pv-pool default BS when CCO is not providing credentials
- BZ - 2243296 - CVE-2023-39325 golang: net/http, x/net/http2: rapid stream resets can cause excessive work (CVE-2023-44487)
- BZ - 2244383 - Annotate the RBD-virtualization StorageClass with `storageclass.kubevirt.io/is-default-virt-class`
- BZ - 2244517 - [RDR] Ceph reports osd down but pod are in running state
- BZ - 2244566 - [ocs-operator clone] Redundant dependency on Vault library - should be removed
- BZ - 2244638 - [RDR] odf-multicluster-console is in CLBO
- BZ - 2244791 - [RDR] Snapshot graphs are not showing any metrics
- BZ - 2244793 - [RDR] [ACM Tracker] Graphs are empty and cluster operator is degraded as DR metrics don't work with ACM observability
- BZ - 2245978 - Multus deployment of ODF 4.14 is failing because `ip` tool in RHCS build component is missing.
- BZ - 2246185 - [Tracker Volsync] [RDR] Request to enable TCP keepalive timeout and lower its value in order to detect broken connection within 15mins
CVEs
- CVE-2018-14041
- CVE-2018-20676
- CVE-2018-20677
- CVE-2021-4048
- CVE-2022-41723
- CVE-2023-2121
- CVE-2023-2602
- CVE-2023-2603
- CVE-2023-3341
- CVE-2023-3462
- CVE-2023-3978
- CVE-2023-4527
- CVE-2023-4806
- CVE-2023-4813
- CVE-2023-4911
- CVE-2023-5077
- CVE-2023-24534
- CVE-2023-24539
- CVE-2023-29400
- CVE-2023-30630
- CVE-2023-37788
- CVE-2023-38545
- CVE-2023-38546
- CVE-2023-39325
- CVE-2023-40217
- CVE-2023-43040
- CVE-2023-44487
- CVE-2023-46159
aarch64
odf4/mcg-cli-rhel9@sha256:21d8549a1b42e78f04f555d4bf662da649e7411b54491055779859fa4bce2a37
odf4/mcg-core-rhel9@sha256:61db6bf89d17320cb1de31b07905a67db6418b8bd5da3b28c105c30c673a69ed
odf4/mcg-rhel9-operator@sha256:5149a129f3caf1f0e93ce490df3e4bbab4d842cabd85556970349c81d1de6970
odf4/ocs-client-rhel9-operator@sha256:bed6367b1913369a99f1fc690df25a592b86d1fce68e2b86ae71ecbea7098952
odf4/ocs-rhel9-operator@sha256:e5d0c68cad94e04ad024ab91c05c04606b166d30e2c81a5414b4517195878bc3
odf4/odf-csi-addons-rhel9-operator@sha256:fd167d95c42008058f7b7dc10f57ab835af7bef956599284daab4d95923a687b
odf4/odf-csi-addons-sidecar-rhel9@sha256:0b1bec21a84065d144bdcfc433adccbc93038bd2a5c8a8164c50004138dbcb9f
odf4/odf-multicluster-rhel9-operator@sha256:1db819006c96804631357c3415b6ddf2a1c7adc6658d5f2a95dd686331893af2
odf4/odf-must-gather-rhel9@sha256:8c5171517378cf3d1ce8ff5526dfed9cd38c13691e425bb59c5f0ddca83c3984
odf4/odf-rhel9-operator@sha256:7bb21477b9fc520dafb79bdf074b29fc2ee34dd66ea936ed828d9cd54298afa2
odf4/odr-rhel9-operator@sha256:6168ffb958078d292d77f6e6331d1875e248c053b05fc87c102cf24912540e80
ppc64le
odf4/cephcsi-rhel9@sha256:d256c07680a4463f09d4a06a000fe001806cde5ccaa958005a1a3e66a127b5b1
odf4/mcg-cli-rhel9@sha256:52cadb4a29242624ef35270f56317e019ae04dda9c6710d9d944a8628ca00818
odf4/mcg-core-rhel9@sha256:a3c88a63ab8c071cfd28aa97a12d2cf44f8d6f3779875b8bac79b76334995575
odf4/mcg-operator-bundle@sha256:db479b0a4c27104c57911bead5d15b99617a2ae355586e765b04ea02f72662cf
odf4/mcg-rhel9-operator@sha256:45e081dc18d87532beb038ee07e9e99f3ab686ad5a4fe65664f82e490d295d14
odf4/ocs-client-console-rhel9@sha256:0dbc5b109af0e72148bc5c70089e2db9f3a6a30946484ea2b75a05ed96b74049
odf4/ocs-client-operator-bundle@sha256:4ff423ec1d18619590b5e3ce4b10352dbc50894ec92ec5d0164c8488e6f45a9e
odf4/ocs-client-rhel9-operator@sha256:7f0e8e93d13f4fd74c0e801b94c0fa79369f319e371832c59c990f80a02a29fb
odf4/ocs-metrics-exporter-rhel9@sha256:52b43f37c8801a43153ea7df37ec7a20494ecaa3f6b4b3c71a97bb670b418e44
odf4/ocs-operator-bundle@sha256:2ae1361711997b8ab997c6fc09985af6909968a055d616d633bde23910a000b0
odf4/ocs-rhel9-operator@sha256:36a3587297be43c27a0348efc2ce0bd6d5cb4ff5447206e07770b78c6804d827
odf4/odf-console-rhel9@sha256:1355bfd45d0f7715fe5a8c709d92951a25d1ee06742373ad632890dbd8678dc4
odf4/odf-cosi-sidecar-rhel9@sha256:1073a52233897ca521366b8a98c401b8f4b43a75e289dfe2bab3281498134808
odf4/odf-csi-addons-operator-bundle@sha256:53e09d58b3875d5584aedc091fcce42dcfe3f0d215092ecff8da61211ba822e3
odf4/odf-csi-addons-rhel9-operator@sha256:f5ba1535c00f7330847db8d7202aa1bb09aa7598e2c840dabb9156135688c2ad
odf4/odf-csi-addons-sidecar-rhel9@sha256:eecdcf798f46f87751d06803634ee2c55dba5be4fb9191724f6f66ce8e2d1d53
odf4/odf-multicluster-console-rhel9@sha256:93e15fb7e7ad0ca8bdfa151bb4bc709e19b84a93b9dd4d6c5b5f09c1669410a7
odf4/odf-multicluster-operator-bundle@sha256:60eefcb09904faadfef9b44f25da9fc8fc4923b178090e7218e67cb9ae0793a3
odf4/odf-multicluster-rhel9-operator@sha256:0110d494cf1e0171e50218990dde818dfbd0119e88237459dc44fd8ffa8018bc
odf4/odf-must-gather-rhel9@sha256:656596458a504dc59881347a69ed67ded606fe2ecafebe612cac4ef687b83c2a
odf4/odf-operator-bundle@sha256:f7b2891ac970e16db4204bcc1fd562d41ff4c907223bdbec5e372fe42361a03e
odf4/odf-rhel9-operator@sha256:d207a79a6144f1f929342f41d2270bf6528caad00bea697d9129eb405241a7df
odf4/odr-cluster-operator-bundle@sha256:f0e5ba9ff8d2a7413b21dcd88e238431b8897cae5d24134faeeb17769b930702
odf4/odr-hub-operator-bundle@sha256:a9f630b9bfebe4e363dd4d06e471ac0bc1bb8ada15aa64eeff02d38aa359302d
odf4/odr-rhel9-operator@sha256:4c71572fdd69a4073455467201b958a308fbd59db81fc553a6ee24eddc45f011
odf4/rook-ceph-rhel9-operator@sha256:9c6657db4952f5096eb9cbae6a8b5bcfaec530b9c41202e604d23bfba90dbcb4
s390x
odf4/cephcsi-rhel9@sha256:19066af2eb30877d010ef66b5d538749da1f0521b4a762871a53c57ead216e9a
odf4/mcg-cli-rhel9@sha256:c354acbac03e2af6fcb7b5b1dbe16ab0a382004ef7a335f9327893138ce2e922
odf4/mcg-core-rhel9@sha256:0a9ccf86889c47b950246e2f6829b568b2f3cb0f37b2ad18282ff2a40f0448d6
odf4/mcg-operator-bundle@sha256:b2586abf037cc9a313b5b07a36806d27f8a5fdee8daf99a00ee491243085a7d4
odf4/mcg-rhel9-operator@sha256:fcb6309f9244be94c6154d45f0a5c415783407064fee1ee48a4c05970054da1c
odf4/ocs-client-console-rhel9@sha256:092b03642b8adae73f45ff264d38a796fb8a7fd89b87e712db0473f9ccda3862
odf4/ocs-client-operator-bundle@sha256:4c73760f7c2e5283d5fa197a95191d8e0568dbebfabffe476933a2545bea2b3b
odf4/ocs-client-rhel9-operator@sha256:6b8fb443051345da399d48c5cff2dce22b4b8b2a9ab224e76d423d820151a8fe
odf4/ocs-metrics-exporter-rhel9@sha256:309b5a86d0f438d93ca64d7eea18c0b69a33a345e913977e2cd6507ce54e4fc9
odf4/ocs-operator-bundle@sha256:e15785bfa7738c85d01dff5e137564401d9a8982785e0bdff73f7df166137cd9
odf4/ocs-rhel9-operator@sha256:ed46de26d5f1383b2c2924535cca7af2837e6ebbffd576b20939660facd70dc7
odf4/odf-console-rhel9@sha256:d164df6e45f3541a748cfac39014048cf356aad5e90441c342f57b318dd44935
odf4/odf-cosi-sidecar-rhel9@sha256:d4cff642b23e5d006b179d81b20d1dc5aad68d0123dcb339f72973c68845ec4a
odf4/odf-csi-addons-operator-bundle@sha256:d79389a67faa8bdecb904f3341125428463fd9dee0eef4c49ad62b2046b0bc3b
odf4/odf-csi-addons-rhel9-operator@sha256:f380c5901d83e14c0fd49616749a1cd7d9ac83b4b3a757c152a1cc94460c0f7f
odf4/odf-csi-addons-sidecar-rhel9@sha256:98a5ddc54894ec63160dace01773f0e078d659dbf2e3a637a5061692d13d740d
odf4/odf-multicluster-console-rhel9@sha256:e2185f7ac5e82ba3271f5d11bdcf649775afb4e275ff04868c9ba7d0be511e1e
odf4/odf-multicluster-operator-bundle@sha256:1a1d5cce65a73a7ebd6796803e2adf296ca671816858d668117d9d21d14b1a87
odf4/odf-multicluster-rhel9-operator@sha256:bae71b3fcde98cc26207fdab8c6e08bb1c20d5fc517107eee23e52ceb03d060e
odf4/odf-must-gather-rhel9@sha256:fa66ff5299778d29c990a6c177aa1378f76499f14da81ca0726705ed88205b6c
odf4/odf-operator-bundle@sha256:aca59c5da1ab71fda2a9c1b58d9637349c99f6501787443d102bd3312a00bf61
odf4/odf-rhel9-operator@sha256:8eb089256eae891796741e26ebc18d251e24692f4fce4d6bdab6dbfa8d1930af
odf4/odr-cluster-operator-bundle@sha256:91b4ab5822b81c8c731e587951830b61d318124b0f70a7b5992ab1d7979ca1e1
odf4/odr-hub-operator-bundle@sha256:7a3d210c3b23f5afc21d3db41ce8904a041ad7feffd8503f5c2ee52a97357cb7
odf4/odr-rhel9-operator@sha256:a3f5d7281f9e28528923f54d816d7fe586f1cba6074f0a775fa613c99e5ad2a5
odf4/rook-ceph-rhel9-operator@sha256:e18b3bec5944442cba078061c67c549d820522ddc6261c58a00208643def28a8
x86_64
odf4/cephcsi-rhel9@sha256:35a02234cca01c1f4acc66744fe5238c1810ba098b63f95c5c09f183647032f9
odf4/mcg-cli-rhel9@sha256:ee129f877f166acabe289aba862cd1d0154f29987dfd9df4800b18c005725256
odf4/mcg-core-rhel9@sha256:4db0118093d6186245a1bad6e33a7c324adf4385ffe632e72ef8eb6fa4bc7cc4
odf4/mcg-operator-bundle@sha256:217ee67bfcec7b49193c035292a10577603397acae7b3d36ef2a8809a47c4b01
odf4/mcg-rhel9-operator@sha256:cf3caa9b68d92d8b99237dedd6631fbe96a30ce1ee44f4cc3eb4c9a45b5f906d
odf4/ocs-client-console-rhel9@sha256:1a8a1d7bfa55de51ec2a777f2ad0d21d61c0e4a8be2802cbf4d76dc4a5d578d6
odf4/ocs-client-operator-bundle@sha256:cab28a3e2b44dc6f2efebbd5a52dee943ac9a8ae1f52cd4455418a6b689d54b3
odf4/ocs-client-rhel9-operator@sha256:923bc2abdbf6ad3594e3db6acf6a0054ee13deea7127480e4d200758cc2789d0
odf4/ocs-metrics-exporter-rhel9@sha256:bc2b43b18fb8a054376e2295dc8e4bd7be57e8d2898a3efca3a014c40a109964
odf4/ocs-operator-bundle@sha256:33726ea05cce60e263d4f64dab60543b9c70542f1649aa79f0e10bc264d82f6a
odf4/ocs-rhel9-operator@sha256:d83c01e9498460d577809a37bae6c4a5e8a908209ff5e3541e8453aff29d378b
odf4/odf-console-rhel9@sha256:9e58f78573d466a52b2091df3680331b1453f8f1185d72fa8daddaa2bee34522
odf4/odf-cosi-sidecar-rhel9@sha256:d0c257b6f5fe9790c687dd255ef574a5e5fe5d2d24f90bd76fc674febf681871
odf4/odf-csi-addons-operator-bundle@sha256:36074f09cf007c51360bbcef9c20f05696a666ed5a7e9a087d3c5f2abc28d3b1
odf4/odf-csi-addons-rhel9-operator@sha256:22f1de08f29857973e81b71745ac622d43de92b383a70f9efa7bff7404fb8703
odf4/odf-csi-addons-sidecar-rhel9@sha256:701aa70b3c88008e143b97edf1b2ad6f408f3620b3e174fe08d355eca186a8e1
odf4/odf-multicluster-console-rhel9@sha256:7e51fe4b13a94ba6ab433112f2d165a9950b63b6e479d6fd158ad593ff9c8bac
odf4/odf-multicluster-operator-bundle@sha256:c106a989373ad238437d87f84b70333bd419949a218bbd0b1b33f55ee010a892
odf4/odf-multicluster-rhel9-operator@sha256:a662af8813a91db2db0bc6036b57809cba61814d06c3064d2223196883186c9c
odf4/odf-must-gather-rhel9@sha256:bc71d7a33f838e3afeefe1cb72df8a9e29e0cf1aa20e3beea882939dcffd51b1
odf4/odf-operator-bundle@sha256:2073c981617b587e900e92a933f7d144f14a86c9ba5429923a4bba4129877290
odf4/odf-rhel9-operator@sha256:d52027bc58ef6a9a4a0b1b47abb4fb2651db958da5ffb4931c82cebb0838782e
odf4/odr-cluster-operator-bundle@sha256:c11e131a187ff58e4143de82e9e81164359c072ecaa953ba851e2f3ce973e8c0
odf4/odr-hub-operator-bundle@sha256:50ff517fcb19def59ed5599de7ca1aadfc45f8506e605df0a6f7b28a5352e1d7
odf4/odr-rhel9-operator@sha256:b180a6a638ac27696a64a05f59dfe0de59f95bad3ade5d922cb8ee4d13d9e024
odf4/rook-ceph-rhel9-operator@sha256:17aec04a9fbdbe6901a564743d0bdda4080b32571892a3633c63034aa7bc25cd
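The digests listed above identify the exact images shipped with this release. As a hedged illustration (not a required step), the sketch below compares the image digests reported by pods in the openshift-storage namespace against a set of expected digests copied from the list for the relevant architecture; it assumes the kubernetes Python client, and only one example digest from the x86_64 list is shown.

    # Illustrative sketch only: report which pods in openshift-storage run images
    # whose digests appear in this advisory. Populate 'expected' from the digest
    # list above for your architecture; a single x86_64 digest is shown here.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    expected = {
        # odf4/odf-rhel9-operator (x86_64), copied from the list above
        "sha256:d52027bc58ef6a9a4a0b1b47abb4fb2651db958da5ffb4931c82cebb0838782e",
    }

    for pod in v1.list_namespaced_pod("openshift-storage").items:
        for status in pod.status.container_statuses or []:
            digest = status.image_id.rsplit("@", 1)[-1] if "@" in status.image_id else ""
            mark = "MATCH" if digest in expected else "other"
            print(mark, pod.metadata.name, status.name, digest)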
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.