Issued: 2024-02-08
Updated: 2024-02-08
RHBA-2024:0763 - Bug Fix Advisory
Synopsis
LVMS 4.15 Bug Fix and Enhancement update
Type/Severity
Bug Fix Advisory
Topic
Updated container images that add new features and fix multiple bugs are now available for LVMS 4.15.
Description
Logical Volume Manager Storage (LVMS) creates thin-provisioned volumes using the
Logical Volume Manager (LVM) and provides dynamic provisioning of block storage
on a cluster.
Enhancement(s):
- Recovering volume groups from the previous LVMS installation - With this release, the `LVMCluster` custom resource (CR) supports recovering volume groups from a previous LVMS installation. If the `deviceClasses.name` field is set to the name of a volume group from the previous installation, LVMS recreates the resources related to that volume group in the current installation. This simplifies reusing devices from a previous LVMS installation after reinstalling LVMS. See the examples after this list.
- Support for wiping the devices in LVMS - This feature adds a new optional field, `forceWipeDevicesAndDestroyAllData`, to the `LVMCluster` custom resource (CR) to force wipe the selected devices. Prior to this release, wiping the devices required manually accessing the host. With this release, you can force wipe the disks without manual intervention, which simplifies the process of wiping the disks. See the examples after this list.
WARNING: If `forceWipeDevicesAndDestroyAllData` is set to `true`, LVMS wipes all previous data on the devices. You must use this feature with caution.
- Support for deploying LVMS on multi-node clusters - This feature provides support for deploying LVMS on multi-node clusters. Prior to this release, LVMS only supported single-node configurations. With this release, LVMS supports all the Red Hat OpenShift Container Platform deployment topologies. This enables provisioning of local storage on multi-node clusters.
WARNING: LVMS supports only node-local storage on multi-node clusters. It does not provide a storage data replication mechanism across nodes. When you use LVMS on multi-node clusters, you must ensure storage data replication through an active or passive replication mechanism to avoid a single point of failure.
- Integrating RAID arrays with LVMS - This feature provides support for integrating RAID arrays that are created using the `mdadm` utility with LVMS. The `LVMCluster` custom resource (CR) supports adding paths to the RAID arrays in the `deviceSelector.paths` and `deviceSelector.optionalPaths` fields. See the examples after this list.
- FIPS Compliance - With this release, LVMS is designed for Federal Information Processing Standards (FIPS). When LVMS is installed on Red Hat OpenShift Container Platform in FIPS mode, LVMS uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-3 validation, on the x86_64 architecture only.
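The following manifest is a minimal, illustrative sketch of the volume group recovery enhancement. The resource name, namespace, volume group name (`vg1`), and thin pool settings are placeholders, not values from this advisory; verify the exact schema against the `LVMCluster` CRD installed with LVMS 4.15.
```yaml
# Illustrative only: recover a volume group named "vg1" that was created
# by a previous LVMS installation. All metadata values are placeholders.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      # Setting the name to the volume group from the previous installation
      # causes LVMS to recreate the resources related to that volume group.
      - name: vg1
        default: true
        fstype: xfs
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```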
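The next sketch illustrates the new `forceWipeDevicesAndDestroyAllData` field. The device paths are placeholders, and the placement of the field under `deviceSelector` follows the documented 4.15 `LVMCluster` schema; confirm it against your installed CRD before use. As noted in the warning above, setting the field to `true` destroys all existing data on the selected devices.
```yaml
# Illustrative only: force wipe the selected devices before creating the
# volume group. /dev/sdb and /dev/sdc are placeholder device paths.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        deviceSelector:
          paths:
            - /dev/sdb
          optionalPaths:
            - /dev/sdc
          # WARNING: wipes all previous data on the devices listed above.
          forceWipeDevicesAndDestroyAllData: true
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```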
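The last sketch illustrates the RAID integration enhancement, assuming software RAID arrays that were already assembled with `mdadm` and are exposed at placeholder paths such as `/dev/md0`; substitute the stable paths (for example, by-id links) of your own arrays.
```yaml
# Illustrative only: add mdadm-created RAID arrays to a device class.
# The /dev/md* paths below are placeholders for arrays that already exist.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        deviceSelector:
          paths:
            - /dev/md0   # required RAID array
          optionalPaths:
            - /dev/md1   # optional RAID array, used only if present
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```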
After a successful LVMS upgrade to 4.15, the latest lvms-operator SHA image
should be present.
Users of LVMS are advised to upgrade to the latest version of LVMS in
OpenShift Container Platform, which fixes these bugs and adds these
enhancements.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
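LVMS is typically updated through Operator Lifecycle Manager. The following Subscription manifest is a hedged sketch of that path, not an instruction from this advisory: the `stable-4.15` channel, the `openshift-storage` namespace, and the `redhat-operators` catalog source are assumptions to verify against the catalogs available in your cluster.
```yaml
# Illustrative only: an OLM Subscription tracking an assumed stable-4.15
# channel for the lvms-operator package. Verify the channel, namespace,
# and catalog names against your cluster before applying.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lvms-operator
  namespace: openshift-storage
spec:
  channel: stable-4.15
  name: lvms-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```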
Affected Products
- Red Hat OpenShift Container Platform 4.15 for RHEL 9 x86_64
- Red Hat OpenShift Container Platform for Power 4.15 for RHEL 9 ppc64le
- Red Hat OpenShift Container Platform for IBM Z and LinuxONE 4.15 for RHEL 9 s390x
- Red Hat OpenShift Container Platform for ARM 64 4.15 for RHEL 9 aarch64
Fixes
- OCPBUGS-11486 - Extend leaseDurationSeconds in SNO
- OCPBUGS-13558 - LVMS: After node loss, lvmCluster resource still has the entry for the lost node
- OCPBUGS-15576 - 4.13 LVMS: PVC stuck pending when applying whole LVMS deployment config in a single List
- OCPBUGS-17805 - LVMS: Not seeing 'NotEnoughCapacity' events on Multi-node cluster
- OCPBUGS-17848 - LVM degrades with paths already part of vg
- OCPBUGS-17853 - expectedVGCount / readyVGCount mismatch in MultiNode Clusters fails Readiness
- OCPBUGS-18354 - LVMS: Watches / Owns Configuration missing for some dependents of LVM Operator
- OCPBUGS-23181 - topolvm-node crash loopback errors due to default device-class not detected properly
- OCPBUGS-23191 - LVM Controller not respecting Tolerations while counting VGs created
- OCPBUGS-23375 - [LVMS] After OCP upgrade old lvms resource pods still present in 'Error' state
- OCPBUGS-23409 - [LVMS] After OCP upgrade along with lvms upgrade workload annotations are missing in topolvm-controller deployment
- OCPBUGS-23995 - LVMS: LVMCluster without any valid optionalPaths remains in "Progressing" instead of "Failed" state
- OCPBUGS-24396 - overprovisionRatio can not be set to 1
- OCPBUGS-24586 - FIPS scan failure - lvms-must-gather container missing dependent openssl version
- OCPBUGS-24600 - Workload annotations are missing in the pod spec
- OCPBUGS-27003 - lvmcluster creation using diskselector with symlink (by-id) and wipe option fails
- OCPBUGS-27219 - Workload annotation missing in lvms namespace causing LVMS installation failure
- OCPEDGE-26 - Wipe local filesystem with LVMS
- OCPEDGE-37 - TopoLVM: Support PVCs in smaller Increments than 1Gi
- OCPEDGE-45 - LVMS Maintainability and Debugability 4.15
- OCPEDGE-63 - LVMS FIPS Build and Delivery Compliance
- OCPEDGE-64 - Enable multi-node LVMS
- OCPEDGE-670 - LVMS 4.15 Release
- OCPEDGE-70 - Software RAID via mdadm on LVMS
- OCPEDGE-77 - LVMS Support for OpenShift without CSISnapshot Capability
- OCPEDGE-81 - Recover LVMS cluster from on-disk metadata
- OCPBUGS-27330 - [release-4.15] LVMS deployment.kubernetes.io/revision in topolvm-controller container is continuously patched out
- OCPBUGS-27358 - [release-4.15] LVM Operator does not have update and patch rights for various Cluster scoped resources
- OCPBUGS-26768 - PVC provisioning fails with storageClass using default volumeGroup in a cluster where multiple VGs are present
- OCPBUGS-27820 - [LVMS] FIPS build compliance check payload scan failed on lvms operator
- OCPBUGS-18397 - Deletion of LVMCluster is stuck when VG no longer exists and doesn't remove lvmd conf entry for deviceClass
- OCPBUGS-18708 - Creating more than one LVMCluster is allowed but not working
- OCPBUGS-19990 - Name of a device class should not be allowed to change
- OCPBUGS-22109 - [LVMS] PVC warning event 'notEnoughCapacity' is not generated
- OCPBUGS-22362 - Build is missing "valid subscription" data - LVM Storage [4.15]
- OCPBUGS-28966 - After an upgrade from 4.14 to 4.15, LVMS cannot tag the existing volume groups
- OCPBUGS-29065 - During an upgrade, TopoLVM Node volumes are not reconciled
CVEs
(none)
References
(none)
aarch64
| lvms4/lvms-must-gather-rhel9@sha256:2d00be1c9f630535de160678a87eeaaf1d511efef5d0bfa3d58bcbfac4e4c674 |
| lvms4/lvms-operator-bundle@sha256:521f46b40d0fef3d607bf2d7959ec6d4e23c84f31d22a081beb2a6c01c34f8e0 |
| lvms4/lvms-rhel9-operator@sha256:bc125fd0127eead35979721f44e342c73b38667dbd2033fad80f5aefbc199e74 |
| lvms4/topolvm-rhel9@sha256:496ed3c167de7431366695f289e0dc3321bc5098a040f59c1ed7eb9bd9d9c78f |
ppc64le
| lvms4/lvms-must-gather-rhel9@sha256:881c3ab87c7c746a1cf82f3711c505b91890e0016f2eab8b295b8343c38c5f8b |
| lvms4/lvms-operator-bundle@sha256:2fdaacffa5b8b70f115ee97f4a5fd9d5139f69ba22e0a6fc6b1279499d15f4a1 |
| lvms4/lvms-rhel9-operator@sha256:84972bf05e2f4160efe73ccbf5e3a014d0dade80509af16739bc1a2239089aaf |
| lvms4/topolvm-rhel9@sha256:1f3a6f540e689ed30623339e3d4805b21b8975cf53fe5093cdc1bb43b0ca6ce7 |
s390x
| lvms4/lvms-must-gather-rhel9@sha256:0eaf13d2d6753aaf47bebb4027fc5b09803131264daf0e1fb1e97179add90fcc |
| lvms4/lvms-operator-bundle@sha256:9634320ef358398e0b0f0fbc47665cebc303bc890abd1db64b9283e2ded68dd4 |
| lvms4/lvms-rhel9-operator@sha256:414c079dadec59f8d3e4746bea7f381bcd993a61ac96f1892f8c1b0864133354 |
| lvms4/topolvm-rhel9@sha256:a5a36369d970a5cf12f47332c311547d9193c88180d1f9bd612f63c3986d21e5 |
x86_64
| lvms4/lvms-must-gather-rhel9@sha256:06d47e031100db66a41ab0f0099046fb5ec2ada92aafa76ac3c3f5ade055bfba |
| lvms4/lvms-operator-bundle@sha256:be4cd67708878c3b775268c7cd307c3636557c30ed6e815c08e4e9a6a0a939b6 |
| lvms4/lvms-rhel9-operator@sha256:c57a3c45c6346ee502bfcb59fc20fe12ebd28dbe169667be2203eece716aba87 |
| lvms4/topolvm-rhel9@sha256:079356154d8ca250786f894632abb0e7d7693364aec08a41fab7023d76b11dd2 |
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.