Chapter 34. Storage

lvconvert --repair now works properly on cache logical volumes

Due to a regression in the lvm2-2.02.166-1.el package, released in Red Hat Enterprise Linux 7.3, the lvconvert --repair command could not be run properly on cache logical volumes. As a consequence, the Cannot convert internal LV error occurred. The underlying source code has been modified to fix this bug, and lvconvert --repair now works as expected. (BZ#1380532)
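
For reference, a minimal invocation of the repair command on a cache logical volume might look like the following; the volume group and logical volume names (vg00, cachelv) are placeholders:

    # lvconvert --repair vg00/cachelv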

LVM2 library incompatibilities no longer cause device monitoring to fail and be lost during an upgrade

Due to a bug in the lvm2-2.02.166-1.el package, released in Red Hat Enterprise Linux 7.3, the LVM2 library was incompatible with earlier versions of Red Hat Enterprise Linux 7. The incompatibility could cause device monitoring to fail and be lost during an upgrade. As a consequence, device failures could go unnoticed (RAID), or out-of-space conditions were not handled properly (thin provisioning). This update fixes the incompatibility, and logical volume monitoring now works as expected. (BZ#1382688)
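
To confirm that monitoring is in place for a volume after upgrading, commands along these lines can be used; vg00 and raidlv are placeholder names, and the second command re-enables monitoring for a single logical volume if it is reported as not monitored:

    # lvs -a -o name,segtype,seg_monitor vg00
    # lvchange --monitor y vg00/raidlv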

be2iscsi driver errors no longer cause the system to become unresponsive

Previously, the operating system sometimes became unresponsive due to be2iscsi driver errors. This update fixes be2iscsi, and the operating system no longer hangs due to be2iscsi errors. (BZ#1324918)

Interaction problems no longer occur with the lvmetad daemon when mirror segment type is used

Previously, when the legacy mirror segment type was used to create mirrored logical volumes with 3 or more legs, interaction problems could occur with the lvmetad daemon. Problems observed occurred only after a second device failure, when mirror fault policies were set to the non-default allocate option, when lvmetad was used, and there had been no reboot of the machine between device failure events. This bug has been fixed, and the described interaction problems no longer occur. (BZ#1380521)
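
As an illustration, a configuration of the affected kind could be set up as follows; the volume group, logical volume name, and size are placeholders, and the fault policy settings belong in the activation section of /etc/lvm/lvm.conf. The -m 2 option creates a three-legged mirror:

    # lvcreate --type mirror -m 2 -L 1G -n mirrorlv vg00

    activation {
        mirror_image_fault_policy = "allocate"
        mirror_log_fault_policy = "allocate"
    }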

The multipathd daemon no longer shows incorrect error messages for blacklisted devices

Previously, the multipathd daemon incorrectly reported that it could not find devices that had been blacklisted, so users saw error messages when nothing was wrong. With this fix, multipath checks whether a device has been blacklisted before issuing an error message, and these spurious messages no longer appear. (BZ#1403552)
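
For context, devices are typically blacklisted in the blacklist section of /etc/multipath.conf; the device node pattern and WWID below are examples only:

    blacklist {
        devnode "^sdb$"
        wwid "3600508b400105e210000900000490000"
    }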

Multipath now flags device reloads when there are no usable paths

Previously, when the last path device to a multipath device was removed, the state of lvmetad became incorrect, and lvm devices on top of multipath could stop working correctly. This was because there was no way for the device-mapper to know how many working paths there were when a multipath device got reloaded. Because of this, the multipath udev rules that dealt with disabling scanning and other dm rules only worked when multipath devices lost their last usable path because it failed, instead of because of a device table reload. With this fix, multipath now flags device reloads when there are no usable paths, and the multipath udev rules now correctly disable scanning and other dm rules whenever a multipath device loses its last usable path. As a result, the state of lvmetad remains correct, and LVM devices on top of multipath continue working correctly. (BZ#1239173)

Read requests sent after failed writes will always return the same data on multipath devices

Previously, if a write request was hung in the rbd module, and the iSCSI initiator and multipath layer decided to fail the request to the application, read requests sent after the failure might not have reflected the state of the write. This was because, when a Ceph rbd image is exported through multiple iSCSI targets, the rbd kernel module grabbed the exclusive lock only when it received a write request. With this fix, the rbd module grabs the exclusive lock for both reads and writes, which causes hung writes to be flushed or failed before reads are executed. As a result, read requests sent after failed writes always return the same data. (BZ#1380602)
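
For reference, the exclusive-lock image feature that this behavior relies on can be inspected and, where it is disabled, enabled with the rbd tool; the pool and image names are placeholders, and whether the feature can be toggled depends on the Ceph release in use:

    # rbd info rbd/iscsi-image | grep features
    # rbd feature enable rbd/iscsi-image exclusive-lock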

When a path device in a multipath device switches to read-only, the multipath device will be reloaded read-only

Previously, when reloading a multipath device, the multipath code always tried to reload the device read-write first, and then fell back to read-only. If a path device had already been opened read-write in the kernel, it remained open read-write even after the device switched to read-only mode, so the read-write reload would succeed. Consequently, when path devices switched from read-write to read-only, the multipath device would still be read-write (even though all writes to the read-only device would fail). With this fix, multipathd now reloads the multipath device read-only when it gets a uevent showing that a path device has become read-only. As a result, when a path device in a multipath device switches to read-only, the multipath device is reloaded read-only. (BZ#1431562)
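
To check the read-only state of a path device and of the multipath map on top of it, commands such as the following can be used; /dev/sdc and mpatha are placeholder names, and for a read-only map dmsetup is expected to report the state as ACTIVE (READ-ONLY):

    # blockdev --getro /dev/sdc
    # dmsetup info mpatha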

Users no longer get potentially confusing stale data for multipath devices that are not being checked

Previously, when a path device was orphaned (not a member of a multipath device), the device state and checker state displayed by the show paths command remained the state of the device before it was orphaned. As a result, the show paths command showed out-of-date information for devices that multipathd was no longer checking. With this fix, the show paths command displays undef as the checker state and unknown as the device state of orphaned paths, and users no longer get potentially confusing stale data for devices that are not being checked. (BZ#1402092)
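
The path listing referred to above is available through the multipathd interactive shell; the output includes the device state and checker state for each path:

    # multipathd -k"show paths"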

The multipathd daemon no longer hangs as a result of running the prioritizer on failed paths

Previously, in some cases multipathd ran the prioritizer on paths that had failed. Because of this, if multipathd was configured with a synchronous prioritizer, it could hang trying to run the prioritizer on a failed path. With this fix, multipathd no longer runs the prioritizer when a path has failed, and it no longer hangs for this reason. (BZ#1362120)
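
For context, the prioritizer in question is the one selected through the prio option in /etc/multipath.conf, for example in the defaults section; alua is only one possible choice:

    defaults {
        prio "alua"
    }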

New RAID4 volumes, and existing RAID4 or RAID10 logical volumes after a system upgrade, are now correctly activated

After creating RAID4 logical volumes on Red Hat Enterprise Linux version 7.3, or after upgrading a system that has existing RAID4 or RAID10 logical volumes to version 7.3, the system sometimes failed to activate these volumes. With this update, the system activates these volumes successfully. (BZ#1386184)
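
As an example, a RAID4 logical volume can be created and activated as follows; the volume group, logical volume name, stripe count, and size are placeholders:

    # lvcreate --type raid4 -i 3 -L 1G -n raid4lv vg00
    # lvchange -ay vg00/raid4lv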

LVM tools no longer crash due to an incorrect status of PVs

When LVM observes particular types of inconsistencies between the metadata on Physical Volumes (PVs) in a Volume Group (VG), LVM can automatically repair them. Such inconsistencies happen, for example, if a VG is changed while some of its PVs are temporarily invisible to the system and then the PVs reappear.
Prior to this update, when such a repair operation was performed, all the PVs were sometimes temporarily considered to have returned even if this was not the case. As a consequence, LVM tools sometimes terminated unexpectedly with a segmentation fault. With this update, the described problem no longer occurs. (BZ#1434054)
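
The metadata consistency of a volume group can also be checked manually, for example with the following commands; vg00 is a placeholder name:

    # vgck vg00
    # pvs -o pv_name,vg_name,pv_uuid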