Chapter 17. Storage
Support added in LVM for RAID level takeover
LVM now provides full support for RAID takeover, previously available as a Technology Preview, which allows users to convert a RAID logical volume from one RAID level to another. This release expands the number of RAID takeover combinations. Support for some transitions may require intermediate steps. New RAID types that are added by means of RAID takeover are not supported in older released kernel versions; these RAID types are raid0, raid0_meta, raid5_n, and raid6_{ls,rs,la,ra,n}_6. Users creating those RAID types or converting to those RAID types on Red Hat Enterprise Linux 7.4 cannot activate the logical volumes on systems running previous releases. RAID takeover is available only on top-level logical volumes in single machine mode (that is, takeover is not available for cluster volume groups or while the RAID is under a snapshot or part of a thin pool). (BZ#1366296)
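Each takeover step is a single lvconvert invocation. As a hedged sketch (the volume group and logical volume names are placeholders), converting a two-image raid1 volume to raid5 might look like:

```shell
# Sketch only: "vg" and "my_lv" are hypothetical names.
# Take over a 2-image raid1 LV to raid5:
lvconvert --type raid5 vg/my_lv
# Some conversions go through an intermediate type; lvconvert reports
# the required next step when a direct conversion is not possible:
lvconvert --type raid6 vg/my_lv
```

Conversions that add parity or mirror images need free extents in the volume group for the additional devices.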
LVM now supports RAID reshaping
LVM now provides support for RAID reshaping. While takeover allows users to change from one RAID type to another, reshaping allows users to change properties such as the RAID algorithm, stripe size, region size, or number of images. For example, a user can change a 3-way stripe to a 5-way stripe by adding two additional devices. Reshaping is available only on top-level logical volumes in single machine mode, and only while the logical volume is not in use (a volume mounted by a file system, for example, is in use). (BZ#1191935, BZ#834579, BZ#1191978, BZ#1392947)
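A reshape uses the same lvconvert command as a takeover. A minimal sketch, assuming a raid5 volume named vg/my_lv with 3 stripes and enough free devices in the volume group:

```shell
# Grow from 3 to 5 stripes by allocating two more images:
lvconvert --stripes 5 vg/my_lv
# The region size can be changed in a separate step:
lvconvert --regionsize 2M vg/my_lv
```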
Device Mapper linear devices now support DAX
Direct Access (DAX) support has been added to the dm-linear and dm-stripe targets. Multiple Non-Volatile Dual In-line Memory Module (NVDIMM) devices can now be combined to provide larger persistent memory (PMEM) block devices. (BZ#1384648)
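For example, two PMEM namespaces could be concatenated into one linear device with dmsetup; this is a sketch, and the device names and sector counts below are placeholders:

```shell
# Table format: <start> <length> linear <device> <offset> (512-byte sectors)
dmsetup create pmem-concat <<'EOF'
0       4194304 linear /dev/pmem0 0
4194304 4194304 linear /dev/pmem1 0
EOF
# A DAX-capable file system on the result can then be mounted with -o dax.
```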
libstoragemgmt rebased to version 1.4.0
The libstoragemgmt packages have been upgraded to upstream version 1.4.0, which provides a number of bug fixes and enhancements over the previous version. Notably, the following libraries have been added:
- Query serial number of local disk: lsm_local_disk_serial_num_get()/lsm.LocalDisk.serial_num_get()
- Query LED status of local disk: lsm_local_disk_led_status_get()/lsm.LocalDisk.led_status_get()
- Query link speed of local disk: lsm_local_disk_link_speed_get()/lsm.LocalDisk.link_speed_get()
Notable bug fixes include:
- The megaraid plug-in for the Dell PowerEdge RAID Controller (PERC) has been fixed.
- The local disk rotation speed query on NVM Express (NVMe) disks has been fixed.
- Incorrect lsmcli error handling on a local disk query has been fixed.
- All gcc compile warnings have been fixed.
- The obsolete usage of the autoconf AC_OUTPUT macro has been fixed. (BZ#1403142)
mpt3sas updated to version 15.100.00.00
The mpt3sas storage driver has been updated to version 15.100.00.00, which adds support for new devices. Contact your vendor for more details. (BZ#1306453)
The lpfc_no_hba_reset module parameter for the lpfc driver is now available
With this update, the lpfc driver for certain models of Emulex Fibre Channel Host Bus Adapters (HBAs) has been enhanced with the lpfc_no_hba_reset module parameter. This parameter accepts a list of one or more hexadecimal world-wide port names (WWPNs) of HBAs that are not to be reset during SCSI error handling.
As a result, lpfc allows you to control which ports on the HBA may be reset during SCSI error handling. In addition, lpfc now allows you to set the eh_deadline parameter, which represents an upper limit on the SCSI error handling time. (BZ#1366564)
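The parameter can be set at module load time. A sketch of a modprobe configuration fragment; the WWPNs are hypothetical, and the comma-separated list format is an assumption based on the description above:

```
# /etc/modprobe.d/lpfc.conf (example values)
options lpfc lpfc_no_hba_reset=0x10000000c9abcdef,0x10000000c9abcdee
```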
LVM now detects Veritas Dynamic Multi-Pathing systems and no longer accesses the underlying device paths directly
For LVM to work correctly with Veritas Dynamic Multi-Pathing, you must set obtain_device_list_from_udev to 0 in the devices section of the /etc/lvm/lvm.conf configuration file. These multipathed devices are not exposed through the standard udev interfaces, so without this setting LVM is unaware of their existence. (BZ#1346280)
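The corresponding lvm.conf fragment is minimal:

```
# /etc/lvm/lvm.conf
devices {
    # Scan devices directly instead of relying on udev, so that
    # Veritas DMP devices are visible to LVM.
    obtain_device_list_from_udev = 0
}
```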
The libnvdimm kernel subsystem now supports PMEM subdivision
Intel's Non-Volatile Dual In-line Memory Module (NVDIMM) label specification has been extended to allow more than one Persistent Memory (PMEM) namespace to be configured per region (interleave set). The kernel shipped with Red Hat Enterprise Linux 7.4 has been modified to support these new configurations.
Without subdivision support, a single region could previously be used in only one mode: pmem, device dax, or sector. With this update, a single region can be subdivided, and each subdivision can be configured independently of the others. (BZ#1383827)
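With the ndctl utility, subdividing a region is a matter of creating several namespaces in it. A sketch, assuming a region named region0 with enough free capacity (mode names vary between ndctl versions):

```shell
# Two independent namespaces carved out of the same region:
ndctl create-namespace --region=region0 --mode=fsdax --size=16G
ndctl create-namespace --region=region0 --mode=sector --size=16G
ndctl list --region=region0    # inspect the result
```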
Warning messages when multipathd is not running
Users now receive warning messages if they run a multipath command that creates or lists multipath devices while multipathd is not running.
If multipathd is not running, multipath devices cannot restore paths that have failed or react to changes in the device setup. The multipath command now prints a warning message if multipath devices exist while multipathd is not running. (BZ#1359510)
C library interface added to multipathd to give structured output
Users can now use the libdmmp library to get structured information from multipathd. Other programs that want information from multipathd can now get it without running a command and parsing its output. (BZ#1430097)
New remove_retries multipath configuration value
If a multipath device is temporarily in use when multipath tries to remove it, the removal fails. It is now possible to control the number of times that the multipath command retries removing a busy multipath device by setting the remove_retries configuration value. The default value is 0, in which case multipath does not retry failed removes. (BZ#1368211)
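A sketch of the corresponding multipath.conf fragment, with an arbitrary retry count:

```
# /etc/multipath.conf
defaults {
    # Retry a busy device removal up to 3 times (default is 0).
    remove_retries 3
}
```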
New multipathd reset multipaths stats commands
Multipath now supports two new multipathd commands: multipathd reset multipaths stats and multipathd reset multipath dev stats. These commands reset the device statistics that multipathd tracks for all devices, or for the specified device, respectively. This allows users to reset their device statistics after they make changes to them. (BZ#1416569)
New disable_changed_wwids multipath configuration parameter
Multipath now supports a new multipath.conf defaults section parameter, disable_changed_wwids. When this is set, multipathd notices when a path device changes its WWID while in use and disables access to the path device until its WWID returns to its previous value.
When the WWID of a SCSI device changes, it is often a sign that the device has been remapped to a different LUN. If this happens while the SCSI device is in use, it can lead to data corruption. Setting the disable_changed_wwids parameter warns users when a SCSI device changes its WWID. In many cases multipathd disables access to the path device as soon as it is unmapped from its original LUN, removing the possibility of corruption. However, multipathd is not always able to catch the change before the SCSI device has been remapped, so there may still be a window for corruption. Remapping in-use SCSI devices is not currently supported. (BZ#1169168)
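A sketch of enabling the parameter in multipath.conf:

```
# /etc/multipath.conf
defaults {
    # Disable access to a path device whose WWID changes while in use.
    disable_changed_wwids yes
}
```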
Updated built-in configuration for HPE 3PAR array
The built-in configuration for the HPE 3PAR array now sets no_path_retry to 12. (BZ#1279355)
Added built-in configuration for NFINIDAT InfiniBox.* devices
Multipath now autoconfigures NFINIDAT InfiniBox.* devices. (BZ#1362409)
device-mapper-multipath now supports the max_sectors_kb configuration parameter
With this update, device-mapper-multipath provides a new max_sectors_kb parameter in the defaults, devices, and multipaths sections of the multipath.conf file. The max_sectors_kb parameter allows you to set the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated.
When a multipath device is created, it inherits the max_sectors_kb value from its path devices. Manually raising this value for the multipath device or lowering it for the path devices can cause multipath to create I/O operations larger than the path devices allow.
Using the max_sectors_kb parameter in multipath.conf is an easy way to set these values before a multipath device is created on top of the path devices, and to prevent invalid-sized I/O operations from being passed down. (BZ#1394059)
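A sketch of the parameter in both the defaults and multipaths sections; the WWID shown is a placeholder:

```
# /etc/multipath.conf
defaults {
    max_sectors_kb 1024
}
multipaths {
    multipath {
        wwid 3600508b4000156d700012000000b0000   # hypothetical WWID
        max_sectors_kb 512
    }
}
```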
New detect_checker multipath configuration parameter
Some devices, such as the VNX2, can optionally be configured in ALUA mode. In this mode, they need to use a different path_checker and prioritizer than in their non-ALUA mode. Multipath now supports the detect_checker parameter in the multipath.conf defaults and devices sections. If this is set, multipath detects whether a device supports ALUA, and if so, it overrides the configured path_checker and uses the TUR checker instead. The detect_checker option allows devices with an optional ALUA mode to be correctly autoconfigured, regardless of which mode they are in. (BZ#1372032)
Multipath now has a built-in default configuration for Nimble Storage devices
The multipath default hardware table now includes an entry for Nimble Storage arrays. (BZ#1406226)
LVM supports reducing the size of a RAID logical volume
As of Red Hat Enterprise Linux 7.4, you can use the lvreduce or lvresize command to reduce the size of a RAID logical volume. (BZ#1394048)
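A minimal sketch, assuming a RAID logical volume vg/my_raid_lv whose file system has already been shrunk (or is absent):

```shell
# Reduce the LV by 10 GiB:
lvreduce --size -10G vg/my_raid_lv
```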
iprutils rebased to version 2.4.14
The iprutils packages have been upgraded to upstream version 2.4.14, which provides a number of bug fixes and enhancements over the previous version. Notably:
- The endian-swapped device_id is now compatible with earlier versions.
- The VSET write cache in bare metal mode is now allowed.
- Creating RAIDs on dual adapter setups has been fixed.
- Verifying rebuilds for single adapter configurations is now disabled by default. (BZ#1384382)
mdadm rebased to version 4.0
The mdadm packages have been upgraded to upstream version 4.0, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds bad block management support for Intel Matrix Storage Manager (IMSM) metadata. The features included in this update are supported on external metadata formats, and Red Hat continues supporting the Intel Rapid Storage Technology enterprise (Intel RSTe) software stack. (BZ#1380017)
LVM extends the size of a thin pool logical volume when a thin pool fills over 50 percent
When a thin pool logical volume fills by more than 50 percent, the dmeventd thin plugin by default calls the command configured as thin_command with every 5 percent increase. This resizes the thin pool once it has filled above the thin_pool_autoextend_threshold set in the activation section of the configuration file. A user may override this default behavior by configuring an external command and specifying it as the value of thin_command in the dmeventd section of the lvm.conf file. For information on the thin plugin and on configuring external commands to maintain a thin pool, see the dmeventd(8) man page.
In previous releases, when a thin pool resize failed, the dmeventd plugin would unconditionally try to unmount all thin volumes associated with the thin pool once a compile-time threshold of more than 95 percent was reached. By default, the dmeventd plugin no longer unmounts any volumes; reproducing the previous behavior requires configuring an external script. (BZ#1442992)
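A sketch of the related lvm.conf settings; the threshold and percentage values are examples, and the external script path is hypothetical:

```
# /etc/lvm/lvm.conf
activation {
    # Autoextend once the pool passes 70% full, growing it by 20%.
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
dmeventd {
    # Optionally replace the built-in handling with an external command:
    # thin_command = "/usr/local/sbin/my_thin_handler"
}
```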
LVM now supports dm-cache metadata version 2
LVM/DM cache has been significantly improved. It provides support for larger cache sizes, better adaptation to changing workloads, greatly improved startup and shutdown times, and higher performance overall. Version 2 of the dm-cache metadata format is now the default when creating cache logical volumes with LVM. Version 1 will continue to be supported for previously created LVM cache logical volumes. Upgrading to version 2 will require the removal of the old cache layer and the creation of a new cache layer. (BZ#1436748)
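As a sketch (the VG, LV, and device names are placeholders), a cache volume using the new metadata format can be created with the --cachemetadataformat option; on an existing cached volume, remove the old cache layer first with lvconvert --uncache:

```shell
# Create a cache pool on a fast device and attach it with format 2:
lvcreate --type cache-pool -L 1G -n fast_pool vg /dev/fast_ssd
lvconvert --type cache --cachepool vg/fast_pool \
          --cachemetadataformat 2 vg/slow_lv
```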
Support for DIF/DIX (T10 PI) on specified hardware
SCSI T10 DIF/DIX is fully supported in Red Hat Enterprise Linux 7.4, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration. DIF/DIX is not supported on other configurations, is not supported for use on the boot device, and is not supported on virtualized guests.
At the current time, the following vendors are known to provide this support.
FUJITSU supports DIF and DIX on:
EMULEX 16G FC HBA:
- EMULEX LPe16000/LPe16002, 10.2.254.0 BIOS, 10.4.255.23 FW, with:
- FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650
QLOGIC 16G FC HBA:
- QLOGIC QLE2670/QLE2672, 3.28 BIOS, 8.00.00 FW, with:
- FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3
Note that T10 DIX requires a database or other software that provides generation and verification of checksums on disk blocks. No currently supported Linux file systems have this capability.
EMC supports DIF on:
EMULEX 8G FC HBA:
- LPe12000-E and LPe12002-E with firmware 2.01a10 or later, with:
- EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later
EMULEX 16G FC HBA:
- LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later, with:
- EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later
QLOGIC 16G FC HBA:
- QLE2670-E-SP and QLE2672-E-SP, with:
- EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later
Please refer to the hardware vendor's support information for the latest status.
Support for DIF/DIX remains in Technology Preview for other HBAs and storage arrays. (BZ#1457907)
The dmstats facility can now track the statistics for files that change
Previously, the dmstats facility could report statistics only for files that did not change in size. It can now watch files for changes and update its mappings to track file I/O even as a file changes in size (or fills holes that may be in the file). (BZ#1378956)
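A sketch of mapping a file into dmstats; the file path is a placeholder:

```shell
# Create regions that map the file's extents and track I/O to them:
dmstats create --filemap /var/lib/libvirt/images/guest.img
# Report the collected counters:
dmstats report
```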
Support for thin snapshots of cached logical volumes
LVM in Red Hat Enterprise Linux 7.4 allows you to create thin snapshots of cached logical volumes. This feature was not available in earlier releases. These external origin cached logical volumes are converted to a read-only state and thus can be used by different thin pools. (BZ#1189108)
New package: nvmetcli
The nvmetcli utility enables you to configure Red Hat Enterprise Linux as an NVMe over Fabrics (NVMe-oF) target, using the NVMe-over-RDMA fabric type. With nvmetcli, you can configure nvmet interactively, or use a JSON file to save and restore the configuration. (BZ#1383837)
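A sketch of non-interactive use; the JSON path is hypothetical:

```shell
# Restore a previously saved target configuration:
nvmetcli restore /etc/nvmet/rdma-target.json
# Tear the target configuration down again:
nvmetcli clear
```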
Device DAX is now available for NVDIMM devices
Device DAX enables consumers such as hypervisors and databases to have raw access to persistent memory without an intervening file system. In particular, Device DAX allows applications to have predictable fault granularities and the ability to flush data to the persistence domain from user space. Starting with Red Hat Enterprise Linux 7.4, Device DAX is available for Non-Volatile Dual In-line Memory Module (NVDIMM) devices. (BZ#1383489)