6.5. Storage and File Systems

lvm2 component, BZ#1024347
An event is generated for any device that is being watched for changes by means of a special WATCH udev rule. This udev rule is also used for logical volumes, and it keeps the /dev/ directory up-to-date with any data written to the logical volume (mainly the symbolic links that are based on metadata, such as the content of the /dev/disk directory). The event is generated each time the device is closed after having been open for writing. As a consequence, when the lvconvert command prepares a logical volume for thin pool use, the following error message can appear:
device-mapper: remove ioctl on  failed: Device or resource busy
This is caused by the interaction between the LVM command and udev: the original logical volume is open for writing while part of it is zeroed to prepare it for thin pool use. The logical volume is then closed, which triggers the WATCH rule, and LVM tries to remove the original volume while udev can still have it open. This produces the error message, and LVM retries the removal several times before exiting with an lvconvert failure. Normally, udev processes the logical volume quickly, one of the retries succeeds, and users can simply ignore the error message. If the number of retries is not sufficient, lvconvert can fail as a result. In that case, comment out the OPTIONS+="watch" line in the /lib/udev/rules.d/13-dm-disk.rules file, which disables the WATCH rule for LVM volumes. Note that this may cause the /dev/ content to be out-of-sync with the actual metadata state stored on the logical volume.
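For illustration, the line to comment out in /lib/udev/rules.d/13-dm-disk.rules would then look as follows (the exact surrounding rule text can differ between udev versions):
# OPTIONS+="watch"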
device-mapper-persistent-data component, BZ#960284
Tools provided by the device-mapper-persistent-data package fail to operate on 4K hard-sectored metadata devices.
anaconda component
In UEFI mode, when creating a partition for software RAID, anaconda can fail to allocate the /boot/efi mount point to the software RAID partition; in that case, the installation fails with the "have not created /boot/efi" error message.
kernel component, BZ#918647
Thin provisioning uses reference counts to indicate that data is shared between a thin volume and snapshots of the thin volume. There is a known issue with the way reference counts are managed when a discard is issued to a thin volume that has snapshots. Creating snapshots of a thin volume and then issuing discards to the thin volume can therefore result in data loss in the snapshot volumes. Users are strongly encouraged to disable discard support on the thin pool for the time being. To do so with lvm2 while the pool is offline, use the lvchange --discard ignore <pool> command. Any discards that might be issued to thin volumes will then be ignored.
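For example, assuming a thin pool named pool0 in a volume group named vg00 (hypothetical names), discard support can be disabled while the pool is inactive as follows:
~]# lvchange --discard ignore vg00/pool0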
kernel component
Storage that reports a discard_granularity value that is not a power of two causes the kernel to issue discard requests improperly to the underlying storage. This results in I/O errors associated with the failed discard requests. To work around the problem, if possible, avoid upgrading to vendor storage firmware that reports a discard_granularity value that is not a power of two.
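The value a device reports can be checked through sysfs; for example, for a disk named sda (substitute the actual device name):
~]$ cat /sys/block/sda/queue/discard_granularity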
parted component
Users might be unable to access a partition created by parted. To work around this problem, reboot the machine.
lvm2 component, BZ#852812
When a thin pool is filled to 100% by writing to a thin volume device, access to all thin volumes using this thin pool can become blocked. To prevent this, try not to overfill the pool. If the pool is overfilled and this error occurs, extend the thin pool with new space to continue using it.
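For example, assuming a thin pool named pool0 in a volume group named vg00 that still has free extents (hypothetical names), the pool can be extended as follows:
~]# lvextend -L +2G vg00/pool0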
dracut component
The QLogic qla2xxx driver can miss some paths after booting from a Storage Area Network (SAN). To work around this problem, run the following commands:
echo "options qla2xxx ql2xasynclogin=0" > /etc/modprobe.d/qla2xxx.conf
mkinitrd  /boot/initramfs-`uname -r`.img `uname -r` --force
lvm2 component, BZ#903411
Activating a logical volume can fail if the --thinpool and --discards options were specified when the logical volume was created. To work around this problem, manually deactivate all thin volumes related to the changed thin pool prior to running the lvchange command.
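For example, assuming a thin volume named thinvol1 in a volume group named vg00 that uses the affected pool (hypothetical names), it can be deactivated as follows:
~]# lvchange -an vg00/thinvol1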
kernel component
Unloading the nfs module can cause the system to terminate unexpectedly if the fsx utility was previously run over NFSv4.1.
device-mapper-multipath component
When the multipathd service is not running, failed devices are not restored. However, the multipath command gives no indication that multipathd is not running, so users can unknowingly set up multipath devices without starting the multipathd service, which keeps failed paths from being restored automatically. Make sure to start multipathing by
  • either running:
    ~]# mpathconf --enable
    ~]# service multipathd start
    
  • or:
    ~]# chkconfig multipathd on
    ~]# service multipathd start
    
This way, multipathd starts automatically on boot, and multipath devices automatically restore failed paths.
lvm2 component, BZ#837603
When the administrator disables use of the lvmetad daemon in the lvm.conf file but the daemon is still running, the cached metadata are kept until the daemon is restarted. However, if the use_lvmetad parameter in lvm.conf is reset to 1 without an intervening lvmetad restart, the cached metadata can be incorrect, and VG metadata can consequently be overwritten with previous versions. To work around this problem, stop the lvmetad daemon manually when disabling use_lvmetad in lvm.conf, and only restart the daemon after use_lvmetad has been set back to 1. To recover from an out-of-sync lvmetad cache, execute the pvscan --cache command or restart lvmetad. To restore metadata to the correct versions, use vgcfgrestore with a corresponding file in /etc/lvm/archive.
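For example, assuming a volume group named vg00 and an archive file written before the unwanted change (both names hypothetical), the cache can be refreshed and the metadata restored as follows:
~]# pvscan --cache
~]# vgcfgrestore -f /etc/lvm/archive/vg00_00012-1234567890.vg vg00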
lvm2 component, BZ#563927
Due to the limitations of the LVM 'mirror' segment type, it is possible to encounter a deadlock when snapshots are created of mirrors. The deadlock can occur if a snapshot change (for example, creation, resizing, or removal) happens at the same time as a mirror device failure. In this case, the mirror blocks I/O until LVM can respond to the failure, but the snapshot holds the LVM lock while trying to read the mirror.
If the user wishes to use mirroring and take snapshots of those mirrors, then it is recommended to use the 'raid1' segment type for the mirrored logical volume instead. This can be done by adding the additional arguments '--type raid1' to the command that creates the mirrored logical volume, as follows:
~]$ lvcreate --type raid1 -m 1 -L 1G -n my_mirror my_vg
kernel component, BZ#606260
The NFSv4 server in Red Hat Enterprise Linux 6 currently allows clients to mount using UDP and advertises NFSv4 over UDP with rpcbind. However, this configuration is not supported by Red Hat and violates the RFC 3530 standard.
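To make sure a client uses TCP, the transport can be specified explicitly at mount time; for example (server and export names are hypothetical):
~]# mount -t nfs4 -o proto=tcp server.example.com:/export /mnt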
lvm2 component
The pvmove command cannot currently be used to move mirror devices. However, it is possible to move mirror devices by issuing a sequence of two commands. For mirror images, add a new image on the destination PV and then remove the mirror image on the source PV:
~]$ lvconvert -m +1 <vg/lv> <new PV>
~]$ lvconvert -m -1 <vg/lv> <old PV>
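For example, to move one image of the mirrored volume vg00/mirrorlv from /dev/sdb1 to /dev/sdc1 (hypothetical names):
~]$ lvconvert -m +1 vg00/mirrorlv /dev/sdc1
~]$ lvconvert -m -1 vg00/mirrorlv /dev/sdb1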
Mirror logs can be handled in a similar fashion:
~]$ lvconvert --mirrorlog core <vg/lv>
~]$ lvconvert --mirrorlog disk <vg/lv> <new PV>
or
~]$ lvconvert --mirrorlog mirrored <vg/lv> <new PV>
~]$ lvconvert --mirrorlog disk <vg/lv> <old PV>