6.5. Storage and File Systems

device-mapper-persistent-data component, BZ#960284
Tools provided by the device-mapper-persistent-data package fail to operate on 4K hard-sectored metadata devices.
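One way to check whether a device is 4K hard-sectored is to query its logical and physical sector sizes with blockdev (the device name /dev/sdb is only a placeholder):
~]# blockdev --getss /dev/sdb
~]# blockdev --getpbsz /dev/sdb
A device that reports 4096 as its logical sector size is affected by this issue.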
anaconda component
In UEFI mode, when creating a partition for software RAID, anaconda can fail to allocate the /boot/efi mount point to the software RAID partition; in this scenario, the installation fails with the "have not created /boot/efi" message.
parted component
Users might be unable to access a partition created by parted. To work around this problem, reboot the machine.
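Before rebooting, one way to check whether the kernel has registered the new partition is to look for it in /proc/partitions (the partition name sda3 is only an example):
~]# grep sda3 /proc/partitions
If the partition is not listed, reboot the machine as described above.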
lvm2 component, BZ#852812
When a thin pool is filled to 100% by writing to a thin volume device, access to all thin volumes using this thin pool can become blocked. To prevent this, avoid overfilling the pool. If the pool is overfilled and this error occurs, extend the thin pool with new space to continue using it.
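For example, assuming a thin pool named pool in a volume group named vg (hypothetical names), the pool's fill level can be watched and the pool extended with commands such as:
~]# lvs -o lv_name,data_percent vg
~]# lvextend -L +1G vg/pool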
dracut component
The Qlogic QLA2xxx driver can miss some paths after booting from a Storage Area Network (SAN). To work around this problem, run the following commands:
~]# echo "options qla2xxx ql2xasynclogin=0" > /etc/modprobe.d/qla2xxx.conf
~]# mkinitrd /boot/initramfs-`uname -r`.img `uname -r` --force
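After the system is rebooted with the new initramfs, one way to confirm that the option took effect is to read the parameter back from sysfs, assuming the loaded qla2xxx module exposes it there:
~]# cat /sys/module/qla2xxx/parameters/ql2xasynclogin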
kernel component
Unloading the nfs module can cause the system to terminate unexpectedly if the fsx utility has previously been run against an NFSv4.1 mount.
device-mapper-multipath component
When the multipathd service is not running, failed paths will not be restored. However, the multipath command gives no indication that multipathd is not running. Users can therefore unknowingly set up multipath devices without starting the multipathd service, which prevents failed paths from being restored automatically. Make sure to start multipathing by
  • either running:
    ~]# mpathconf --enable
    ~]# service multipathd start
    
  • or:
    ~]# chkconfig multipathd on
    ~]# service multipathd start
    
multipathd will automatically start on boot, and multipath devices will automatically restore failed paths.
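Once multipathd is running, the state of the multipath devices and their paths can be inspected with, for example:
~]# multipath -ll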
lvm2 component, BZ#837603
When the administrator disables use of the lvmetad daemon in the lvm.conf file but the daemon is still running, the cached metadata are remembered until the daemon is restarted. However, if the use_lvmetad parameter in lvm.conf is reset to 1 without an intervening lvmetad restart, the cached metadata can be incorrect. Consequently, VG metadata can be overwritten with previous versions. To work around this problem, stop the lvmetad daemon manually when disabling use_lvmetad in lvm.conf, and start the daemon again only after use_lvmetad has been set back to 1. To recover from an out-of-sync lvmetad cache, execute the pvscan --cache command or restart lvmetad. To restore metadata to the correct versions, use vgcfgrestore with a corresponding file in /etc/lvm/archive.
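For illustration, assuming a volume group named vg (a hypothetical name) whose metadata needs to be restored, a recovery sequence might look as follows; vgcfgrestore --list shows which archived metadata files are available under /etc/lvm/archive:
~]# pvscan --cache
~]# vgcfgrestore --list vg
~]# vgcfgrestore -f /etc/lvm/archive/<archive file> vg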
lvm2 component, BZ#563927
Due to the limitations of the LVM 'mirror' segment type, it is possible to encounter a deadlock when snapshots of mirrors are created. The deadlock can occur if a snapshot change (for example, creation, resizing, or removal) happens at the same time as a mirror device failure: the mirror blocks I/O until LVM can respond to the failure, while the snapshot holds the LVM lock while trying to read the mirror.
If you wish to use mirroring and take snapshots of those mirrors, it is recommended to use the 'raid1' segment type for the mirrored logical volume instead. To do so, add the '--type raid1' argument to the command that creates the mirrored logical volume, as follows:
~]$ lvcreate --type raid1 -m 1 -L 1G -n my_mirror my_vg
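An existing logical volume of the 'mirror' segment type can also be converted to 'raid1' in place; a sketch, reusing the my_vg/my_mirror names from the example above:
~]$ lvconvert --type raid1 my_vg/my_mirror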
kernel component, BZ#606260
The NFSv4 server in Red Hat Enterprise Linux 6 currently allows clients to mount using UDP and advertises NFSv4 over UDP with rpcbind. However, this configuration is not supported by Red Hat and violates the RFC 3530 standard.
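To make sure a client uses a supported transport, the TCP protocol can be requested explicitly at mount time; for example (the server name and paths are placeholders):
~]# mount -t nfs4 -o proto=tcp server:/export /mnt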
lvm2 component
The pvmove command cannot currently be used to move mirror devices. However, it is possible to move mirror devices by issuing a sequence of two commands. For mirror images, add a new image on the destination PV and then remove the mirror image on the source PV:
~]$ lvconvert -m +1 <vg/lv> <new PV>
~]$ lvconvert -m -1 <vg/lv> <old PV>
Mirror logs can be handled in a similar fashion:
~]$ lvconvert --mirrorlog core <vg/lv>
~]$ lvconvert --mirrorlog disk <vg/lv> <new PV>
or
~]$ lvconvert --mirrorlog mirrored <vg/lv> <new PV>
~]$ lvconvert --mirrorlog disk <vg/lv> <old PV>
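For example, to move a mirror image of the logical volume my_vg/my_mirror from /dev/sdb1 to /dev/sdc1 (all names are hypothetical), the two commands for the image would be:
~]$ lvconvert -m +1 my_vg/my_mirror /dev/sdc1
~]$ lvconvert -m -1 my_vg/my_mirror /dev/sdb1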