Why do I see 'Found duplicate PV' warnings when using LVM with multipath storage in RHEL?


Environment

  • Red Hat Enterprise Linux (RHEL) 4, 5, 6, 7, 8
  • lvm2
  • Multiple paths to storage device(s)

Issue

  • Found duplicate PV warning messages after system boots up
  • LVM commands (such as vgs, lvchange, etc.) display messages like the following when listing VGs or LVs:
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf

Resolution

The most common cause of this issue is that lvm claims a device before a different device-mapper layer, such as multipath, can claim it. Once lvm has the device open for exclusive access, the device is no longer available for use by multipath. There is a race condition between lvm and multipath over which one claims a device first, as both can be proceeding in parallel during boot. Setting up a filter prevents this by forcing lvm to scan only the devices the administrator intends.

  • The two most common resolutions for this issue are:
    • Set up LVM filters within /etc/lvm/lvm.conf to avoid scanning unnecessary devices (see the sketch after this list).
      • In many cases setting up a filter in lvm.conf may not be strictly required, but doing so ensures that only appropriate devices are scanned by lvm for use -- for example, rejecting all but local disks and accepting multipath devices instead of the default of all plain sd* and similar disk devices.
      • There can be a race condition between lvm and multipath over which gets to claim exclusive access to a device first, as both can be proceeding in parallel during boot. Setting up a filter guarantees that lvm will only scan and claim appropriate devices.
      • There are instances where multiple reboots without an explicit lvm filter work correctly with multipath... until a subsequent reboot suddenly does not. This reflects the parallel discovery paths in lvm and multipath and the timing that can affect which gets to claim exclusive access first. Again, setting an explicit filter ensures lvm can only claim exclusive access to the desired devices -- be they sd* or mpath.
    • Rebuild the initramfs boot image file (see Case 4 below) to ensure it has current copies of the lvm and multipath configuration files and that the multipath modules are present in the boot image.
      • If the modules are not present, multipath will only be loaded after the boot pivot point, by which time lvm has already had a chance to view and claim all devices ahead of multipath. This is especially true if no filter is in place.
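A minimal sketch of the two steps together, assuming the root VG lives on /dev/sda2 and the device-mapper-multipath maps are named mpath* (adjust both to the actual system; fuller filter examples appear under Case 1 below):

# In /etc/lvm/lvm.conf: accept the local root disk and multipath maps, reject everything else
filter = [ "a|/dev/sda2$|", "a|/dev/mapper/mpath.*|", "r|.*|" ]

Then rebuild the boot image so that early boot uses the same filter (dracut shown; RHEL 6 and later):

# dracut -f -v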

Case 1: The two devices displayed in the output are both single paths to the same device:
Example:

Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sdd not /dev/sdf

Here, sdd and sdf can both be found under the same multipath map in the multipath -ll output:

mpatha (360000000000000000000000000000000) dm-2 Vendor,Model
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 2:0:0:0 sdd 8:48   active ready running
  `- 3:0:0:0 sdf 65:240 active ready running

In this case, a filter can be configured in /etc/lvm/lvm.conf to restrict the devices that LVM will search for metadata. The filter is a list of patterns applied to each device found by a scan of /dev (or the directory specified by the dir keyword in lvm.conf). Patterns are regular expressions delimited by any character and preceded by "a" (for accept) or "r" (for reject). The list is traversed in order, and the first regex that matches a device determines whether the device is accepted or rejected (ignored). Devices that do not match any pattern are accepted.

This filter should include all devices that need to be checked for LVM metadata, such as the local hard drive with the root volume group on it and any multipath devices. By rejecting the underlying paths to a multipath device (such as /dev/sdb, /dev/sdd, etc) you can avoid these duplicate PV warnings since each unique metadata area will only be found once on the multipath device itself.

Example filters that will avoid duplicate PV warnings due to multiple storage paths being available:

  • To accept the second partition on the first hard drive (/dev/sda) and any device-mapper-multipath devices, while rejecting everything else:
filter = [ "a|/dev/sda2$|", "a|/dev/mapper/mpath.*|", "r|.*|" ]
  • To accept all HP SmartArray controllers and any EMC PowerPath devices:
filter = [ "a|/dev/cciss/.*|", "a|/dev/emcpower.*|", "r|.*|" ]
  • To accept any partitions on the first IDE drive and any multipath devices:
filter = [ "a|/dev/hda.*|", "a|/dev/mapper/mpath.*|", "r|.*|" ]

NOTE: When adding a new filter to /etc/lvm/lvm.conf, ensure that the original filter is either commented out with a # or is removed.
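For example, an illustrative lvm.conf fragment (the mpath pattern assumes device-mapper-multipath naming):

# Previous entry commented out so only one filter line is active:
# filter = [ "a/.*/" ]
filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]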

Once a filter has been configured and lvm.conf saved, check the output of these commands to ensure that no physical volumes or volume groups are missing:

# pvscan
# vgscan

The filter can also be tested on the fly, without modifying /etc/lvm/lvm.conf, by adding the --config argument to the LVM command. For example:

# lvs --config 'devices{ filter = [ "a|/dev/emcpower.*|", "r|.*|" ] }'

NOTE: Testing filters using the --config parameter will not make permanent changes to the server's configuration. Make sure to include the working filter in /etc/lvm/lvm.conf after testing.

Once the desired filter is configured, it is recommended to rebuild the initrd with mkinitrd (RHEL 4 and 5) or dracut (RHEL 6 and later) so that only the necessary devices are scanned upon reboot.
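For example, a sketch of both variants (the image paths are the defaults for the running kernel):

# RHEL 6 and later (dracut) -- rebuild the image for the running kernel in place
# dracut -f -v

# RHEL 4 and 5 (mkinitrd) -- overwrite the existing image for the running kernel
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)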

Case 2: The two devices are a multipath device and a path under that multipath device.
Example:

Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjzzzz: using /dev/mapper/mpatha not /dev/sdd
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjzzzz: using /dev/sdd not /dev/mapper/mpatha

Where mpatha in multipath -ll shows sdd as a path:

mpatha (360000000000000000000000000000000) dm-2 Vendor,Model
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 2:0:0:0 sdd 8:48   active ready running
  `- 3:0:0:0 sdf 65:240 active ready running

See Case 1 above, as this is a variant of that issue with the same resolution: set an lvm.conf filter to prevent scanning of PVs on both the multipath device and its underlying sdX paths.
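To confirm that the reported sd device really is a path under the multipath map before filtering it out (a quick check; the device and map names here are examples, and lsblk is available on RHEL 6 and later):

# show whether a multipath device is stacked on top of sdd
# lsblk /dev/sdd
# list the paths grouped under the map
# multipath -ll mpatha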

Case 3: The two devices displayed in the output are both multipath maps.
Example:

Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/mapper/mpatha not /dev/mapper/mpathc

Or

Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowera not /dev/emcpowerh

Here, we are not looking at two different paths but at two different devices. This is much more serious than the cases above, as it often means the machine has been presented devices it should not be seeing (for example, LUN clones or mirrors). In this case, unless you have a clear idea of which devices should be removed from the machine, the situation may be unrecoverable. Consider contacting Red Hat Technical Support.
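One way to begin investigating is to compare the identifiers of the two maps (a sketch; the map and path names are examples, and the scsi_id path varies by release):

# compare the WWIDs reported for each map
# multipath -ll mpatha
# multipath -ll mpathc
# or query the SCSI identifier of an underlying path directly
# /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdd

If the WWIDs differ, the maps really are distinct LUNs that carry identical LVM metadata, as with a clone or mirror.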

Case 4: The multipath modules and/or the lvm and multipath configuration files are missing or out of date within the boot image file.

Found duplicate PV  ZC3kXXXXXXXXXXXXjd7ardDDMSMP: using /dev/sdx not /dev/sdy
Using duplicate PV  /dev/sdx which is last seen, replacing /dev/sdy
Found duplicate PV  ZC3kXXXXXXXXXXXXjd7ardDDMSMP: using /dev/sdx not /dev/sdy
  Using duplicate PV /dev/sdbq5 without holders, ignoring /dev/sdcx5
  WARNING: Device mismatch detected for xyzvg/tmpdir which is accessing /dev/sdx instead of /dev/sdy.
  WARNING: Device mismatch detected for xyzvg/var which is accessing /dev/sdx instead of /dev/sdy.

Issue: The difference between this case and Case 1 is that, although lvm.conf has appropriate filters set and/or multipath is configured correctly on the running system, the copies of those configuration files inside the initial boot image are out-of-date versions, and/or the boot image is missing needed multipath modules.

  • Diagnostics:
    • Use the lsinitrd command to verify the size and date of the lvm and multipath configuration files within the boot image.
  • Rebuild the initial boot image file:
    • Take a backup of the current boot image file.
    • Rebuild the boot image file for the current kernel version to pick up the latest lvm/multipath configuration files and to ensure the multipath modules are present at boot time.
      • In cases such as installing a new kernel, multipath modules and configuration files may be missing from the initial boot image due to recent, post-install configuration changes.
      • After any change to configuration files used at the early stage of the boot process, it is necessary to rebuild the initial boot image file so that it includes the proper kernel modules, files, and configuration directives -- especially for proper lvm and multipath use.
  • Refer to the article "How to rebuild the initial ramdisk image" for detailed steps:

# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
# dracut -f -v
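After rebuilding, the image contents can be spot-checked with lsinitrd (a sketch; the image path is the default for the running kernel):

# list the boot image contents and look for multipath modules and configuration files
# lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'multipath|lvm'
# print an embedded configuration file to compare against the on-disk copy
# lsinitrd -f etc/multipath.conf /boot/initramfs-$(uname -r).img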

Learn More
For additional documentation on LVM filters, see the following:

Knowledgebase:
* How do I debug an LVM problem?
* What is LVM's filter setting and how do I configure it easily in RHEL?
* How do I setup multipath on a system that already has LVM configured?
* Cannot activate LVs in VG while PVs appear on duplicate devices
* LVM commands print "WARNING: Duplicate VG name" in RHEL
* WARNING: Duplicate VG name
* LVM reports "Cannot use device with duplicates."
* lvm commands are not finding or displaying the expected volumes in RHEL
* Some LVM commands falsely report missing PVs even though the corresponding disks are present, have correct metadata, and are not marked with the "MISSING" flag
* WARNING: PV /dev/sdX in VG vgXX is using an old PV header, modify the VG to update

Documentation
RHEL 8 - Configuring and managing logical volumes
RHEL 7 - Logical Volume Manager Administration
RHEL 6 - Logical Volume Manager Administration
RHEL 5 - Logical Volume Manager Administration

Root Cause

  • Why do I get duplicate PV warnings when I run LVM commands?
    • The filter/configuration settings within /etc/lvm/lvm.conf are resulting in multiple devices being scanned that represent the same storage device (e.g. sdd and sdf being two paths to the same storage disk; see Case 1 above).
    • See further details below.
  • Why do my physical volumes show as sd devices instead of mpath devices?
    • The filter/configuration settings within /etc/lvm/lvm.conf are resulting in multiple devices being scanned that represent the same storage device (e.g. mpatha and sdd, where sdd is an individual path under mpatha; see Case 2 above).
    • See further details below.
  • Why am I getting duplicate UUID warnings when I have multipath_component_detection = 1 set within /etc/lvm/lvm.conf?
    • During the early stage of the boot process there is a chance of a race between multipath and lvm initialization. If multipath initialization takes slightly longer, and lvm commands scan the devices in /dev before the /dev/sdX devices have been added to a multipath device, then lvm may pick up the underlying /dev/sdX devices. This later causes dm-multipath to fail when it tries to access those /dev/sdX devices, since they are already opened/locked by lvm and therefore unavailable to multipath.
    • To avoid any chance of this race between multipath and lvm initialization, set up the lvm filter so that lvm commands pick up only the multipath devices and explicitly reject the underlying /dev/sdX devices.
    • See additional details below.
  • With a default configuration, LVM commands scan for devices in /dev and check every resulting device for LVM metadata. This is caused by the default filter in /etc/lvm/lvm.conf:
filter = [ "a/.*/" ]

This can cause both the multipath device itself and each of its underlying paths to be scanned. That set of devices, since it leads to the same storage device, will have the same LVM metadata on every node.

When using device-mapper-multipath or other multipath software such as EMC PowerPath or Hitachi Dynamic Link Manager (HDLM), each path to a particular logical unit number (LUN) is registered as a different SCSI device, such as /dev/sdb, /dev/sdc, and so on. The multipath software will then create a new device that maps to those individual paths, such as /dev/mapper/mpath1 or /dev/mapper/mpatha for device-mapper-multipath, /dev/emcpowera for EMC PowerPath, or /dev/sddlmab for Hitachi HDLM. Since each LUN has multiple device nodes in /dev that point to the same underlying data, they all contain the same LVM metadata and thus LVM commands will find the same metadata multiple times and report them as duplicates.

These messages are only warnings and do not mean the LVM operation has failed. Rather, they are alerting the user that only one of the devices has been used as a physical volume and the others are being ignored. To avoid this situation, a filter should be applied to only search the necessary devices for physical volumes, and to leave out any underlying paths to multipath devices.


