LVM commands fail with bcache_invalidate errors


Issue

  • Errors in pvs output like the following:
    # pvs -a 
      Scan of VG fxltagrp1 from /dev/mapper/mpathd found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
      Scan of VG fxltagrp1 from /dev/mapper/mpathh found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
      Scan of VG fxltagrp1 from /dev/mapper/mpathk found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
      Scan of VG fxltagrp1 from /dev/mapper/mpathai found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
      Scan of VG fxltagrp1 from /dev/mapper/mpathd found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
      Scan of VG fxltagrp1 from /dev/mapper/mpathh found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
      Scan of VG fxltagrp1 from /dev/mapper/mpathk found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
      Scan of VG fxltagrp1 from /dev/mapper/mpathai found mda_checksum 11e08b60 mda_size 10970 vs previous 1eedece 10971
  • Trying to restore pv metadata from a backup results in errors like the following:
    #  pvcreate -ff --uuid "TzVM7j-kKli-Drjb-9L0r-XO1l-auxp-kuRta" --restorefile /etc/lvm/backup/testvg /dev/mapper/mpathd
      Error writing device /dev/mapper/mpathd at 31744 length 10950.
      bcache_invalidate: block (8, 0) still dirty
      Failed to write metadata to /dev/mapper/mpathd fd -1
      WARNING: Failed to write an MDA of VG testvg.
      Error writing device /dev/mapper/mpathh at 24064 length 10950.
      bcache_invalidate: block (12, 0) still dirty
      Failed to write metadata to /dev/mapper/mpathh fd -1
      WARNING: Failed to write an MDA of VG testvg.
      Error writing device /dev/mapper/mpathk at 14848 length 10950.
      bcache_invalidate: block (16, 0) still dirty
      Failed to write metadata to /dev/mapper/mpathk fd -1
      WARNING: Failed to write an MDA of VG testvg.
      Error writing device /dev/mapper/mpathai at 1039360 length 9216.
      bcache_invalidate: block (20, 7) still dirty
      Failed to write metadata to /dev/mapper/mpathai fd -1
      WARNING: Failed to write an MDA of VG testvg.
      Scan of VG testvg from /dev/mapper/mpathd found metadata seqno 471 vs previous 472.
      Error writing device /dev/mapper/mpathd at 4096 length 4096.
      bcache_invalidate: block (8, 0) still dirty
      Failed to wipe new metadata area on /dev/mapper/mpathd at 4096 len 4096
      Failed to add metadata area for new physical volume /dev/mapper/mpathd
      Failed to setup physical volume "/dev/mapper/mpathd".
  • Updating VG metadata, such as modifying the system_id, results in the error below. This error may additionally prevent migration of LVM-activate resources across cluster nodes, since the system_id stored in the VG cannot be updated from the node experiencing the issue (a way to compare the VG and local system IDs is sketched after this list):

    Mar 12 16:34:16 node01 pacemaker-controld[8738]: notice: testvg_start_0@node01 output [   Cannot access VG testvg with system ID node02 with local system ID node01\n  Error writing device /dev/mapper/mpatha at 4096 length 512.\n  WARNING: bcache_invalidate: block (0, 0) still dirty.\n  Failed to write mda header to /dev/mapper/mpatha.\n  Failed to write metadata area header\n  Error writing device /dev/mapper/mpatha at 4096 length 512.\n  WARNING: bcache_invalidate: block (0, 0) still dirty.\n
    
  • Wiping the LVM header from the disk with wipefs -a reports success, but a second wipefs command with no options, run immediately afterwards, still reports an LVM label on the device.

  • This issue is commonly observed where fence_mpath or fence_scsi stonith devices are in use, since these agents rely on SCSI-3 persistent reservations for their fencing operations (a way to inspect the reservations is sketched below).
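
As a quick check for the last symptom (a minimal diagnostic sketch, not taken from the original article; it assumes the sg3_utils package is installed and uses /dev/mapper/mpathd from the example output purely as a placeholder), confirm whether the local initiator still holds a registration on an affected multipath device:

    # List the reservation keys currently registered on the device
    sg_persist --in --read-keys /dev/mapper/mpathd
    # Show the current reservation holder and reservation type
    sg_persist --in --read-reservation /dev/mapper/mpathd

If the local node's key is absent while a "registrants only" reservation is held, writes from this node are rejected by the storage, which LVM can surface as the bcache_invalidate errors shown above.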

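For the system_id symptom above, the VG's stored system ID and the local node's system ID can be compared directly (again a sketch, not from the original article; testvg is the example VG name used earlier):

    # System ID recorded in the VG metadata (--foreign also reports VGs owned by another host)
    vgs --foreign -o vg_name,systemid testvg
    # System ID of the local host as LVM computes it
    lvm systemid

On an affected node the two values differ (node02 vs node01 in the pacemaker log above), and because the metadata write itself fails, the stored system ID cannot be corrected from that node.
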
Environment

  • Red Hat Enterprise Linux 7 with the High Availability Add-On
  • LVM2
  • pacemaker clustering software
