LVM volume group goes online and filesystem is mounted when not all physical volumes active

I'm experiencing a strange situation on a couple of our RHEL5.9 systems.

Upon boot, something seems to go wrong while detecting the LUNs. In spite of this, the volume groups that use the physical volumes on these LUNs still go active, and the filesystem is subsequently mounted.

When the application (an Oracle database in this case) then tries to use the filesystem, the result is massive corruption and a rather extensive recovery time.

This leads me to two questions:
1) What could be preventing the system from registering all of the LUNs? and
2) What could be the reason these volume groups go online when not all physical volumes are available? That isn't supposed to happen, is it?

Maurice

Responses

Hi Maurice!

It would be helpful to know what pvdisplay, vgdisplay, lvdisplay and maybe dmesg on the system tell you.

The unavailability of a LUN, at least, can happen for a lot of reasons. And afaik a logical volume doesn't necessarily need all of its physical volumes to be present in order to go online; depending on your configuration, unavailable devices might be skipped during recovery.
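Something along these lines should be enough to start with (the grep pattern is just my suggestion):

pvdisplay
vgdisplay -v
lvdisplay
dmesg | grep -i -e scsi -e 'device-mapper'    # any LUN or device-mapper errors around boot?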

Kind Regards,
Andreas

Hi Andreas,

At the moment the system has been up for some time, and unfortunately I can't just restart it whenever I please to reproduce the problem.
Anyway, this is the output:

pvdisplay

--- Physical volume ---
PV Name /dev/mapper/mpath10p1
VG Name vg02
PV Size 25.00 GB / not usable 3.94 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 6399
Free PE 0
Allocated PE 6399
PV UUID HJBnx4-kbxH-Sjc4-FjCU-CA7G-vhTe-fMOIFY

--- Physical volume ---
PV Name /dev/mapper/mpath9p1
VG Name vg03
PV Size 200.00 GB / not usable 3.94 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 51199
Free PE 0
Allocated PE 51199
PV UUID tlO6Nd-qjIc-d0sP-VO7B-cltV-wZkh-84auOR

--- Physical volume ---
PV Name /dev/mapper/mpath13p1
VG Name vg03
PV Size 100.00 GB / not usable 3.94 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 25599
Free PE 3326
Allocated PE 22273
PV UUID cNXuZN-1Z7D-71TD-qHCl-yeGW-CScC-IFEY19

--- Physical volume ---
PV Name /dev/mapper/mpath11p1
VG Name vg01
PV Size 260.00 GB / not usable 3.94 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 66559
Free PE 0
Allocated PE 66559
PV UUID R9di3s-HXUf-6U4i-yuPz-xKH6-fiDR-2DhnSg

--- Physical volume ---
PV Name /dev/mapper/mpath14p1
VG Name vg01
PV Size 50.00 GB / not usable 3.94 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 12799
Free PE 254
Allocated PE 12545
PV UUID 9JbdbF-DUxp-WpY9-aDKR-seY5-87cq-hhnmXW

--- Physical volume ---
PV Name /dev/mapper/mpath12p1
VG Name vgrefresh
PV Size 200.00 GB / not usable 3.94 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 51199
Free PE 0
Allocated PE 51199
PV UUID 13XuwE-TfOW-O9gq-rOvp-S1ng-sc44-tx1IET

--- Physical volume ---
PV Name /dev/cciss/c0d0p2
VG Name vg00
PV Size 136.58 GB / not usable 14.92 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 4370
Free PE 2769
Allocated PE 1601
PV UUID CEGfc7-dR93-fpnq-UAiB-x6Ie-AY0p-jfVCap

vgdisplay

--- Volume group ---
VG Name vg02
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 24
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 25.00 GB
PE Size 4.00 MB
Total PE 6399
Alloc PE / Size 6399 / 25.00 GB
Free PE / Size 0 / 0
VG UUID meSGlF-eswz-45De-p2dc-0lXX-tj3P-hCRBe8

--- Volume group ---
VG Name vg03
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 106
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 12
Open LV 12
Max PV 0
Cur PV 2
Act PV 2
VG Size 299.99 GB
PE Size 4.00 MB
Total PE 76798
Alloc PE / Size 73472 / 287.00 GB
Free PE / Size 3326 / 12.99 GB
VG UUID D3xwVQ-ENAH-3zEg-He5B-IgMW-graH-Lt5118

--- Volume group ---
VG Name vg01
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 37
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 309.99 GB
PE Size 4.00 MB
Total PE 79358
Alloc PE / Size 79104 / 309.00 GB
Free PE / Size 254 / 1016.00 MB
VG UUID qmFmAJ-NC0r-505u-fINv-dNky-6qCx-NRz5SO

--- Volume group ---
VG Name vgrefresh
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 200.00 GB
PE Size 4.00 MB
Total PE 51199
Alloc PE / Size 51199 / 200.00 GB
Free PE / Size 0 / 0
VG UUID dLucGU-iumK-rhq1-5EtO-mFMo-PQs6-BzqlwV

--- Volume group ---
VG Name vg00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 13
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 5
Max PV 0
Cur PV 1
Act PV 1
VG Size 136.56 GB
PE Size 32.00 MB
Total PE 4370
Alloc PE / Size 1601 / 50.03 GB
Free PE / Size 2769 / 86.53 GB
VG UUID 2uUnUy-k7RG-1aUI-ic5j-fJ50-LwI8-0fzKlU

lvdisplay

--- Logical volume ---
LV Name /dev/vg02/archiving
VG Name vg02
LV UUID r9vTQG-ufY9-VyFs-HVbK-wEce-3UZD-nzYN6G
LV Write Access read/write
LV Status available
# open 1
LV Size 25.00 GB
Current LE 6399
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:16

--- Logical volume ---
LV Name /dev/vg03/app-oracle
VG Name vg03
LV UUID Sfi2Be-tGx8-Cfrj-C9ld-DKxn-adk6-xY8DkN
LV Write Access read/write
LV Status available
# open 1
LV Size 1.00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:17

--- Logical volume ---
LV Name /dev/vg03/app-oracle-admin
VG Name vg03
LV UUID Oo99s5-droX-vKpG-S7TN-7Jgc-sXzk-adJw5A
LV Write Access read/write
LV Status available
# open 1
LV Size 1.00 GB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:18

--- Logical volume ---
LV Name /dev/vg03/app-oracle-diag
VG Name vg03
LV UUID clzmyk-SPE7-N8dq-sLOZ-KHRK-JEPl-ipOdz8
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:19

--- Logical volume ---
LV Name /dev/vg03/app-oracle-product-10.2.0
VG Name vg03
LV UUID afLGGo-QiJk-HLCE-2DNM-WRcy-p63w-02w2I2
LV Write Access read/write
LV Status available
# open 1
LV Size 4.00 GB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:20

--- Logical volume ---
LV Name /dev/vg03/app-oracle-product-11.2.0
VG Name vg03
LV UUID em16Hb-n6Aw-dDjE-sHO6-Vw98-jYbZ-8SBqjB
LV Write Access read/write
LV Status available
# open 1
LV Size 6.00 GB
Current LE 1536
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:21

--- Logical volume ---
LV Name /dev/vg03/redoctl1
VG Name vg03
LV UUID 0UPvFE-XEdn-KFUK-T1Vm-SZxB-hdAJ-bMZ5H0
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:22

--- Logical volume ---
LV Name /dev/vg03/redoctl2
VG Name vg03
LV UUID B4GvWQ-zcNO-hQbJ-Nq18-ssnL-knwq-5tGV9k
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:23

--- Logical volume ---
LV Name /dev/vg03/app-oracle-product-agent11g
VG Name vg03
LV UUID Cep65m-DRuD-aNLd-3jD5-FGgN-5a7H-IB1DHy
LV Write Access read/write
LV Status available
# open 1
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:24

--- Logical volume ---
LV Name /dev/vg03/app-oracle-product-client11g
VG Name vg03
LV UUID bnvyxd-ycbo-irr5-csi6-SZZL-985g-eoWGbD
LV Write Access read/write
LV Status available
# open 1
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:25

--- Logical volume ---
LV Name /dev/vg03/backup
VG Name vg03
LV UUID I6eoR3-7CRa-gSuI-GIKF-i7Ji-Yqlo-WudIKD
LV Write Access read/write
LV Status available
# open 1
LV Size 250.00 GB
Current LE 64000
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:26

--- Logical volume ---
LV Name /dev/vg03/app-axi
VG Name vg03
LV UUID SZ3l8n-XyBP-JnW2-R5qr-H3Pd-ooOJ-F1Uooc
LV Write Access read/write
LV Status available
# open 1
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:27

--- Logical volume ---
LV Name /dev/vg03/export-axirs
VG Name vg03
LV UUID 0c857r-lSoE-SESZ-HU1F-49Q2-Ndzn-3G13sK
LV Write Access read/write
LV Status available
# open 1
LV Size 4.00 GB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:28

--- Logical volume ---
LV Name /dev/vg01/database
VG Name vg01
LV UUID hdmCcp-JRyJ-ZvPA-zdDy-Hf9z-uZ5Z-Un5a4k
LV Write Access read/write
LV Status available
# open 1
LV Size 309.00 GB
Current LE 79104
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:15

--- Logical volume ---
LV Name /dev/vgrefresh/refresh
VG Name vgrefresh
LV UUID 4kQUm1-xTje-AGK3-H1Af-NK4e-03UB-p856Z1
LV Write Access read/write
LV Status NOT available
LV Size 200.00 GB
Current LE 51199
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/vg00/root
VG Name vg00
LV UUID wJHinP-XAie-KjQP-b7QY-9lX5-vpa4-jecSme
LV Write Access read/write
LV Status available
# open 1
LV Size 20.00 GB
Current LE 640
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Name /dev/vg00/var
VG Name vg00
LV UUID A4QGGx-ZX1w-aOHG-t4VX-DB9Y-vZMV-jI5XcC
LV Write Access read/write
LV Status available
# open 1
LV Size 4.00 GB
Current LE 128
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name /dev/vg00/software
VG Name vg00
LV UUID ukJDgq-z7QW-gzRs-rXHC-1Prl-ktfd-NPqVEm
LV Write Access read/write
LV Status available
# open 1
LV Size 16.03 GB
Current LE 513
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Name /dev/vg00/tmp
VG Name vg00
LV UUID FG7c77-wQjA-7sFj-7gQJ-Lf1N-kjSY-aGDjnV
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 64
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Name /dev/vg00/swap
VG Name vg00
LV UUID MPg0A1-UglE-jOcR-20x2-2JnL-6t9T-rG2AgN
LV Write Access read/write
LV Status available
# open 1
LV Size 8.00 GB
Current LE 256
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

dmesg

ata1.00: hard resetting link
ata1.01: hard resetting link
ata1.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata1.01: SATA link down (SStatus 4 SControl 300)
ata1.01: NODEV after polling detection
ata1.00: configured for UDMA/100
ata1: EH complete
ata2.00: hard resetting link
ata2.01: hard resetting link
ata2.00: SATA link down (SStatus 4 SControl 300)
ata2.01: SATA link down (SStatus 4 SControl 300)
ata2: EH complete
ata2.00: hard resetting link
ata2.01: hard resetting link
ata2.00: SATA link down (SStatus 4 SControl 300)
ata2.01: SATA link down (SStatus 4 SControl 300)
ata2: EH complete
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbe: 104857600 512-byte hdwr sectors (53687 MB)
sdbe: Write Protect is off
sdbe: Mode Sense: 9f 00 00 00
SCSI device sdbe: drive cache: none
SCSI device sdbe: 104857600 512-byte hdwr sectors (53687 MB)
sdbe: Write Protect is off
sdbe: Mode Sense: 9f 00 00 00
SCSI device sdbe: drive cache: none
sdbe: unknown partition table
sd 3:0:4:6: Attached scsi disk sdbe
sd 3:0:4:6: Attached scsi generic sg57 type 0
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbf: 104857600 512-byte hdwr sectors (53687 MB)
sdbf: Write Protect is off
sdbf: Mode Sense: 9f 00 00 00
SCSI device sdbf: drive cache: none
SCSI device sdbf: 104857600 512-byte hdwr sectors (53687 MB)
sdbf: Write Protect is off
sdbf: Mode Sense: 9f 00 00 00
SCSI device sdbf: drive cache: none
sdbf: unknown partition table
sd 3:0:5:6: Attached scsi disk sdbf
sd 3:0:5:6: Attached scsi generic sg58 type 0
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbg: 104857600 512-byte hdwr sectors (53687 MB)
sdbg: Write Protect is off
sdbg: Mode Sense: 9f 00 00 00
SCSI device sdbg: drive cache: none
SCSI device sdbg: 104857600 512-byte hdwr sectors (53687 MB)
sdbg: Write Protect is off
sdbg: Mode Sense: 9f 00 00 00
SCSI device sdbg: drive cache: none
sdbg: unknown partition table
sd 3:0:6:6: Attached scsi disk sdbg
sd 3:0:6:6: Attached scsi generic sg59 type 0
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbh: 104857600 512-byte hdwr sectors (53687 MB)
sdbh: Write Protect is off
sdbh: Mode Sense: 9f 00 00 00
SCSI device sdbh: drive cache: none
SCSI device sdbh: 104857600 512-byte hdwr sectors (53687 MB)
sdbh: Write Protect is off
sdbh: Mode Sense: 9f 00 00 00
SCSI device sdbh: drive cache: none
sdbh: unknown partition table
sd 3:0:7:6: Attached scsi disk sdbh
sd 3:0:7:6: Attached scsi generic sg60 type 0
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbi: 104857600 512-byte hdwr sectors (53687 MB)
sdbi: Write Protect is off
sdbi: Mode Sense: 9f 00 00 00
SCSI device sdbi: drive cache: none
SCSI device sdbi: 104857600 512-byte hdwr sectors (53687 MB)
sdbi: Write Protect is off
sdbi: Mode Sense: 9f 00 00 00
SCSI device sdbi: drive cache: none
sdbi: unknown partition table
sd 4:0:4:6: Attached scsi disk sdbi
sd 4:0:4:6: Attached scsi generic sg61 type 0
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbj: 104857600 512-byte hdwr sectors (53687 MB)
sdbj: Write Protect is off
sdbj: Mode Sense: 9f 00 00 00
SCSI device sdbj: drive cache: none
SCSI device sdbj: 104857600 512-byte hdwr sectors (53687 MB)
sdbj: Write Protect is off
sdbj: Mode Sense: 9f 00 00 00
SCSI device sdbj: drive cache: none
sdbj: unknown partition table
sd 4:0:5:6: Attached scsi disk sdbj
sd 4:0:5:6: Attached scsi generic sg62 type 0
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbk: 104857600 512-byte hdwr sectors (53687 MB)
sdbk: Write Protect is off
sdbk: Mode Sense: 9f 00 00 00
SCSI device sdbk: drive cache: none
SCSI device sdbk: 104857600 512-byte hdwr sectors (53687 MB)
sdbk: Write Protect is off
sdbk: Mode Sense: 9f 00 00 00
SCSI device sdbk: drive cache: none
sdbk: unknown partition table
sd 4:0:6:6: Attached scsi disk sdbk
sd 4:0:6:6: Attached scsi generic sg63 type 0
Vendor: EMC Model: Invista Rev: 5100
Type: Direct-Access ANSI SCSI revision: 04
SCSI device sdbl: 104857600 512-byte hdwr sectors (53687 MB)
sdbl: Write Protect is off
sdbl: Mode Sense: 9f 00 00 00
SCSI device sdbl: drive cache: none
SCSI device sdbl: 104857600 512-byte hdwr sectors (53687 MB)
sdbl: Write Protect is off
sdbl: Mode Sense: 9f 00 00 00
SCSI device sdbl: drive cache: none
sdbl: unknown partition table
sd 4:0:7:6: Attached scsi disk sdbl
sd 4:0:7:6: Attached scsi generic sg64 type 0

Unfortunately the dmesg log is a bit short; before this there were just a gazillion messages about a malfunctioning mouse on the KVM switch that this machine is connected to.

If you need any other info, please let me know.

Maurice

In answer to question #1:

It will depend on how you're accessing your storage devices. It can take a number of seconds from the time the OS boots and starts scanning attached buses before it finds and registers all of those devices. This is why, if you're mounting bare /dev/sdX devices or even multipath devices (rather than LVM objects), you'll want to add "_netdev" to your mount options. iSCSI tends to be even more susceptible to this than traditional SAN (though a sufficiently large/complex SAN configuration can still suffer this problem).
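For example, a sketch of such an fstab entry; the device, mount point and filesystem type here are only placeholders, not your actual layout:

/dev/mapper/mpath14p1   /oradata/backup   ext3   defaults,_netdev   0 0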

In answer to question #2:

In general, LVM won't bring a given LV online if any of the device-signatures (PV UUIDs) are unavailable. You generally have to force LVM to do something like that.
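As a rough sketch (flags from memory, so double-check them on RHEL 5), activating a VG that is missing one of its PVs normally has to be forced:

vgchange -ay vg01             # should refuse/skip LVs whose PVs are missing
vgchange -ay --partial vg01   # explicitly forces activation despite missing PVs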

Depending on how you've configured (or failed to configure) LVM, it will bring a volume or volume group online so long as it's able to find all of the expected element-signatures. That is, if a given PV is normally accessible via multiple device nodes (e.g. /dev/sdhp1, /dev/sdzp1, /dev/mpathp1) but one or more of the paths is missing, LVM will start up the volume with the available dev-nodes while noting which expected dev-nodes are missing. Depending on how gracefully your system and your application handle moving from the dev-node LVM started the LV on to the dev-node it ultimately becomes active on, you may experience data corruption. That said, assuming the transitions don't result in a "lost all paths" type of situation, most things should handle such transitions gracefully (i.e., no corruption should happen).

Your LVM outputs show that you're using multipathing in your configuration. However, they don't really tell me whether that's simply the current access-state of LVM or if it's also the target state (i.e., whether you told LVM to ignore any paths it finds PV UUIDs on that don't reside on the /dev/mapper/mpath* device-nodes). So there's no good way to tell if/how LVM assembled its volumes. LVM can't use the multipath devices until the multipathd service declares them to be online (which it will do as soon as even one path in a multipath configuration is available), but it could have STARTED the LVs via available /dev/sdX device-nodes and transitioned to the multipath nodes once they became available.
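If you want to see which dev-nodes LVM is actually using right now, something like the following should show it (standard LVM2 reporting fields):

pvs -o pv_name,vg_name,pv_uuid    # which device node each PV is currently read from
lvs -o lv_name,vg_name,devices    # which PVs each LV is built on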

Overall, you're not providing enough information to diagnose the problem. You state that "something seems to be going wrong when detecting the LUNs" but don't give us any useful error messages (either from the boot console or the system logs).

It might be helpful to also include the output from 'multipath -ll'. I wanted to +1 Tom's comments about using '_netdev' when appropriate. If you can reproduce this behavior in a staging or other non-production environment, it might be helpful to step through the process manually (boot, verify PV/VG/LV status, verify LUN status via your SAN and 'multipath -ll', then mount the LVs) to ensure proper operation. Also, you might want to confirm with your SAN team that the LUNs are masked properly and not available for use by other hosts.
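Roughly along these lines after a controlled reboot (the vgrefresh names come from your output above; the mount point is just an example):

multipath -ll                               # does every LUN show all of its expected paths?
pvs ; vgs ; lvs                             # is every PV present and every VG/LV complete?
vgchange -ay vgrefresh                      # activate only after the PVs check out
mount /dev/vgrefresh/refresh /mnt/refresh   # example mount point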

Thanks for that suggestion, Phil. And welcome back! I don't think we've seen you around since we migrated the community to this new area.

First of all thank you very much for all the suggestions you have made so far.

My apologies for not supplying adequate information about this issue. I'm mostly hindered by the fact that I'm only seeing this issue on the acceptance and production systems and not on our test system; the smaller set-up of the test system (fewer and smaller SAN disks) is probably the reason. As both acceptance and, obviously, production are actively used, it is difficult to plan a test boot to capture the logging you may need.

I can give you the output of 'multipath -ll' though:

mpath9 (360001440000000102015217c1fe481a9) dm-9 EMC,Invista
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 3:0:4:1 sdf 8:80 [active][ready]
 \_ 3:0:5:1 sdl 8:176 [active][ready]
 \_ 3:0:6:1 sdr 65:16 [active][ready]
 \_ 3:0:7:1 sdx 65:112 [active][ready]
mpath14 (360001440000000102015217c1fe48b34) dm-29 EMC,Invista
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 3:0:4:6 sdbe 67:128 [active][ready]
 \_ 3:0:5:6 sdbf 67:144 [active][ready]
 \_ 3:0:6:6 sdbg 67:160 [active][ready]
 \_ 3:0:7:6 sdbh 67:176 [active][ready]
 \_ 4:0:4:6 sdbi 67:192 [active][ready]
 \_ 4:0:5:6 sdbj 67:208 [active][ready]
 \_ 4:0:6:6 sdbk 67:224 [active][ready]
 \_ 4:0:7:6 sdbl 67:240 [active][ready]
mpath13 (360001440000000102015217c1fe48907) dm-8 EMC,Invista
[size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 4:0:7:5 sdbd 67:112 [active][ready]
 \_ 3:0:4:5 sdj 8:144 [active][ready]
 \_ 3:0:5:5 sdp 8:240 [active][ready]
 \_ 3:0:6:5 sdv 65:80 [active][ready]
mpath12 (360001440000000102015217c1fe48862) dm-7 EMC,Invista
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 4:0:7:4 sdbc 67:96 [active][ready]
 \_ 3:0:4:4 sdi 8:128 [active][ready]
 \_ 3:0:5:4 sdo 8:224 [active][ready]
 \_ 3:0:6:4 sdu 65:64 [active][ready]
mpath11 (360001440000000102015217c1fe481a5) dm-6 EMC,Invista
[size=260G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 4:0:7:3 sdbb 67:80 [active][ready]
 \_ 3:0:4:3 sdh 8:112 [active][ready]
 \_ 3:0:5:3 sdn 8:208 [active][ready]
 \_ 3:0:6:3 sdt 65:48 [active][ready]
 \_ 3:0:7:3 sdz 65:144 [active][ready]
mpath10 (360001440000000102015217c1fe481a7) dm-5 EMC,Invista
[size=25G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 4:0:7:2 sdba 67:64 [active][ready]
 \_ 3:0:4:2 sdg 8:96 [active][ready]
 \_ 3:0:5:2 sdm 8:192 [active][ready]
 \_ 3:0:6:2 sds 65:32 [active][ready]
 \_ 3:0:7:2 sdy 65:128 [active][ready]

There is a slight difference in the lvm.conf filter line between the three systems:

test: filter = [ "a|/dev/mapper/.|", "a|/dev/cciss/.|", "r|.|" ]
accp: filter = [ "a|^/dev/mapper/mpath.
|", "a|^/dev/cciss/.|", "r/./" ]
prod: filter = [ "a|/dev/mapper/mpath.|", "a|dev/cciss/.|", "r/.*/" ]

The preferred_names setting in lvm.conf is empty on all three systems.
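(In case it matters: I assume I could make LVM prefer the multipath names with something like the line below in lvm.conf; the exact pattern is just my guess at the right form.)

preferred_names = [ "^/dev/mapper/mpath" ]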

Only acceptance has a blacklist configured in multipath.conf:

blacklist {
    devnode "^sda[0-9]"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][[0-9]*]"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

I will try to see if I can get permission from the users to try _netdev on acceptance.

Hmm... I'm usually a lot more explicit in my filter defs. Basically, my lines look like:

filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/cciss/.*|", "r|/dev/sd.*|",  "r|/dev/mapper/sd.*|" ]

The "r|/dev/mapper/sd.*|" should be an optional (overspecification) kind of thing - can't specifically recall what caused us to add it to our standard filter.

At any rate "|.|" should probably only match a dev path-spec that consists of a single character. The "|.*|" filter has the potential to tell LVM to ignore pretty much all of the dev-paths. That said, that LVM is starting up at all says that it's not ignoring everything.

In general, we don't use the blacklist{} multipath option the way you are; we approach things by explicitly including devices by vendor and array string instead. So I can't really speak to your multipath.conf file.