DM-Multipath Weirdness with NetApp Storage
Our enterprise normally uses EMC for block-mode storage. However, one of our tenants apparently has to use NetApp for block-mode. Because this is a rare configuration for us, I haven't really done NetApp block-mode on RHEL since 2010.
The normal DM-multipath procedures work, for the most part:
- a /dev/mapper entry and a /dev/dm-N node get created for each LUN
- `multipath -ll` shows device trees for each multipathed LUN
- I'm able to partition the DM devices
- I'm able to use kpartx to propagate the partition tables to all of the sdX devices underlying the dm-N dev node (rough command sequence sketched below)
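For reference, the rough sequence I'm following looks like this (device names are illustrative - I'm going from memory and can't check the real ones from here):

# multipath -ll                    # confirm each LUN comes up as a multipathed dm device
# fdisk /dev/mapper/mpath0         # partition the multipath device itself, not the underlying sdX paths
# kpartx -a /dev/mapper/mpath0     # create the partition mappings (mpath0p1, ...) for the new table
# ls -l /dev/mapper/               # mpath0 and mpath0p1 should both be present now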
Where it gets weird is on reboot. When multipathd restarts, three dm-N devices become six. When I trace the device nodes under /dev, I find that the three mystery dm-N devices share major and minor numbers with the mpathNpX device nodes:
brw-rw---- 1 root disk 253,  7 Jul 18 12:32 /dev/mapper/mpath0
brw-rw---- 1 root root 253,  7 Jul 18 12:32 /dev/dm-7
brw-rw---- 1 root disk 253,  8 Jul 18 12:32 /dev/mapper/mpath1
brw-rw---- 1 root root 253,  8 Jul 18 12:32 /dev/dm-8
brw-rw---- 1 root disk 253,  9 Jul 18 12:32 /dev/mapper/mpath2
brw-rw---- 1 root root 253,  9 Jul 18 12:32 /dev/dm-9
brw-rw---- 1 root disk 253, 10 Jul 18 12:32 /dev/mapper/mpath0p1
brw-rw---- 1 root root 253, 10 Jul 18 12:32 /dev/dm-10
brw-rw---- 1 root disk 253, 11 Jul 18 12:32 /dev/mapper/mpath1p1
brw-rw---- 1 root root 253, 11 Jul 18 12:32 /dev/dm-11
brw-rw---- 1 root disk 253, 12 Jul 18 12:32 /dev/mapper/mpath2p1
brw-rw---- 1 root root 253, 12 Jul 18 12:32 /dev/dm-12
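When I'm back in front of the system I plan to cross-check those mystery nodes with dmsetup; from memory, something like this should show what they actually map to (treat the exact names as a sketch):

# dmsetup info -c             # every map with its name and major:minor, to match up 253:10 through 253:12
# dmsetup ls --tree           # shows the stacking - partition maps should hang off the mpath maps
# dmsetup table mpath0p1      # a kpartx partition map should be a single linear target onto the parent mpath device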
I've verified all my configuration files and everything looks right; DM-multipath just seems to think it has to create a dm-N device for each partition on the real dm-N devices.
That said, while these nodes show up under /dev and in the output of `fdisk -lu` (though fdisk shows them as devices that are a few GB smaller than the real LUNs, with no partition tables of their own), they don't show up in the `multipath -ll` output.
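My working theory - which I can't confirm until I'm back at the console - is that the mystery dm-N nodes are just the kpartx-created partition maps showing up under their /dev/dm-N aliases, which would explain the missing partition tables and at least some of the size difference (a partition starting at an offset into the LUN). A size comparison along these lines should confirm or kill that theory (device names illustrative):

# blockdev --getsize64 /dev/mapper/mpath0       # the whole LUN
# blockdev --getsize64 /dev/mapper/mpath0p1     # the partition map - should come up smaller
# fdisk -lu /dev/mapper/mpath0p1                # a bare partition, so no partition table of its own is expected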
Hopefully what I've described is clear; I'm writing from home and can't look at the system right now. Has anyone run into this before? Any tips or suggestions?
Responses
That looks correct to me (though I don't say that with authority). I now use gparted to analyze my devices, in addition to fdisk. You would have a device which is the disk, and then each partition would also be a device. Again - I don't >know< that this is correct, but your output looks like what I'm accustomed to seeing:
# parted /dev/dm-6 print
Model: Linux device-mapper (dm)
Disk /dev/dm-6: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  215GB  215GB  primary               lvm
Information: Don't forget to update /etc/fstab, if necessary.
Ah... I thought I may not have understood the original question - and it turns out I was correct, I did not understand ;-)
It looks like /sbin/fdisk pulls in a ton of libraries and then checks /proc/partitions. What I don't get is how fdisk "knows" to skip /dev/sdb1 or /dev/sdb2 but use /dev/sdb, yet it doesn't do the same for /dev/dm-<blah>.
I guess I never really thought much about how it works.
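My guess - and it is only a guess - is that it comes down to how the kernel exposes these devices: /dev/sdb1 is registered as a child of sdb, but every device-mapper map (including the kpartx partition maps) is its own top-level block device, so a bare `fdisk -l` treats each dm-N as a whole disk. Poking around sysfs should make the relationships visible (paths from memory):

# cat /proc/partitions           # each dm-N map gets its own entry here, partition maps included
# ls /sys/block/                 # sdb1 lives under sdb, but dm-7 and dm-10 are both top-level entries
# ls /sys/block/dm-10/slaves/    # for a partition map, this should point back at the parent mpath map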
