Possible to use Device Mapper Multipath (RHEL 5.4) with a single HBA (single path)?


Hi,

 

We have a single QLogic QLE2560 HBA with one port (single path) that we will be configuring, and it is not clear to us from reading the Red Hat DM Multipathing guide (RHEL 5) whether we can use Device Mapper Multipath (DM-MP) with a single HBA. We plan to add a second HBA in the future, but the immediate plan is to use the single HBA.

 

Question: is it better to configure the HBA with Device Mapper now (single HBA), or to hold off on Device Mapper Multipath until we actually have two HBAs? If it is possible and supported, we'd prefer to create the Device Mapper Multipath device with the single HBA now, up front, to avoid having to do this later when we add a second HBA (multipath). I'm not clear on whether Device Mapper Multipath can be used with a single HBA (single port). If DM-MP is not a good fit (keep in mind we plan to multipath within 2 months), then we will just present the LUN and use it much as if it were direct-attached to the array...

 

Some details:

 

[root@xxx tmp]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
[root@xxx tmp]#
 

[root@xx tmp]# systool -c fc_host -v
Class = "fc_host"

  Class Device = "host7"
  Class Device path = "/sys/class/fc_host/host7"
    fabric_name         = "0x2000001b3294a2df"
    issue_lip           = <store method only>
    node_name           = "0x2000001b3294a2df"
    port_id             = "0x000000"
    port_name           = "0x2100001b3294a2df"
    port_state          = "Online"
    port_type           = "Unknown"
    speed               = "unknown"
    supported_classes   = "Class 3"
    supported_speeds    = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
    symbolic_name       = "QLE2560 FW:v4.04.09 DVR:v8.03.00.10.05.04-k"
    system_hostname     = ""
    tgtid_bind_type     = "wwpn (World Wide Port Name)"
    uevent              = <store method only>

    Device = "host7"
    Device path = "/sys/devices/pci0000:00/0000:00:03.0/0000:0d:00.0/host7"
      ct                  =
      edc                 = <store method only>
      els                 =
      fw_dump             =
      nvram               = "ISP "
      optrom_ctl          = <store method only>
      optrom              =
      reset               = <store method only>
      sfp                 = ""
      uevent              = <store method only>
      vpd                 = "+"

[root@xxx tmp]#

 

[root@xxx tmp]# multipath -ll -v2
[root@xxx tmp]#
 

 

Recommendations/insight are appreciated. Thanks in advance for your consideration.

 

Jason

Responses

It is possible to have multipath create a map for just a single path.  In RHEL 5, as long as that device is not blacklisted in /etc/multipath.conf, once you run multipath (without -ll) or start multipathd it should create a map for it. 

 

For example:

 

# cat /etc/multipath.conf
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
defaults {
    user_friendly_names yes
}

 

# fdisk -l

[... omitted local device ...]

 

Disk /dev/sda: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes

 

Disk /dev/sda doesn't contain a valid partition table

 

# service multipathd start
Starting multipathd daemon:                                [  OK  ]
# multipath -ll
mpath0 (0QEMU_QEMU_HARDDISK_drive-scsi0-0-0) dm-2 QEMU,QEMU HARDDISK
[size=2.0G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 0:0:0:0 sda 8:0   [active][ready]

 

As you can see, the single path is mapped. 

 

I mentioned that this is possible in RHEL 5 specifically because RHEL 6 now offers an option called find_multipaths which, if enabled, causes multipath to only create maps for devices with more than one path. However, if it is disabled, or the wwid is in /etc/multipath/wwids, you can still create maps for single-path devices.
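
For reference, on RHEL 6 that option lives in the defaults section of /etc/multipath.conf. A minimal sketch (not applicable to RHEL 5):

defaults {
    # Only create maps for devices seen on more than one path, or whose
    # wwid is already recorded in /etc/multipath/wwids.
    find_multipaths yes
    user_friendly_names yes
}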

 

Now, having said all that, you might want to consider waiting to set up multipath. If you are using LVM on the device in question, then there really won't be much to do to switch over to multipath when the time comes to add another HBA/port. For example, let's say your LUN is currently /dev/sdb. You use /dev/sdb as a physical volume in volume group myvg, which has a logical volume lv1 that gets mounted on /mnt/lv1 via /etc/fstab:

 

  /dev/myvg/lv1 /mnt/lv1  ext3  defaults 1 2

 

You have /dev/sdb blacklisted in /etc/multipath.conf to prevent it from being mapped by multipath.
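
That blacklist entry might look something like the sketch below; the wwid is only a placeholder for whatever your array actually reports for /dev/sdb (blacklisting by wwid is generally safer than by devnode, since sdX names can change across reboots):

blacklist {
    # Placeholder wwid -- substitute the real wwid of /dev/sdb as reported
    # by scsi_id.
    wwid "36006016012345678000000000000abcd"
}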

 

Now, you add a second path and are ready to use multipath. Your new path is /dev/sdc, and so you remove /dev/sdb and /dev/sdc from your blacklist. Once you have a chance to unmount your file system, you start multipathd and verify in 'multipath -ll' output that a map has been created. Ensure LVM is using the new device by running:

 

  # pvscan

  # vgscan

  # lvscan

 

Make sure it is using /dev/mapper/mpathX or /dev/dm-Y as the physical volume. If it is not, check your filter in /etc/lvm/lvm.conf to ensure it does not exclude mpath devices.
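
As a rough sketch only (the exact expressions depend on your local devices, so treat this as an assumption rather than a drop-in filter), a filter in /etc/lvm/lvm.conf that prefers multipath devices over the underlying sdX paths could look like:

# Accept multipath devices, reject the raw sdX paths underneath them,
# and accept anything else (local disks and so on).
filter = [ "a|^/dev/mapper/mpath|", "r|^/dev/sd|", "a|.*|" ]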

 

If your PV resides on the multipath device, your logical volume and file system will now take advantage of your redundant paths, and you are good to mount it back up. Your entry in /etc/fstab refers directly to the logical volume (as opposed to something like /dev/sdX that you'd need to change to /dev/mapper/mpathX), so there is nothing to change there.

 

My point is, the procedure for switching over is pretty simple, and so you may just want to wait so you don't have to deal with any additional complexity in your setup now.  That said, if you prefer to just configure multipath now and not have to worry about it later, I see no problems with doing so.

 

Hope that helps.  Let us know if you have any questions.

 

Regards,

John Ruemker, RHCA

Red Hat Technical Account Manager

Wow - fantastic answer John - that helps out a lot.

 

Thanks,

 

Jason

Hi ,

 

I wonder: if the number of paths to a single LUN increases (for example, by activating the second head of the storage), does that require additional setup on multipathd, or does it automatically detect and enable those new paths for the same LUN?

 

Regards.

Hello,

You may need to do it manually. Once the device is available to the OS, you can run "multipath -v2".

# multipath -ll
mpath2 (1IET_00030001) dm-3 IET,VIRTUAL-DISK
[size=900M][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 2:0:0:1 sdc 8:32  [active][ready]

It is mounted.
/dev/mapper/mpath2 on /test type ext3 (rw)

Scan for the new device; see:
https://access.redhat.com/kb/docs/DOC-3942
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/index.html
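
As a rough sketch of the rescan step those guides describe (host8 is assumed here from the example output below, where the new path arrives as 8:0:0:1; substitute the SCSI host your new path actually comes in on):

# echo "- - -" > /sys/class/scsi_host/host8/scan

Once the new sd device exists, multipath -v2 picks it up: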

# multipath -v2
reload: mpath2 (1IET_00030001)  IET,VIRTUAL-DISK
[size=900M][features=0][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
 \_ 2:0:0:1 sdc 8:32  [active][ready]
\_ round-robin 0 [prio=1][undef]
 \_ 8:0:0:1 sdf 8:80  [undef][ready]

# multipath -ll
mpath2 (1IET_00030001) dm-3 IET,VIRTUAL-DISK
[size=900M][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 2:0:0:1 sdc 8:32  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 8:0:0:1 sdf 8:80  [active][ready]

Please see " multipathd -k" also.

Regards,
Prasoon

Hello,

 

How can you disable a path in multipath, so you can force multipath to use only one path?

 

I'll explain myself:

 

The storage team installed another card for the SAN, and they'd like me to disable a path on the HBA card so they can migrate the server to another path. Is there a way to disable a path with Device Mapper Multipath?

Delete the devnode associated with the path you want to disable. See the article I wrote for specifics:

 

http://thjones2.posterous.com/linux-multipath-path-failure-simulation
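
As a minimal sketch of that approach (sdc here is only a placeholder; double-check which sdX device belongs to the path you want to drop before touching it):

# echo offline > /sys/block/sdc/device/state
# multipath -ll

Writing "offline" to the device state fails the path but keeps the sd device around; the devnode-deletion approach from the article (echo 1 > /sys/block/sdc/device/delete) removes the device entirely. Either way, multipath -ll should show I/O carried only by the remaining path.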

Hi Thomas,

 

 

I read your article and found it very helpful and easy to read.

 

 

Thank you very much for the reference.

 

 

Kind regards,

 

 

Jan Gerrit Kootstra

This is how I'm doing it, I was just wondering if there was a better and faster way to do that.

 

Like on solaris with the mpathadm command :

 

ex :

 

mpathadm list lu | grep dsk | while read LU ; do mpathadm disable path -i 10000000c991a7f6 -t 50060e8005714440 -l $LU ; done

 

Because when you have a lot of devnodes to delete, this is not a really good method.

The Solaris methods were actually fairly late to the game. While Sun was always pretty good about providing tools for Sun-branded HBAs and Sun-branded arrays, things started to get murky if you didn't use OEMed HBAs or arrays. It took Sun a number of years to generalize their SAN management tools enough for them to be useful outside of the OEMed device space. It was one of the reasons I tended to prefer AIX and IRIX for managing large-scale fibre environments.

 

Let's face it: it's only been within the last five years or so that you've been seeing Linux deployed as large-scale single-node systems. So it's not terribly surprising that the tools aren't quite as mature yet as they were on AIX/IRIX/Solaris/etc. That said, it's fairly trivial to script things out to make what methods *are* available a bit more "friendly".
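
For instance, here is a hypothetical sketch of scripting that on Linux, offlining every SCSI device hanging off one HBA (the SCSI host number 7 is an assumption; substitute the host of the HBA being retired):

# for dev in /sys/class/scsi_device/7:*; do echo offline > "$dev/device/state"; done
# multipath -ll

multipath -ll should then report those paths as failed/faulty, with I/O staying on the surviving paths.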

 

Depending on what HBAs you're using, you may be able to use the HBAs' CLI or GUI tools to effect similar results. Obviously, if you use different HBAs in different systems, you won't have vendor-neutral methods for accomplishing your tasks.

Thanks a lot for your answers. Yes, you're right about Linux; it's only at the beginning when it comes to large-scale systems. But I think it is already making itself a nice place, even replacing older and much more mature systems.

 

The best example is my company. In one year we installed almost 200 Linux servers and maybe 2 Solaris zones, and we think we will not even make the migration to Solaris 11.

 

So Linux has its best days ahead of it.

 

Thanks a lot for your help.

Bogdan.