Multipath output
This is the output of `multipath -ll` on one of our servers.
\_ round-robin 0 [prio=100][enabled]
 \_ 0:0:1:1 sdd 8:48 [active][ready]
 \_ 1:0:0:1 sdf 8:80 [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:0:1 sdb 8:16 [active][ready]
 \_ 1:0:1:1 sdh 8:112 [active][ready]
If you look at the groupings, it appears that each group has one path to SPA and one path to SPB, assuming I am reading the IDs correctly as host:channel:target:LUN.
We are using ALUA; surely this layout would mean roughly 50% of our I/O is sent down non-optimized paths?
This is a CX4 using ALUA (failover mode 4).
Responses
You'd need to get us the mappings for the sdX devices. But, in general, with CLARiiONs in ALUA mode, one SP will be at a higher priority than the other. If you have two (or more) paths through your fabric to a particular SP, those paths will be grouped together. The relative priorities of the path-groups will give you an indication as to which is the preferred path. In general, you'll want your I/Os to go only to the preferred path so that you don't cause trespass events. Trespassing kills your performance.
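Since trespass avoidance comes down to how the paths are grouped and prioritized, here is a minimal sketch of what a RHEL 5-era multipath.conf device stanza for a CLARiiON in ALUA mode typically looks like (the values are illustrative assumptions, not taken from your host; check EMC's host connectivity guide for your array and OS release):

devices {
        device {
                # CLARiiON arrays report vendor "DGC"
                vendor               "DGC"
                product              "*"
                # group paths by ALUA-reported priority so the optimized
                # SP's paths form the preferred (active) path group
                path_grouping_policy group_by_prio
                prio_callout         "/sbin/mpath_prio_alua /dev/%n"
                # return to the optimized group automatically once it
                # recovers, rather than staying on the non-optimized SP
                failback             immediate
                path_checker         emc_clariion
                hardware_handler     "1 emc"
        }
}

With group_by_prio, all paths to the optimized SP land in one higher-priority group, so I/O stays off the non-optimized SP unless every optimized path fails.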
Normally, when you issue `multipath -ll` it prints out your multipathing options, your disk ID, the dev-mapper name, etc. That's missing from the output you've posted, thus far.
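For example, a complete entry begins with header lines like these (the map name, WWID, and size here are made up for illustration):

mpath0 (360060160xxxxxxxxxxxxxxxxxxxxxxxx) dm-0 DGC,RAID 5
[size=100G][features=1 queue_if_no_path][hwhandler=1 emc][rw]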
As to "LUN accessd down both controllers" it depends on what you mean by both controllers. With any external storage array, you're going to have the host's controller(s) (i.e., your HBAs) and the array's controller(s) (i.e., your storage processors).
If you look at the device info, the N:N:N:N ID, you can see that both path-groups have both HBA0 and HBA1 in them (the first digit in the N:N:N:N ID is your HBA number). So, for a given priority- or path-group, your I/Os will be sent down both host controllers (that is, your Linux HBAs) but not to both SPs under normal operating conditions. You can verify this behavior by starting a large I/O job and using iostat to see which /dev/sdN or dm-* device nodes are active.
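For instance (mpath0 is a placeholder; substitute whichever dm map holds the LUN in question):

# generate a sustained read against the multipath map in the background
dd if=/dev/mapper/mpath0 of=/dev/null bs=1M count=4096 &

# watch per-path utilization; only the members of the active path
# group (sdd and sdf in your output) should show traffic
iostat -x sdb sdd sdf sdh 2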
I have installed RHEL 5.6 and I do not know why these initial errors arise:
[root@srxen15 ~]# multipath -ll
C9 Inquiry of device </dev/sdk> failed.
C9 Inquiry of device </dev/sdl> failed.
C9 Inquiry of device </dev/sdm> failed.
C9 Inquiry of device </dev/sdq> failed.
C9 Inquiry of device </dev/sdr> failed.
C9 Inquiry of device </dev/sds> failed.
QASTAGE (2001738000a7a13c8) dm-4 IBM,2810XIV
[size=304G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:1:1 sdk 8:160 [active][ready]
 \_ 2:0:2:1 sdl 8:176 [active][ready]
 \_ 2:0:3:1 sdm 8:192 [active][ready]
 \_ 1:0:2:1 sdq 65:0 [active][ready]
 \_ 1:0:3:1 sdr 65:16 [active][ready]
 \_ 1:0:4:1 sds 65:32 [active][ready]
IDS_OLVGUM (3600a0b800047bcbc0000fe8f4cb7f4e2) dm-2 IBM,1818 FAStT
[size=140G][features=0][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=200][active]
 \_ 1:0:0:2 sdd 8:48 [active][ready]
 \_ 2:0:0:2 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:1:2 sdo 8:224 [active][ready]
xen_lun_1 (3600a0b800047bcbc0000f5f54c1709eb) dm-0 IBM,1818 FAStT
[size=50G][features=0][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=200][active]
 \_ 1:0:0:0 sdb 8:16 [active][ready]
 \_ 2:0:0:0 sdf 8:80 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:1:0 sdj 8:144 [active][ready]
IDS (3600a0b800047bcbc0000fe8c4cb7f4ce) dm-1 IBM,1818 FAStT
[size=120G][features=0][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=200][active]
 \_ 1:0:0:1 sdc 8:32 [active][ready]
 \_ 2:0:0:1 sdg 8:96 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:1:1 sdn 8:208 [active][ready]
QUORUM (3600a0b800047bcbc0000148d4d82df8b) dm-3 IBM,1818 FAStT
[size=200M][features=0][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:0:3 sde 8:64 [active][ready]
 \_ 2:0:0:3 sdi 8:128 [active][ready]
\_ round-robin 0 [prio=100][active]
 \_ 1:0:1:3 sdp 8:240 [active][ready]
# cat /etc/multipath.conf
# $Id: multipath.conf,v 1.4 2010/08/11 08:41:17 root Exp root $
# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
blacklist {
        devnode "sda"
}

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names  yes
        polling_interval     30
        flush_on_last_del    yes
        path_grouping_policy group_by_serial
        no_path_retry        fail
        path_checker         tur
        failback             manual
        prio_callout         "/sbin/mpath_prio_rdac /dev/%n"
}

devices {
        device {
                vendor           "IBM"
                product          "1818"
                hardware_handler "1 rdac"
        }
        device {
                vendor        "IBM"
                product       "2810XIV"
                rr_min_io     1000
                path_checker  tur
                failback      immediate
                no_path_retry queue
        }
}

multipaths {
        multipath {
                wwid  3600a0b800047bcbc0000f5f54c1709eb
                alias xen_lun_1
        }
        multipath {
                wwid  3600a0b800047bcbc0000fe8c4cb7f4ce
                alias IDS
        }
        multipath {
                wwid  3600a0b800047bcbc0000fe8f4cb7f4e2
                alias IDS_OLVGUM
        }
        multipath {
                wwid  2001738000a7a13c8
                alias QASTAGE
        }
        multipath {
                wwid  3600a0b800047bcbc0000148d4d82df8b
                alias QUORUM
        }
}
If you look at the full H:C:T:L string for each of those, you'll notice you have one path from 0:0:0:X and one from 1:0:1:X in the active path group. I assume these are 2-port HBAs, so this means that you have a path from the 1st port on the 1st HBA and the 2nd port on the 2nd HBA. All this tells us is that I/O to that path group will be round-robined across 2 different HBAs on the *host* (which is good), but not necessarily across SPs (which is also good, because it avoids trespasses). It depends on your network layout and the zoning, but those 2 ports are likely both going to the same SP.
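One way to check is to map each sdX path to the target port WWPN it logs into, then compare those WWPNs against SPA's and SPB's port WWNs from the array side. A sketch using the RHEL 5 sysfs layout (the fc_transport entries only exist for FC-attached targets):

# print each sd device with its H:C:T:L address and the WWPN
# of the target port that path goes to
for dev in /sys/block/sd*/device; do
    hctl=$(basename $(readlink $dev))   # e.g. 0:0:1:1
    tgt=${hctl%:*}                      # strip the LUN -> 0:0:1
    wwpn=$(cat /sys/class/fc_transport/target$tgt/port_name 2>/dev/null)
    echo "$(basename $(dirname $dev))  $hctl  ->  $wwpn"
done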
If that is not the case, then let us know.
Regards,
John Ruemker, RHCA
Red Hat Technical Account Manager