Link errors displayed with md software RAID multipath device
Issue
- The customer uses 4-way Fibre Channel storage to build a multipath software RAID with md (not dm-multipath).
- One path, /dev/sdb, shows as faulty in the "mdadm --detail /dev/md1" output below (a recovery sketch follows the output):
# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.01
  Creation Time : Fri Apr 15 09:44:35 2011
     Raid Level : multipath
     Array Size : 524287936 (500.00 GiB 536.87 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Oct 5 08:36:17 2013
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

           UUID : 4146258a:1ed5e9e4:52b4f024:3bbe9ee3
         Events : 0.9

    Number   Major   Minor   RaidDevice State
       0       0        0        -      removed
       1       8       64        1      active sync   /dev/sde
       2       8       48        2      active sync   /dev/sdd
       3       8       32        3      active sync   /dev/sdc
       4       8       16        -      faulty        /dev/sdb
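
If the underlying fibre link is later restored, a common recovery (a minimal sketch, not a resolution taken from this article) is to remove the faulty path from the array and re-add it once the device is reachable again:

# mdadm --manage /dev/md1 --remove /dev/sdb
# mdadm --manage /dev/md1 --add /dev/sdb

After the re-add, "mdadm --detail /dev/md1" should list /dev/sdb as "active sync" again and the array State should return to "clean".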
Environment
- The problem occurs on both Red Hat Enterprise Linux 4.7 and 5.6
- md software RAID with the multipath personality, as shown in /proc/mdstat (see the creation sketch after this output):
Personalities : [multipath]
md1 : active multipath sdb[4](F) sde[1] sdd[2] sdc[3]
      524287936 blocks [4/3] [_UUU]

unused devices: <none>
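
For reference, an md multipath array of this shape would typically have been assembled from the four paths with mdadm's multipath personality; a minimal sketch, assuming the device names seen above:

# mdadm --create /dev/md1 --level=multipath --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

Because every member is a path to the same LUN, a single 0.90 superblock on the LUN is visible through all four paths, and md fails over to the remaining paths when one link drops.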