Some devices did not recover after their paths were reconnected under high load
Issue
- Two of 256 devices did not recover after their paths were reconnected under high load.
- Load: 32 multiples

    +-- Server ---+      +--- Disk Array ---+
    |      SAS-HBA+------+ LUN * 256        |
    |      SAS-HBA+------+                  |
    +-------------+      +------------------+

  * "32 multiples" means that 32 dd commands were issued to each LUN at the same time.
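The load above can be sketched as a small shell script. This is a hedged sketch only: the device glob, dd block size, and read count are assumptions for illustration, not values taken from the case.

```shell
#!/bin/sh
# Start 32 concurrent dd readers per LUN, matching the "32 multiples"
# load described above. All tunables here are illustrative assumptions.
MULTIPLE=32

load_lun() {
    dev="$1"
    i=0
    while [ "$i" -lt "$MULTIPLE" ]; do
        # Sequential read load; bs/count are placeholder values.
        dd if="$dev" of=/dev/null bs=1M count=32 2>/dev/null &
        i=$((i + 1))
    done
}

# Example (assumed multipath device naming): start readers on every
# multipath device, then wait for all of them to finish.
# for dev in /dev/mapper/mpath*; do load_lun "$dev"; done
# wait
```

With 256 LUNs this starts 8192 concurrent dd processes, which is the kind of pressure under which the two devices failed to recover.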
- dm-52 and dm-79 did not recover; they are the only devices missing from the uevent log below.

    Jun 30 10:39:51 | dm-50: add map (uevent)
    Jun 30 10:39:51 | dm-50: devmap already registered
    Jun 30 10:39:51 | dm-51: add map (uevent)
    Jun 30 10:39:51 | dm-51: devmap already registered
    Jun 30 10:39:51 | dm-53: add map (uevent)
    Jun 30 10:39:51 | dm-53: devmap already registered
    Jun 30 10:39:51 | dm-54: add map (uevent)
    Jun 30 10:39:51 | dm-54: devmap already registered
      :
      :
    Jun 30 10:39:51 | dm-77: add map (uevent)
    Jun 30 10:39:51 | dm-77: devmap already registered
    Jun 30 10:39:51 | dm-78: add map (uevent)
    Jun 30 10:39:51 | dm-78: devmap already registered
    Jun 30 10:39:51 | dm-80: add map (uevent)
    Jun 30 10:39:51 | dm-80: devmap already registered
    Jun 30 10:39:51 | dm-81: add map (uevent)
    Jun 30 10:39:51 | dm-81: devmap already registered
- A symptom is that the following error is output by "multipathd -v3 -d".
Aug 11 10:49:08 | failed to open /dev/sddw
Aug 11 10:49:08 | sddw: failed to get path uid
Aug 11 10:49:08 | uevent trigger error
Environment
- Red Hat Enterprise Linux 5.5
- device-mapper-multipath-0.4.7-34.el5.x86_64