Faulty multipath devices left on compute nodes after deleting instances booted from Cinder volumes
Issue
After running a stress test that repeatedly creates a Cinder volume from an image, boots an instance from that volume, and then deletes both the instance and the volume, stale device-mapper entries remain on the compute node and the multipath command produces output like the following:
# multipath -ll
30000000000000000 dm-11 3PARdata,VV
size=38G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 8:0:2:1 sdj 8:144 failed faulty running
  |- 7:0:2:1 sdh 8:112 failed faulty running
  `- 8:0:2:4 sdp 8:240 failed faulty running
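As a rough sketch (an assumption for illustration, not the official resolution from this article), the faulty path devices can be identified from saved `multipath -ll` output before cleaning them up; the commented commands at the end show the kind of cleanup typically involved (flushing the stale map and removing the SCSI devices):

```shell
# Save the sample output above to a file (here /tmp/mpath.txt, an arbitrary path):
cat > /tmp/mpath.txt <<'EOF'
30000000000000000 dm-11 3PARdata,VV
size=38G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 8:0:2:1 sdj 8:144 failed faulty running
  |- 7:0:2:1 sdh 8:112 failed faulty running
  `- 8:0:2:4 sdp 8:240 failed faulty running
EOF

# Extract the sdX device names of paths marked "failed faulty"
# (field 3 of each path line):
awk '/failed faulty/ {print $3}' /tmp/mpath.txt

# The stale map could then be flushed and the underlying SCSI
# devices removed so they are not rediscovered, e.g.:
#   multipath -f 30000000000000000
#   for dev in sdj sdh sdp; do
#       echo 1 > /sys/block/$dev/device/delete
#   done
```

Any such cleanup should only be run after confirming the devices no longer back an attached volume.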
Environment
- Red Hat Enterprise Linux OpenStack Platform
- Multipath Fibre Channel Storage Area Network