Should I configure the storage multipath software to queue I/O when all paths have failed in my RHEL High Availability cluster?
Issue
- I am setting up a cluster with device-mapper-multipath managing the redundant storage links, and I am not sure whether I should configure the devices to queue I/O when all paths have failed, or to fail I/O back to the application when this happens (an illustrative multipath.conf sketch is included at the end of this article).
- Should I use multipath's queue_if_no_path feature in a cluster?
- What should I set no_path_retry to for my multipath devices when using GFS2?
- Should I configure EMC PowerPath to block when all paths have failed on a cluster node?
Environment
- Red Hat Enterprise Linux (RHEL) 5, 6, or 7 with the High Availability or Resilient Storage Add On
- Multipath software managing the redundant storage links on each server in the cluster:
  - device-mapper-multipath
  - EMC PowerPath
  - Hitachi Dynamic Link Manager
  - Other third-party multipath solutions
- Shared storage accessed by applications or file systems managed by the cluster
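
For reference, when device-mapper-multipath manages the paths, the behavior asked about above is controlled in /etc/multipath.conf. The following is a minimal sketch only, assuming a generic shared LUN; the WWID, alias, and values shown are placeholders for illustration and are not a recommendation for any particular cluster.

```
# /etc/multipath.conf -- illustrative sketch; values are placeholders.

defaults {
    user_friendly_names yes

    # no_path_retry controls what happens when every path to a device fails:
    #   "queue" - queue I/O indefinitely (same effect as the
    #             queue_if_no_path feature)
    #   "fail"  - return I/O errors to the application immediately
    #   <N>     - retry for N path-checker intervals, then fail pending I/O
    no_path_retry 12
}

multipaths {
    multipath {
        # Placeholder WWID for one shared LUN used by the cluster
        wwid  3600508b4000156d70001200000b0000
        alias cluster_lun01

        # A per-device value overrides the setting from the defaults section
        no_path_retry fail
    }
}
```

After editing the file, the running daemon must re-read it (for example with multipathd -k"reconfigure", or by reloading the multipathd service, depending on the release); multipath -ll then shows the resulting device maps and their current features.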