Should I configure the storage multipath software to queue I/O when all paths have failed in my RHEL High Availability cluster?

Issue

  • I am setting up a cluster with device-mapper-multipath managing the redundant storage links, and I am not sure whether I should configure the devices to queue I/O when all paths have failed or to fail I/O back to the application when this happens.
  • Should I use multipath's queue_if_no_path feature in a cluster?
  • What should I set no_path_retry to for my multipath devices when using GFS2? (See the illustrative multipath.conf sketch after this list for where this setting is placed.)
  • Should I configure EMC PowerPath to block when all paths have failed on a cluster node?
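
For device-mapper-multipath, the behavior in question is controlled by the no_path_retry directive (or the "queue_if_no_path" feature) in /etc/multipath.conf. The sketch below only illustrates where such a setting is typically placed; it is not the recommended value, which depends on the cluster design and the storage array, and the vendor and product strings shown are placeholders, not real hardware identifiers.

    # /etc/multipath.conf -- illustrative sketch only
    defaults {
        user_friendly_names yes
    }

    devices {
        device {
            vendor  "EXAMPLE"        # placeholder vendor string
            product "EXAMPLE-LUN"    # placeholder product string
            # "fail"  : return I/O errors immediately once all paths are down
            # "queue" : queue I/O indefinitely (same effect as queue_if_no_path)
            # <N>     : queue for N path-checker intervals, then fail
            no_path_retry fail
        }
    }

After editing the file, the setting that multipathd is actually using can be confirmed with "multipathd show config", and the multipathd service reloaded for the change to take effect.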

Environment

  • Red Hat Enterprise Linux (RHEL) 5, 6, or 7 with the High Availability or Resilient Storage Add On
  • Multipath software managing redundant storage links on each server in the cluster, such as:
    • device-mapper-multipath
    • EMC PowerPath
    • Hitachi Dynamic Link Manager
    • Other third-party multipath solutions
  • Shared storage accessed by applications or file systems managed by the cluster
