Should I configure the storage multipath software to queue I/O when all paths have failed in my RHEL High Availability cluster?

Solution Unverified

Issue

  • I am setting up a cluster with device-mapper-multipath managing the redundant storage links, and I am not sure whether I should configure the devices to queue I/O when all paths have failed, or to fail I/O back to the application when this happens.
  • Should I use multipath's queue_if_no_path feature in a cluster?
  • What should I set no_path_retry to for my multipath devices when using GFS2?
  • Should I configure EMC PowerPath to block when all paths have failed on a cluster node?
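For reference, the `no_path_retry` and `queue_if_no_path` settings mentioned above are controlled in `/etc/multipath.conf` when using device-mapper-multipath. The fragment below is a minimal illustrative sketch of that syntax only; the value shown is an assumption for demonstration, not this article's recommendation for clusters:

```
# /etc/multipath.conf -- illustrative fragment only; the value chosen
# here is an example, not a recommendation for HA/GFS2 clusters.
defaults {
    # no_path_retry controls behavior when all paths have failed:
    #   "fail"  - return I/O errors to the application immediately
    #   "queue" - queue I/O indefinitely (equivalent to queue_if_no_path)
    #   <N>     - retry for N polling intervals, then fail the I/O
    no_path_retry fail
}
```

After editing the file, the running configuration must be reloaded (for example with `multipathd reconfigure`) for the change to take effect.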

Environment

  • Red Hat Enterprise Linux (RHEL) 5, 6, or 7 with the High Availability or Resilient Storage Add On
  • Multipath software managing redundant storage links on each server in the cluster, such as:
    • device-mapper-multipath
    • EMC PowerPath
    • Hitachi Dynamic Link Manager
    • Other third-party multipath solutions
  • Shared storage accessed by applications or file systems managed by the cluster
