How can I avoid fence loops in 2-node Red Hat High Availability clusters?

Updated 2015-02-08T02:36:10+00:00

Issue


  • Why should I have my fencing devices on the same network as my cluster communication with Red Hat High Availability clusters?
  • How can I prevent fence loops with my cluster?
  • If one node in a 2-node cluster is up and running, and the other node boots while a network issue prevents cluster communication, the booting node fences the active node
  • In a 2-node cluster configured with fence_scsi, if a node is fenced due to a network issue and then rebooted, the "active" node reports SCSI reservation conflicts and path failures as the rebooted node starts its services.
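The boot-time fence race described above requires a cluster that can hold quorum with a single vote. A minimal cluster.conf sketch of such a cman configuration follows; the cluster name, node names, addresses, credentials, and the fence agent are illustrative, not taken from this article:

```xml
<?xml version="1.0"?>
<!-- Minimal sketch only: names, addresses, and credentials are examples. -->
<cluster name="example" config_version="1">
  <!-- two_node="1" lets either node hold quorum alone. Combined with fence
       devices that remain reachable while the cluster interconnect is down,
       this is the precondition for a fence loop. -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="1">
          <device name="ipmi1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="1">
          <device name="ipmi2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- IP-based fencing on a network separate from the interconnect -->
    <fencedevice agent="fence_ipmilan" name="ipmi1" ipaddr="192.168.10.1" login="admin" passwd="secret"/>
    <fencedevice agent="fence_ipmilan" name="ipmi2" ipaddr="192.168.10.2" login="admin" passwd="secret"/>
  </fencedevices>
</cluster>
```

With this layout, a node that boots during an interconnect outage immediately gains quorum on its own vote and can reach the other node's fence device, so it fences the survivor.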


Environment
  • Red Hat Enterprise Linux (RHEL) 5, 6, and 7 with the High Availability Add On
  • Red Hat High Availability cluster with 2 nodes (this issue does not apply to clusters of 3 or more nodes).
  • cman-based clusters: no quorum disk is configured, and the cluster has only 2 votes with two_node="1" set.
  • corosync+votequorum-based clusters: wait_for_all is not set (it is set by default if two_node is enabled).
  • Cluster fencing devices are IP-based and are accessed over the network.
    • Fencing devices are reached over a different network than the one the cluster communicates over, so both nodes can still reach the fencing devices when the cluster interconnect is unavailable.
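On corosync+votequorum clusters, the settings listed above live in the quorum section of /etc/corosync/corosync.conf. A minimal sketch, with wait_for_all written out explicitly even though two_node: 1 enables it by default:

```
quorum {
    provider: corosync_votequorum
    two_node: 1
    # wait_for_all is implied by two_node: 1. A booting node then waits
    # until it has seen every other node at least once before gaining
    # quorum, so it cannot fence a surviving node it has never talked to.
    # Explicitly setting wait_for_all: 0 disables this protection and
    # matches the susceptible configuration described above.
    wait_for_all: 1
}
```

On clusters managed with pcs, newer pcs versions can change this flag with `pcs quorum update wait_for_all=1` (the cluster typically must be stopped first).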
