A node with network issues booted up, could not rejoin the other cluster members, and fenced them even though a configured quorum device should have prevented the fencing, in a RHEL 5 or 6 High Availability cluster
Issue
- A node rebooted due to a heuristic failure and, because of network issues, could not rejoin the other two nodes when it came back up. Even though a quorum device was in use, the rebooted node still fenced the two nodes that were running and active.
- A single node that should not have had quorum without the qdisk fenced other nodes which were active
Environment
- Red Hat Enterprise Linux (RHEL) 5 or 6 with the High Availability Add On
- Cluster utilizing a quorum device (<quorumd> in /etc/cluster/cluster.conf)
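
For reference, a quorum device is declared in /etc/cluster/cluster.conf with a <quorumd> element. The fragment below is a minimal sketch only; the label, vote count, and the ping heuristic target are illustrative assumptions, not values from this case:

```xml
<!-- Sketch of a quorumd stanza in /etc/cluster/cluster.conf (values are examples) -->
<quorumd interval="1" tko="10" votes="1" label="example_qdisk">
    <!-- Heuristic: node keeps its qdisk vote only while the gateway is reachable.
         192.168.0.1 is a placeholder address for this example. -->
    <heuristic program="ping -c1 -w1 192.168.0.1" score="1" interval="2" tko="3"/>
</quorumd>
```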