Why does a node fence other nodes when first joining a High Availability cluster in RHEL 4, 5, or 6?
Issue
- Why do the nodes in my Red Hat Enterprise Linux cluster fence each other before they have even joined the cluster?
- Nodes are fencing each other on startup: a node that is starting cman fences the other node during boot.
- Cluster nodes in a two-node cluster repeatedly fence each other at boot time in a loop. How can a cluster-formation failure at this initial stage be troubleshot?
- The cluster appeared hung; when node 1 was rebooted manually, node 2 was fenced. What is the root cause (RCA) of this behavior?
- When powering one of the nodes of a RHEL cluster off and on, the rebooted node fenced the surviving node during bootup, resulting in a reservation conflict on the surviving node, which then lost all cluster services.
- Unable to start cluster services; the nodes get fenced every time cluster services are started, and also during a manual reboot.
- Why does a Red Hat cluster reboot the active node after fencing off the passive node that crashed?
- In an active-active GFS2 cluster, when one server comes up, the second goes down automatically.
- In a two-node cluster, when one node starts, it fences the other node.
- Related settings: post_join_delay, FENCE_MEMBER_DELAY
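Startup fencing of this kind is commonly mitigated by raising the fence daemon's post_join_delay, the number of seconds fenced waits after a node joins the fence domain before fencing any members it considers failed, so that a slow-booting peer has time to join. A minimal sketch of the relevant stanza in /etc/cluster/cluster.conf (the cluster name, config_version, node names, and the delay value of 60 are placeholders for illustration; any change must increment config_version and be propagated to all nodes):

```xml
<?xml version="1.0"?>
<cluster name="example" config_version="2">
  <!-- Wait 60 seconds after a node joins the fence domain before
       fencing failed members, instead of the short default. -->
  <fence_daemon post_join_delay="60"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
</cluster>
```

Two-node clusters are especially prone to the fencing loop described above, because each node can form quorum alone and will fence the other if it does not join in time.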
Environment
- Red Hat Cluster Suite (RHCS) 4
- Red Hat Enterprise Linux (RHEL) 5 or 6 with the High Availability Add On