A service in a failoverdomain with nofailback="1" still fails back to another member that starts rgmanager while the service is starting in a RHEL 5 High Availability cluster
Issue
- A node gets fenced and another node takes over the service, but when the fenced node rejoins the cluster the service relocates back to it, even though nofailback="1" is set on the failoverdomain for that service:
    Mar 9 23:39:17 node2 fenced[7506]: fence "node1" success
    Mar 9 23:39:18 node2 clurgmgrd[7819]: <notice> Taking over service service:myService from down member node1
    Mar 9 23:44:27 node2 clurgmgrd[7819]: <notice> Relocating service:myService to better node node1
- nofailback isn't working
- When a node rejoins the cluster, services in domains with nofailback are still failing back
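
To confirm the symptom, watch which member owns the service as the fenced node rejoins, and check whether the installed rgmanager build predates the fix. A minimal sketch, assuming the node and service names from the log excerpt above:

    # Show cluster members and current service owners; on affected builds,
    # service:myService relocates back to node1 shortly after it rejoins,
    # despite nofailback="1"
    clustat

    # Check the installed rgmanager release; builds prior to 2.0.52-21.el5
    # exhibit this behavior
    rpm -q rgmanager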
Environment
- Red Hat Enterprise Linux (RHEL) 5 with the High Availability Add-On
- rgmanager releases prior to 2.0.52-21.el5
- One or more services in /etc/cluster/cluster.conf using a failoverdomain that has ordered="1" and nofailback="1"
- If nofailback is not enabled on the failoverdomain, the behavior described in this issue is expected. Use nofailback="1" to prevent it.
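
For reference, a failoverdomain matching this description would look roughly like the following in /etc/cluster/cluster.conf. This is a minimal sketch: the domain name, priorities, and autostart setting are illustrative, and the node and service names are taken from the log excerpt above.

    <rm>
      <failoverdomains>
        <!-- ordered="1" prefers the highest-priority online member;
             nofailback="1" is meant to keep the service where it is
             when a higher-priority member rejoins -->
        <failoverdomain name="myDomain" ordered="1" nofailback="1" restricted="0">
          <failoverdomainnode name="node1" priority="1"/>
          <failoverdomainnode name="node2" priority="2"/>
        </failoverdomain>
      </failoverdomains>
      <service name="myService" domain="myDomain" autostart="1"/>
    </rm>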
