A service in a failoverdomain with nofailback="1" still fails back to another member that starts rgmanager while the service is starting in a RHEL 5 High Availability cluster
Issue
- A node gets fenced and another node starts the service, but when the fenced node rejoins the cluster the service relocates back to it, even though nofailback="1" is set on the failoverdomain for that service:
```
Mar 9 23:39:17 node2 fenced[7506]: fence "node1" success
Mar 9 23:39:18 node2 clurgmgrd[7819]: <notice> Taking over service service:myService from down member node1
Mar 9 23:44:27 node2 clurgmgrd[7819]: <notice> Relocating service:myService to better node node1
```
- nofailback isn't working
- When a node rejoins the cluster, services in domains with nofailback are still failing back
Environment
- Red Hat Enterprise Linux (RHEL) 5 with the High Availability Add On
- rgmanager releases prior to 2.0.52-21.el5
- One or more services in /etc/cluster/cluster.conf using a failoverdomain that has ordered="1" and nofailback="1"
- If nofailback is not enabled on the failoverdomain, then the behavior described in this issue is expected. Use nofailback="1" to prevent it.
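For illustration, a failoverdomain matching this description might look like the following fragment of /etc/cluster/cluster.conf. The domain, service, and node names here are hypothetical; only the ordered="1" and nofailback="1" attributes are the point:

```xml
<rm>
  <failoverdomains>
    <!-- ordered="1": members are preferred by priority (lower = more preferred).
         nofailback="1": a running service should NOT be relocated back when a
         more-preferred member rejoins the cluster. -->
    <failoverdomain name="myDomain" ordered="1" restricted="0" nofailback="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="myService" domain="myDomain" autostart="1"/>
</rm>
```

With this configuration, a service started on node2 after node1 is fenced should stay on node2 when node1 rejoins; the affected rgmanager releases relocate it anyway.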