bond0 loses IP address when cluster service stops
I am building a 2-node cluster, and so far the only resource I have defined is a cluster-wide IP address (no filesystems or applications yet). When I test failing the service over from node A to node B, the cluster-wide IP address fails over successfully to node B. But when I either fail the service back to node A or simply stop it while it is running on node B, the bond0 interface loses its IP address. I can add the address back to bond0 with the ifup bond0 command.
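For reference, here is how I've been checking the state of bond0 on each node (assuming iproute2 is available; the service address normally shows up as a secondary address on the interface while the service is running):

    # See which IPv4 addresses are currently on bond0
    ip addr show bond0

    # My current workaround once the static address has disappeared:
    ifup bond0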
When the service starts on node B, I get this entry in /var/log/messages:
Sep 20 17:37:08 etvfdpd4 rg_test: [19735]: <info> Adding IPv4 address 166.68.70.157/24 to bond0
That looks good and shows the correct IP address.
But when I stop the service on node B, I get this entry in /var/log/messages:
Sep 20 17:25:36 etvfdpd4 clurgmgrd: [10142]: <info> Removing IPv4 address from bond0
NOTE: the IP address in that message is blank, and I suspect this is why bond0's own address gets removed.
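If it helps with reproducing this, the "Adding IPv4 address" line above came from rg_test, which starts and stops the service's resources in the foreground; I'm assuming this syntax is right for this version:

    # Start the service's resources outside of rgmanager
    rg_test test /etc/cluster/cluster.conf start service etvfdpd3svc

    # Stop them again -- on node B this is where the address comes out blank
    rg_test test /etc/cluster/cluster.conf stop service etvfdpd3svc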
Also, this problem does not occur on node A.
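Since the two nodes are supposed to be configured identically, I've also been diffing the bond0 setup between them (paths assume RHEL-style sysconfig networking; etvfdpd4 is node B, per the log lines above):

    # Run from node A: compare the two ifcfg-bond0 files
    ssh etvfdpd4 cat /etc/sysconfig/network-scripts/ifcfg-bond0 \
        | diff /etc/sysconfig/network-scripts/ifcfg-bond0 -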
Here's an excerpt of the related entries from the cluster.conf file:
<resources>
    <ip name="166.68.70.157" address="166.68.70.157" monitor_link="0"/>
    <ip name="166.68.70.158" address="166.68.70.158" monitor_link="0"/>
</resources>
<service autostart="1" domain="etvfdpd3dom" exclusive="0" max_restarts="0" name="etvfdpd3svc" recovery="restart" restart_expire_time="0">
    <ip ref="166.68.70.157"/>
</service>
So, any ideas on how to troubleshoot or fix this?
Thanks in advance,
Mark