I am building a two-node cluster, and so far the only resource I have defined is a cluster-wide IP address (no filesystems or applications defined yet). When I test failing the service over from node A to node B, the cluster-wide IP address fails over to node B successfully. But when I either fail the service back to node A or simply stop it while it is running on node B, the bond0 interface loses its IP address. I can add the address back to bond0 with the ifup bond0 command.
When the service starts on node B, I get this entry in /var/log/messages:
Sep 20 17:37:08 etvfdpd4 rg_test: : <info> Adding IPv4 address 126.96.36.199/24 to bond0
which looks good and shows the correct IP address.
But when I stop the service on node B, I get this entry in /var/log/messages:
Sep 20 17:25:36 etvfdpd4 clurgmgrd: : <info> Removing IPv4 address from bond0
NOTE: the IP address in this message is blank, and I think that is why the IP address of bond0 gets removed.
Also, this problem does not occur on node A.
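To make the difference between the two log entries concrete, here is a small sketch (the sample lines are copied from the messages entries above) that pulls the address field out of each line; the stop-side line yields nothing:

```shell
#!/bin/sh
# Extract any dotted-quad address (with an optional /prefix) from a log line.
extract_addr() {
    grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}(/[0-9]+)?' || true
}

start_msg='Sep 20 17:37:08 etvfdpd4 rg_test: : <info> Adding IPv4 address 126.96.36.199/24 to bond0'
stop_msg='Sep 20 17:25:36 etvfdpd4 clurgmgrd: : <info> Removing IPv4 address from bond0'

echo "start: $(echo "$start_msg" | extract_addr)"   # the address is present
echo "stop:  $(echo "$stop_msg"  | extract_addr)"   # nothing: the field is blank
```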
Here is an excerpt of the related entries from cluster.conf:
<ip name="188.8.131.52" address="184.108.40.206" monitor_link="0"/>
<ip name="220.127.116.11" address="18.104.22.168" monitor_link="0"/>
<service autostart="1" domain="etvfdpd3dom" exclusive="0" max_restarts="0" name="etvfdpd3svc" recovery="restart" restart_expire_time="0">
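For context, inside the service block the IP is normally referenced rather than redefined. This is a minimal illustrative fragment patterned on the entries above, not a verbatim copy of my cluster.conf, and I am assuming the ref value has to match the ip resource's address attribute:

```xml
<service autostart="1" domain="etvfdpd3dom" exclusive="0" name="etvfdpd3svc" recovery="restart">
    <ip ref="184.108.40.206"/>
</service>
```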
So, any ideas on how to troubleshoot or fix this?
Thanks in advance,