  • bond0 loses IP address when cluster service stops

    I am building a two-node cluster, and so far the only resource I have defined is a cluster-wide IP address (no filesystems or applications defined yet). When I test failing the service over from node A to node B, the cluster-wide IP address fails over successfully to node B. But when I either fail the service back to node A or just stop the service while it is running on node B, the bond0 interface loses its IP address. I can add the address back to bond0 with the ifup bond0 command.
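
    To check by hand after each relocation, a small shell helper along these lines works (just a sketch — check_cluster_ip is my own name, not a cluster tool, and the interface and address default to the ones from this post):

```shell
# Report whether an interface still carries a given address after a
# service relocation (iface/address here are the ones from this post).
check_cluster_ip() {
    iface=${1:-bond0}
    addr=${2:-166.68.70.157/24}
    # If the interface (or the ip tool) is unavailable, say so and stop.
    if ! ip link show "$iface" >/dev/null 2>&1; then
        echo "interface $iface not present"
        return 0
    fi
    # Strip the prefix length and look for the bare address on the interface.
    if ip -4 -o addr show dev "$iface" | grep -q "${addr%%/*}"; then
        echo "$iface still holds ${addr%%/*}"
    else
        # Recovery is the same as in the post: ifup bond0,
        # or: ip addr add "$addr" dev "$iface"
        echo "$iface lost ${addr%%/*}"
    fi
}

check_cluster_ip bond0 166.68.70.157/24
```

    Running this on node B right before and right after stopping the service shows whether the static address survived the stop.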

    When the service starts on node B I get this entry in messages:

    Sep 20 17:37:08 etvfdpd4 rg_test: [19735]: Adding IPv4 address 166.68.70.157/24 to bond0

    Which looks good and has the correct IP address.

    But when I stop the service on node B I get this entry in messages:

    Sep 20 17:25:36 etvfdpd4 clurgmgrd: [10142]: Removing IPv4 address  from bond0

    NOTE: the IP address is blank. I think this is why bond0's IP address gets removed.
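
    Just to double-check that reading of the log, extracting the address field from the stop-side message confirms it is empty (plain text processing, nothing cluster-specific; the message is copied from above):

```shell
# The stop-side log line quoted above; the capture between
# "address" and "from" comes back empty.
stop_msg='Sep 20 17:25:36 etvfdpd4 clurgmgrd: [10142]: Removing IPv4 address  from bond0'
addr=$(printf '%s\n' "$stop_msg" | sed -n 's/.*Removing IPv4 address \(.*\) from bond0/\1/p')
printf "address field: '%s'\n" "$addr"
```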

    Also, this problem does not occur on node A.

    Here's an excerpt from the related entries in the cluster.conf file:

    [cluster.conf excerpt not preserved in the post]

    So, any ideas on how to troubleshoot or fix this?
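
    One thing I can try next is exercising the service outside of clusvcadm with rg_test (the tool already visible in the start log above); "myservice" below is a placeholder for whatever the service is actually named in cluster.conf:

```shell
# Start, check, and stop the service by hand with rg_test on node B,
# verifying after each step whether bond0 keeps its static address
# ("myservice" is a placeholder for the real service name):
rg_test test /etc/cluster/cluster.conf start service myservice
rg_test test /etc/cluster/cluster.conf status service myservice
rg_test test /etc/cluster/cluster.conf stop service myservice

# In a second terminal, watch address add/remove events on the node live:
ip monitor address
```

    These are cluster-node commands, so they only make sense run on the nodes themselves.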

    Thanks in advance,

    Mark
