Network bond not working; all slave interfaces as well as the bond have the same IP address.

Issue

  • Bond interface not working as expected. The IP address of the bond is also seen assigned to the bond slave interfaces, which is incorrect and does not match the ifcfg configuration files:

    $ ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
    2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
        link/ether f8:bc:12:38:b5:d0 brd ff:ff:ff:ff:ff:ff
        inet 10.110.20.143/24 brd 10.110.20.255 scope global em1
    3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
        link/ether f8:bc:12:38:b5:d0 brd ff:ff:ff:ff:ff:ff
        inet 10.110.20.143/24 brd 10.110.20.255 scope global em2
    6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
        link/ether f8:bc:12:38:b5:d0 brd ff:ff:ff:ff:ff:ff
        inet 10.110.20.143/24 brd 10.110.20.255 scope global bond0
    
  • The server responds to ping three or four times while booting, then stops responding.

  • NetworkManager tries to manage bond slave interfaces even though the NM_CONTROLLED=no parameter is properly set in their interface configuration files.
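For reference, the expected per-interface configuration can be sketched as follows. The device names (bond0, em1, em2) and the IP address are taken from the output above; the remaining parameters are typical bonding settings and are assumptions, not taken from this article:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.110.20.143
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    NM_CONTROLLED=no
    # Example bonding options; mode and miimon values are assumptions
    BONDING_OPTS="mode=1 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-em1 (ifcfg-em2 is analogous)
    DEVICE=em1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
    NM_CONTROLLED=no

Only the bond should carry the IP address; the slave files must not contain IPADDR lines. If NetworkManager still assigns the bond's address to the slaves despite NM_CONTROLLED=no, a common workaround on these releases is to stop and disable the service (service NetworkManager stop; chkconfig NetworkManager off) and then restart networking (service network restart).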

Environment

  • Red Hat Enterprise Linux 6
  • Red Hat Enterprise Linux 5
  • NetworkManager
  • Bonding
