Issues enabling a bond with non-zero `updelay` on RHEL 7.4: system repeatedly logs "bond: link status up for interface, enabling it in ms" and the bond slave MII status stays DOWN

Issue

  • With a non-zero `updelay` configured, starting the bonded network does not form the bond interface properly, and the bond does not fail over when its primary interface becomes unresponsive.
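
    A minimal configuration sketch that matches the symptoms below is shown here. Only the bonding option values (active-backup, miimon=100, updelay=2000, primary eth4) are taken from the /proc/net/bonding/bond0 output later in this article; the interface names and the remaining ifcfg keys are illustrative and should be adapted to your environment:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100 updelay=2000 primary=eth4 primary_reselect=always"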

  • A RHEL 7.4 system repeatedly logs "bond: link status up for interface, enabling it in ms" and the bonding slave MII status is DOWN.

  • System repeatedly logs the same "link status up" message:

    Aug 17 14:21:00 localhost kernel: bond0: link status up for interface eth6, enabling it in 2000 ms
    Aug 17 14:21:00 localhost kernel: bond1: link status up for interface eth2, enabling it in 2000 ms
    Aug 17 14:21:00 localhost kernel: bond2: link status up for interface eth3, enabling it in 2000 ms
    Aug 17 14:21:00 localhost kernel: bond0: link status up for interface eth6, enabling it in 2000 ms
    Aug 17 14:21:00 localhost kernel: bond1: link status up for interface eth2, enabling it in 2000 ms
    Aug 17 14:21:00 localhost kernel: bond2: link status up for interface eth3, enabling it in 2000 ms
    Aug 17 14:21:00 localhost kernel: bond0: link status up for interface eth6, enabling it in 2000 ms
    Aug 17 14:21:01 localhost kernel: bond1: link status up for interface eth2, enabling it in 2000 ms
    Aug 17 14:21:01 localhost kernel: bond2: link status up for interface eth3, enabling it in 2000 ms
    Aug 17 14:21:01 localhost kernel: bond0: link status up for interface eth6, enabling it in 2000 ms
    Aug 17 14:21:01 localhost kernel: bond1: link status up for interface eth2, enabling it in 2000 ms
    Aug 17 14:21:01 localhost kernel: bond2: link status up for interface eth3, enabling it in 2000 ms
    Aug 17 14:21:01 localhost kernel: bond0: link status up for interface eth6, enabling it in 2000 ms
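
    The repeating messages can be watched live with either of the following (assuming kernel messages are delivered to the journal and to /var/log/messages, the default on RHEL 7):

    # journalctl -k -f | grep 'link status up'
    # tail -f /var/log/messages | grep 'link status up'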
    
  • Additionally, the MII Status is always shown as down for the affected slave interfaces:

    $ cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
    
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: eth4 (primary_reselect always)
    Currently Active Slave: eth4
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 2000
    Down Delay (ms): 0
    
    Slave Interface: eth4
    MII Status: up
    Speed: 10000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 10:15:a4:d8:c2:1c
    Slave queue ID: 0
    
    Slave Interface: eth6
    MII Status: down
    Speed: 10000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 10:15:a4:d8:c1:e0
    Slave queue ID: 0
    
  • The bug does not trigger 100% of the time, but it can be provoked by removing slave interfaces from the bond and re-adding them via sysfs, for example:
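
    The commands below assume the bond0/eth6 names from the output above; removing a slave disrupts traffic, so only do this in a maintenance or test window:

    # echo -eth6 > /sys/class/net/bond0/bonding/slaves    # release the slave from the bond
    # ip link set eth6 down                                # the slave may need to be down before re-enslaving
    # echo +eth6 > /sys/class/net/bond0/bonding/slaves     # re-add the slave
    # cat /proc/net/bonding/bond0                          # check whether the slave's MII Status stays "down"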

Environment

  • Red Hat Enterprise Linux 7.4
  • Bonding
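
To confirm the environment, the release, kernel, and bonding driver versions can be checked as follows (generic commands; the specific affected kernel builds are not listed here):

    $ cat /etc/redhat-release
    $ uname -r
    $ modinfo -F version bonding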
