Packet loss reported on inactive slave interface of a bond or team in RHEL 7

Issue

  • The inactive (backup) slave interface of a bond or team constantly reports dropped packets:

    $ cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
    
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: enp17s0f0
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    
    Slave Interface: enp17s0f0
    MII Status: up
    Speed: 10000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:1f:f3:af:d3:f0
    Slave queue ID: 0
    
    Slave Interface: enp17s0f1
    MII Status: up
    Speed: 10000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:1f:f3:af:d3:f1
    Slave queue ID: 0
    
    $ cat /proc/net/dev
    Inter-|   Receive                                                |  Transmit
     face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    enp17s0f1: 227951462 2529811    0 179623    0     0          0   1456759   145246    2243    0    0    0     0       0          0
    enp17s0f0: 269687421 2731681    0    0    0     0          0   1456811 53785569  650914    0    0    0     0       0          0
     bond0: 497638883 5261492    0 179623    0     0          0   2913570 53930815  653157    0    0    0     0       0          0
        lo: 135522679  594683    0    0    0     0          0         0 135522679  594683    0    0    0     0       0          0
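
  • To confirm that the counter on the backup slave is still climbing, its per-interface drop statistic can be polled from sysfs. A minimal check, reusing the backup slave name from the example above (substitute your own interface):

    $ watch -n1 cat /sys/class/net/enp17s0f1/statistics/rx_dropped  # enp17s0f1 is the backup slave from the example above

  • For a team device, the analogous per-port state can be inspected with teamdctl, assuming a team device named team0:

    # teamdctl team0 state  # "team0" is an assumed device name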
    

Environment

  • Red Hat Enterprise Linux 7
  • Bonding
  • Teaming
