Bonding over IPoIB (IP over InfiniBand) reporting incorrect speed and duplex

Issue

  • Bonding over IPoIB (IP over InfiniBand) reporting incorrect speed and duplex
  • There are two IB interfaces on the RHEL 6 server, ib0 and ib1. Each interface is QDR 4x (4 × 10 Gb/s, i.e. 40 Gb/s).
  • The OS reports two different speeds for these interfaces. Which one is correct?
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active)
Primary Slave: ib0 (primary_reselect always)
Currently Active Slave: ib0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ib0
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 80:00:00:aa:aa:aa
Slave queue ID: 0

Slave Interface: ib1
MII Status: down
Speed: 100 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 80:00:00:bb:bb:bb
Slave queue ID: 0

From mii-tool:

bond1: 10 Mbit, half duplex, link ok
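
For reference, the real InfiniBand link rate can be read directly from the IB stack rather than from the bonding driver or mii-tool, which report generic Ethernet-style values for IPoIB slaves. A minimal sketch follows; the HCA name mlx4_0 and port number 1 are assumptions that will differ per system, and the output shown is what a QDR 4x link would typically report (ibstat is provided by the infiniband-diags package):

# cat /sys/class/infiniband/mlx4_0/ports/1/rate
40 Gb/sec (4X QDR)

# ibstat mlx4_0 1 | grep -i rate
Rate: 40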

Environment

  • Red Hat Enterprise Linux 6
  • Mellanox InfiniBand adapter cards
  • IP over InfiniBand (IPoIB)
  • Bonding driver
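
For context, a bond with the options visible in the /proc/net/bonding/bond0 output above (active-backup, miimon=100, primary ib0, primary_reselect=always, fail_over_mac=active) is typically set up on RHEL 6 with ifcfg files along the following lines; the IP addressing is illustrative and the exact contents will vary per system:

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=active-backup miimon=100 primary=ib0 primary_reselect=always fail_over_mac=active"

/etc/sysconfig/network-scripts/ifcfg-ib0 (and similarly ifcfg-ib1):
DEVICE=ib0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes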
