When an Intel 10G NIC is used as a port of a Linux bridge, performance may be very low

Issue

  • When an Intel 10G NIC is used as a port of a Linux bridge, performance may be very low for communication forwarded through the bridge.

  • Consider the configuration described below as an example.

 ifcfg-br0
  DEVICE=br0
  TYPE=Bridge
  IPADDR=192.168.1.1
  NETMASK=255.255.255.0
  ONBOOT=yes

 ifcfg-eth0
  DEVICE=eth0
  TYPE=Ethernet
  HWADDR=xx:xx:xx:xx:xx:xx
  ONBOOT=yes
  BRIDGE=br0

 ifcfg-eth1
  DEVICE=eth1
  TYPE=Ethernet
  HWADDR=yy:yy:yy:yy:yy:yy
  ONBOOT=yes
  BRIDGE=br0

  • Here, the NICs for eth0 and eth1 are Intel 10G NICs driven by the ixgbe driver.

  • Sometimes the throughput between eth0 and eth1 is much lower than the throughput between eth0 and the host address on br0. This is because the RSC (Receive Side Coalescing) function of the ixgbe driver, which assembles small incoming packets into larger ones, is enabled by default (see the verification sketch after this list).

  • While RSC is enabled on a NIC, received packets may be passed to the bridge device as large coalesced frames unless the traffic is very light. When these packets are delivered to the local TCP/IP stack (for example, to an address on br0), this works fine, because the TCP/IP layer can accept packets of any size.

  • However, if these packets are forwarded to eth1, the transmit side of the driver checks the packet size, and the packets may be dropped because their size can exceed the MTU of eth1.

  • In many cases the connections are kept alive by TCP retransmission, but performance can become extremely low.

  • To avoid this problem, the RSC function must be disabled when the NIC is used as a bridge port, as shown in the sketch below.
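
To confirm that this issue applies, check which driver drives the bridge ports and whether RSC is currently enabled. Below is a minimal verification sketch, assuming the interface names eth0 and eth1 from the example above, and assuming that the in-kernel ixgbe driver exposes its RSC state through the generic LRO (large-receive-offload) offload flag:

  # Confirm the ports are driven by ixgbe, and note the driver version.
  ethtool -i eth0
  ethtool -i eth1

  # Check whether RSC is enabled; ixgbe reports it via the LRO flag.
  ethtool -k eth0 | grep large-receive-offload
  ethtool -k eth1 | grep large-receive-offload

  # Dropped oversized frames typically show up as TCP retransmissions.
  netstat -s | grep -i retrans

If large-receive-offload reports "on" while the interface is enslaved to br0, forwarded traffic is subject to the drops described above.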
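
The following is a minimal sketch of disabling RSC, again under the assumption that ixgbe exposes it through the LRO flag. At runtime it can be turned off with ethtool:

  # Disable RSC (exposed as LRO) on both bridge ports at runtime.
  ethtool -K eth0 lro off
  ethtool -K eth1 lro off

To make the setting persistent across reboots, the ETHTOOL_OPTS variable in each ifcfg file can be used; on RHEL 6 the initscripts should pass an option string that begins with a flag such as -K to ethtool as-is when the interface is brought up (verify this behavior on your release). For example, in ifcfg-eth0:

  ETHTOOL_OPTS="-K eth0 lro off"

and in ifcfg-eth1:

  ETHTOOL_OPTS="-K eth1 lro off"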

Environment

  • Release Number: 6.0 GA
  • Architecture: all
  • Kernel Version: linux-2.6.32-71.el6
  • Related Package Version: ixgbe 2.0.62-k2
