Bonding Issue in RHEL 6.7


I am trying to configure a dual-port 10Gb NIC on an HP DL360 G9.
I would like to 'team' the two ports so that I get close to 20Gb of aggregate bandwidth, plus fault tolerance so that if one port goes down the other continues to function.
I would also like to plug one port into one of our Cisco Nexus 5672 switches and the second port into another 5672 for availability.
How best to achieve this?

So far I've tried bonding the two NICs with adaptive load balancing (mode 6, balance-alb), but it appears only one NIC is being used, and if I shut down the switch port for that NIC, traffic doesn't fail over to the other NIC port.
Also, I can't find any examples of the 'ifcfg-' files for eth0, eth1 and the bond for a mode 6 setup.
Thanks in advance for any help you can provide.

Responses

Have you followed the documentation? (Not trying to be rude, just making sure).

RHEL 6 Ethernet Bonding Documentation

RHEL Network Bonding Helper

Thanks for the reply, Jason. Yes, I've read the documentation. Still no joy. I've tried the Bonding Helper, but no matter what selections I choose in the 'wizard', it only gives me sample config for Mode 1 or Mode 2, not Mode 6. I would prefer to use the bandwidth of both NICs instead of having one NIC port just sitting there waiting for failover. I thought Mode 6 would be best. Maybe I should try Mode 2?

Depending on the RHEL version (5, 6, etc.) the configuration varies. On RHEL 6 you can add 'BONDING_OPTS="mode=6 miimon=100"' to the ifcfg-bond0 file. Are you sure the bonding is correct? What is the output of 'ethtool <interface>' for each slave?
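
For reference, a rough sketch of what the files could look like for a mode 6 (balance-alb) bond on RHEL 6 is below; the interface names and the address are placeholders for this example and need to match your environment:

    # /etc/modprobe.d/bonding.conf
    alias bond0 bonding

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    USERCTL=no
    NM_CONTROLLED=no
    # miimon enables link monitoring so the bond notices a dead port
    BONDING_OPTS="mode=6 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-eth1 is identical apart from DEVICE=eth1

Without miimon (or ARP monitoring) the bond has no way to notice a dead link, which could explain why nothing fails over when you shut the switch port.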

Our Bonding Helper intentionally doesn't offer Mode 5 or 6.

You'll never get 20Gbps of bandwidth for a single TCP stream; that's just not how bonding works.

You will, however, get 20Gbps of total throughput to multiple remote systems if you use Mode 2 (balance-xor) or Mode 4 (802.3ad). Both of these require switch configuration, with the latter requiring a bit more of it (an LACP port-channel rather than a static one).

Sample ifcfg- files are provided in "How do I configure a bonding device on Red Hat Enterprise Linux (RHEL)?", or the Bonding Helper should provide the right files.

Thanks for the reply, Jason. Yes, I believe I'm configuring bonding correctly. ethtool shows no link detected. I wonder if there are some limitations with the drivers for the HP FlexFabric 10Gb 2-port 533FLR-T Adapter we are using.

Thanks for the reply and explanation, Jamie. The only mode I've been successful with so far is Mode 1 with both NIC ports on the same switch. Next, I will test Mode 1 with each NIC port on a different switch. Then I will try testing Mode 2 (ensuring LACP is configured). Thanks again.
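
For reference, the checks I've been running while testing are along these lines, assuming the slaves are eth0 and eth1 and the bond is bond0:

    # Per-slave link state and driver/firmware details
    ethtool eth0 | grep "Link detected"
    ethtool eth1 | grep "Link detected"
    ethtool -i eth0

    # The bonding driver's own view: current mode, active slave,
    # MII status and link failure counts per slave
    cat /proc/net/bonding/bond0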

Keep in mind, with Mode 1 (active-backup) the switches don't require config but the switchports must be in the same broadcast domain.

Mode 2 (balance-xor) requires an EtherChannel or similar on the switch.

Mode 4 (802.3ad) requires an EtherChannel with LACP configured.
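
If it helps, the Linux-side difference between those modes is mostly just the BONDING_OPTS line in ifcfg-bond0 (the switch side is what changes, as above); a rough sketch, with miimon added for link monitoring:

    # Mode 1 (active-backup): no switch configuration needed
    BONDING_OPTS="mode=1 miimon=100"

    # Mode 2 (balance-xor): static EtherChannel on the switch
    BONDING_OPTS="mode=2 miimon=100"

    # Mode 4 (802.3ad): LACP port-channel on the switch
    BONDING_OPTS="mode=4 miimon=100"

Keep in mind that with the ports split across two Nexus 5672s, Mode 2 or Mode 4 would need the channel to span both switches (vPC on the Nexus side); otherwise only the links into one switch can be active at a time.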

In general, multi-active bonds have limitations:

  • If you're communicating between two hosts with bonding enabled, only one channel in the bond will be used (the hashing algorithm maps a given pair of endpoints to a single slave).

  • Unless your switches support cross-switch aggregation, you'll only have one switch's worth of links active at a time. With many switches, you can aggregate within a single switch but not across switches.

  • Depending on the multi-port card used, you may not actually be able to push 20Gbps worth of bandwidth through that single card (more problematic with older, quad-port cards than newer dual-port cards, but still something to be aware of).

Overall, bonding tends to get you closer to your desired N-by-X throughput goals when you're trunked through one switch and your bonded host is communicating with two (or more) different hosts.

Mode 2 and Mode 4 have the xmit_hash_policy option to address the hashing behaviour.
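
For example, hashing on layer 3/4 headers spreads different TCP/UDP flows between the same pair of hosts across the slaves, while a single flow still sticks to one slave; something along these lines in ifcfg-bond0:

    # Hash on IP addresses and ports instead of the default layer 2 MACs
    BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4"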

The ability to have HA bundles across switches which don't run MLAG is a strength of LACP: the bond forms one aggregator per switch, keeps one active and fails over to the other if its links go down. People always want to "bond bonds", which is not possible, but LACP lets you achieve the same end goal.

The hash policy doesn't help if there's a single session between a pair of endpoints. If I'm sending a single stream, I'm using a single path.

Correct, you cannot load balance a single stream. That's the only situation I can see where buying faster interfaces is required.

Which, when you're doing backups (or other longer-running transmissions, like some NFS connections), tends to be something you need to factor in.

