Bonding Issue in RHEL 6.7
I am trying to configure a dual-port 10Gb NIC on an HP DL360 G9.
I would like to 'team' the 2 ports so that I essentially get close to 20Gb of bandwidth, plus fault tolerance: if 1 port goes down, the other continues to function.
I would also like to plug 1 port into 1 of our Cisco Nexus 5672 switches and the 2nd port into another 5672 for availability.
How best to achieve this?
So far I've tried bonding the 2 ports with adaptive load balancing (mode 6, balance-alb), but it appears only 1 NIC port is being used, and if I shut down the switch port for that NIC port, traffic doesn't fail over to the other port.
Also, I can't find any examples of the 'ifcfg-' files for eth0, eth1 and the bond for a mode 6 setup.
Thanks in advance for any help you can provide.
Responses
Have you followed the documentation? (Not trying to be rude, just making sure).
Depending on the RHEL version (5, 6, etc.) the configuration varies. On RHEL 6 you can put the bonding options, e.g. BONDING_OPTS="mode=6 miimon=100", in the ifcfg-bond0 file. Are you sure the bonding configuration is correct? What is the output of 'ethtool <interface>'?
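For reference, a minimal RHEL 6 layout for mode 6 might look something like the sketch below. The interface names (eth0/eth1), the example address and the miimon value are assumptions, so adjust them to match your system.

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BONDING_OPTS="mode=6 miimon=100"   # balance-alb, link checked every 100 ms
    IPADDR=192.168.1.10                # example address only
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    USERCTL=no
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    # (ifcfg-eth1 is identical apart from DEVICE)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes
    USERCTL=no
    NM_CONTROLLED=no

On RHEL 6 you may also need /etc/modprobe.d/bonding.conf containing 'alias bond0 bonding' so the bonding driver is loaded, then restart networking with 'service network restart'.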
Our Bonding Helper intentionally doesn't offer Mode 5 or 6.
You'll never get 20Gbps of bandwidth for a single TCP stream; that's just not how bonding works.
You will, however, get 20Gbps of total throughput to multiple remote systems if you use Mode 2 (balance-xor) or Mode 4 (802.3ad). Both of these require switch configuration, with the latter requiring a bit more.
Example ifcfg- files are provided in "How do I configure a bonding device on Red Hat Enterprise Linux (RHEL)?", or the Bonding Helper should generate the right files for you.
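If you do go the Mode 4 route, a hedged sketch of the bond-side options (the slave files stay the same as in the mode 6 sketch above) might look like this; the lacp_rate and hash policy values are assumptions you can tune:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- 802.3ad (mode 4) variant
    DEVICE=bond0
    BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"
    IPADDR=192.168.1.10                # example address only
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    USERCTL=no
    NM_CONTROLLED=no

On the switch side the two ports need to be in an LACP port-channel, and because you want one link on each Nexus 5672, the pair of switches would have to present that port-channel as a single logical switch (e.g. vPC); see the limitations list below.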
In general, multi-active bonds have limitations:
If you're communicating between two hosts with bonding enabled, only one channel in the bond will be used (hashing-algorithm limitations).
Unless your switches support cross-switch aggregation, you'll only have one switch's worth of links active at a time. With many switches, you can aggregate within a single switch but not across switches.
Depending on the multi-port card used, you may not actually be able to push 20Gbps worth of bandwidth through that single card (more problematic with older, quad-port cards than newer dual-port cards, but still something to be aware of).
Overall, bonding gets you closest to your desired N-by-X throughput goals when you're trunked through one switch and your bonded host is communicating with two (or more) different hosts.
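A quick way to verify what the bond is actually doing, and to test failover from the host side, is something like the following (bond0/eth0/eth1 are assumed names):

    # Show the active bonding mode, slave state and failure counts
    cat /proc/net/bonding/bond0

    # Confirm link, speed and negotiation on each slave
    ethtool eth0
    ethtool eth1

    # Simulate losing one port and watch the bond react
    ip link set eth0 down
    cat /proc/net/bonding/bond0
    ip link set eth0 up

If 'cat /proc/net/bonding/bond0' doesn't list both slaves as up in the first place, the failover test will never work, which matches the symptom described in the original question.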
