RHEV 3.1 - Available Network Bonds limited to 5?


I have added some quad-port Ethernet cards to our two RHEV compute nodes, and when I went to configure the additional interfaces as bonds, the portal indicated "there are no available bonds".

Has anyone else run into this issue? If so, what was your resolution?

Unfortunately we cannot use VLAN tagging in this environment (yet), and it is unfortunate that we need access to 9 VLANs from this single host, but I don't feel it is an unreasonable requirement.


Thanks!

Responses

Hi James. Certainly not a familiar issue to me, and the community seems quiet on this one too. I've reached out to a few folks here who may be able to help out.

Hi James,

It's likely down to the bonding configuration. Typically the hypervisors only set up 4-5 bondX interfaces (as seen in /proc/net/bonding/). On a normal RHEL host, you can pass the 'max_bonds=X' parameter to the bonding module to create more bonding interfaces.

For instance, 'modprobe bonding max_bonds=10' creates bond0 through bond9 entries in /proc/net/bonding/. You could then put 'options bonding max_bonds=10' into a file under /etc/modprobe.d/ to make the change persistent.
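
As a rough illustration of the run-time check (a sketch only; unloading the module is possible only while no bond is currently in use):

# modprobe -r bonding
# modprobe bonding max_bonds=10
# ls /proc/net/bonding/
bond0  bond1  bond2  bond3  bond4  bond5  bond6  bond7  bond8  bond9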

Whether the max_bonds option change would work in a RHEV-H environment, I haven't been able to determine, and it's possible that the RHEV Manager or VDSM imposes a limit on the number of bonds it will handle.

I'd recommend adding max_bonds=10 as an option to the bonding module on all hypervisors in the cluster (persisting the change if on a RHEV-H based system), ensuring all the devices appear in /proc/net/bonding/, and then trying to create the 6th bond.
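
For example, on a RHEV-H node those two steps might look something like this (a sketch; the file name bonding.conf is just an example, and 'persist' is the RHEV-H command that keeps a file across reboots):

# echo 'options bonding max_bonds=10' > /etc/modprobe.d/bonding.conf
# persist /etc/modprobe.d/bonding.conf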

If that doesn't work, then you may need to open a support case so we can look further, on a formal basis, into what is preventing creation of the 6th bond.

Cheers,

Nigel

Hi James,

This is a known issue that seems to have slipped our attention; we are now having it re-evaluated for importance. So if you can, please open a support case and request that RHEV support more than 5 bonds. This will help us better prioritize the issue - customer cases generally carry more weight than internal requests.


Thanks,

Dan

Thanks guys - the first node has been updated and successfully tested.

I added miimon=100 as well, since it was consistent across all existing bonds. However, I am not 100% sure it belongs in that global file, or whether the "local" config (i.e. ifcfg-bond0) takes precedence.

[root@myrhelhyp01 ~]# cat /etc/modprobe.d/bonding.conf 

options bonding miimon=100 max_bonds=10
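
One way to confirm which miimon value each bond actually ended up with (whichever file it came from) is to read the per-bond sysfs entry, which should report 100 here, for example:

[root@myrhelhyp01 ~]# cat /sys/class/net/bond0/bonding/miimon

100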