Cloud VM with multiple NICs as bonds: connectivity issue

Hi All,

I have an issue with a Red Hat Enterprise Linux 8.6 VM in the Azure cloud.

On a local VM I can add multiple NICs and create a bond for each NIC. After setup I can ping/ssh to each bond's external IP.

On an Azure cloud VM, before creating a bond for each NIC, I can ssh/ping each NIC's external IP. On the cloud system I have installed the nm-cloud-setup RPM.

The commands I ran, which give working access on each bond locally but not in the cloud, are:

$ nmcli con add type bond ifname bond1 con-name bond1 bond.options "mode=active-backup,miimon=100"
$ nmcli con down bond1
$ nmcli con del system-con2
$ nmcli con add type ethernet con-name eth1 ifname eth1 master bond1
$ nmcli con up bond1

Here eth1 is the secondary NIC device, and system-con2 is the existing connection for that device.

On a local VM, when I run $ nmcli c show bond1, the gateway and routing information is populated. In the cloud, the same command shows no gateway or routing information.
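For reference, this is a sketch of the commands I would use to compare the routing state on both VMs (standard nmcli/iproute2 commands; bond1 is the bond created above, and whether nm-cloud-setup is actually adding policy-routing rules here is my assumption, not something I have confirmed):

```shell
# Show the IPv4 gateway and routes NetworkManager applied to bond1
nmcli -g IP4.GATEWAY,IP4.ROUTE connection show bond1

# Show the kernel routing table entries for the bond interface
ip route show dev bond1

# Check for policy-routing rules, which nm-cloud-setup may add
# for secondary NICs on cloud images (assumption to verify)
ip rule show
```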

Any insight or advice would be greatly appreciated.
