KVM not bridging to ib0.

I have two servers hosting KVM guests in an HPC cluster. The InfiniBand network is the link to the Panasas storage array. I have Mellanox routers and ConnectX-3 InfiniBand cards. I recently applied a security update to the installed RHEL 6.5 OS, using the September 2017 repo patchset. The kernel update necessitated reinstalling the Mellanox drivers to match the new kernel, so I updated from MLNX_OFED-3.4 to MLNX_OFED-4.3. Since the update the KVM guests cannot connect to the storage array: they cannot ping the router address or the array address.
I downgraded the Mellanox driver back to MLNX_OFED-3.4, with no change.
I've noticed that on the working system the virtual NIC has these settings in Virtual Machine Manager:
Host device is vnet1 (Bridge br1)
Device model: Hypervisor Default

On the broken system Virtual Machine Manager shows:
Source device: Specify shared device name
Bridge name: br1
Device model: virtio
I'm not sure of the best way to make these configurations the same. There is no option similar to "vnet1 (Bridge br1)" on the broken system. How can I create this bridge/vnet option?
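For reference, a "vnet1 (Bridge br1)" entry only appears once the host actually has a bridge named br1 and a guest NIC attached to it. A minimal sketch of setting that up on RHEL 6 follows; the interface names (br1, eth2) and the guest name (guest1) are assumptions, not taken from the thread, and the IP details must match your site:

```shell
# Define the bridge via RHEL 6 network scripts (hypothetical addresses).
cat > /etc/sysconfig/network-scripts/ifcfg-br1 <<'EOF'
DEVICE=br1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.50.10
NETMASK=255.255.255.0
EOF

# Enslave an Ethernet-capable uplink to the bridge (eth2 is an example name;
# a plain Linux bridge forwards Ethernet frames, so a raw IPoIB ib0 interface
# cannot be enslaved directly).
cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<'EOF'
DEVICE=eth2
ONBOOT=yes
BRIDGE=br1
EOF

service network restart

# Attach a virtio NIC on br1 to the guest; libvirt creates the vnetN tap
# device automatically when the guest starts.
virsh attach-interface --domain guest1 --type bridge \
    --source br1 --model virtio --config
```

After the guest is restarted, Virtual Machine Manager should then offer the bridge-backed device, and `brctl show br1` should list the new vnetN interface.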

Responses

Hi David, I admit that I don't know your HW setup. Could you please post the output of "brctl show br1" on both systems? The output will look like this:

yn:~# brctl show vmbr550
bridge name     bridge id           STP enabled     interfaces
vmbr550         8000.002655228dd2   no              vnet1
                                                    vnet2
                                                    vlan550

The interfaces you see in the last column are the interfaces connected to the bridge. Those need to be in the state "UP", so the corresponding "ip link list" or "ifconfig" output would be great as well.
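The link-state check being asked for can be gathered in one pass on each host; a small sketch (the bridge and member names here are examples from the output above, not confirmed for David's systems):

```shell
# Show the bridge and its member interfaces.
brctl show br1

# Check the administrative/operational state of the bridge and each member;
# look for "state UP" (or the UP flag) in the output.
ip link show br1
ip link show vnet1

# If a member interface is DOWN, bring it up and re-test connectivity.
ip link set dev vnet1 up
```

Comparing this output between the working and broken host should show whether br1 is missing a member (or the member is down) on the broken system.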
