KVM not bridging to ib0

I have two servers hosting KVM guests in an HPC cluster. The InfiniBand network is the link to the Panasas storage array, using Mellanox routers and ConnectX-3 InfiniBand cards. I recently applied a security update to the installed RHEL 6.5 OS, using the September 2017 repository patch set. The kernel update required reinstalling the Mellanox drivers to match the new kernel, so I updated from MLNX_OFED-3.4 to MLNX_OFED-4.3. Since the update, the KVM guests cannot connect to the storage array: they cannot ping the router address or the array address.
I downgraded the Mellanox driver back to MLNX_OFED-3.4, with no change.
I've noticed that on the working system the virtual NIC has these settings in Virtual Machine Manager:
    Host device is vnet1 (Bridge br1)
    Device model: Hypervisor Default
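
The underlying libvirt definition can be compared directly between the two guests, which is less ambiguous than the Virtual Machine Manager labels. A bridge-backed NIC on the working guest would be expected to look roughly like this in `virsh dumpxml <guest>` output (a sketch; the MAC address is a placeholder):

```xml
<!-- Guest NIC attached to host bridge br1. The vnet1 device shown in
     Virtual Machine Manager is the tap interface libvirt creates
     automatically when the guest starts; it is not configured here. -->
<interface type='bridge'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source bridge='br1'/>
  <!-- Omitting <model/> leaves the device model at the hypervisor
       default; the broken guest would instead show
       <model type='virtio'/> here. -->
</interface>
```

Diffing the `<interface>` stanzas from `virsh dumpxml` on both hosts should show exactly which attributes differ.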

On the broken system, Virtual Machine Manager shows:
    Source device: Specify shared device name
    Bridge name: br1
    Device model: virtio
I'm not sure of the best way to make these configurations the same. There is no option similar to "vnet1 (Bridge br1)" on the broken system. How can I create this bridge/vnet option?
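
For Virtual Machine Manager to offer a "vnet (Bridge br1)"-style option, the host itself must have a working br1 bridge with an enslaved uplink. On RHEL 6 that is normally defined through network scripts; a minimal sketch follows, where the interface name `eth1` and the addresses are assumptions that must be adapted to the actual host:

```ini
# /etc/sysconfig/network-scripts/ifcfg-br1 -- assumed bridge definition
DEVICE=br1
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.10.5       # placeholder address on the storage network
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- assumed uplink enslaved to br1
DEVICE=eth1
BRIDGE=br1
ONBOOT=yes
```

One caveat worth checking: the standard Linux bridge driver can only enslave Ethernet-framing devices, so an IPoIB interface such as ib0 generally cannot be added to br1 directly. Bridging guests onto an InfiniBand fabric is usually done through an Ethernet-mode device provided by the Mellanox stack (e.g. eIPoIB) rather than ib0 itself, so it is worth confirming which device br1 enslaves on the working host.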
