Bridge Network question

Hello, I was hoping someone might be able to help me out with bridge networking.

I've been trying to learn KVM and I'm unsure if I'm doing things the right way or not and was hoping to get some feedback.

So I come from a Hyper-V/VMware background and have been working with VMware primarily for the past few years. I've been working through setting up a KVM host by command line to try to learn it.

I did end up getting it to work but I want to know if this would be the preferred way to do it. I saw there were multiple ways.

I have a host set up with two NICs: eno1 is for management of the host, and eno2 will be used for a bridge network (virtual switch) for the VMs.

I've been using nmcli to set things up, if this is not the preferred way, I'd love to know.

My steps are:
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type bridge-slave autoconnect yes ifname eno2 master br0

But it would seem the only way I can get it to work is to assign an IP address, DNS server, and gateway to the bridge interface and turn off STP.
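For reference, this is roughly what I ran on top of those two commands to get it working (the addresses here are just examples from my lab):

nmcli con mod br0 ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli con mod br0 bridge.stp no
nmcli con up br0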

Am I doing this the correct way? Is the bridge interface supposed to have an IP? For my testing it doesn't really matter, but I think about large datacenters running thousands of KVM hosts. Would each host need an IP assigned to its bridge networks?

Looking at VMware and Hyper-V, I don't see the virtual switches set up with IP addresses, so I was thinking I was setting it up wrong. I've tried many other combinations, but I just can't seem to get it working without an IP.

I'm still extremely interested in learning Linux and KVM, and it would be very helpful to get real-world examples, not just hosts with a single NIC like in most of the articles I've found.

Responses

There are several different ways to set up KVM networking. I'll describe the two most common ways.

The first is like you describe: a physical interface already in an existing network. Add the physical interface into a bridge on the hypervisor (e.g., eno2 in br0). The hypervisor's IP address is then added to the bridge; you don't configure an address on the underlying physical NIC anymore.

The bridge acts as a Layer 2 switch (similar to a VMware vSwitch), so any VM added to this bridge is connected to the same broadcast domain as the physical interface. Say your hypervisor is 10.0.0.20/24 with the gateway at 10.0.0.1; you'd then give the VM an IP in the same 10.0.0.0/24 range and use the same gateway 10.0.0.1.
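To attach a VM to that bridge, you just point its virtual NIC at br0. A minimal sketch with virt-install (the VM name, sizes, and ISO path are placeholders):

virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk size=20 --cdrom /path/to/install.iso \
  --network bridge=br0,model=virtio

For an existing VM, virsh edit lets you set the interface source to bridge br0 in the domain XML.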

You don't even have to add an IP on the hypervisor here. In that situation the bridge acts solely as a network switch to connect VMs to the outside physical interface.
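If you want that, something along these lines should give you an IP-less bridge with nmcli (a sketch; I haven't run it against your exact setup):

nmcli con mod br0 ipv4.method disabled ipv6.method disabled
nmcli con up br0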

This first type of setup is more common for infrastructure servers. You can treat each VM as just another server plugged into the same network.

The second way is using libvirt's default internal bridge virbr0. That bridge is still a network switch, but it isn't connected to anything external.

In this situation, libvirt runs a DHCP server and DNS forwarder (dnsmasq) on the hypervisor, available to VMs connected to virbr0. VMs get a DHCP address from dnsmasq on the hypervisor, and dnsmasq forwards any DNS queries to the hypervisor's DNS setup. libvirt sets up firewall rules so VMs NAT out through the hypervisor's external address.
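You can inspect that default network on the hypervisor with virsh net-dumpxml default. The definition typically looks something like this (the uuid/mac lines are omitted, and the addresses can differ on your install):

<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>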

That NAT means you can't connect to the VMs from outside (unless you specifically add an inbound port forward). VMs can get out, and all outgoing VM traffic appears to come from the hypervisor's IP address.
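If you do need to reach a NATed VM from outside, here is a rough sketch of a manual port forward with iptables (the VM address and ports are examples, and depending on your setup you may also need a libvirt hook script so the rules survive network restarts):

iptables -t nat -I PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.10:22
iptables -I FORWARD -d 192.168.122.10 -p tcp --dport 22 -j ACCEPT

That would send TCP port 2222 on the hypervisor to SSH on the VM at 192.168.122.10.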

This second sort of setup is more common for test setups, like VMs on your laptop.

We've put a fair amount of effort into the documentation with nice diagrams to make this clear. Check out the chapter here:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/configuring-virtual-machine-network-connections_configuring-and-managing-virtualization

Thank you this is very helpful.