== KVM / VLAN / Bonding / Bridging ==
Hi, I would like to know the best setup if I have 3 VLANs that will be carried over two physical links, enp2s0 and enp3s0, trunked. The server by default has 2 NICs (note: the server is capable of creating vNICs as well), so in the current case the server presents the vNICs enp2s0 and enp3s0 to the OS.
I will have VLAN 100 with one IP address for management of the bare-metal host (RHEL 7.0).
On top of the host I will install KVM to deploy 2 VMs:
- VM1: VLAN 200, one interface, one IP address
- VM2: VLAN 200 and VLAN 300, two interfaces, two IP addresses
- How many bonds should I use? I assume one (enp2s0 + enp3s0)?
- Do I need to create all the VLANs on top of that bond, or create multiple vNICs like enp4, enp5, ... on the physical server? What is the proper way to pin the VLANs?
- How many bridges should be created, knowing that the VMs also need to communicate with each other?
I am looking for any help on how this can be implemented when it comes to routing, VLAN pinning, networking, etc.
Thanks in advance
Responses
There are several ways you could set this up, but I would do it like this:
Hypervisor
- enp2s0 and enp3s0 in bond0
- bond0.100 with hypervisor management IP
- bond0.200 in br200
- bond0.300 in br300
VMs
- VLAN200 interface in br200
- VLAN300 interface in br300
All the VLAN tagging is done on the hypervisor; the VMs just need to have the correct IP addresses on the correct interfaces.
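If it helps, here is a minimal sketch of what the ifcfg files for that layout could look like on RHEL 7, assuming active-backup bonding and placeholder IP addresses (adjust BONDING_OPTS, addresses and gateway to your environment):
[root@hypervisor network-scripts]# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
# assumption: active-backup; use whatever mode your switch ports are configured for
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
BOOTPROTO=none
[root@hypervisor network-scripts]# cat ifcfg-enp2s0
DEVICE=enp2s0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
(ifcfg-enp3s0 is identical apart from DEVICE=enp3s0)
[root@hypervisor network-scripts]# cat ifcfg-bond0.100
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
# placeholder management address
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
[root@hypervisor network-scripts]# cat ifcfg-bond0.200
DEVICE=bond0.200
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br200
[root@hypervisor network-scripts]# cat ifcfg-br200
DEVICE=br200
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
(bond0.300 and br300 follow the same pattern; the bridge itself carries no IP unless the hypervisor needs one in that VLAN)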
Think of the bridge as a Layer 2 network switch. It just learns the locations of stations from source MAC addresses and forwards frames based on destination MAC addresses.
If the hypervisor needs an IP in VLAN200 or VLAN300, then add that hypervisor IP on br200 or br300 respectively.
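A quick way to see that Layer 2 behaviour on a running hypervisor (the bridge name br200 is just an example):
[root@hypervisor ~]# brctl show
(lists each bridge with its enslaved ports, e.g. bond0.200 plus the VMs' vnet interfaces)
[root@hypervisor ~]# brctl showmacs br200
(shows the station MAC addresses the bridge has learned on each port)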
I'm not quite sure what in-band vs. OOB would be here, but you can access the VMs via the VNC/SPICE VM console on the hypervisor, and via the VLAN200 and VLAN300 IPs if the rest of your network (firewalls/routers/etc.) allows that access.
If you need a VM management IP in VLAN100, then make a br100 which contains bond0.100, put the hypervisor management IP on br100 instead of bond0.100, and put the VM's VLAN100 management interface in br100 as well. The same setup as VLAN200 and 300.
As long as the guest OS supports it, use virtio-net, it will provide the best performance. If the guest OS doesn't support virtio-net then the emulated Intel e1000 will be the next best choice. The emulated Realtek card has the same limitations on max frame size as the real hardware (4000-something bytes) and is just a super old driver.
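As a rough example, the relevant part of the guest definition (via virsh edit on the VM) for a virtio NIC attached to br200 would look something like this; change the model to e1000 if the guest has no virtio drivers:
    <interface type='bridge'>
      <source bridge='br200'/>
      <model type='virtio'/>
    </interface>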
If virtio-net doesn't meet your performance requirements, then see if your hypervisor's physical NIC supports SR-IOV. If so, create SR-IOV Virtual Functions and pass those through to the VM. That setup is covered in the product doc.
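For reference, on a NIC that supports it, the Virtual Functions are created through sysfs; the interface name and VF count below are only placeholders:
[root@hypervisor ~]# echo 4 > /sys/class/net/enp2s0/device/sriov_numvfs
(requires SR-IOV support in the NIC, its driver and the BIOS; the VFs then appear as additional PCI devices that can be passed through to guests)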
Previously it has not been possible to migrate VMs that use SR-IOV VFs. I haven't kept up on new developments there, so it's something to look into if migrating VMs is a concern for you.
You can easily do live and offline migration of VMs using virtio or emulated NICs.
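For example, a live migration of a VM with a virtio NIC is a single command (the names here are placeholders):
[root@hypervisor ~]# virsh migrate --live vm1 qemu+ssh://destination-hypervisor/system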
Hi, this is similar to what I am trying to accomplish. I have 4 physical NICs, one of which I have already made into a bridge: ifcfg-bridge1-en1, plus ifcfg-enp2, ifcfg-enp3 and ifcfg-enp4. Each of those NICs is on a different VLAN and subnet: VLAN 24, 101, 102 and 104 respectively. I want a KVM/QEMU qcow VM attached to all of those subnets/VLANs, and I am not sure how to achieve this. In short, in VMware I would do this by creating virtual switches, adding 4 NICs to the VM, and attaching each NIC to its virtual switch. I tried creating 4 bridges, but as soon as I create the second bridge I lose connectivity to VLAN 24 (ifcfg-bridge1-en1). I then have to iLO to the physical box and reset the network-scripts back to normal.
Could someone point me in the right direction, or tell me if this is even possible with KVM?
Example:
[root@hostname network-scripts]# cat ifcfg-bridge1-en1
DEVICE="bridge1-en7"
ONBOOT="yes"
TYPE="Bridge"
BOOTPROTO="none"
IPADDR="10.24.0.20"
NETMASK="255.255.255.224"
GATEWAY="10.24.0.1"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
DHCPV6C="no"
STP="on"
DELAY="0.0"
[root@hostname bkup]# cat ifcfg-bridge2-en2
DEVICE="bridge2-en8"
ONBOOT="yes"
TYPE="Bridge"
BOOTPROTO="none"
IPADDR="192.168.1.1"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.2"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
DHCPV6C="no"
STP="on"
DELAY="0.0"
As I understand it, the switch is doing the VLAN tagging, so you've currently got something like:
VLAN24 ---- NIC en1 ---- bridge-en1
VLAN101 ---- NIC enp2
VLAN102 ---- NIC enp3
VLAN104 ---- NIC enp4
You are correct that you would create more bridges, ending up with something like:
VLAN24 ---- NIC en1 ---- bridge-en1
VLAN101 ---- NIC enp2 ---- bridge-enp2
VLAN102 ---- NIC enp3 ---- bridge-enp3
VLAN104 ---- NIC enp4 ---- bridge-enp4
Where you probably went wrong is adding that second GATEWAY parameter to the second bridge, which likely started sending all external traffic via 192.168.1.2 instead of 10.24.0.1 and so you lost connectivity to the hypervisor. You can only have one default gateway on the system.
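As a sketch, keeping your addresses but letting only the first bridge carry the default route, the second bridge could look like this (GATEWAY removed; DEFROUTE=no is optional but makes the intent explicit):
[root@hostname network-scripts]# cat ifcfg-bridge2-en2
DEVICE="bridge2-en8"
ONBOOT="yes"
TYPE="Bridge"
BOOTPROTO="none"
IPADDR="192.168.1.1"
NETMASK="255.255.255.0"
# no GATEWAY here; the system default route stays via 10.24.0.1 on the first bridge
DEFROUTE="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
DHCPV6C="no"
STP="on"
DELAY="0.0"
[root@hostname network-scripts]# cat ifcfg-enp2
DEVICE="enp2"
TYPE="Ethernet"
ONBOOT="yes"
BOOTPROTO="none"
# BRIDGE must match the bridge's DEVICE name
BRIDGE="bridge2-en8"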
The bridge is a software implementation of a Layer 2 switch. It just shuffles frames back and forth between the external NIC and any VMs which are placed in the bridge. I haven't used VMware in a long time, but think of the bridge like a vSwitch and that'll be pretty close.
Inside the VM you can use whatever routing and gateway config you like. The Layer 3 configuration inside the VMs is independent of the Layer 3 configuration on the hypervisor.
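So inside the VM, for example, each interface attached to one of the bridges is just a plain untagged interface with whatever address and gateway that VM should use (the values below are placeholders):
[root@vm1 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=198.51.100.20
NETMASK=255.255.255.0
GATEWAY=198.51.100.1
# no VLAN configuration here; tagging is handled outside the VM (switch and/or hypervisor)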
