11.2.5. Configuring a VLAN over a Bond

This section shows how to configure a VLAN over a bond consisting of two Ethernet links between a server and an Ethernet switch. The switch has a second bond to another server. Only the configuration for the first server is shown, as the second is essentially the same apart from the IP addresses.

Warning

The use of direct cable connections without network switches is not supported for bonding. The failover mechanisms described here will not work as expected without the presence of network switches. See the Red Hat Knowledgebase article Why is bonding not supported with direct connection using crossover cables? for more information.

Note

The active-backup, balance-tlb, and balance-alb modes do not require any specific configuration of the switch. Other bonding modes require configuring the switch to aggregate the links. For example, a Cisco switch requires EtherChannel for Modes 0, 2, and 3, while Mode 4 requires both LACP and EtherChannel. See the documentation supplied with your switch and the bonding.txt file in the kernel-doc package (see Section 31.9, “Additional Resources”).
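As an illustration only (the rest of this section uses active-backup), a bond configured for Mode 4 would carry a BONDING_OPTS line such as the following in its ifcfg file, with the corresponding switch ports configured as an LACP EtherChannel. The lacp_rate directive is optional and shown here only as an example:

```
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=slow"
```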
Check the available interfaces on the server:
~]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff

Procedure 11.1. Configuring the Interfaces on the Server

  1. Configure a slave interface using eth0:
    ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
    NAME=bond0-slave0
    DEVICE=eth0 
    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    NM_CONTROLLED=no
    The use of the NAME directive is optional. It is for display by a GUI interface, such as nm-connection-editor and nm-applet.
  2. Configure a slave interface using eth1:
    ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
    NAME=bond0-slave1
    DEVICE=eth1
    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    NM_CONTROLLED=no
    The use of the NAME directive is optional. It is for display by a GUI interface, such as nm-connection-editor and nm-applet.
  3. Configure a channel bonding interface ifcfg-bond0:
    ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
    NAME=bond0
    DEVICE=bond0
    BONDING_MASTER=yes
    TYPE=Bond
    IPADDR=192.168.100.100
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100"
    NM_CONTROLLED=no
The use of the NAME directive is optional. It is for display by a GUI interface, such as nm-connection-editor and nm-applet. In this example, MII is used for link monitoring; see Section 31.8.1.1, “Bonding Module Directives” for more information on link monitoring.
  4. Check the status of the interfaces on the server:
    ~]$ ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff
        inet6 fe80::5054:ff:fe19:28fe/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff
        inet6 fe80::5054:ff:fef6:639a/64 scope link
           valid_lft forever preferred_lft forever
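The directives used in the slave files above can be checked mechanically. The following is a minimal sketch of such a check, run against a throwaway copy of a slave file rather than the real /etc/sysconfig/network-scripts directory; the temporary directory and its contents are fabricated for illustration:

```shell
# Sketch: sanity-check a slave ifcfg file for the directives used above,
# using a throwaway copy rather than /etc/sysconfig/network-scripts.
dir=$(mktemp -d)
cat > "$dir/ifcfg-eth0" <<'EOF'
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
EOF

# A slave names its master, is marked SLAVE=yes, and carries no IPADDR
# of its own (the IP address belongs on the bond interface).
if grep -q '^MASTER=bond0$' "$dir/ifcfg-eth0" &&
   grep -q '^SLAVE=yes$' "$dir/ifcfg-eth0" &&
   ! grep -q '^IPADDR=' "$dir/ifcfg-eth0"
then
    slave_check=OK
else
    slave_check=misconfigured
fi
echo "ifcfg-eth0: $slave_check"
rm -rf "$dir"
```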

Procedure 11.2. Resolving Conflicts with Interfaces

The interfaces configured as slaves should not have IP addresses assigned to them apart from IPv6 link-local addresses (starting with fe80). If a slave has an unexpected IP address, there may be another configuration file for that interface with ONBOOT set to yes.
  1. If this occurs, issue the following command to list all ifcfg files that may be causing a conflict:
    ~]$ grep -r "ONBOOT=yes" /etc/sysconfig/network-scripts/ | cut -f1 -d":" | xargs grep -E "IPADDR|SLAVE"
    /etc/sysconfig/network-scripts/ifcfg-lo:IPADDR=127.0.0.1
    The above shows the expected result on a new installation. Any file having both the ONBOOT directive as well as the IPADDR or SLAVE directive will be displayed. For example, if the ifcfg-eth1 file was incorrectly configured, the display might look similar to the following:
    ~]# grep -r "ONBOOT=yes" /etc/sysconfig/network-scripts/ | cut -f1 -d":" | xargs grep -E "IPADDR|SLAVE"
    /etc/sysconfig/network-scripts/ifcfg-lo:IPADDR=127.0.0.1
    /etc/sysconfig/network-scripts/ifcfg-eth1:SLAVE=yes
    /etc/sysconfig/network-scripts/ifcfg-eth1:IPADDR=192.168.55.55
  2. Any other configuration files found should be moved to a different directory for backup, or assigned to a different interface by means of the HWADDR directive. After resolving any conflict set the interfaces down and up again or restart the network service as root:
    ~]# service network restart
    Shutting down interface bond0:                             [  OK  ]
    Shutting down loopback interface:                          [  OK  ]
    Bringing up loopback interface:                            [  OK  ]
    Bringing up interface bond0:  Determining if ip address 192.168.100.100 is already in use for device bond0...
                                                               [  OK  ]
    If you are using NetworkManager, you might need to restart it at this point to make it forget the unwanted IP address. As root:
    ~]# service NetworkManager restart
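The conflict-detection pipeline above can be exercised safely against a throwaway directory of fabricated ifcfg files instead of the live /etc/sysconfig/network-scripts. A minimal sketch, with file contents invented for illustration:

```shell
# Sketch: run the conflict check from the procedure above against a
# temporary directory of fabricated ifcfg files.
dir=$(mktemp -d)
cat > "$dir/ifcfg-lo" <<'EOF'
ONBOOT=yes
IPADDR=127.0.0.1
EOF
cat > "$dir/ifcfg-eth1" <<'EOF'
ONBOOT=yes
SLAVE=yes
IPADDR=192.168.55.55
EOF
cat > "$dir/ifcfg-bond0" <<'EOF'
ONBOOT=no
IPADDR=192.168.100.100
EOF

# Same pipeline as in the procedure: any file with ONBOOT=yes that also
# sets IPADDR or SLAVE is a potential conflict. ifcfg-bond0 is ignored
# here because its ONBOOT is not yes.
conflicts=$(grep -r "ONBOOT=yes" "$dir"/ | cut -f1 -d":" | xargs grep -E "IPADDR|SLAVE")
echo "$conflicts"
rm -rf "$dir"
```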

Procedure 11.3. Checking the bond on the Server

  1. Bring up the bond on the server as root:
    ~]# ifup /etc/sysconfig/network-scripts/ifcfg-bond0
    Determining if ip address 192.168.100.100 is already in use for device bond0...
  2. Check the status of the interfaces on the server:
    ~]$ ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
        link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff
    3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
        link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff
    4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
        link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff
        inet 192.168.100.100/24 brd 192.168.100.255 scope global bond0
        inet6 fe80::5054:ff:fe19:28fe/64 scope link 
           valid_lft forever preferred_lft forever
Notice that eth0 and eth1 show master bond0 and state UP, and that bond0 has the MASTER and UP flags.
  3. View the bond configuration details:
    ~]$ cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
    
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth0
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    
    Slave Interface: eth0
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 52:54:00:19:28:fe
    Slave queue ID: 0
    
    Slave Interface: eth1
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 52:54:00:f6:63:9a
    Slave queue ID: 0
  4. Check the routes on the server:
    ~]$ ip route
    192.168.100.0/24 dev bond0  proto kernel  scope link  src 192.168.100.100
    169.254.0.0/16 dev bond0  scope link  metric 1004
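The fields of /proc/net/bonding/bond0 are plain text and can be extracted in scripts, for example to monitor which slave is currently active. A minimal sketch, run here against a pasted fragment of the output rather than the live file (a real system would read /proc/net/bonding/bond0 directly):

```shell
# Sketch: extract the currently active slave and MII status from a saved
# fragment of /proc/net/bonding/bond0 output.
sample='Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100'

active=$(printf '%s\n' "$sample" | awk -F': ' '/^Currently Active Slave/ {print $2}')
mii=$(printf '%s\n' "$sample" | awk -F': ' '/^MII Status/ {print $2}')
echo "active=$active mii=$mii"
```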

Procedure 11.4. Configuring the VLAN on the Server

Important

At the time of writing, it is important that the bond has slaves and that they are up before the VLAN interface is brought up; adding a VLAN interface to a bond without slaves does not work. In Red Hat Enterprise Linux 6, setting the ONPARENT directive to yes is important to ensure that the VLAN interface does not attempt to come up before the bond is up. This is because a VLAN virtual device takes the MAC address of its parent, and when a NIC is enslaved, the bond changes its MAC address to that NIC's MAC address.

Note

A VLAN slave cannot be configured on a bond with the fail_over_mac=follow option, because the VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, traffic would still be sent with the now incorrect source MAC address.
Some older network interface cards, loopback interfaces, WiMAX cards, and some InfiniBand devices are said to be VLAN challenged, meaning they cannot support VLANs. This is usually because the devices cannot cope with VLAN headers and the larger MTU size associated with VLANs.
  1. Create a VLAN interface file bond0.192:
    ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0.192
    DEVICE=bond0.192
    NAME=bond0.192
    BOOTPROTO=none
    ONPARENT=yes
    IPADDR=192.168.10.1
    NETMASK=255.255.255.0
    VLAN=yes
    NM_CONTROLLED=no
  2. Bring up the VLAN interface as root:
    ~]# ifup /etc/sysconfig/network-scripts/ifcfg-bond0.192
    Determining if ip address 192.168.10.1 is already in use for device bond0.192...
  3. Enable VLAN tagging on the network switch. Consult the documentation for the switch to see what configuration is required.
  4. Check the status of the interfaces on the server:
    ~]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
        link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff
    3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
        link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff
    4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
        link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff
        inet 192.168.100.100/24 brd 192.168.100.255 scope global bond0
        inet6 fe80::5054:ff:fe19:28fe/64 scope link 
           valid_lft forever preferred_lft forever
    5: bond0.192@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
        link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff
        inet 192.168.10.1/24 brd 192.168.10.255 scope global bond0.192
        inet6 fe80::5054:ff:fe19:28fe/64 scope link
           valid_lft forever preferred_lft forever
    Notice that bond0.192@bond0 now appears in the list of interfaces and that its status is MASTER,UP.
  5. Check the route on the server:
    ~]$ ip route
    192.168.100.0/24 dev bond0  proto kernel  scope link  src 192.168.100.100
    192.168.10.0/24 dev bond0.192  proto kernel  scope link  src 192.168.10.1
    169.254.0.0/16 dev bond0  scope link  metric 1004 
    169.254.0.0/16 dev bond0.192  scope link  metric 1005
    Notice there is now a route for the 192.168.10.0/24 network pointing to the VLAN interface bond0.192.
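The route table can also be checked in scripts, for example to confirm that the VLAN subnet is reached through the VLAN interface. A minimal sketch, run here against a saved fragment of `ip route` output rather than the live command:

```shell
# Sketch: confirm from saved `ip route` output that the VLAN subnet is
# reached through the VLAN interface (a live system would run `ip route`).
routes='192.168.100.0/24 dev bond0  proto kernel  scope link  src 192.168.100.100
192.168.10.0/24 dev bond0.192  proto kernel  scope link  src 192.168.10.1'

# For a connected route the device name is the third field of the line.
vlan_dev=$(printf '%s\n' "$routes" | awk '$1 == "192.168.10.0/24" {print $3}')
echo "192.168.10.0/24 on $vlan_dev"
```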

Configuring the Second Server

Repeat the configuration steps for the second server, using different IP addresses from the same respective subnets.
Test that the bond is up and that the network switch is working as expected:
~]$ ping -c4 192.168.100.100
PING 192.168.100.100 (192.168.100.100) 56(84) bytes of data.
64 bytes from 192.168.100.100: icmp_seq=1 ttl=64 time=1.35 ms
64 bytes from 192.168.100.100: icmp_seq=2 ttl=64 time=0.214 ms
64 bytes from 192.168.100.100: icmp_seq=3 ttl=64 time=0.383 ms
64 bytes from 192.168.100.100: icmp_seq=4 ttl=64 time=0.396 ms

--- 192.168.100.100 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.214/0.586/1.353/0.448 ms

Testing the VLAN

To test that the network switch is configured for the VLAN, try to ping the first server's VLAN interface:
~]# ping -c2 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
64 bytes from 192.168.10.1: icmp_seq=1 ttl=64 time=0.781 ms
64 bytes from 192.168.10.1: icmp_seq=2 ttl=64 time=0.977 ms
--- 192.168.10.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.781/0.879/0.977/0.098 ms
No packet loss suggests everything is configured correctly and that the VLAN and underlying interfaces are up.
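When running such tests from a script, the packet-loss percentage can be extracted from the ping summary line. A minimal sketch, run here against a saved summary line rather than a live ping:

```shell
# Sketch: extract the packet-loss percentage from a saved ping summary
# line (on a live system, the line comes from the ping statistics output).
summary='2 packets transmitted, 2 received, 0% packet loss, time 1001ms'

loss=$(printf '%s\n' "$summary" | sed -n 's/.* \([0-9]*\)% packet loss.*/\1/p')
echo "loss=${loss}%"
```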

Optional Steps

  • If required, perform further tests by removing and replacing network cables one at a time to verify that failover works as expected. Make use of the ethtool utility to verify which interface is connected to which cable. For example:
    ethtool --identify ifname integer
    Where integer is the number of times to flash the LED on the network interface.
  • The bonding module does not support STP; therefore, consider disabling the sending of BPDU packets from the network switch.
  • If the system is connected to the network only over the link just configured, consider enabling the switch port to transition directly to the forwarding state, for example on a Cisco switch by means of the portfast command.