RHEL 7 nodes do not send out gratuitous neighbor advertisements when flapping OVS VLAN internal ports or restarting the network in Red Hat OpenStack Platform

Solution In Progress

Issue

Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.

The following issue arises when using IPv6 networking for overcloud networks such as InternalApi. When a node is rebooted, or its network is restarted, RHEL deletes and recreates the OVS internal VLAN ports. Because each port is removed from the Open vSwitch database and then recreated, OVS treats it as a newly created interface and automatically assigns it a new MAC address:

[root@node-1 ~]# ip link ls dev vlan495
46: vlan495: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 1e:54:40:34:26:78 brd ff:ff:ff:ff:ff:ff
[root@node-1 ~]# ifdown vlan495
[root@node-1 ~]# ifup vlan495
(...)
[root@node-1 ~]# ip link ls dev vlan495
47: vlan495: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 9e:81:d2:8b:69:7f brd ff:ff:ff:ff:ff:ff
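One way to avoid the MAC change is to pin a fixed MAC address on the internal port, so that OVS reuses it whenever the port is recreated. A minimal sketch using the ifcfg network scripts (the interface name and MAC are taken from the example above; the bridge name br-ex is an assumption for illustration; OVS_EXTRA is passed through to ovs-vsctl by the openvswitch initscripts):

```shell
# /etc/sysconfig/network-scripts/ifcfg-vlan495 (fragment)
DEVICE=vlan495
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=br-ex          # assumed bridge name; adjust to your deployment
OVS_OPTIONS="tag=495"
# Pin the MAC so a delete/recreate cycle does not change it:
OVS_EXTRA="set Interface vlan495 mac=\"1e:54:40:34:26:78\""
```

With the MAC pinned, the IPv6/MAC binding seen by neighbors stays valid across an ifdown/ifup cycle, sidestepping the advertisement problem described below.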

When IPv6 is used, a problem arises with the way Red Hat Enterprise Linux 7 announces the new IPv6/MAC address combination to the L2 segment. By default, RHEL 7 does not send the IPv6 equivalent of a gratuitous ARP (an unsolicited Neighbor Advertisement) to the all-nodes multicast address ff02::1, nor to any other multicast address that the gateway is listening on.

Instead, RHEL 7 only performs DAD (Duplicate Address Detection), sending Neighbor Solicitations (NS) to the solicited-node multicast groups of its own link-local and global IPv6 addresses:

   89  14.668075 0.000000 9e:81:d2:8b:69:7f 33:33:ff:8b:69:7f   9e:81:d2:8b:69:7f → 33:33:ff:8b:69:7f           :: → ff02::1:ff8b:697f   ICMPv6 90  Neighbor Solicitation for fe80::9c81:d2ff:fe8b:697f
   90  14.979990 0.311915 9e:81:d2:8b:69:7f 33:33:ff:00:00:1b   9e:81:d2:8b:69:7f → 33:33:ff:00:00:1b           :: → ff02::1:ff00:1b   ICMPv6 90  Neighbor Solicitation for <64 bit prefix>::1b
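The multicast groups in these captures are the solicited-node addresses defined in RFC 4291: ff02::1:ff followed by the low-order 24 bits of the unicast address. That is why an NS for an address ending in ::1b lands in ff02::1:ff00:1b, a group the router (whose address ends in ::1) has not joined. A small bash sketch illustrates the derivation (the helper name is ours, not a standard tool):

```shell
#!/bin/bash
# Solicited-node multicast address (RFC 4291): ff02::1:ff plus the
# low-order 24 bits of the unicast address.
solicited_node() {
    local addr=$1
    local last=${addr##*:}       # last 16-bit group, e.g. 697f
    local rest=${addr%:*}
    local prev=${rest##*:}       # preceding 16-bit group, e.g. fe8b
    prev=${prev:-0}              # "::1b"-style addresses: group is all zero
    prev=$(printf '%04x' $((16#$prev)))
    # keep only the low byte of the preceding group
    printf 'ff02::1:ff%s:%s\n' "${prev:2:2}" "$last"
}

solicited_node fe80::9c81:d2ff:fe8b:697f   # → ff02::1:ff8b:697f
solicited_node 2001:db8::1b                # → ff02::1:ff00:1b (hypothetical address)
```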

The router with IPv6 address <64 bit prefix>::1 listens on the solicited-node multicast group ff02::1:ff00:1 for its own address, and hence receives neither of the two NS messages above. As a consequence, other nodes on the network are never told to update their neighbor caches. This particularly affects spine/leaf designs in OpenStack, where nodes in other subnets rely on routing through the overcloud node's gateway: because the first-hop router has no way of knowing that (and how) it needs to update its neighbor cache, other hosts in the cloud cannot reach the node in question.

User intervention is required: the user (or a script) must ping the node's router's IPv6 address. This triggers an NS for the router's address on multicast group ff02::1:ff00:1, which the router has joined, so the NS reaches the router and causes it to update its neighbor cache:

  201  29.441796 0.022841 9e:81:d2:8b:69:7f 33:33:ff:00:00:01   9e:81:d2:8b:69:7f → 33:33:ff:00:00:01 <64 bit prefix>::1b → ff02::1:ff00:1   ICMPv6 90  Neighbor Solicitation for <64 bit prefix>::1 from 9e:81:d2:8b:69:7f
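Rather than pinging manually after every network restart, the ping can be automated with an /sbin/ifup-local hook, which the RHEL 7 initscripts run after each interface comes up. This is a sketch only; the vlan* interface-name pattern is an assumption based on the example above:

```shell
#!/bin/bash
# /sbin/ifup-local -- called by the RHEL 7 initscripts with the
# interface name after it comes up. After an OVS internal VLAN port is
# recreated, ping the first-hop router once so the resulting Neighbor
# Solicitation lands in a multicast group the router has joined,
# refreshing its neighbor cache with the port's new MAC address.
case "$1" in
vlan*)
    # Extract the IPv6 default gateway for this interface, if any
    # ("default via <gw> dev <ifname> ..." -> third field).
    gw=$(ip -6 route show default dev "$1" | awk '{print $3; exit}')
    [ -n "$gw" ] && ping6 -c 1 -I "$1" "$gw" >/dev/null 2>&1
    ;;
esac
```

Remember to make the hook executable (chmod +x /sbin/ifup-local); the initscripts only invoke it if it is.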

Environment

Red Hat OpenStack Platform 10
Red Hat OpenStack Platform 13
Red Hat Enterprise Linux 7.6
