Chapter 5. Overview of features available with Red Hat OpenStack Platform 13

The key OpenDaylight project in this solution is NetVirt, with support for the OpenStack Neutron API.

Note

Red Hat recommends that you use VXLAN tenant networks rather than VLAN tenant networks.

Red Hat OpenStack Platform 13 supports the following features in OpenDaylight:

5.1. Integration with Red Hat OpenStack Platform director

The Red Hat OpenStack Platform director is a tool set that you can use to install and manage a complete OpenStack environment. With Red Hat OpenStack Platform 13, you can use the director to deploy and configure OpenStack to work with OpenDaylight. OpenDaylight can run together with the OpenStack overcloud controller role, or as a separate custom role on a different node, in several possible scenarios.

In Red Hat OpenStack Platform, you install and run OpenDaylight in containers, which provides greater flexibility for maintenance and use.

For more information, see the Red Hat OpenDaylight Installation and Configuration Guide.

5.2. L2 Connectivity between OpenStack instances

OpenDaylight provides the required Layer 2 (L2) connectivity among VM instances that belong to the same neutron virtual network. Each time a user creates a neutron network, OpenDaylight automatically sets the required Open vSwitch (OVS) parameters on the relevant compute nodes to ensure that instances belonging to the same network can communicate with each other over a shared broadcast domain.

While VXLAN is the recommended encapsulation format for tenant network traffic, 802.1q VLANs are also supported. In the case of VXLAN, OpenDaylight automatically creates and manages the virtual tunnel endpoints (VTEPs) between the OVS nodes to ensure efficient communication between the nodes, without relying on special features of the underlying fabric. The only requirement on the underlying network is support for unicast IP routing between the nodes.
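
As a minimal sketch, the following example uses the openstacksdk Python library (not covered in this guide) to create a neutron network and boot two instances on it; the cloud name "overcloud", the image "rhel-7.5", and the flavor "m1.small" are assumptions for illustration.

    import openstack

    # Connect using a clouds.yaml entry; "overcloud" is an assumed name.
    conn = openstack.connect(cloud="overcloud")

    # Create a tenant network and an IPv4 subnet.
    net = conn.network.create_network(name="demo-net")
    subnet = conn.network.create_subnet(
        network_id=net.id, name="demo-subnet",
        ip_version=4, cidr="192.0.2.0/24")

    # Boot two instances on the same network; once both are active,
    # they share an L2 broadcast domain managed by OpenDaylight.
    image = conn.compute.find_image("rhel-7.5")        # assumed image name
    flavor = conn.compute.find_flavor("m1.small")      # assumed flavor name
    for name in ("vm-a", "vm-b"):
        conn.compute.create_server(
            name=name, image_id=image.id, flavor_id=flavor.id,
            networks=[{"uuid": net.id}])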

5.3. IP Address Management (IPAM)

VM instances are automatically assigned an IPv4 address using the DHCP protocol, according to the tenant subnet configuration. This is done by leveraging the neutron DHCP agent. Each tenant is completely isolated from other tenants, so IP addresses can overlap.
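
For illustration, the following openstacksdk sketch creates a subnet with DHCP enabled and an explicit allocation pool; the cloud name, network name, CIDR, and pool range are assumptions.

    import openstack

    conn = openstack.connect(cloud="overcloud")        # assumed clouds.yaml entry

    net = conn.network.find_network("demo-net")        # assumed existing network
    subnet = conn.network.create_subnet(
        network_id=net.id, name="demo-subnet-dhcp",
        ip_version=4, cidr="192.0.2.0/24",
        enable_dhcp=True,                              # addresses served by the neutron DHCP agent
        allocation_pools=[{"start": "192.0.2.10", "end": "192.0.2.200"}])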

Note

OpenDaylight can operate as a DHCP server. However, using the neutron DHCP agent provides High Availability (HA) and support for VM instance metadata (cloud-init). Therefore, Red Hat recommends that you deploy the DHCP agent rather than relying on OpenDaylight for this functionality.

Note

Red Hat OpenStack Platform supports both IPv4 and IPv6 tenant networks.

5.4. Routing between OpenStack networks

OpenDaylight provides support for Layer 3 (L3) routing between OpenStack networks whenever a user defines a virtual router device. Routing is supported between different networks of the same project (tenant), which is commonly referred to as East-West routing.

OpenDaylight uses a distributed virtual routing paradigm, so that forwarding is done locally on each compute node.
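
A minimal East-West routing sketch with openstacksdk might look like the following; it assumes two existing subnets named "subnet-a" and "subnet-b" in the same project, and a cloud entry named "overcloud".

    import openstack

    conn = openstack.connect(cloud="overcloud")          # assumed clouds.yaml entry

    # Create a virtual router and attach both tenant subnets to it.
    router = conn.network.create_router(name="demo-router")
    for subnet_name in ("subnet-a", "subnet-b"):          # assumed subnet names
        subnet = conn.network.find_subnet(subnet_name)
        conn.network.add_interface_to_router(router, subnet_id=subnet.id)

    # Instances on subnet-a and subnet-b can now reach each other;
    # OpenDaylight programs the routing flows locally on each compute node.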

Note

Red Hat OpenStack Platform supports both IPv4 and IPv6 tenant networks.

5.5. Floating IPs

A floating IP is a 1-to-1 IPv4 address mapping between a floating address and the fixed IP address assigned to the instance in the tenant network. After a user assigns a floating IP address to a VM instance, that address is used for any incoming or outgoing external communication. The Red Hat OpenStack Platform director includes a default template in which each compute role has external connectivity for floating IP communication. These external connections support both flat (untagged) and VLAN based networks.
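
For illustration, the following openstacksdk sketch allocates a floating IP from an external network and maps it to an instance port; the cloud name, external network name, and port name are assumptions.

    import openstack

    conn = openstack.connect(cloud="overcloud")            # assumed clouds.yaml entry

    ext_net = conn.network.find_network("external")        # assumed external network name
    port = conn.network.find_port("vm-a-port")             # assumed port attached to the instance

    # Allocate a floating IP and associate it with the instance port (1-to-1 mapping).
    fip = conn.network.create_ip(
        floating_network_id=ext_net.id,
        port_id=port.id)
    print(fip.floating_ip_address)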

5.6. Security Groups

OpenDaylight provides support for tenant-configurable Security Groups that allow a tenant to control what traffic can flow in and out of VM instances. Security Groups can be assigned per VM port or per neutron network, and they filter traffic based on TCP/IP characteristics such as IP address, IP protocol number, TCP/UDP port number, and ICMP code.

By default, each instance is assigned a default Security Group, where outgoing traffic is allowed but all incoming traffic to the VM is blocked. The only exception is trusted control plane traffic, such as ARP and DHCP. In addition, anti-spoofing rules are present, so a VM cannot send or receive packets with MAC or IP addresses that are unknown to neutron. OpenDaylight also supports the neutron port security extension, which allows tenants to turn security filtering on or off on a per-port basis.

OpenDaylight implements the Security Groups rules within OVS in a stateful manner, by leveraging OpenFlow and conntrack.
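As a sketch, the following openstacksdk example creates a Security Group that allows inbound SSH and applies it to an existing port; the cloud name, group name, port name, and source CIDR are assumptions.

    import openstack

    conn = openstack.connect(cloud="overcloud")            # assumed clouds.yaml entry

    # Create a Security Group with a single ingress rule for SSH.
    sg = conn.network.create_security_group(name="allow-ssh")
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress", ethertype="IPv4",
        protocol="tcp", port_range_min=22, port_range_max=22,
        remote_ip_prefix="203.0.113.0/24")                 # assumed management network

    # Apply the group to an instance port.
    port = conn.network.find_port("vm-a-port")             # assumed port name
    conn.network.update_port(port, security_group_ids=[sg.id])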

5.7. IPv6

IPv6 is an Internet Layer protocol for packet-switched networking and provides end-to-end datagram transmission across multiple IP networks, similarly to the previous implementation known as IPv4. IPv6 offers more IP addresses to connect various devices into the network, and has other features such as stateless address autoconfiguration, network renumbering, and router announcements.

OpenDaylight in Red Hat OpenStack Platform provides some feature parity with OpenStack Networking (neutron) for IPv6 use cases. The following features are supported in OpenDaylight, and a short configuration sketch follows the list:

  • IPv6 addressing support including stateless address autoconfiguration (SLAAC), stateless DHCPv6 and stateful DHCPv6 modes
  • IPv6 Security Groups along with allowed address pairs
  • IPv6 VM to VM communication in same network
  • IPv6 East-West routing
  • Dual Stack (IPv4/IPv6) networks
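
The following openstacksdk sketch creates an IPv6 subnet in SLAAC mode on an existing network, which yields a dual-stack network when combined with an IPv4 subnet; the cloud name, network name, and documentation prefix are assumptions.

    import openstack

    conn = openstack.connect(cloud="overcloud")            # assumed clouds.yaml entry

    net = conn.network.find_network("demo-net")            # assumed existing network
    conn.network.create_subnet(
        network_id=net.id, name="demo-subnet-v6",
        ip_version=6, cidr="2001:db8:1::/64",              # IPv6 documentation prefix
        ipv6_address_mode="slaac", ipv6_ra_mode="slaac")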

5.8. VLAN-aware VMs

VLAN-aware VMs (or VMs with trunking support) can connect to one or more networks over one virtual NIC (vNIC). Multiple networks can be presented to an instance by connecting it to a single port. With network trunking, you can create a port, associate it with a trunk, and launch an instance on that port. Later, additional networks can be attached to or detached from the instance dynamically without interrupting its operation.

The trunk has a parent port, with which the trunk is associated, and can have any number of child ports (subports). When you want to create an instance, you must specify the parent port of the trunk to attach the instance to it. The network presented by a subport is the network of its associated port. The VM sees the parent port as an untagged VLAN and the child ports as tagged VLANs.
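
A minimal trunking sketch with openstacksdk might look like the following; the cloud name, network names, and VLAN segmentation ID are assumptions.

    import openstack

    conn = openstack.connect(cloud="overcloud")            # assumed clouds.yaml entry

    parent_net = conn.network.find_network("demo-net")     # assumed parent network
    child_net = conn.network.find_network("demo-net-2")    # assumed second network

    # Create the parent port and the trunk, then add a VLAN-tagged subport.
    parent_port = conn.network.create_port(network_id=parent_net.id)
    trunk = conn.network.create_trunk(port_id=parent_port.id, name="demo-trunk")

    child_port = conn.network.create_port(network_id=child_net.id)
    conn.network.add_trunk_subports(trunk, [{
        "port_id": child_port.id,
        "segmentation_type": "vlan",
        "segmentation_id": 101}])                          # assumed VLAN ID

    # Boot the instance on the parent port; inside the guest, VLAN 101
    # carries the traffic of the second network.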

5.9. SNAT

With SNAT (Source Network Address Translation), VMs in a tenant network can access the external network without using floating IPs. SNAT uses NAPT (Network Address Port Translation) so that multiple virtual machines communicating over the same router gateway can share the same external IP address.

OpenDaylight supports conntrack-based SNAT, which uses the OVS netfilter integration. Netfilter maintains the translations. One switch is designated as the NAPT switch and performs the centralized translation role. All of the other switches send their packets to the centralized switch for SNAT. If the NAPT switch fails, an alternate switch is selected for the translations, but the existing translations are lost on failover.
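
For example, the following openstacksdk sketch attaches a router to an external network with SNAT enabled, so instances on the attached subnet can reach the outside without floating IPs; the cloud name, external network name, and subnet name are assumptions.

    import openstack

    conn = openstack.connect(cloud="overcloud")              # assumed clouds.yaml entry

    ext_net = conn.network.find_network("external")          # assumed external network
    tenant_subnet = conn.network.find_subnet("demo-subnet")  # assumed tenant subnet

    # Router with an external gateway and SNAT enabled (the default).
    router = conn.network.create_router(
        name="snat-router",
        external_gateway_info={"network_id": ext_net.id, "enable_snat": True})
    conn.network.add_interface_to_router(router, subnet_id=tenant_subnet.id)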

5.10. OVS-DPDK

Open vSwitch is a multilayer virtual switch that uses the OpenFlow protocol and the OVSDB interface to control the switch.

The native Open vSwitch uses the kernel space to deliver data to the applications. The kernel maintains a flow table that holds rules for forwarding the packets that pass through. Packets that do not match any rule are sent to an application in user space for further processing. When the application (a daemon) handles the packet, it makes a record in the flow table, so that subsequent packets can use a faster path. In this way, OVS can save a reasonable amount of time by bypassing the time-consuming switching between the kernel and the applications. However, this approach is still limited by the bandwidth of the Linux network stack, which makes it unsuitable for use cases that require a high rate of packet processing, such as telecommunications.

DPDK is a set of user space libraries that enable a user to build applications that process data faster. It offers several Poll Mode Drivers (PMDs) that enable packets to bypass the kernel stack and go directly to user space. This behavior speeds up communication because the traffic is handled entirely outside of the kernel space.

OpenDaylight can be deployed with Open vSwitch Data Plane Development Kit (DPDK) acceleration with director. This deployment offers higher data plane performance as packets are processed in user space rather than in the kernel.

5.11. SR-IOV integration

The Single Root I/O Virtualization (SR-IOV) specification is a standard for a type of PCI device assignment that can project a single networking device to multiple virtual machines and improve their performance. For example, SR-IOV enables a single Ethernet port to appear as multiple, separate, physical devices. A physical device with SR-IOV capabilities can be configured to appear in the PCI configuration space as multiple functions. SR-IOV distinguishes between Physical Functions (PFs) and Virtual Functions (VFs). PFs are full PCIe devices with SR-IOV capabilities; they provide the same functionality as PCI devices, and the VFs are associated with them.

VFs are simple PCIe functions that derive from PFs. The number of VFs a device can have is limited by the device hardware. A single Ethernet port, the physical device, can map to many VFs that can be shared with virtual machines through the hypervisor, which maps one or more VFs to each VM.

Each VF can be mapped to only a single guest at a time, because it requires real hardware resources. A virtual machine can have multiple VFs. To the virtual machine, a VF appears as a networking interface.

The main advantage is that SR-IOV devices can share a single physical port with multiple virtual machines. Furthermore, VFs have near-native performance, provide better performance than para-virtualized drivers and emulated access, and provide data protection between virtual machines on the same physical server.

OpenDaylight in Red Hat OpenStack Platform 13 can be deployed with compute nodes that support SR-IOV. The SR-IOV deployment requires the neutron SR-IOV agent to configure the VFs, which are passed directly to the compute instance as network ports when it is deployed.
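
To illustrate, the following openstacksdk sketch creates a port with the direct VNIC type (backed by an SR-IOV VF) and boots an instance on it; the cloud name, network, image, and flavor names are assumptions.

    import openstack

    conn = openstack.connect(cloud="overcloud")              # assumed clouds.yaml entry

    net = conn.network.find_network("sriov-net")             # assumed SR-IOV provider network
    # A "direct" VNIC type requests an SR-IOV VF for this port.
    port = conn.network.create_port(
        network_id=net.id, binding_vnic_type="direct")

    image = conn.compute.find_image("rhel-7.5")              # assumed image name
    flavor = conn.compute.find_flavor("m1.small")            # assumed flavor name
    conn.compute.create_server(
        name="sriov-vm", image_id=image.id, flavor_id=flavor.id,
        networks=[{"port": port.id}])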

5.12. Controller clustering

High availability (HA) is the continued availability of a service when individual systems providing it fail. The OpenDaylight Controller in Red Hat OpenStack Platform supports a cluster based High Availability model. Several instances of the OpenDaylight Controller form a controller cluster. Together they work as one logical controller. The service provided by the controller is viewed as a logical unit and continues to operate as long as a majority of the controller instances are functional and able to communicate with each other.

Note

Red Hat recommends that you deploy OpenDaylight as a three-node cluster. After you deploy the cluster, do not modify it.

5.13. Hardware VXLAN VTEP (L2GW)

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Layer 2 (L2) gateway services allow a tenant’s virtual network to bridge to a physical network. This integration provides users with the capability to access resources on a physical server through a layer 2 network connection rather than through a routed layer 3 (L3) connection. That means extending the layer 2 broadcast domain instead of going through L3 or Floating IPs.

To implement this, create a bridge between the virtual workloads running inside an overlay (VXLAN) and workloads running in physical networks (normally using VLAN). This requires some sort of control over the physical top-of-rack (ToR) switch to which the physical workload is connected. Hardware VXLAN Gateway (HW VTEP) can help with that.

HW VTEP (VXLAN Tunnel End Point) resides on the ToR switch itself and performs VXLAN encapsulation and de-encapsulation. Each VTEP device has two interfaces: a VLAN interface facing the physical server and an IP interface to other VTEPs. The idea behind hardware VTEPs is to create an overlay network that connects VMs and physical servers and lets them communicate as if they were on the same L2 network.

Red Hat OpenStack customers can benefit from an L2GW to integrate traditional bare metal services into a neutron overlay. This is useful for bridging external physical workloads into a neutron tenant network, for bringing a bare metal server managed by OpenStack (BMaaS/Ironic) into a tenant network, and for bridging SR-IOV traffic into a VXLAN overlay, taking advantage of the line-rate speed of SR-IOV and the benefits of an overlay network to interconnect SR-IOV VMs.