2.11. OpenDaylight (Technology Preview)

This section outlines the top new features for the OpenDaylight service.
Improved Red Hat OpenStack Platform director integration
The Red Hat OpenStack Platform director installs and manages a complete OpenStack environment. With Red Hat OpenStack Platform 12, the director can deploy and configure OpenStack to work with OpenDaylight. OpenDaylight can run together with the OpenStack overcloud controller role, or in a separate custom role on a different node.
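In tripleo-heat-templates, a custom role is defined by listing the composable services it runs in `roles_data.yaml`. The following fragment is an illustrative sketch of a dedicated OpenDaylight role, not a shipped template; the service name follows the `OS::TripleO::Services::` convention, and a real role definition also needs the base services from the packaged `roles_data.yaml`:

```yaml
# Illustrative roles_data.yaml entry for a dedicated OpenDaylight role.
# Sketch only: a deployable role also includes the common base services
# (time sync, kernel, and so on) from the shipped roles_data.yaml.
- name: OpenDaylight
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::OpenDaylightApi
```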
In Red Hat OpenStack Platform 12, OpenDaylight is installed and run in containers, which provides more flexibility in how the service is maintained and upgraded.
IPv6
OpenDaylight in Red Hat OpenStack Platform 12 brings partial feature parity with the OpenStack neutron ML2/OVS implementation for IPv6 use cases. These use cases include:
  • IPv6 addressing support including SLAAC
  • Stateless and Stateful DHCPv6
  • IPv6 Security Groups with allowed address pairs
  • IPv6 communication among virtual machines in the same network
  • IPv6 East-West routing support
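With SLAAC, an instance derives its own address from the router-advertised /64 prefix and its MAC address using the modified EUI-64 scheme (RFC 4291). A minimal Python sketch of that derivation (illustrative only, not OpenDaylight code):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a MAC address,
    as used by SLAAC: flip the universal/local bit and insert ff:fe."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]       # insert ff:fe
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

def slaac_address(prefix: str, mac: str) -> str:
    """Combine the /64 prefix from a Router Advertisement with the EUI-64 ID."""
    return prefix.rstrip(":") + ":" + eui64_interface_id(mac)

# Example: neutron-style MAC on a documentation prefix
print(slaac_address("2001:db8:0:1", "fa:16:3e:11:22:33"))
# → 2001:db8:0:1:f816:3eff:fe11:2233
```

Stateful DHCPv6, by contrast, assigns the full address from the server, and stateless DHCPv6 combines SLAAC addressing with server-supplied options such as DNS.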
VLAN-aware virtual machines
VLAN-aware virtual machines (or virtual machines with trunking support) allow an instance to be connected to one or more networks over one virtual NIC (vNIC). Multiple networks can be presented to an instance by connecting it to a single port. Network trunking lets users create a port, associate it with a trunk, and launch an instance on that port. Later, additional networks can be attached to or detached from the instance dynamically without interrupting the operations of that instance.
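The trunk model behind this feature can be sketched in a few lines of Python: one parent port carries untagged traffic to the instance's vNIC, while each subport maps a VLAN segmentation ID to an additional network, and subports can be attached or detached without touching the parent. This is an illustrative model, not the neutron implementation:

```python
class Trunk:
    """Minimal model of a neutron trunk: one parent port plus subports,
    each keyed by the VLAN segmentation ID the guest tags traffic with."""

    def __init__(self, parent_port):
        self.parent_port = parent_port
        self.subports = {}                 # segmentation_id -> port name

    def add_subport(self, port, segmentation_id):
        # each VLAN ID may appear only once on a given trunk
        if segmentation_id in self.subports:
            raise ValueError(f"VLAN {segmentation_id} already in use")
        self.subports[segmentation_id] = port   # parent port is untouched

    def remove_subport(self, segmentation_id):
        del self.subports[segmentation_id]      # detach dynamically

trunk = Trunk(parent_port="port-a")             # instance boots on port-a
trunk.add_subport("port-b", segmentation_id=101)
trunk.add_subport("port-c", segmentation_id=102)
trunk.remove_subport(101)                       # no instance interruption
print(sorted(trunk.subports))                   # → [102]
```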
SNAT
Red Hat OpenStack Platform 12 introduces conntrack-based SNAT, which uses the OVS netfilter integration to maintain the translations. One switch per router is selected as the NAPT switch and performs the centralized translation; all other switches send their packets to it for SNAT. If the NAPT switch goes down, an alternate switch is selected to take over the translations, but the existing translations are lost on failover.
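The centralized translation and its failover behavior can be sketched as follows; this is a simplified illustration of the model described above, not OpenDaylight code:

```python
class CentralizedSnat:
    """Sketch of centralized conntrack-based SNAT: one NAPT switch per
    router holds the translation table; failover loses that state."""

    def __init__(self, switches):
        self.switches = switches
        self.napt_switch = switches[0]      # elected NAPT switch
        self.translations = {}              # (src_ip, src_port) -> (ext_ip, ext_port)

    def translate(self, src, external_ip, external_port):
        # all other switches forward the flow here for translation
        self.translations[src] = (external_ip, external_port)

    def fail_over(self):
        remaining = [s for s in self.switches if s != self.napt_switch]
        self.napt_switch = remaining[0]     # an alternate switch takes over
        self.translations.clear()           # existing translations are lost

snat = CentralizedSnat(["sw1", "sw2", "sw3"])
snat.translate(("10.0.0.5", 40000), "203.0.113.10", 50000)
snat.fail_over()
print(snat.napt_switch, len(snat.translations))   # → sw2 0
```

Established connections therefore need to be re-translated after a failover, which is the practical consequence of the state being kept only on the active NAPT switch.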
SR-IOV Integration
OpenDaylight in Red Hat OpenStack Platform 12 can be deployed with compute nodes that support SR-IOV. It is also possible to create mixed environments with both OVS and SR-IOV nodes in a single OpenDaylight installation. The SR-IOV deployment requires the neutron SR-IOV agent to configure the virtual functions (VFs), which are passed directly to the compute instance as network ports when it is deployed.
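The SR-IOV agent learns which physical functions (PFs) it manages through a mapping from neutron physical networks to host interfaces. A sketch of the relevant `sriov_agent.ini` fragment follows; the physical network and interface names here are placeholders, not values from this release:

```ini
# sriov_agent.ini (sketch; physnet and device names are placeholders)
[sriov_nic]
# map the neutron physical network to the SR-IOV capable interface (PF)
physical_device_mappings = datacentre:ens2f0
```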
Controller Clustering
The OpenDaylight Controller in Red Hat OpenStack Platform 12 supports a cluster-based High Availability model. Several instances of the OpenDaylight Controller form a Controller Cluster. Together, they work as one logical controller. The service provided by the controller (viewed as a logical unit) will continue to operate as long as a majority of the controller instances are functional and able to communicate with each other.
The Red Hat OpenDaylight Clustering model provides both High Availability and horizontal scaling: more nodes can be added to absorb more load, if necessary.
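The majority requirement stated above is a standard quorum rule: a cluster of N members stays operational only while more than N/2 of them can communicate. A one-line Python illustration:

```python
def cluster_operational(total, reachable):
    """A quorum-based cluster works only while a strict majority
    of its members can communicate with each other."""
    return reachable > total // 2

# A 3-node OpenDaylight cluster tolerates one failure, not two:
print([cluster_operational(3, up) for up in (3, 2, 1)])  # → [True, True, False]
```

This is also why such clusters are deployed with an odd number of nodes: a 4-node cluster still tolerates only one failure more than a 3-node cluster would suggest, since 2 of 4 is not a majority.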
OVS-DPDK
OpenDaylight in Red Hat OpenStack Platform 12 may be deployed with Open vSwitch Data Plane Development Kit (DPDK) acceleration with director. This deployment offers higher data plane performance as packets are processed in user space rather than in the kernel.
L2GW/HW-VTEP
Red Hat OpenStack Platform 12 supports L2GW to integrate traditional bare-metal services into a neutron overlay. This is especially useful for bridging external physical workloads into a neutron tenant network, for bringing a bare-metal server (managed by OpenStack) into a tenant network, and for bridging SR-IOV traffic into a VXLAN overlay. The last case combines the line-rate speed of SR-IOV with the benefits of an overlay network for interconnecting SR-IOV virtual machines.
The networking-odl Package
Red Hat OpenStack Platform 12 offers a new version of the networking-odl package that brings important changes. It introduces port status update support, which provides accurate information on the status of a port and on when the port becomes available for a virtual machine to use. The default port binding mechanism changes from network-topology based to pseudo-agent based; network-topology port binding is not available in this release. Customers using network-topology-based port binding should migrate to pseudo-agent-based port binding (pseudo-agentdb-binding).
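The binding mechanism is selected in the networking-odl section of the ML2 plug-in configuration. A sketch of the relevant fragment follows; treat the file location and surrounding options as illustrative, and consult the release documentation for the exact deployment procedure:

```ini
# ml2_conf.ini (sketch)
[ml2_odl]
# use pseudo agent based port binding instead of network topology
port_binding_controller = pseudo-agentdb-binding
```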