Chapter 1. Overview

1.1. What is OpenDaylight?

The OpenDaylight platform is a Java-based programmable SDN controller that can be used to provide network virtualization for OpenStack environments. The controller architecture consists of separated northbound and southbound interfaces. For OpenStack integration purposes, the main northbound interface uses the NeutronNorthbound project, which communicates with neutron, the OpenStack Networking service. The southbound OpenDaylight projects, the OVSDB and OpenFlow plug-ins, are used to communicate with the Open vSwitch (OVS) control plane and data plane. The main OpenDaylight project that translates the neutron configuration into network virtualization is the NetVirt project.

1.2. How does OpenDaylight work with OpenStack?

1.2.1. The default neutron architecture

The neutron reference architecture uses a series of agents to manage networks within OpenStack. These agents are provided to neutron as different plug-ins. The core plug-ins are used to manage the Layer 2 overlay technologies and data plane types. The service plug-ins are used to manage network operations for Layer 3 or higher in the OSI model, such as firewall, DHCP, routing and NAT.

By default, Red Hat OpenStack Platform uses the Modular Layer 2 (ML2) core plug-in with the OVS mechanism driver, which provides an agent to configure OVS on each Compute and Controller node. The service plug-ins, the DHCP agent, the metadata agent, and the L3 agent run on the Controller nodes.

1.2.2. Networking architecture based on OpenDaylight

OpenDaylight integrates with the ML2 core plug-in by providing its own driver called networking-odl. This removes the need to run the OVS agent on every node. OpenDaylight can program each OVS instance across the environment directly, without any per-node agents. For Layer 3 services, neutron is configured to use the OpenDaylight L3 plug-in. This approach reduces the number of agents that handle routing and network address translation (NAT), because OpenDaylight provides the distributed virtual routing functionality by programming the data plane directly. The neutron DHCP and metadata agents are still used for managing DHCP and metadata (cloud-init) requests.
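For reference, the following is a minimal sketch of the neutron settings that such a deployment typically results in, expressed as a director environment file. The driver and plug-in names (opendaylight_v2, odl-router_v2) vary between networking-odl releases, so treat this as an illustration rather than a definitive configuration:

# Hedged sketch: enable the OpenDaylight ML2 mechanism driver and L3 plug-in.
# The driver and plug-in names depend on the networking-odl version shipped
# with your release; verify them before use.
parameter_defaults:
  NeutronMechanismDrivers: 'opendaylight_v2'
  NeutronServicePlugins: 'odl-router_v2'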

Note

OpenDaylight is able to provide DHCP services. However, in the current Red Hat OpenStack Platform director architecture, the neutron DHCP agent provides High Availability (HA) and support for VM instance metadata (cloud-init). Therefore, Red Hat recommends that you deploy the neutron DHCP agent rather than relying on OpenDaylight for this functionality.

1.3. What is Red Hat OpenStack Platform director and how is it designed?

The Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. It is primarily based on the OpenStack TripleO (OpenStack-On-OpenStack) program.

The project uses OpenStack components to install a fully operational OpenStack environment. It also includes new OpenStack components that provision and control bare metal systems to work as OpenStack nodes. With this approach, you can install a complete Red Hat OpenStack Platform environment that is both lean and robust.

The Red Hat OpenStack Platform director uses two main concepts: an undercloud and an overcloud. The undercloud installs and configures the overcloud. For more information about the Red Hat OpenStack Platform director architecture, see the Director Installation and Usage guide.

Figure 1.1. OpenStack Platform Director — undercloud and overcloud


1.3.1. Red Hat OpenStack Platform director and OpenDaylight

Red Hat OpenStack Platform director 10 introduces composable services and custom roles. Composable services are isolated resources that can be included and enabled per role as needed. Custom roles enable users to create their own roles, independent of the default Controller and Compute roles. In other words, users can now choose which OpenStack services to deploy and which nodes host them.

Two new services have been added for OpenDaylight integration:

  • The OpenDaylightApi service for running the OpenDaylight SDN controller, and
  • The OpenDaylightOvs service for configuring OVS on each node to properly communicate with OpenDaylight.

By default, the OpenDaylightApi service is configured to run on the Controller role, while the OpenDaylightOvs service is configured to run on the Controller and Compute roles.
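As an illustration, a roles_data.yaml fragment for this default placement might look like the following sketch. Real role definitions contain many more services; only the OpenDaylight-related entries are shown, and the service names assume the tripleo-heat-templates shipped with this release:

# Hedged sketch: OpenDaylight composable services in roles_data.yaml.
# Only the OpenDaylight-related entries are shown.
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::OpenDaylightApi   # the OpenDaylight SDN controller
    - OS::TripleO::Services::OpenDaylightOvs   # OVS configuration for OpenDaylight
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::OpenDaylightOvs   # OVS configuration for OpenDaylight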

Note

In Red Hat OpenStack Platform 10, only a single instance of OpenDaylight is supported. In a deployment with multiple overcloud Controller nodes, the OpenDaylightApi service is applied to each Controller role; however, only the first Controller node actually enables OpenDaylight.

Figure 1.2. OpenDaylight and OpenStack — base architecture


1.3.2. Network isolation in Red Hat OpenStack Platform director

Red Hat OpenStack Platform director is capable of configuring individual services to use specific, predefined network types. These network traffic types include:

IPMI

The network used for the power management of nodes. This network must be set up before the installation of the undercloud.

Provisioning (ctlplane)

The director uses this network traffic type to deploy new nodes over DHCP and PXE boot, and to orchestrate the installation of OpenStack Platform on the overcloud bare metal servers. The network must be set up before the installation of the undercloud. Alternatively, operating system images can be deployed directly by ironic, in which case PXE boot is not necessary.

Internal API (internal_api)

The Internal API network is used for communication between the OpenStack services, including API calls, RPC messages, and database traffic, as well as for internal communication behind the load balancer.

Tenant (tenant)

neutron provides each tenant with their own networks using either VLANs (where each tenant network is a network VLAN), or overlay tunnels. Network traffic is isolated within each tenant network. If tunneling is used, multiple tenant networks can use the same IP address range without any conflicts.

Note

While both Generic Routing Encapsulation (GRE) and Virtual eXtensible Local Area Network (VXLAN) are available in the codebase, VXLAN is the recommended tunneling protocol to use with OpenDaylight. VXLAN is defined in RFC 7348. The rest of this document focuses on VXLAN whenever tunneling is used; a configuration sketch for selecting VXLAN follows this list of network types.

Storage (storage)

Block Storage, NFS, iSCSI, and others. Ideally, this would be isolated to an entirely separate switch fabric for performance reasons.

Storage Management (storage_mgmt)

OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as an intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph backend connect over the Storage Management network, since they do not interact with Ceph directly but rather use the frontend service. Note that the RBD driver is an exception, as this traffic connects directly to Ceph.

External/Public API

This network hosts the OpenStack Dashboard (horizon) for graphical system management and the public APIs for OpenStack services, and it performs SNAT for incoming traffic going to the instances. If the external network uses private IP addresses (as defined in RFC 1918), then further NAT must be performed for any traffic coming in from the internet.

Floating IPs

Allows incoming traffic to reach instances using 1-to-1 IPv4 address mapping between the floating IP address and the fixed IP address assigned to the instance in the tenant network. A common configuration is to combine the external and floating IP networks instead of maintaining a separate floating IP network.

Management

Provides access for system administration functions such as SSH access, DNS traffic, and NTP traffic. This network also acts as a gateway for non-controller nodes.
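As noted for the Tenant network above, VXLAN is the recommended tunneling protocol with OpenDaylight. A minimal, hedged sketch of the corresponding director environment settings follows; NeutronNetworkType and NeutronTunnelTypes are the usual tripleo-heat-templates parameters, but verify them against your release:

# Hedged sketch: select VXLAN as the tenant network and tunnel type.
parameter_defaults:
  NeutronNetworkType: 'vxlan'
  NeutronTunnelTypes: 'vxlan'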

In a typical Red Hat OpenStack Platform installation, the number of network types often exceeds the number of physical network links. To connect all the networks to the proper hosts, the overcloud may use 802.1Q VLAN tagging to deliver more than one network per interface. Most of the networks are isolated subnets, but some require a Layer 3 gateway to provide routing for Internet access or infrastructure network connectivity.
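For illustration, the following os-net-config style fragment sketches how a single NIC can carry several isolated networks as tagged VLANs on an OVS bridge. The interface name, bridge name, and VLAN parameters are placeholders, not values taken from this document:

# Hedged sketch: one physical NIC (nic2) delivering several overcloud
# networks as 802.1Q VLANs on an OVS bridge. Names and parameters are
# illustrative placeholders.
network_config:
  - type: ovs_bridge
    name: br-isolated
    members:
      - type: interface
        name: nic2
      - type: vlan
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}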

For OpenDaylight, the relevant networks include the Internal API, Tenant, and External networks; the services are mapped to these networks in the ServiceNetMap. By default, the ServiceNetMap maps the OpenDaylightApi service to the Internal API network. This configuration means that northbound traffic to neutron, as well as southbound traffic to OVS, is isolated to the Internal API network.
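If you need to place the OpenDaylight API on a different network, the ServiceNetMap can be overridden in an environment file. A hedged sketch follows; the key name OpendaylightApiNetwork is an assumption based on the default ServiceNetMap and should be checked against the tripleo-heat-templates in use:

# Hedged sketch: ServiceNetMap override for the OpenDaylight API service.
# The key name (OpendaylightApiNetwork) and the mapping to internal_api are
# assumptions; verify against your templates.
parameter_defaults:
  ServiceNetMap:
    OpendaylightApiNetwork: internal_api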

As OpenDaylight uses a distributed routing architecture, each Compute node should be connected to the Floating IP network. By default, Red Hat OpenStack Platform director assumes that the External network runs on the neutron physical network named datacentre, which is mapped to the OVS bridge br-ex. Therefore, you must include the br-ex bridge in the default configuration of the Compute node NIC templates.
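The mapping itself is controlled by the NeutronBridgeMappings parameter, and the bridge must be defined in the Compute NIC templates. The following is a hedged sketch of both pieces; the physical interface name (nic3) is a placeholder:

# Hedged sketch: map the datacentre physical network to br-ex and define
# the bridge in the Compute node NIC template. nic3 is a placeholder.
parameter_defaults:
  NeutronBridgeMappings: 'datacentre:br-ex'

# Fragment of the Compute node NIC template (os-net-config format):
network_config:
  - type: ovs_bridge
    name: br-ex
    members:
      - type: interface
        name: nic3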

Figure 1.3. OpenDaylight and OpenStack — Network isolation example


1.3.3. Network and firewall configuration

In some deployments, such as those where restrictive firewalls are in place, you might need to configure the firewall manually to enable OpenStack and OpenDaylight service traffic.

By default, OpenDaylight Northbound uses ports 8080 and 8181. To avoid a conflict with the swift service, which also uses port 8080, the OpenDaylight ports are set to 8081 and 8181 when installed with Red Hat OpenStack Platform director. In the Red Hat OpenDaylight solution, the Southbound is configured to listen on ports 6640 and 6653, which the OVS instances connect to.

Warning

The default OpenDaylight southbound port 6633 is also open and available. If you do not use that port, consider closing it to reduce security risks.

In OpenStack, each service typically has its own virtual IP address (VIP), and OpenDaylight behaves the same way. HAProxy is configured to open port 8081 on the public and control plane VIPs that are already present in OpenStack. The VIP and the port are presented to the ML2 plug-in, and neutron sends all communication through them. For Southbound, the OVS instances connect directly to the physical IP address of the node where OpenDaylight is running.

Service                             Protocol   Default Ports   Network
OpenStack Neutron API               TCP        9696            Internal API
OpenStack Neutron API (SSL)         TCP        13696           Internal API
OpenDaylight Northbound             TCP        8081, 8181      Internal API
OpenDaylight Southbound: OVSDB      TCP        6640            Internal API
OpenDaylight Southbound: OpenFlow   TCP        6653            Internal API
VXLAN                               UDP        4789            Tenant

Table 1: Network and Firewall configuration
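If your environment requires the firewall to be opened manually, one possible approach on director-based deployments is to pass extra firewall rules through hieradata. This is a hedged sketch: the tripleo::firewall::firewall_rules hiera key and the rule format come from puppet-tripleo and may differ in your release, so verify them before applying:

# Hedged sketch: open the OpenDaylight and VXLAN ports from Table 1 through
# puppet-tripleo firewall hieradata. The hiera key and rule format are
# assumptions; confirm them against the puppet-tripleo version in use.
parameter_defaults:
  ExtraConfig:
    tripleo::firewall::firewall_rules:
      '141 opendaylight northbound':
        proto: tcp
        dport: [8081, 8181]
      '142 opendaylight southbound ovsdb':
        proto: tcp
        dport: 6640
      '143 opendaylight southbound openflow':
        proto: tcp
        dport: 6653
      '144 vxlan tunnels':
        proto: udp
        dport: 4789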

Note

The above section focuses on the services and protocols relevant to the OpenDaylight integration and is not exhaustive. For a complete list of network ports required for services running on Red Hat OpenStack Platform, see the Configure Firewall Rules for Red Hat OpenStack Platform Director guide.