Chapter 4. Networking

4.1. Logical Networks

A typical Red Hat OpenStack Platform deployment using Red Hat OpenStack director requires creating a number of logical networks: some carry internal traffic, such as the storage network or OpenStack internal API traffic, while others carry external and tenant traffic. For the validation lab, the following types of networks were created, as shown in Table 2:



External / Floating IP

Hosts the OpenStack Dashboard (horizon) for graphical system management, the public APIs for OpenStack services, and performs SNAT for incoming traffic destined for instances. The Floating IP portion allows incoming traffic to reach instances using 1-to-1 IP address mapping between the floating IP address and the IP address actually assigned to the instance in the tenant network. It should be noted that, for convenience in the NFV lab, VLAN 100 combines both the public API and external networks; for production deployments it is a best practice to segregate the public API and Floating IP networks into their own VLANs.
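In a director deployment, the external network is typically defined in a network environment file. The following fragment is a sketch only: the CIDR, allocation pool, and gateway values are hypothetical lab values, while the VLAN ID matches the VLAN 100 noted above.

```yaml
parameter_defaults:
  # Hypothetical addressing for the combined public API / external network (VLAN 100)
  ExternalNetCidr: 10.0.100.0/24
  ExternalAllocationPools: [{'start': '10.0.100.10', 'end': '10.0.100.100'}]
  ExternalInterfaceDefaultRoute: 10.0.100.1
  ExternalNetworkVlanID: 100
```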


NIC 3/4 Bond

Provisioning (PXE)

The director uses this network traffic type to deploy new nodes over PXE boot and orchestrate the installation of OpenStack Platform on the Overcloud bare metal servers. This network is predefined before the installation of the Undercloud.
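The provisioning network is defined in undercloud.conf before the undercloud is installed. A minimal sketch, assuming the 192.0.2.0/24 control plane subnet used as the default in the director documentation (option names vary slightly between releases, and the interface name is hypothetical):

```ini
[DEFAULT]
local_interface = eth0          ; NIC attached to the provisioning network (hypothetical name)
local_ip = 192.0.2.1/24         ; undercloud IP on the provisioning network
network_cidr = 192.0.2.0/24     ; subnet used for PXE boot and deployment
dhcp_start = 192.0.2.5          ; DHCP range handed out to nodes during deployment
dhcp_end = 192.0.2.24
inspection_iprange = 192.0.2.100,192.0.2.120  ; range used during introspection
```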



Internal API

The Internal API network is used for communication between the OpenStack services via API calls, RPC messages, and database traffic.


NIC 3/4 Bond


Tenant Networks

Neutron provides each tenant with its own networks using either VLAN segregation (where each tenant network is a separate VLAN) or tunneling (through VXLAN or GRE). Network traffic is isolated within each tenant network. Each tenant network has an IP subnet associated with it, and network namespaces mean that multiple tenant networks can use the same address range without causing conflicts.
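In director-based deployments, the tenant network segmentation type is selected through Heat parameters in a network environment file. A minimal sketch assuming VXLAN tunneling, with a hypothetical VLAN range on the 'datacentre' physical network:

```yaml
parameter_defaults:
  NeutronNetworkType: 'vxlan'          # tenant networks use VXLAN tunnels
  NeutronTunnelTypes: 'vxlan'
  NeutronNetworkVLANRanges: 'datacentre:200:299'  # hypothetical range for VLAN-segmented networks
```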


NIC 3/4 Bond

Provider Network (SR-IOV & OVS-DPDK)

It should be noted that for SR-IOV and OVS-DPDK, several VLANs (network segments) must be created in order to support mobile network deployment. These include the dataplane, NFV control plane, and NFV management segments.


NIC 5/6
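As a sketch of how the NIC 5/6 data plane bond above can be expressed in an os-net-config template for OVS-DPDK (bridge, bond, and port names are hypothetical; the provider VLAN segments would be trunked over this bridge):

```yaml
network_config:
  - type: ovs_user_bridge        # userspace (DPDK-enabled) OVS bridge for provider networks
    name: br-link
    members:
      - type: ovs_dpdk_bond
        name: dpdkbond0
        members:
          - type: ovs_dpdk_port
            name: dpdk0
            members:
              - type: interface
                name: nic5       # keep both bond members on the same NUMA node
          - type: ovs_dpdk_port
            name: dpdk1
            members:
              - type: interface
                name: nic6
```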


Storage

Block Storage, NFS, iSCSI, and others. Ideally, this would be isolated to an entirely separate switch fabric for performance reasons.


NIC 3/4 Bond

Storage Management

OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph backend connect over the Storage Management network, since they do not interact with Ceph directly but rather use the frontend service. Note that the RBD driver is an exception, as this traffic connects directly to Ceph.


NIC 3/4 Bond

Table 2: NFV validation lab components for HA deployment using Red Hat OpenStack director


Note that the controller nodes must be connected to the provider networks if DHCP functionality is required on the provider networks.


When bonding two or more ports, best practice is to avoid bonding ports that are on different NUMA nodes as this will force lookups across the Quick Path Interconnect (QPI) bus resulting in sub-optimal performance. This is generally true regardless of whether the bond is being created for control plane, management, internal API traffic (NIC 3/4 bond) or for data plane traffic (NIC 5/6 bond).


Some networks used for deploying mobile VNFs require larger Maximum Transmission Unit (MTU) sizes to be configured on the interfaces. Since this document focuses on generic infrastructure deployment, the configuration templates (yaml files) required for setting the MTU sizes are shared in the GitHub repository listed in the appendix.
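For instance, a jumbo-frame MTU can be set per interface in the nic-config templates. A hedged fragment, with an illustrative interface name and MTU value only:

```yaml
network_config:
  - type: interface
    name: nic5
    mtu: 9000    # jumbo frames for VNF dataplane traffic; the exact value depends on the fabric
```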


Figure 1: Network connectivity overview