4.2. Planning Networks

It is important to plan your environment's networking topology and subnets so that roles and services can communicate with each other correctly. Red Hat Enterprise Linux OpenStack Platform 7 uses the OpenStack Networking (neutron) service, which manages software-based networks, static and floating IP addresses, and DHCP. The director deploys this service on each Controller node in an Overcloud environment.
Red Hat Enterprise Linux OpenStack Platform maps different services to separate network traffic types, which are assigned to the various subnets in your environment. These network traffic types include:

Table 4.2. Network Type Assignments

IPMI
  Description: Network used for power management of nodes. This network is predefined before the installation of the Undercloud.
  Used By: All nodes

Provisioning
  Description: The director uses this network traffic type to deploy new nodes over PXE boot and orchestrate the installation of OpenStack Platform on the Overcloud bare metal servers. This network is predefined before the installation of the Undercloud.
  Used By: All nodes

Internal API
  Description: The Internal API network is used for communication between the OpenStack services, including API calls, RPC messages, and database traffic.
  Used By: Controller, Compute, Cinder Storage, Swift Storage

Tenant
  Description: Neutron provides each tenant with its own networks using either VLAN segregation, where each tenant network is a VLAN, or tunneling through VXLAN or GRE. Network traffic is isolated within each tenant network. Each tenant network has an IP subnet associated with it, and multiple tenant networks may use the same addresses.
  Used By: Controller, Compute

Storage
  Description: Block Storage, NFS, iSCSI, and others. Ideally, this would be isolated to an entirely separate switch fabric for performance reasons.
  Used By: All nodes

Storage Management
  Description: OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph backend connect over the Storage Management network, since they do not interact with Ceph directly but rather use the frontend service. Note that the RBD driver is an exception; this traffic connects directly to Ceph.
  Used By: Controller, Ceph Storage, Cinder Storage, Swift Storage

External
  Description: Hosts the OpenStack Dashboard (horizon) for graphical system management, the Public APIs for OpenStack services, and performs SNAT for incoming traffic destined for instances. If the external network uses private IP addresses (as per RFC 1918), then further NAT must be performed for traffic originating from the internet.
  Used By: Controller

Floating IP
  Description: Allows incoming traffic to reach instances using 1-to-1 IP address mapping between the floating IP address and the IP address actually assigned to the instance in the tenant network. If hosting the Floating IPs on a VLAN separate from External, trunk the Floating IP VLAN to the Controller nodes and add the VLAN through Neutron after Overcloud creation (see the example after this table). This provides a means to create multiple Floating IP networks attached to multiple bridges. The VLANs are trunked but not configured as interfaces. Instead, Neutron creates an OVS port with the VLAN segmentation ID on the chosen bridge for each Floating IP network.
  Used By: Controller
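
For example, a Floating IP network on its own VLAN can be added through neutron once the Overcloud is running. The following is a minimal sketch using the neutron command line; the network name (public), VLAN ID (104), and subnet values are illustrative, and the datacentre physical network mapping is assumed from the default bridge mappings:

    # Create an external network on its own VLAN (illustrative values)
    $ source ~/overcloudrc
    $ neutron net-create public --router:external \
        --provider:network_type vlan \
        --provider:physical_network datacentre \
        --provider:segmentation_id 104

    # Create a subnet with a Floating IP allocation pool, DHCP disabled
    $ neutron subnet-create --name public_subnet --disable-dhcp \
        --allocation-pool start=10.1.2.100,end=10.1.2.200 \
        --gateway 10.1.2.1 public 10.1.2.0/24

Neutron then creates an OVS port tagged with the chosen segmentation ID on the bridge mapped to datacentre.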
In a typical Red Hat Enterprise Linux OpenStack Platform installation, the number of network types often exceeds the number of physical network links. To connect all the networks to the proper hosts, the Overcloud uses VLAN tagging to deliver more than one network per interface. Most of the networks are isolated subnets, but some require a Layer 3 gateway to provide routing for Internet access or infrastructure network connectivity.

Note

It is recommended that you deploy a Tenant network tunneled with GRE or VXLAN, even if you intend to use neutron VLAN mode (with tunneling disabled) at deployment time. This requires minor customization at deployment time and leaves the option available to use tunnel networks as utility networks or virtualization networks in the future. You still create Tenant networks using VLANs, but you can also create VXLAN tunnels for special-use networks without consuming tenant VLANs. It is possible to add VXLAN capability to a deployment with a Tenant VLAN, but it is not possible to add a Tenant VLAN to an existing Overcloud without disruption.
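
One way to keep this option open is to enable the VXLAN tunnel type alongside VLAN tenant networks in the network environment file passed to the director. The following is a minimal sketch, assuming the standard tripleo-heat-templates parameter names, that NeutronNetworkType accepts a comma-separated list of tenant network types, and illustrative VLAN ranges:

    parameter_defaults:
      # Allow both VLAN and VXLAN tenant network types
      NeutronNetworkType: 'vlan,vxlan'
      # Keep the VXLAN tunnel type available for future use
      NeutronTunnelTypes: 'vxlan'
      # VLAN range reserved for tenant networks (illustrative)
      NeutronNetworkVLANRanges: 'datacentre:1:1000'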
The director provides a method for mapping five of these traffic types to certain subnets or VLANs. These traffic types include:
  • Internal API
  • Storage
  • Storage Management
  • Tenant Networks
  • External
Any unassigned networks are automatically assigned to the same subnet as the Provisioning network.
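
These mappings are typically expressed in a network environment file. The following is a minimal sketch using the parameter names from the director's network isolation templates; the subnets and VLAN IDs are illustrative:

    parameter_defaults:
      # Subnet and VLAN for each isolated network (illustrative values)
      InternalApiNetCidr: 172.16.0.0/24
      InternalApiNetworkVlanID: 201
      StorageNetCidr: 172.18.0.0/24
      StorageNetworkVlanID: 202
      StorageMgmtNetCidr: 172.19.0.0/24
      StorageMgmtNetworkVlanID: 203
      TenantNetCidr: 172.17.0.0/24
      TenantNetworkVlanID: 204
      ExternalNetCidr: 10.1.1.0/24
      ExternalNetworkVlanID: 100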
The diagram below provides an example of a network topology where the networks are isolated on separate VLANs. Each Overcloud node uses two interfaces (nic2 and nic3) in a bond to deliver these networks over their respective VLANs. Meanwhile, each Overcloud node communicates with the Undercloud over the Provisioning network through a native VLAN using nic1.

Figure 4.1. Example VLAN Topology using Bonded Interfaces
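
This interface layout is expressed in each node's os-net-config template. The following is a minimal sketch of the relevant section, assuming the structure of the standard bond-with-VLANs templates; the bridge and bond names are illustrative, and only one VLAN entry is shown:

    network_config:
      # nic1 carries the Provisioning network on the native VLAN
      - type: interface
        name: nic1
        use_dhcp: false
        addresses:
          - ip_netmask:
              list_join:
                - '/'
                - - {get_param: ControlPlaneIp}
                  - {get_param: ControlPlaneSubnetCidr}
      # nic2 and nic3 form a bond that carries the tagged VLANs
      - type: ovs_bridge
        name: br-bond
        members:
          - type: ovs_bond
            name: bond1
            ovs_options: {get_param: BondInterfaceOvsOptions}
            members:
              - type: interface
                name: nic2
                primary: true
              - type: interface
                name: nic3
          - type: vlan
            vlan_id: {get_param: InternalApiNetworkVlanID}
            addresses:
              - ip_netmask: {get_param: InternalApiIpSubnet}
          # Add a similar vlan entry for the Storage, Storage Management,
          # Tenant, and External networks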

This guide provides multiple scenarios based on the environment you want to create. The following table defines the network traffic mappings for each scenario:

Table 4.3. Network Mappings

Basic Environment
  Mappings:
    Network 1 - Provisioning, Internal API, Storage, Storage Management, Tenant Networks
    Network 2 - External, Floating IP (mapped after Overcloud creation)
  Total Interfaces: 2
  Total VLANs: 2

Advanced Environment with Ceph Storage
  Mappings:
    Network 1 - Provisioning
    Network 2 - Internal API
    Network 3 - Tenant Networks
    Network 4 - Storage
    Network 5 - Storage Management
    Network 6 - External, Floating IP (mapped after Overcloud creation)
  Total Interfaces: 3 (includes 2 bonded interfaces)
  Total VLANs: 6