7.2. OpenStack Networking Installation Overview

7.2.1. OpenStack Networking Architecture

OpenStack Networking provides cloud administrators with flexibility in deciding which individual services should run on which physical systems. All service daemons can be run on a single physical host for evaluation purposes. Alternatively, each service can have its own physical host, or even be replicated across multiple hosts for redundancy.
This chapter focuses on an architecture that combines the role of cloud controller with that of the network host, while allowing for multiple compute nodes on which virtual machine instances run. The networking services deployed on the cloud controller in this chapter may also be deployed on a separate network host entirely. This is recommended for environments where significant amounts of network traffic are expected to be routed from virtual machine instances to external networks.
Example network diagram
The placement of Networking services and agents can vary depending on requirements. This diagram is an example of a common deployment model, utilizing a dedicated Networking node and tenant networks:
Two compute nodes run the Open vSwitch agent (ovs-agent), and one Network node performs the network functions: L3 routing, DHCP, and NAT, as well as services such as FWaaS and LBaaS. The Compute nodes each have two physical network cards: one for tenant traffic and another for management connectivity. The Networking node has a third network card dedicated to provider traffic:
Example network diagram

Figure 7.1. Example network diagram

7.2.2. OpenStack Networking API

OpenStack Networking provides a powerful API to define the network connectivity and addressing used by devices from other services, such as OpenStack Compute.
The OpenStack Compute API has a virtual server abstraction to describe compute resources. Similarly, the OpenStack Networking API has virtual network, subnet, and port abstraction layers to describe network resources. In more detail:
Network
An isolated L2 segment, analogous to a VLAN in physical networking.
Subnet
A block of IPv4 or IPv6 addresses and associated configuration state.
Port
A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like OpenStack Compute to attach virtual devices to ports on these networks. In particular, OpenStack Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme, even if those IP addresses overlap with those used by other tenants. This enables very advanced cloud networking use cases, such as building multi-tiered web applications and allowing applications to be migrated to the cloud without changing IP addresses.
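For example, a tenant might create a private network and subnet, and then a port for a virtual server NIC, using the neutron command line client. The network, subnet, and port names and the address range shown below are illustrative only:
# neutron net-create tenant-net
# neutron subnet-create tenant-net 10.0.0.0/24 --name tenant-subnet --gateway 10.0.0.1
# neutron port-create tenant-net --name web-port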
Even if a cloud administrator does not intend to expose the above capabilities to tenants directly, the OpenStack Networking API can be very useful for administrative purposes. The API provides significantly more flexibility for the cloud administrator when customizing network offerings.

7.2.3. OpenStack Networking API Extensions

The OpenStack Networking API allows plug-ins to provide extensions that enable additional networking functionality not available in the core API itself.
Provider Networks
Provider networks allow the creation of virtual networks that map directly to networks in the physical data center. This allows the administrator to give tenants direct access to a public network such as the Internet or to integrate with existing VLANs in the physical networking environment that have a defined meaning or purpose.
When the provider extension is enabled, OpenStack Networking users with administrative privileges can see additional provider attributes on all virtual networks. Such users can also specify provider attributes when creating new provider networks.
Both the Open vSwitch and Linux Bridge plug-ins support the provider networks extension.
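As a sketch, an administrator might create a provider network that maps to an existing VLAN in the data center by passing provider attributes to the neutron client. The physical network label (physnet1) and VLAN ID (120) below are examples only and must match the local environment:
# neutron net-create datacenter-vlan120 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 120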
Layer 3 (L3) Routing and Network Address Translation (NAT)
The L3 routing API extension provides abstract L3 routers that API users can dynamically provision and configure. These routers can connect to one or more Layer 2 (L2) networks controlled by OpenStack Networking. Additionally, the routers can provide a gateway that connects one or more private L2 networks to a common public or external network, such as the Internet.
The L3 router provides basic NAT capabilities on gateway ports that connect the router to external networks. The router supports floating IP addresses which give a static mapping between a public IP address on the external network and the private IP address on one of the L2 networks attached to the router.
This allows the selective exposure of compute instances to systems on an external public network. Floating IP addresses can also be reallocated to different OpenStack Networking ports as necessary.
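As an illustration, a floating IP address can be allocated from an external network and then associated with the port of an instance using the neutron client. The network name and the identifiers below are placeholders:
# neutron floatingip-create ext_net
# neutron floatingip-associate FLOATING_IP_ID PORT_ID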
Security Groups
Security groups and rules filter the type and direction of network traffic sent to a given network port. This provides an additional layer of security to complement any firewall rules present on the Compute instance. The security group is a container object with one or more security rules. A single security group can manage traffic to multiple compute instances.
Ports created for floating IP addresses, Networking LBaaS VIPs, router interfaces, and instances are associated with a security group. If none is specified, then the port is associated with the default security group. By default, this group will drop all inbound traffic and allow all outbound traffic. Additional security rules can be added to the default security group to modify its behaviour or new security groups can be created as necessary.
The Open vSwitch, Linux Bridge, VMware NSX, NEC, and Ryu networking plug-ins currently support security groups.

Note

Unlike Compute security groups, OpenStack Networking security groups are applied on a per port basis rather than on a per instance basis.
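The following commands sketch how a security group and an ingress rule permitting HTTP traffic might be created with the neutron client; the group name and rule values are illustrative only:
# neutron security-group-create webservers --description "Allow web traffic"
# neutron security-group-rule-create webservers --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80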

7.2.4. OpenStack Networking Plug-ins

The original OpenStack Compute network implementation assumed a basic networking model where all network isolation was performed through the use of Linux VLANs and firewalls. OpenStack Networking introduces the concept of a plug-in, which is a pluggable back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests.  Some OpenStack Networking plug-ins might use basic Linux VLANs and firewalls, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits.
Plug-ins for these networking technologies are currently tested and supported for use with Red Hat Enterprise Linux OpenStack Platform:
  • Open vSwitch (openstack-neutron-openvswitch)
  • Linux Bridge (openstack-neutron-linuxbridge)
Other plug-ins that are also packaged and available include:
  • Cisco (openstack-neutron-cisco)
  • NEC OpenFlow (openstack-neutron-nec)
  • VMware (openstack-neutron-vmware)
  • Ryu (openstack-neutron-ryu)
Plug-ins enable the cloud administrator to weigh different options and decide which networking technology is right for the deployment.

7.2.5. VMware NSX Integration

OpenStack Networking uses the NSX plug-in for Networking to integrate with an existing VMware vCenter deployment. When installed on the network nodes, the NSX plug-in enables an NSX controller to centrally manage configuration settings and push them to managed network nodes. Network nodes are considered managed when they are added as hypervisors to the NSX controller.
The diagram below depicts an example NSX deployment and illustrates the route that inter-VM traffic takes between separate Compute nodes. Note the placement of the VMware NSX plug-in and the neutron-server service on the network node. The NSX controller is shown centrally, with a green line to the network node indicating the management relationship:
VMware NSX overview

Figure 7.2. VMware NSX overview

7.2.6. Open vSwitch Overview

Open vSwitch is a software-defined networking (SDN) virtual switch designed to supersede the legacy Linux software bridge. It provides switching services to virtualized networks with support for the industry standards NetFlow, OpenFlow, and sFlow. Open vSwitch can also integrate with physical switches due to its support for the layer 2 features STP, LACP, and 802.1Q VLAN tagging.
Open vSwitch tunneling is supported with Open vSwitch version 1.11.0-1.el6 or later. Refer to the table below for specific kernel requirements:
Feature            Kernel Requirement
GRE Tunneling      2.6.32-358.118.1.openstack.el6 or later
VXLAN Tunneling    2.6.32-358.123.4.openstack.el6 or later
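One way to check the installed Open vSwitch and kernel package versions on a node is to query the RPM database:
# rpm -q openvswitch
# rpm -q kernel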

7.2.7. Modular Layer 2 (ML2) Overview

ML2 is the new Networking core plug-in introduced in OpenStack's Havana release. Superseding the previous model of singular plug-ins, ML2's modular design enables the concurrent operation of mixed network technologies. The monolithic Open vSwitch and linuxbridge plug-ins have been deprecated and will be removed in a future release; their functionality has instead been re-implemented as ML2 mechanisms.

Note

ML2 is the default Networking plug-in, with Open vSwitch configured as the default mechanism driver.
The requirement behind ML2
Previously, Networking deployments were only able to use the plug-in that had been selected at implementation time. For example, a deployment running the Open vSwitch plug-in could use only Open vSwitch; it was not possible to simultaneously run another plug-in such as linuxbridge. This proved to be a limitation in environments with heterogeneous requirements.
ML2 network types
Multiple network segment types can be operated concurrently. In addition, these network segments can interconnect using ML2's support for multi-segmented networks. Ports are automatically bound to the segment with connectivity; it is not necessary to bind them to a specific segment. Depending on the mechanism driver, ML2 supports the following network segment types:
  • flat
  • GRE
  • local
  • VLAN
  • VXLAN
The various Type drivers are enabled in the ML2 section of the ml2_conf.ini file:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
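The tenant_network_types option in the same section controls which of the enabled types are used when tenants create networks, in order of preference. For example, to have tenant networks default to VXLAN:
[ml2]
tenant_network_types = vxlan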
ML2 Mechanisms
Plug-ins have been reimplemented as mechanisms with a common code base. This approach enables code reuse and eliminates much of the complexity around code maintenance and testing. Supported mechanism drivers currently include:
  • Arista
  • Cisco Nexus
  • Hyper-V Agent
  • L2 Population
  • Linuxbridge Agent
  • Open vSwitch Agent
  • Tail-f NCS
The various mechanism drivers are enabled in the ML2 section of the ml2_conf.ini file:
[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population

7.2.8. Choose a Network Back-end

The Red Hat Enterprise Linux OpenStack Platform offers two distinct networking back-ends: Nova networking and OpenStack Networking (Neutron). Nova networking has been deprecated in the OpenStack technology roadmap, but remains available for the time being. OpenStack Networking is considered the core software-defined networking (SDN) component of OpenStack's forward-looking roadmap and is under active development.
It is important to consider that there is currently no migration path between Nova networking and OpenStack Networking. This affects any plan to deploy Nova networking now with the intention of upgrading to OpenStack Networking at a later date. At present, switching between these technologies must be performed manually and is likely to require planned outages.
Choose OpenStack Networking (Neutron)
  • If you require an overlay network solution: OpenStack Networking supports GRE or VXLAN tunneling for virtual machine traffic isolation. With GRE or VXLAN, no VLAN configuration is required on the network fabric and the only requirement from the physical network is to provide IP connectivity between the nodes. Furthermore, VXLAN or GRE allows a theoretical scale limit of 16 million unique IDs, far beyond the 4094 limit of 802.1Q VLAN IDs. Nova networking bases its network segregation on 802.1Q VLANs and does not support tunneling with GRE or VXLAN.
  • If you require overlapping IP addresses between tenants: OpenStack Networking uses the network namespace capabilities in the Linux kernel, which allow different tenants to use the same subnet range (for example, 192.168.1.0/24) on the same Compute node without any risk of overlap or interference. This is suited to large multi-tenancy deployments. By comparison, Nova networking offers flat topologies in which operators must remain mindful of the subnets used by all tenants.
  • If you require a Red Hat-certified third-party Networking plug-in: By default, Red Hat Enterprise Linux OpenStack Platform 5 uses the open source ML2 core plug-in with the Open vSwitch (OVS) mechanism driver. Based on the physical network fabric and other network requirements, third-party Networking plug-ins can be deployed instead of the default ML2/Open vSwitch driver thanks to the pluggable architecture of OpenStack Networking. Red Hat is constantly working to enhance our Partner Certification Program to certify more Networking plug-ins against Red Hat Enterprise Linux OpenStack Platform. You can learn more about our Certification Program and the certified Networking plug-ins at http://marketplace.redhat.com.
  • If you require VPN-as-a-service (VPNaaS), Firewall-as-a-service (FWaaS), or Load-Balancing-as-a-service (LBaaS): These network services are only available in OpenStack Networking and are not available for Nova networking. The dashboard allows tenants to manage these services without administrator intervention.
Choose Nova networking
  • If your deployment requires flat (untagged) or VLAN (802.1Q tagged) networking: This implies scalability limitations (a theoretical scale limit of 4094 VLAN IDs, where in practice physical switches tend to support a much lower number), as well as management and provisioning requirements, because proper configuration is necessary on the physical network to trunk the required set of VLANs between the nodes.
  • If your deployment does not require overlapping IP addresses between tenants: This is usually suitable only for small, private deployments.
  • If you do not need a Software-defined networking (SDN) solution, or the ability to interact with the physical network fabric.
  • If you do not need self-service VPN, Firewall, or Load-Balancing services.

7.2.9. Configure the L2 Population mechanism driver

The L2 Population driver enables broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including those that do not host the destination network. This design incurs significant network and processing overhead. The L2 Population driver instead implements a partial mesh for ARP resolution and MAC learning traffic; this traffic is sent only to the necessary agents by encapsulating it as a targeted unicast.
Enable the L2 population driver by adding it to the list of mechanism drivers. You also need to have at least one tunneling driver enabled; either GRE, VXLAN, or both. Add the appropriate configuration options to the ml2_conf.ini file:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
Enable L2 population in the ovs_neutron_plugin.ini file. This must be enabled on each node running the L2 agent:
[agent]
l2_population = True
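After updating the configuration, restart the Open vSwitch agent on each affected node so that the change takes effect (assuming the Open vSwitch agent is the L2 agent in use):
# service neutron-openvswitch-agent restart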

7.2.10. OpenStack Networking Agents

In addition to the OpenStack Networking service and the installed plug-in, a number of components combine to provide networking functionality in the OpenStack environment.
L3 Agent
The L3 agent is part of the openstack-neutron package. It acts as an abstract L3 router that can connect to and provide gateway services for multiple L2 networks.
The nodes on which the L3 agent is to be hosted must not have a manually configured IP address on a network interface that is connected to an external network. Instead there must be a range of IP addresses from the external network that are available for use by OpenStack Networking. These IP addresses will be assigned to the routers that provide the link between the internal and external networks.
The range selected must be large enough to provide a unique IP address for each router in the deployment as well as each desired floating IP.
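For example, a subnet on the external network might be created with an allocation pool that reserves a block of addresses for router gateways and floating IPs. The network name and address values shown are placeholders for the local environment:
# neutron subnet-create ext_net --allocation-pool start=192.0.2.100,end=192.0.2.150 --gateway 192.0.2.1 --enable_dhcp=False 192.0.2.0/24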
DHCP Agent
The OpenStack Networking DHCP agent is capable of allocating IP addresses to virtual machines running on the network. If the agent is enabled and running when a subnet is created, then by default that subnet has DHCP enabled.
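For example, DHCP can be explicitly disabled for a subnet at creation time; the network and subnet names below are illustrative:
# neutron subnet-create tenant-net 10.0.1.0/24 --name static-subnet --enable_dhcp=False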
Plug-in Agent
Many of the OpenStack Networking plug-ins, including Open vSwitch and Linux Bridge, utilize their own agent. The plug-in specific agent runs on each node that manages data packets. This includes all compute nodes as well as nodes running the dedicated agents neutron-dhcp-agent and neutron-l3-agent.
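To confirm that the expected agents are registered and reporting on each node, the agent list can be queried from any host with the neutron client and valid credentials:
# neutron agent-list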

7.2.11. Tenant and Provider networks

The following diagram presents an overview of the tenant and provider network types, and illustrates how they interact within the overall Networking topology:
Tenant and Provider networks
Tenant networks
Tenant networks are created by users for connectivity within projects; they are fully isolated by default and are not shared with other projects. Networking supports a range of tenant network types:
Flat
All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation takes place.
Local
Instances reside on the local Compute host and are effectively isolated from any external networks.
VLAN
Networking allows users to create multiple provider or tenant networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers and other networking infrastructure on the same layer 2 VLAN.
VXLAN/GRE
VXLAN and GRE use network overlays to support private communication between instances. A Networking router is required to enable traffic to traverse outside of the GRE or VXLAN tenant network. A router is also required to connect directly-connected tenant networks with external networks, including the Internet; the router provides the ability to connect to instances directly from an external network using floating IP addresses.
Provider networks
Provider networks are created by the OpenStack administrator and map directly to an existing physical network in the data center. Useful network types in this category are flat (untagged) and VLAN (802.1Q tagged). It is possible to allow provider networks to be shared among tenants as part of the network creation process.
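As an example, an administrator might create a flat provider network and mark it as shared so that tenants can attach instances to it directly; the physical network label is an assumption for the local environment:
# neutron net-create ext-flat --shared --provider:network_type flat --provider:physical_network physnet1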

7.2.12. Multiple Networks on a Single Node

Multiple external networks can now be operated on a single Networking node running either the ML2 Open vSwitch mechanism driver, or the deprecated Open vSwitch plug-in.

Procedure 7.1. Create multiple Provider networks

  1. Create an Open vSwitch bridge, which will provide connectivity to the new external network using eth1:
    # ovs-vsctl add-br br-eth1
    # ovs-vsctl add-port br-eth1 eth1
    # ip link set eth1 up
  2. The bridge and router interfaces are automatically mapped by the L3 agent, relying on the physnet-to-bridge mappings created in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. For example, in tunneling mode, the L3 agent patches br-int to the external bridges, and sets the external qrouter interfaces on br-int. Enable this behavior in /etc/neutron/l3_agent.ini by setting the options below as empty values:
    gateway_external_network_id =
    external_network_bridge =
    On the Networking nodes, edit ovs_neutron_plugin.ini to map the physical networks to the corresponding external bridges.
    bridge_mappings = physnet1:br-ex,physnet2:br-eth1
    Restart the L3 agent and Open vSwitch agent services for the changes to take effect:
    # service neutron-l3-agent restart
    # service neutron-openvswitch-agent restart
    The following steps demonstrate the creation of two external networks on the same Networking node. Ensure the l3_agent.ini configuration is present before proceeding.
  3. Create the new Provider network physnet1, and the related topology:
    # neutron net-create ext_net --provider:network_type flat --provider:physical_network physnet1 --router:external=True
    # neutron subnet-create ext_net --gateway 172.16.0.1 172.16.0.0/24 --enable_dhcp=False
    # neutron router-create router1
    # neutron router-gateway-set router1 ext_net
    # neutron net-create privnet
    # neutron subnet-create privnet --gateway 192.168.123.1 192.168.123.0/24 --name privnet_subnet
    # neutron router-interface-add router1 privnet_subnet
  4. Create the new Provider network physnet2, and the related topology:
    # neutron net-create ext_net2 --provider:network_type flat --provider:physical_network physnet2 --router:external=True
    # neutron subnet-create ext_net2 --allocation-pool start=192.168.1.200,end=192.168.1.222 --gateway=192.168.1.1 --enable_dhcp=False 192.168.1.0/24
    # neutron router-create router2
    # neutron router-gateway-set router2 ext_net2
    # neutron net-create privnet2
    # neutron subnet-create privnet2 --gateway 192.168.125.1 192.168.125.0/24 --name privnet2_subnet
    # neutron router-interface-add router2 privnet2_subnet

7.2.13. Recommended Networking Deployment

OpenStack Networking provides a great deal of flexibility when deploying networking in support of a compute environment. As a result, the exact layout of a deployment depends on a combination of expected workloads, expected scale, and available hardware.
Reference networking architecture.

Figure 7.3. Deployment Architecture

For demonstration purposes, this chapter concentrates on a networking deployment that consists of these types of nodes:
Service Node
The service node exposes the networking API to clients and handles incoming requests before forwarding them to a message queue to be actioned by the other nodes. The service node hosts both the networking service itself and the active networking plug-in.
In environments that use controller nodes to host the client-facing APIs and schedulers for all services, the controller node would also fulfil the role of service node as it is applied in this chapter.
Network Node
The network node handles the majority of the networking workload. It hosts the DHCP agent, the Layer 3 (L3) agent, the Layer 2 (L2) agent, and the metadata proxy. When a plug-in that requires an agent is in use, the network node also runs an instance of the plug-in agent (as do all other systems that handle data packets in such an environment). Both the Open vSwitch and Linux Bridge plug-ins include an agent.
Compute Node
The compute node hosts the compute instances themselves. To connect compute instances to the networking services, compute nodes must also run the L2 agent. Like all other systems that handle data packets, they must also run an instance of the plug-in agent.
This deployment type and division of responsibilities is offered only as a suggestion. Other divisions are equally valid; in particular, in some environments the network and service nodes may be combined.

Warning

Environments that have been configured to use Compute networking, either using the packstack utility or manually, can be reconfigured to use OpenStack Networking. However, this is not currently recommended for environments where Compute instances have already been created and configured to use Compute networking. If you wish to proceed with such a conversion, first stop the openstack-nova-network service on each Compute node by running:
# service openstack-nova-network stop
You must also disable the openstack-nova-network service permanently on each node using the chkconfig command:
# chkconfig openstack-nova-network off

Important

When running two or more active controller nodes, do not run nova-consoleauth on more than one node. Running more than one instance of nova-consoleauth causes a conflict between nodes with regard to token requests, which may cause errors.

7.2.14. Kernel Requirements

Nodes that handle OpenStack networking traffic require a Red Hat Enterprise Linux kernel that supports the use of network namespaces. In addition, the Open vSwitch plug-in requires kernel version 2.6.32-431.el6.x86_64 or later.
The default kernel included in supported Red Hat Enterprise Linux versions fulfills both requirements.
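To confirm which kernel a node is currently running, use:
# uname -r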
Previous Red Hat Enterprise Linux OpenStack Platform releases required manually installing and booting to a custom Red Hat Enterprise Linux kernel that supported network namespaces and other OpenStack features. As of this release, doing so is no longer required. All OpenStack Networking requirements are now built into the default kernel of supported Red Hat Enterprise Linux versions.