Chapter 1. Introduction to OpenStack networking

The Networking service (neutron) is the software-defined networking (SDN) component of Red Hat OpenStack Platform (RHOSP). The RHOSP Networking service manages internal and external traffic to and from virtual machine instances and provides core services such as routing, segmentation, DHCP, and metadata. It provides the API for virtual networking capabilities and management of switches, routers, ports, and firewalls.

1.1. Managing your RHOSP networks

With the Red Hat OpenStack Platform (RHOSP) Networking service (neutron), you can effectively meet your site’s networking goals. You can:

  • Provide connectivity to VM instances within a project.

    Project networks primarily enable general (non-privileged) projects to manage networks without involving administrators. These networks are entirely virtual and require virtual routers to interact with other project networks and external networks such as the Internet. Project networks also usually provide DHCP and metadata services to instances. RHOSP supports the following project network types: flat, VLAN, VXLAN, GRE, and GENEVE.
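
    For example, a minimal sketch of creating a project network and subnet with the openstack CLI; the network and subnet names and the address range are placeholders:

    $ openstack network create internal-net
    $ openstack subnet create internal-subnet --network internal-net \
        --subnet-range 192.168.100.0/24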

    For more information, see Managing project networks.

  • Connect VM instances to networks outside of a project.

    Provider networks provide connectivity in the same way that project networks do, but only administrative (privileged) users can manage provider networks because they interface with the physical network infrastructure. RHOSP supports the following provider network types: flat and VLAN.

    Inside project networks, you can use pools of floating IP addresses or a single floating IP address to direct ingress traffic to your VM instances. Using bridge mappings, you can associate a physical network name (an interface label) with a bridge created with OVS or OVN to allow provider network traffic to reach the physical network.
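
    For example, a minimal sketch of creating a VLAN provider network, a subnet, and a floating IP address as an administrator; the physical network name (physnet1), VLAN ID, names, and address range are placeholders that must match your environment:

    $ openstack network create provider-vlan200 --external \
        --provider-network-type vlan --provider-physical-network physnet1 \
        --provider-segment 200
    $ openstack subnet create provider-subnet --network provider-vlan200 \
        --subnet-range 203.0.113.0/24 --no-dhcp
    $ openstack floating ip create provider-vlan200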

    For more information, see Connecting VM instances to physical networks.

  • Create a network that is optimized for the edge.

    Operators can create routed provider networks, which are typically used in edge deployments and rely on multiple layer 2 network segments instead of traditional networks that have only one segment.

    Routed provider networks simplify the cloud for end users because they see only one network. For cloud operators, routed provider networks deliver scalability and fault tolerance. For example, if a major error occurs, only one segment is impacted instead of the entire network failing.
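
    For example, a minimal sketch of creating a routed provider network and adding a second segment to it; the network, segment, and physical network names and the VLAN IDs are placeholders, and the deployment must support routed provider networks:

    $ openstack network create multisegment-net --share \
        --provider-network-type vlan --provider-physical-network provider1 \
        --provider-segment 128
    $ openstack network segment create segment2 --network multisegment-net \
        --physical-network provider2 --network-type vlan --segment 129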

    For more information, see Deploying routed provider networks.

  • Make your network resources highly available.

    You can use availability zones (AZs) and Virtual Router Redundancy Protocol (VRRP) to keep your network resources highly available. Operators group network nodes that are attached to different power sources into different AZs, and then schedule crucial services, such as DHCP, L3, FW, and so on, to run in separate AZs.

    RHOSP uses VRRP to make project routers and floating IP addresses highly available. As an alternative to centralized routing, Distributed Virtual Routing (DVR) offers a routing design based on VRRP that deploys the L3 agent and schedules routers on every Compute node.
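
    For example, a minimal sketch of scheduling a router across two availability zones; the AZ and router names are placeholders, and the sketch assumes that AZs have already been defined for the network nodes:

    $ openstack router create ha-router \
        --availability-zone-hint az-1 --availability-zone-hint az-2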

    For more information, see Using availability zones to make network resources highly available.

  • Secure your network at the port level.

    Security groups provide a container for virtual firewall rules that control ingress (inbound to instances) and egress (outbound from instances) network traffic at the port level. Security groups use a default deny policy and only contain rules that allow specific traffic. Each port can reference one or more security groups in an additive fashion. The firewall driver translates security group rules to a configuration for the underlying packet filtering technology such as iptables.
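
    For example, a minimal sketch of creating a security group that allows inbound SSH and applying it to an existing port; the group name, port name, and remote CIDR are placeholders:

    $ openstack security group create ssh-access
    $ openstack security group rule create ssh-access --ingress \
        --protocol tcp --dst-port 22 --remote-ip 192.0.2.0/24
    $ openstack port set my-instance-port --security-group ssh-access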

    For more information, see Configuring shared security groups.

  • Manage port traffic.

    With allowed address pairs, you identify a specific MAC address, IP address, or both to allow network traffic to pass through a port regardless of the subnet. When you define allowed address pairs, you can use protocols like VRRP (Virtual Router Redundancy Protocol) that float an IP address between two VM instances to enable fast data plane failover.
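
    For example, a minimal sketch of adding an allowed address pair for a VRRP virtual IP address to an existing port; the port name and the IP and MAC addresses are placeholders:

    $ openstack port set vrrp-member-port \
        --allowed-address ip-address=192.0.2.100,mac-address=fa:16:3e:01:02:03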

    For more information, see Configuring allowed address pairs.

  • Optimize large overlay networks.

    Using the L2 Population driver, you can enable broadcast, multicast, and unicast traffic to scale out on large overlay networks.

    For more information, see Configuring the L2 population driver.

  • Set ingress and egress limits for traffic on VM instances.

    You can offer varying service levels for instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic. You can apply QoS policies to individual ports. You can also apply QoS policies to a project network, where ports with no specific policy attached inherit the policy.
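
    For example, a minimal sketch of creating a QoS policy with an egress bandwidth-limit rule and applying it to a port; the policy and port names and the rate values are placeholders:

    $ openstack network qos policy create bw-limiter
    $ openstack network qos rule create bw-limiter --type bandwidth-limit \
        --max-kbps 3000 --max-burst-kbits 300 --egress
    $ openstack port set my-instance-port --qos-policy bw-limiter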

    For more information, see Configuring Quality of Service (QoS) policies.

  • Manage the amount of network resources RHOSP projects can create.

    With the Networking service quota options you can set limits on the amount of network resources project users can create. This includes resources such as ports, subnets, networks, and so on.
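
    For example, a minimal sketch of limiting the number of ports, networks, and subnets that a project can create; the project name and limits are placeholders:

    $ openstack quota set --ports 100 --networks 20 --subnets 20 my-project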

    For more information, see Managing project quotas.

  • Optimize your VM instances for Network Functions Virtualization (NFV).

    Instances can send and receive VLAN-tagged traffic over a single virtual NIC. This is particularly useful for NFV applications (VNFs) that expect VLAN-tagged traffic, allowing a single virtual NIC to serve multiple customers or services.

    In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are transferred over the network and consumed by the VM instances on the same VLAN, and ignored by other instances and devices. VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port.
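
    For example, a minimal sketch of creating a trunk from an existing parent port and adding a VLAN-tagged subport; the trunk and port names and the VLAN ID are placeholders:

    $ openstack network trunk create my-trunk --parent-port parent-port
    $ openstack network trunk set my-trunk \
        --subport port=child-port,segmentation-type=vlan,segmentation-id=100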

    For more information, see VLAN-aware instances.

  • Control which projects can attach instances to a shared network.

    Using role-based access control (RBAC) policies in the RHOSP Networking service, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project.
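
    For example, a minimal sketch of sharing an existing network with a single project by using an RBAC policy; the network and project names are placeholders:

    $ openstack network rbac create web-servers-net --type network \
        --action access_as_shared --target-project auditors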

    For more information, see Configuring RBAC policies.

1.2. Networking service components

The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) includes the following components:

  • API server

    The RHOSP networking API includes support for Layer 2 networking and IP Address Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. RHOSP networking includes a growing list of plug-ins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers.

  • Modular Layer 2 (ML2) plug-in and agents

    The ML2 plug-in and its agents plug and unplug ports, create networks or subnets, and provide IP addressing.

  • Messaging queue

    Accepts and routes RPC requests between agents to complete API operations. The message queue is used in the ML2 plug-in for RPC between the neutron server and the neutron agents that run on each hypervisor, and in the ML2 mechanism drivers for Open vSwitch and Linux bridge.

1.3. Modular Layer 2 (ML2) networking

Modular Layer 2 (ML2) is the Red Hat OpenStack Platform (RHOSP) networking core plug-in. The ML2 modular design enables the concurrent operation of mixed network technologies through mechanism drivers. Open Virtual Network (OVN) is the default mechanism driver used with ML2.

The ML2 framework distinguishes between the two kinds of drivers that can be configured:

Type drivers

Define how an RHOSP network is technically realized.

Each available network type is managed by an ML2 type driver, which maintains any required type-specific network state. Type drivers validate the type-specific information for provider networks and are responsible for allocating a free segment in project networks. Examples of type drivers are GENEVE, GRE, VXLAN, and so on.

Mechanism drivers

Define the mechanism to access an RHOSP network of a certain type.

The mechanism driver takes the information established by the type driver and applies it to the networking mechanisms that have been enabled. Examples of mechanism drivers are Open Virtual Networking (OVN) and Open vSwitch (OVS).

Mechanism drivers can employ L2 agents and, by using RPC, interact directly with external devices or controllers. You can use multiple mechanism and type drivers simultaneously to access different ports of the same virtual network.

1.4. ML2 network types

You can operate multiple network segments at the same time, because ML2 supports the use and interconnection of multiple network segments. You do not have to bind a port to a network segment, because ML2 binds ports to segments with connectivity. Depending on the mechanism driver, ML2 supports the following network segment types:

  • Flat
  • VLAN
  • GENEVE tunnels
  • VXLAN and GRE tunnels
Flat
All virtual machine (VM) instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation occurs.
VLAN

With RHOSP networking, users can create multiple provider or project networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers, and other network infrastructure on the same Layer 2 VLAN.

You can use VLANs to segment network traffic for computers running on the same switch. This means that you can logically divide your switch by configuring the ports to be members of different networks; they are essentially mini-LANs that you can use to separate traffic for security reasons.

For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18 to VLAN201. As a result, computers connected to VLAN200 are completely separate from those on VLAN201; they cannot communicate directly, and any traffic between them must pass through a router, as if they were connected to two separate physical switches. Firewalls can also be useful for governing which VLANs can communicate with each other.

GENEVE tunnels
GENEVE recognizes and accommodates the changing capabilities and needs of different devices in network virtualization. It provides a framework for tunneling rather than being prescriptive about the entire system. GENEVE flexibly defines the content of the metadata that is added during encapsulation, and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic in size, using extensible option headers. GENEVE supports unicast, multicast, and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver.
VXLAN and GRE tunnels
VXLAN and GRE use network overlays to support private communication between instances. An RHOSP networking router is required to enable traffic to traverse outside of the GRE or VXLAN project network. A router is also required to connect directly connected project networks with external networks, including the internet; the router provides the ability to connect to instances directly from an external network by using floating IP addresses. VXLAN and GRE type drivers are compatible with the ML2/OVS mechanism driver.
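
For example, a minimal sketch of creating a router, attaching it to a project subnet, and setting an external gateway so that tunneled project traffic can reach external networks; the router, network, and subnet names are placeholders:

$ openstack router create project-router
$ openstack router add subnet project-router internal-subnet
$ openstack router set project-router --external-gateway public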

1.5. Modular Layer 2 (ML2) mechanism drivers

Modular Layer 2 (ML2) plug-ins are implemented as mechanisms with a common code base. This approach enables code reuse and eliminates much of the complexity around code maintenance and testing.

You enable mechanism drivers using the Orchestration service (heat) parameter, NeutronMechanismDrivers. Here is an example from a heat custom environment file:

parameter_defaults:
  ...
  NeutronMechanismDrivers: ansible,ovn,baremetal
  ...

The order in which you specify the mechanism drivers matters. In the earlier example, if you want to bind a port using the baremetal mechanism driver, then you must specify baremetal before ansible. Otherwise, the ansible driver will bind the port, because it precedes baremetal in the list of values for NeutronMechanismDrivers.

Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP 15 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers today. Those advantages multiply with each release while we continue to enhance and improve the ML2/OVN feature set.

Support is available for the deprecated ML2/OVS mechanism driver through the RHOSP 17 releases. During this time, the ML2/OVS driver remains in maintenance mode, receiving bug fixes and normal support, and most new feature development happens in the ML2/OVN mechanism driver.

In RHOSP 18.0, Red Hat plans to completely remove the ML2/OVS mechanism driver and stop supporting it.

If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism driver, start now to evaluate a plan to migrate to the ML2/OVN mechanism driver. Migration is supported in RHOSP 16.2 and will be supported in RHOSP 17.1. Migration tools are available in RHOSP 17.0 for test purposes only.

Red Hat requires that you file a proactive support case before attempting a migration from ML2/OVS to ML2/OVN. Red Hat does not support migrations without the proactive support case. See How to open a proactive case for a planned activity on Red Hat OpenStack Platform?

1.6. Open vSwitch

Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch similar to the Linux software bridge. OVS provides switching services to virtualized networks with support for industry standard OpenFlow and sFlow. OVS can also integrate with physical switches using layer 2 features, such as STP, LACP, and 802.1Q VLAN tagging. Open vSwitch version 1.11.0-1.el6 or later also supports tunneling with VXLAN and GRE.

For more information about network interface bonds, see Network Interface Bonding in the Advanced Overcloud Customization guide.

Note

To mitigate the risk of network loops in OVS, only a single interface or a single bond can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.
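
For illustration only, a sketch of the layout that the note describes, expressed as ovs-vsctl commands with placeholder bridge, bond, and interface names; in an RHOSP deployment, director typically creates these bridges from your network configuration templates rather than from manual commands:

$ ovs-vsctl add-br br-ex
$ ovs-vsctl add-bond br-ex bond1 eth2 eth3
$ ovs-vsctl add-br br-data
$ ovs-vsctl add-bond br-data bond2 eth4 eth5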

1.7. Open Virtual Network (OVN)

Open Virtual Network (OVN) is a system to support logical network abstraction in virtual machine and container environments. Sometimes called open source virtual networking for Open vSwitch, OVN complements the existing capabilities of OVS to add native support for logical network abstractions, such as logical L2 and L3 overlays, security groups, and services such as DHCP.

A physical network comprises physical wires, switches, and routers. A virtual network extends a physical network into a hypervisor or container platform, bridging VMs or containers into the physical network. An OVN logical network is a network implemented in software that is insulated from physical networks by tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to overlap with those used on physical networks without causing conflicts. Logical network topologies can be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs that are part of a logical network can migrate from one physical machine to another without network disruption.

The encapsulation layer prevents VMs and containers connected to a logical network from communicating with nodes on physical networks. For clustering VMs and containers, this can be acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical networks. OVN provides multiple forms of gateways for this purpose. An OVN deployment consists of several components:

Cloud Management System (CMS)
integrates OVN into a physical network by managing the OVN logical network elements and connecting the OVN logical network infrastructure to physical network elements. Some examples include OpenStack and OpenShift.
OVN databases
store data representing the OVN logical and physical networks.
Hypervisors
run Open vSwitch and translate the OVN logical network into OpenFlow on a physical or virtual machine.
Gateways
extend a tunnel-based OVN logical network into a physical network by forwarding packets between tunnels and the physical network infrastructure.
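
For example, on a node that hosts the OVN databases, you can inspect the logical and physical views with the OVN command-line utilities; this is a sketch and assumes that the ovn-nbctl and ovn-sbctl clients can reach the OVN databases:

$ ovn-nbctl show    # logical switches, routers, and ports (northbound database)
$ ovn-sbctl show    # chassis, such as hypervisors and gateways, and their bindings (southbound database)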

1.8. Modular Layer 2 (ML2) type and mechanism driver compatibility

Refer to the following table when planning your Red Hat OpenStack Platform (RHOSP) data networks to determine the network types each Modular Layer 2 (ML2) mechanism driver supports.

Table 1.1. Network types supported by ML2 mechanism drivers

Mechanism driver              Supports these type drivers
                              Flat    GRE    VLAN    VXLAN      GENEVE

Open Virtual Network (OVN)    Yes     No     Yes     Yes [1]    Yes

Open vSwitch (OVS)            Yes     Yes    Yes     Yes        No

[1] ML2/OVN VXLAN support is limited to 4096 networks and 4096 ports per network. Also, ACLs that rely on the ingress port do not work with ML2/OVN and VXLAN, because the ingress port is not passed.

1.9. Extension drivers for the RHOSP Networking service

The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) is extensible. Extensions serve two purposes: they allow the introduction of new features in the API without requiring a version change, and they allow the introduction of vendor-specific niche functionality. Applications can programmatically list available extensions by performing a GET on the /extensions URI. Note that this is a versioned request; that is, an extension available in one API version might not be available in another.
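
For example, a minimal sketch of listing the extensions that the Networking service currently exposes by using the openstack CLI:

$ openstack extension list --network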

The ML2 plug-in also supports extension drivers that allow other pluggable drivers to extend the core resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include support for QoS, port security, and so on.