Chapter 6. Networking

The networks typically required by Red Hat OpenStack Platform director during installation of OpenStack were discussed earlier. They are:

  • IPMI
  • Provisioning
  • Internal API
  • Tenant
  • Storage
  • Storage Management
  • External
  • Floating IP
  • Management

Figure 14 illustrates the typical OpenStack networking topology.


Figure 14: Typical OpenStack Network Topology

Virtual mobile VNFs (vEPC, GiLAN, and IMS) are considered complex VNFs: each is made up of several VMs, may contain its own Element Manager (EM), and typically spans multiple networks. Different NEPs may call these networks by different names and may have one or more of each type. For instance, a vEPC will have networks for:

  • vEPC Management network (virtio): Used for management/Operations and Management (OAM) traffic through floating IP addresses, as well as for control traffic of the vEPC itself: keeping track of the liveness of nodes/functions, application-level session management, redundancy, error logging, recovery, auto-scaling, and so on. This is typically deployed as a VXLAN tenant network.
  • External/Internet (typically virtio): This is also deployed as a VXLAN tenant network.
  • Internal traffic: Typically East-West traffic between control modules and switching modules. This network can be used for the following:

    • Storage access
    • Inter-node communication by vEPC application

      • Control
      • Data plane (sometimes during failover)
    • OpenStack API traffic
    • If the vEPC (mobile) application passes only control traffic on this network, it can be configured as a VXLAN tenant network that uses virtio. However, some NEPs may choose to send dataplane traffic on this network during failure of a service module. In such cases, the NEP may choose to use an SR-IOV provider network to optimize throughput.
  • Application traffic - This is the actual North-South subscriber traffic; in the context of 3GPP, it is the GTP-U (user plane) traffic. This network is typically designed to carry large amounts of traffic. Most current deployments of vEPC use SR-IOV (Single Root Input/Output Virtualization) to achieve high throughput (close to 10G on a 10G port) with minimal latency. SR-IOV networks are typically created as provider networks in OpenStack, as opposed to tenant networks. An important note regarding SR-IOV ports is that they show up as Virtual Functions (VFs) on the guest VM. SR-IOV ports cannot be bonded using LACP at the host level; it is the responsibility of the VNF to bond/bundle two or more vNICs (Virtual Network Interface Cards) derived from VFs, either to achieve throughput beyond 10G or, more typically, to ensure high availability. This is achieved by mapping the PNICs (Physical NICs) backing the SR-IOV ports to different datacenter switches, which ensures that the guest VM retains connectivity if and when an uplink switch fails. Several NEPs are testing OVS-DPDK as an alternative datapath, since SR-IOV has several shortcomings that are discussed later in the performance optimization section. With OVS-DPDK, bonding of Ethernet ports is supported at the host level, similar to what can be done with virtio.
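As a sketch of how the two network styles described above might be requested through openstacksdk, the helpers below build the bodies that would be passed to `conn.network.create_network(**body)`. The physical-network label `sriov_physnet` and VLAN ID are illustrative assumptions; the provider-network attributes require admin credentials, and the physnet label must match the SR-IOV agent configuration on the compute nodes.

```python
def tenant_vxlan_network(name):
    """Body for a management/external tenant network (virtio).
    A non-admin project only names the network; Neutron allocates the
    VXLAN VNI according to its configured tenant network types."""
    return {"name": name}


def sriov_provider_network(name, physnet, vlan_id):
    """Body for an application-traffic provider network backed by
    SR-IOV-capable NICs (admin-only attributes).
    'physnet' and 'vlan_id' are deployment-specific assumptions."""
    return {
        "name": name,
        "provider_network_type": "vlan",
        "provider_physical_network": physnet,
        "provider_segmentation_id": vlan_id,
        "shared": True,
    }


# Example bodies for the vEPC networks discussed above:
mgmt = tenant_vxlan_network("vepc-mgmt")
gtpu = sriov_provider_network("vepc-gtpu", "sriov_physnet", 100)
```

With an `openstack.connection.Connection` object `conn`, each body would then be submitted as `conn.network.create_network(**body)`.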

6.1. Neutron Plug-ins

NEPs may choose to use a commercial Neutron plug-in instead of OVS for OpenStack networking. This is supported by Red Hat OpenStack Platform and is done as part of the customization performed during deployment of OpenStack at the telco by the NEP or their partner.

Red Hat certified plug-ins for Neutron can be found at https://access.redhat.com/articles/1535373. Red Hat's ecosystem includes many third-party plug-ins; information regarding these can be found at https://access.redhat.com/ecosystem.

6.2. IPv6 Networking

Mobile operators deploy dual-stack IP networks. Whether IPv4 or IPv6 is used depends on the APN (Access Point Name) or context. Typically for LTE (Long Term Evolution), IPv6 is used for the VoLTE APN, while the Internet APN may use both IPv4 and IPv6. IPv4 is maintained for backward compatibility, as certain applications still do not work with IPv6.
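A dual-stack arrangement like the one above can be sketched as a pair of subnet bodies on the same network, as they would be passed to openstacksdk's `conn.network.create_subnet(**body)`. The CIDRs and the choice of SLAAC for the IPv6 subnet are illustrative assumptions.

```python
def dual_stack_subnets(network_id, v4_cidr, v6_cidr):
    """Bodies for one IPv4 subnet and one SLAAC IPv6 subnet on the
    same Neutron network. CIDR values are illustrative."""
    v4 = {"network_id": network_id, "ip_version": 4, "cidr": v4_cidr}
    v6 = {
        "network_id": network_id,
        "ip_version": 6,
        "cidr": v6_cidr,
        # Both attributes set to "slaac": the router advertises the
        # prefix and guests autoconfigure their own addresses.
        "ipv6_ra_mode": "slaac",
        "ipv6_address_mode": "slaac",
    }
    return v4, v6


v4, v6 = dual_stack_subnets("net-id", "203.0.113.0/24", "2001:db8::/64")
```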

OpenStack Networking supports IPv6 tenant subnets in dual-stack configuration, so that projects can dynamically assign IPv6 addresses to virtual machines using Stateless Address Autoconfiguration (SLAAC) or DHCPv6. OpenStack Networking is also able to integrate with SLAAC on your physical routers, so that virtual machines can receive IPv6 addresses from your existing infrastructure. For more information, see Tenant Networking with IPv6.
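To make the SLAAC behavior concrete, the sketch below computes the modified-EUI-64 address a guest would autoconfigure from a /64 prefix advertised by a router and its vNIC's MAC address (the basic mechanism; real guests may instead use privacy extensions). The function name and sample values are illustrative.

```python
import ipaddress


def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the SLAAC (modified EUI-64) IPv6 address for a given
    advertised /64 prefix and interface MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "SLAAC interface IDs assume a /64 prefix"
    return net[iid]


slaac_address("2001:db8::/64", "52:54:00:12:34:56")
# → 2001:db8::5054:ff:fe12:3456
```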