Chapter 4. Datapath Supportability Matrix

With the introduction of NFV, more networking vendors are starting to implement their traditional devices as VNFs. While the majority of them are looking into virtual machines (VMs), some are also looking at a container-based approach as a design choice. An OpenStack-based solution should be rich and flexible for two primary reasons:

  • Application readiness - Network vendors are currently in the process of transforming their devices into VNFs. Different VNFs in the market therefore have different maturity levels; common barriers to this readiness include enabling RESTful interfaces in their APIs, evolving their data models to become stateless, and providing automated management operations. OpenStack should provide a common platform for all of them.
  • Broad use-cases - NFV includes a broad range of applications that serve different use-cases. For example, Virtual Customer Premise Equipment (vCPE) aims at providing a number of network functions such as routing, firewall, VPN, and NAT at customer premises. Virtual Evolved Packet Core (vEPC) is a cloud architecture that provides a cost-effective platform for the core components of the Long-Term Evolution (LTE) network, allowing dynamic provisioning of gateways and mobile endpoints to sustain the increased volume of data traffic from smartphones and other devices.

    These use-cases are, by nature, implemented using different network applications and protocols, and require different connectivity, isolation, and performance characteristics from the infrastructure. It is also common to separate the control plane interfaces and protocols from the actual forwarding plane. OpenStack must therefore be flexible enough to offer different datapath connectivity options.

In principle, there are two common approaches for providing data plane connectivity to virtual machines:

  • Direct hardware access bypasses the Linux kernel and provides secure direct memory access (DMA) to the physical Network Interface Card (NIC) using technologies such as PCI Passthrough (now referred to in OpenStack as SR-IOV PF) or single root I/O virtualization (SR-IOV) for both Virtual Function (VF) and Physical Function (PF) pass-through.
  • Using a virtual switch (vSwitch), implemented as a software service of the hypervisor. Virtual machines are connected to the vSwitch with virtual interfaces (vNICs), and the vSwitch forwards traffic between virtual machines as well as between virtual machines and the physical network. How this choice surfaces in the OpenStack Networking API is shown in the sketch after this list.
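
The choice between these two approaches is expressed through the OpenStack Networking (neutron) API as the port's vnic_type binding attribute. The following is a minimal sketch using the openstacksdk Python library; the cloud name, network name, and the image and flavor IDs are placeholders, not values from this guide.

    import openstack

    # Minimal sketch: cloud, network, image and flavor names/IDs are placeholders.
    conn = openstack.connect(cloud="mycloud")
    network = conn.network.find_network("provider-net")

    # vSwitch datapath: the default vnic_type "normal" attaches the vNIC to the
    # hypervisor's virtual switch (for example, kernel OVS or OVS-DPDK).
    vswitch_port = conn.network.create_port(
        network_id=network.id, binding_vnic_type="normal")

    # Direct hardware access: "direct" requests an SR-IOV Virtual Function (VF);
    # "direct-physical" requests passthrough of a full Physical Function (PF).
    vf_port = conn.network.create_port(
        network_id=network.id, binding_vnic_type="direct")
    pf_port = conn.network.create_port(
        network_id=network.id, binding_vnic_type="direct-physical")

    # Boot an instance on one of the ports.
    server = conn.compute.create_server(
        name="vnf-instance", image_id="IMAGE_ID", flavor_id="FLAVOR_ID",
        networks=[{"port": vf_port.id}])

Requesting "direct" or "direct-physical" only binds successfully on clouds where the operator has enabled the SR-IOV mechanism driver and whitelisted the corresponding PCI devices on the compute nodes.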

Some of the common datapath options are as follows; short illustrative sketches follow the list:

  • Single Root I/O Virtualization (SR-IOV) - is a standard that makes a single PCI hardware device appear as multiple virtual PCI devices. It works by introducing Physical Functions (PFs), which are the full-featured PCIe functions representing the physical hardware ports, and Virtual Functions (VFs), which are lightweight functions that can be assigned to virtual machines. The virtual machine sees the VF as a regular NIC that communicates directly with the hardware. NICs support multiple VFs.
  • Open vSwitch (OVS) - is an open source software switch that is designed to be used as a virtual switch within a virtualized server environment. OVS supports the capabilities of a regular L2-L3 switch and also offers support for SDN protocols such as OpenFlow to create user-defined overlay networks (for example, VXLAN). OVS uses Linux kernel networking to switch packets between virtual machines and across hosts using the physical NIC. OVS now supports connection tracking (conntrack) with a built-in firewall capability, which avoids the overhead of Linux bridges with iptables/ebtables. Red Hat OpenStack Platform environments offer out-of-the-box OpenStack Networking (neutron) integration with OVS.
  • Data Plane Development Kit (DPDK) - consists of a set of libraries and poll mode drivers (PMDs) for fast packet processing. It is designed to run mostly in user space, enabling applications to perform their own packet processing directly from and to the NIC. DPDK reduces latency and allows more packets to be processed. DPDK poll mode drivers run in a busy loop, constantly scanning the NIC ports on the host and the vNIC ports in the guest for the arrival of packets.
  • DPDK-accelerated Open vSwitch (OVS-DPDK) - is Open vSwitch bundled with DPDK for a high-performance, user-space solution with Linux kernel bypass and direct memory access (DMA) to physical NICs. The idea is to replace the standard OVS kernel datapath with a DPDK-based datapath, creating a user-space vSwitch on the host that uses DPDK internally for its packet forwarding. The advantage of this architecture is that it is mostly transparent to users, because the basic OVS features and the interfaces it exposes (such as OpenFlow, OVSDB, and the command line) remain mostly the same.
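
Which of these datapaths a given port actually received can be checked after the port is bound, because Neutron records the result in the port's binding:vif_type attribute. Below is a minimal openstacksdk sketch; the cloud name and port ID are placeholders, the binding:* attributes are normally visible only to administrators, and the listed values are typical for the in-tree drivers but depend on the mechanism driver in use.

    import openstack

    # Minimal sketch: cloud name and port ID are placeholders; binding:* fields
    # are normally only visible to administrative users.
    conn = openstack.connect(cloud="mycloud")
    port = conn.network.get_port("PORT_ID")

    # Typical binding:vif_type values (driver-dependent):
    #   "ovs"               - kernel Open vSwitch datapath
    #   "vhostuser"         - OVS-DPDK (vhost-user) datapath
    #   "hw_veb"            - SR-IOV Virtual Function (VF)
    #   "hostdev_physical"  - SR-IOV Physical Function (PF) passthrough
    print(port.binding_vnic_type, port.binding_vif_type)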
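
One guest-side consequence of the OVS-DPDK datapath is that instances attached through vhost-user ports must be backed by hugepage memory, which is usually requested through the flavor's hw:mem_page_size extra spec. The sketch below uses the openstacksdk Python library; the flavor name and sizing are arbitrary, and the create_flavor_extra_specs call is assumed to be available (it exists only in recent SDK releases).

    import openstack

    # Minimal sketch: cloud name and flavor sizing are placeholders.
    conn = openstack.connect(cloud="mycloud")

    # Instances on OVS-DPDK (vhost-user) ports need hugepage-backed memory,
    # requested through the hw:mem_page_size flavor extra spec.
    flavor = conn.compute.create_flavor(
        name="m1.dpdk", ram=4096, vcpus=4, disk=20)

    # NOTE: create_flavor_extra_specs exists only in recent openstacksdk releases;
    # on older SDKs, set the extra spec with the openstack CLI instead.
    conn.compute.create_flavor_extra_specs(
        flavor, {"hw:mem_page_size": "large"})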