Network Functions Virtualization Planning Guide

Red Hat OpenStack Platform 10

Planning for NFV in Red Hat OpenStack Platform 10

OpenStack Documentation Team

Abstract

This guide helps you plan your Red Hat OpenStack Platform 10 deployment with NFV. It contains the information you need to successfully set up and install an NFV-enabled Red Hat OpenStack Platform 10 environment.

Chapter 1. Introduction

Network Functions Virtualization (NFV) is a software-based solution that helps Communication Service Providers (CSPs) move beyond traditional, proprietary hardware to achieve greater efficiency and agility while reducing operational costs.

For a high level overview of the NFV concepts, see the Network Functions Virtualization Product Guide.

For information on configuring SR-IOV and OVS-DPDK with Red Hat OpenStack Platform 10 director, see the Network Functions Virtualization Configuration Guide.

Chapter 2. Software Requirements

This chapter describes the software architecture, supported configurations and drivers, and subscription details necessary for NFV.

2.1. Software Architecture

Figure: OpenStack NFV reference architecture

NFV platform has the following components:

  • Virtualized Network Functions (VNFs) - the software implementation of network functions, such as routers, firewalls, load balancers, broadband gateways, mobile packet processors, servicing nodes, signalling, location services, and so on.
  • NFV Infrastructure (NFVi) - the physical resources (compute, storage, network) and the virtualization layer that make up the infrastructure. The network includes the datapath for forwarding packets between virtual machines and across hosts. This allows you to install VNFs without being concerned about the details of the underlying hardware. NFVi forms the foundation of the NFV stack. NFVi supports multi-tenancy and is managed by the Virtual Infrastructure Manager (VIM). Enhanced Platform Awareness (EPA) allows Red Hat OpenStack Platform to improve the virtual machine packet forwarding performance (throughput, latency, jitter) by exposing low-level CPU and NIC acceleration components to the VNF.
  • NFV Management and Orchestration (MANO) - the management and orchestration layer focuses on all the service management tasks required throughout the lifecycle of the VNF. The main goal of MANO is to allow service definition, automation, error correlation, monitoring, and lifecycle management of the network functions offered by the operator to its customers, decoupled from the physical infrastructure. This decoupling requires an additional layer of management, provided by the Virtual Network Function Manager (VNFM). The VNFM manages the lifecycle of the virtual machines and VNFs, either by interacting with them directly or through the Element Management System (EMS) provided by the VNF vendor. The other important component defined by MANO is the Orchestrator, also known as NFVO. The NFVO interfaces with various databases and systems, including Operations and Business Support Systems (OSS/BSS) at the top and the VNFM at the bottom. When the NFVO wants to create a new service for a customer, it asks the VNFM to trigger the instantiation of a VNF (which may result in multiple virtual machines).
  • Operations and Business Support Systems (OSS/BSS) - provides the essential business function applications, for example, operations support and billing. The OSS/BSS needs to be adapted to NFV, integrating with both legacy systems and the new MANO components. The BSS systems set policies based on service subscriptions and manage reporting and billing.
  • Systems Administration, Automation and Life-Cycle Management - manages system administration, automation of the infrastructure components and life-cycle of the NFVI platform.

2.2. Supported Configurations for NFV Deployments

Red Hat OpenStack Platform 10 supports NFV deployments for SR-IOV and OVS-DPDK installation using the director. Using the composable roles feature available in the Red Hat OpenStack Platform 10 director, you can create custom deployment roles. Hyper-converged Infrastructure (HCI), available with limited support for this release, allows you to co-locate Compute nodes with Red Hat Ceph Storage nodes for distributed NFV. To increase performance in HCI, CPU pinning is used. The HCI model allows more efficient management in NFV use cases. This release also provides OpenDaylight and Real-Time KVM as Technology Preview features. OpenDaylight is an open source, modular, multi-protocol controller for Software-Defined Networking (SDN) deployments. For more information on the support scope for features marked as Technology Preview, see Technology Preview.

2.3. Supported Drivers

For a complete list of supported drivers, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.

For a complete list of network adapters, see Network Adapter Feature Support in Red Hat Enterprise Linux.

2.4. Compatibility With Third Party Software

For a complete list of products and services tested, supported, and certified to perform with Red Hat technologies (Red Hat OpenStack Platform), see Third Party Software compatible with Red Hat OpenStack Platform. You can filter the list by product version and software category.

For a complete list of products and services tested, supported, and certified to perform with Red Hat technologies (Red Hat Enterprise Linux), see Third Party Software compatible with Red Hat Enterprise Linux. You can filter the list by product version and software category.

2.5. Subscription Basics

To install Red Hat OpenStack Platform 10, you must register all systems in the OpenStack environment either through the Red Hat Content Delivery Network or through Red Hat Satellite 6. If you use a Red Hat Satellite Server, synchronize the required repositories to your OpenStack Platform environment. Subscribing to the right channels gives you access to the repositories that provide the packages required for installing and configuring Red Hat OpenStack Platform.

2.6. Manage Subscriptions

The following procedure lists the steps to follow for subscribing to the necessary channels for Red Hat OpenStack Platform 10 using the Content Delivery Network.

Subscribing to the Required Channels

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
  2. Obtain detailed information about the Red Hat OpenStack Platform subscription available to you:

    # subscription-manager list --available --matches '*OpenStack Platform*'

    This command should print output similar to the following:

    +-------------------------------------------+
        Available Subscriptions
    +-------------------------------------------+

    Subscription Name:   Red Hat Enterprise Linux OpenStack Platform, Standard (2-sockets)
    Provides:            Red Hat Beta
    ...
                         Red Hat OpenStack
    ...
    SKU:                 ABC1234
    Contract:            12345678
    Pool ID:             0123456789abcdef0123456789abcdef
    Provides Management: No
    Available:           Unlimited
    Suggested:           1
    Service Level:       Standard
    Service Type:        L1-L3
    Subscription Type:   Stackable
    Ends:                12/31/2099
    System Type:         Virtual
  3. Use the Pool ID printed by this command to attach the Red Hat OpenStack Platform entitlement:

    # subscription-manager attach --pool=<Pool ID>
  4. Disable any unnecessary channels, and enable the required channels:

    # subscription-manager repos --disable=* \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-openstack-10-rpms \
    --enable=rhel-7-server-rh-common-rpms \
    --enable=rhel-7-server-extras-rpms
  5. Run the yum update command and reboot to ensure that the most up-to-date packages, including the kernel, are installed and running.

    # yum update
    # reboot

You have successfully configured your system to receive Red Hat OpenStack Platform packages. You may use the yum repolist command to confirm the repository configuration again at any time.
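
For example, a quick check that the repositories enabled above are active might look like the following; the exact repository list depends on your subscription:

# yum repolist enabled | grep rhel-7-server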

Chapter 3. Hardware

This chapter describes the hardware details necessary for NFV, for example, the approved hardware, hardware capacities, and topology.

3.1. Approved Hardware

You can use the Red Hat Technologies Ecosystem to check for a list of certified hardware, software, cloud providers, and components by choosing the category and then selecting the product version.

For a complete list of the certified hardware for Red Hat OpenStack Platform, see Red Hat OpenStack Platform certified hardware.

The following hardware has been tested and approved to work with Red Hat OpenStack Platform 10 NFV deployments:

Red Hat supports SR-IOV with 10G Mellanox, QLogic, and Intel cards. For more information on network adapters, see Network Adapter Feature Support for Red Hat Enterprise Linux.

Red Hat supports OVS-DPDK on Intel 10G ports, for example the dual-port or quad-port Intel X520 NIC. For more information, see Intel Network Adapter Technical Specifications.

3.2. Hardware Capacities

This section describes the hardware capacities for SR-IOV and OVS-DPDK with NFV.

3.2.1. Hardware Partitioning for a NFV SR-IOV Deployment

For SR-IOV, to achieve high performance, you need to partition the resources between the host and the guest.

Figure: NFV SR-IOV hardware partitioning

A typical topology includes 18 cores per NUMA node on dual-socket Compute nodes. Both hyper-threaded (HT) and non-HT cores are supported; with hyper-threading enabled, each core has two hardware threads. Two cores are dedicated to the host for managing Open vSwitch. The VNF handles the SR-IOV interface bonding. All interrupt requests (IRQs) are routed to the host cores. The VNF cores are dedicated to the VNFs; they provide isolation from other VNFs as well as from the host. Each VNF must fit on a single NUMA node and use local SR-IOV NICs. This topology does not introduce virtualization overhead on the datapath. The host, OpenStack Networking (neutron), and Compute (nova) configuration parameters are exposed in a single file for ease and consistency, and to avoid inconsistencies that break proper isolation and cause preemption and packet loss. The host and virtual machine isolation depend on a tuned profile, which sets the boot parameters and the OpenStack modifications based on the list of CPUs to isolate.
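
A minimal sketch of the host isolation described above, assuming the cpu-partitioning tuned profile is available on the Compute node; the isolated CPU list is illustrative only and must be derived from your own NUMA topology, keeping the host cores out of it:

# yum install -y tuned-profiles-cpu-partitioning
# echo "isolated_cores=2-17,20-35" >> /etc/tuned/cpu-partitioning-variables.conf
# tuned-adm profile cpu-partitioning
# reboot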

3.2.2. Hardware Partitioning for a NFV OVS-DPDK Deployment

For OVS-DPDK, resources are partitioned between the host and the guest as with SR-IOV, but there is also a third partition dedicated to OVS-DPDK itself. The OVS-DPDK Poll Mode Drivers (PMDs) run DPDK active polling loops, which require dedicated cores. This means that a list of CPUs and huge pages must be dedicated to OVS-DPDK. A typical topology includes 18 cores per NUMA node on dual-socket Compute nodes. Additional NICs are required for the traffic because NICs cannot be shared between the host and OVS-DPDK.

Figure: NFV OVS-DPDK hardware partitioning

The Red Hat OpenStack Platform director configures the Compute nodes to enforce resource partitioning and fine tuning to achieve line rate performance for the guest VNFs. Some of the features to note are as follows:

  • Huge pages are required by DPDK and by the VNF flavors for better performance.
  • NUMA alignment is required for proper performance; avoid using remote devices or memory.
  • For the fast datapath, NUMA awareness is important so that OVS-DPDK ports and KVM VirtIO ports are located on the same socket.
  • Host isolation and CPU pinning improve performance and prevent spurious packet loss.
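
A hedged sketch of how this partitioning is expressed to the director follows, using OVS-DPDK parameters of the kind described in the Network Functions Virtualization Configuration Guide; all CPU lists, memory values, and kernel arguments are illustrative and must be derived from your own hardware, and the parameter names should be verified against your template version:

parameter_defaults:
  # Cores reserved for host processes (keep these out of the isolated set)
  HostCpusList: "0,1,18,19"
  # Cores dedicated to the OVS-DPDK PMD threads
  NeutronDpdkCoreList: "2,3,20,21"
  # Huge page memory (MB) allocated to OVS-DPDK per NUMA socket
  NeutronDpdkSocketMemory: "1024,1024"
  NeutronDpdkMemoryChannels: "4"
  # Cores available for pinning guest vCPUs
  NovaVcpuPinSet: "4-17,22-35"
  NovaReservedHostMemory: 2048
  # Kernel arguments reserving huge pages and enabling IOMMU at boot
  # (applied through a first-boot script in Red Hat OpenStack Platform 10)
  ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"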

3.3. Hardware Topology

This section describes the deployment and topology for SR-IOV and OVS-DPDK with NFV.

3.3.1. Topology of a NFV SR-IOV Deployment

The following image shows two VNFs, each with two interfaces: the management interface, represented by mgt, and the dataplane interface. The management interface manages ssh access and so on. The dataplane interfaces bond the VNFs to DPDK to ensure high availability (the VNFs bond the dataplane interfaces using the DPDK library). The image also shows two fabric networks, fabric0 and fabric1, corresponding to the control plane network and the dataplane network. The Compute node has two regular NICs bonded together and shared between the VNF management and the Red Hat OpenStack Platform API management.

NFV SR-IOV deployment

The image shows a VNF that leverages DPDK at the application level and has access to SR-IOV VFs/PFs, bonded together for better availability or performance (depending on the fabric configuration). DPDK improves performance, while the VF/PF DPDK bonds provide failover (availability). The VNF vendor must ensure that their DPDK PMD driver supports the SR-IOV card that is exposed as a VF/PF, so there is a driver dependency. On the other hand, the management network uses OVS, so the VNF sees a “mgmt” network device using the standard VirtIO drivers. Operators can use that device to initially connect to the VNF and ensure that their DPDK application properly bonds the two VFs/PFs.

NFV SR-IOV without HCI

The following image shows the topology for SR-IOV without HCI for the NFV use case. It consists of Compute and Controller nodes with 1 Gbps NICs, and the Director node.

NFV SR-IOV Topology without HCI

NFV SR-IOV with HCI

The following image shows the topology for SR-IOV with HCI for the NFV use case. It consists of a Compute OSD node (HCI) and a Controller node with 1 or 10 Gbps NICs, and the Director node.

NFV SR-IOV Topology with HCI

3.3.2. Topology of a NFV OVS-DPDK Deployment

Similar to the SR-IOV deployment for NFV, the OVS-DPDK deployment also consists of two VNFs, each with two interfaces: the management interface, represented by mgt, and the dataplane interface. In the OVS-DPDK deployment, the VNFs run with built-in DPDK support for the physical interfaces, and OVS-DPDK handles the bonding at the vSwitch level. In an OVS-DPDK deployment, do not mix kernel and OVS-DPDK NICs, because this can lead to performance degradation. To separate the management (mgt) network, which is connected to the Base provider network for the virtual machine, ensure that you have additional NICs. The Compute node consists of two regular NICs for the OpenStack API management, which can be reused by the Ceph API but cannot be shared with any OpenStack tenant.
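
For illustration only, the bonding that OVS-DPDK performs at the vSwitch level (and that the director configures for you) is conceptually equivalent to the following manual Open vSwitch 2.5 commands; the bridge, bond, and port names are hypothetical:

# ovs-vsctl add-br br-link -- set bridge br-link datapath_type=netdev
# ovs-vsctl add-bond br-link dpdkbond0 dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk

With this model, the VNF sees a single vhost-user port, and failover between the two physical NICs is handled entirely by the vSwitch rather than inside the guest.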

NFV OVS-DPDK deployment

NFV OVS-DPDK Topology

The following image shows the topology for OVS-DPDK for the NFV use case. It consists of Compute and Controller nodes with 1 or 10 Gbps NICs, and the Director node.

NFV OVS-DPDK Topology

Chapter 4. Network Considerations

The undercloud host requires at least the following networks:

  • Provisioning network - Provides DHCP and PXE boot functions to help discover bare metal systems for use in the overcloud.
  • External network - A separate network for remote connectivity to all nodes. The interface connecting to this network requires a routable IP address, either defined statically, or dynamically through an external DHCP service.

The minimal overcloud network configuration includes:

  • Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types.
  • Dual NIC configuration - One NIC for the Provisioning network and the other NIC for the External network.
  • Dual NIC configuration - One NIC for the Provisioning network on the native VLAN and the other NIC for tagged VLANs that use subnets for the different overcloud network types (see the sketch after the note below).
  • Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.
Note

The Provisioning network only uses the native VLAN.
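
The following is a minimal, hedged sketch of the network_config fragment for the second dual NIC configuration listed above (one NIC for provisioning on the native VLAN, the other carrying tagged VLANs). It follows the standard tripleo-heat-templates nic-config conventions; the NIC names, bridge name, and the set of VLANs shown are illustrative and must be adapted to your environment:

network_config:
  # NIC 1: Provisioning network on the native VLAN
  - type: interface
    name: nic1
    use_dhcp: false
    addresses:
      - ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
  # NIC 2: OVS bridge carrying the tagged VLANs for the overcloud networks
  - type: ovs_bridge
    name: br-vlan
    use_dhcp: false
    members:
      - type: interface
        name: nic2
        primary: true
      - type: vlan
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}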

The overcloud network configuration for Ceph (HCI), with NFV SR-IOV topology (see NFV SR-IOV with HCI) includes:

  • 3 x 1G ports for the director, provisioning, and OVS (isolated in the case of SR-IOV)
  • 6 x 10G ports: 2 x 10G for Ceph and the remaining ports for DPDK/SR-IOV
Note

Ceph HCI is technology preview in Red Hat OpenStack Platform 10. For more information on the support scope for features marked as technology previews, see Technology Preview.

For more information on the networking requirements, see Networking Requirements.

Chapter 5. Managing Deployments

The overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core Heat template collection on the director node.

With Red Hat OpenStack Platform 10, you can create custom deployment roles using the composable roles feature, adding or removing services from each role. For more information on composable roles, see Composable Roles and Services.

The following section lists the steps to create, add, and modify an OVS-DPDK customized role during deployment as an example. This example deployment consists of a single Controller node, a single Compute node, and a single Compute node with OVS-DPDK (compute_with_ovs_dpdk) in a cluster. Here, compute_with_ovs_dpdk is a customized role that enables DPDK only on the nodes that have the supported NICs. This example lists the changes required as part of the composable roles feature of the director to deploy a cluster with the OVS-DPDK role on the Compute node.

Configure Deployment

To prepare the OVS-DPDK cluster, set the profile to computeovsdpdk for the bare metal node that is targeted to be deployed with the OVS-DPDK role. For example, the bare metal ironic node named compute-1 is targeted to deploy the OVS-DPDK role.

The following step tags the specific bare metal node with a profile so that it can later be matched to a specific flavor:

# ironic node-update compute-1 add properties/capabilities='profile:computeovsdpdk,boot_option:local'

Since a specific node is targeted in this example, add a new flavor in the undercloud with the profile set to computeovsdpdk. The new role uses this flavor to deploy on the bare metal node with the same profile value:

# openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 4 computeovsdpdk
# openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="computeovsdpdk" computeovsdpdk
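
To verify the tagging and the flavor before running the deployment, you can use checks similar to the following; the overcloud profiles list subcommand summarizes node-to-profile assignments if your python-tripleoclient version provides it:

# ironic node-show compute-1 | grep capabilities
# openstack flavor show computeovsdpdk
# openstack overcloud profiles list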

Role Definition

The new OVS-DPDK role must now be added to the roles definition, which is given as input to the deploy command. The existing set of roles is stored in the /home/stack/roles_data.yaml file. Add the following new definition by copying it to your /home/stack/templates directory. In the following example, all the services of the new role are the same as those of a regular Compute role, with the exception of ComputeNeutronOvsAgent, which is replaced with ComputeNeutronOvsDpdkAgent to map to the OVS-DPDK service.

- name: ComputeOvsDpdk
  CountDefault: 1
  HostnameFormatDefault: '%stackname%-computeovsdpdk-%index%'
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::ComputeNeutronCorePlugin
    - OS::TripleO::Services::ComputeNeutronOvsDpdkAgent
    - OS::TripleO::Services::ComputeCeilometerAgent
    - OS::TripleO::Services::ComputeNeutronL3Agent
    - OS::TripleO::Services::ComputeNeutronMetadataAgent
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::NeutronSriovAgent
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::VipHosts

Environment Template

The resource mapping for the OVS-DPDK service, along with the network configuration for this node, must be added to the network-environment.yaml file as follows:

resource_registry:
  OS::TripleO::Services::ComputeNeutronOvsDpdkAgent: /home/stack/templates/openstack-tripleo-heat-templates/puppet/services/neutron-ovs-dpdk-agent.yaml
  OS::TripleO::ComputeOvsDpdk::Net::SoftwareConfig: /home/stack/templates/nic-configs/computeovsdpdk.yaml

The neutron-ovs-dpdk-agent.yaml file is available at the /home/stack/<relative-directory>/neutron-ovs-dpdk-agent.yaml location of your deployment.
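
In addition to the resource mapping, the new role typically needs its node count and flavor associated in the same environment file. The following is a hedged sketch; the ComputeOvsDpdkCount and OvercloudComputeOvsDpdkFlavor names follow the <RoleName>Count and Overcloud<RoleName>Flavor convention generated for composable roles, so verify them against your template version:

parameter_defaults:
  # One node deployed with the custom OVS-DPDK role
  ComputeOvsDpdkCount: 1
  # Flavor created earlier with the computeovsdpdk profile
  OvercloudComputeOvsDpdkFlavor: computeovsdpdk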

Deploy Command

To deploy the environment, the customized roles definition file (/home/stack/templates/roles_data.yaml) along with the network-environment.yaml file must be passed as input parameters to the deploy command:

# openstack overcloud deploy --templates /home/stack/templates/openstack-tripleo-heat-templates/ \
    --timeout 180 \
    -r /home/stack/templates/roles_data.yaml \
    -e /home/stack/templates/network-environment.yaml

For more details on how to configure DPDK-accelerated Open vSwitch (OVS) with the Red Hat OpenStack Platform director, see Configure DPDK-accelerated Open vSwitch (OVS) for Networking.

Chapter 6. Performance

Red Hat OpenStack Platform 10 director configures the Compute nodes to enforce resource partitioning and fine tuning to achieve line rate performance for the guest VNFs. The key performance factors in the NFV use case are throughput, latency and jitter.

DPDK-accelerated OVS enables high performance packet switching between physical NICs and virtual machines. OVS 2.5 with DPDK 2.2 adds support for vhost-user multiqueue allowing scalable performance. OVS-DPDK provides line rate performance for guest VNFs.
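
To take advantage of vhost-user multiqueue, the guest image typically needs the multiqueue property set; a hedged example follows, where the image name and queue count are illustrative:

# openstack image set --property hw_vif_multiqueue_enabled=true rhel7-vnf-image

Inside the guest, the number of queues is then enabled on the VirtIO interface, for example with ethtool -L eth0 combined 4.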

SR-IOV networking provides enhanced performance characteristics, including improved throughput for specific networks and virtual machines.

Other important features for performance tuning include huge pages, NUMA alignment, host isolation and CPU pinning. VNF flavors require huge pages for better performance. Host isolation and CPU pinning improve NFV performance and prevent spurious packet loss.
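
As an illustrative example only, these tuning features are typically requested through Compute flavor extra specs such as the following; the flavor name and values are hypothetical and must match your host configuration:

# openstack flavor create --ram 8192 --disk 40 --vcpus 8 vnf-flavor
# openstack flavor set vnf-flavor \
    --property hw:mem_page_size=1GB \
    --property hw:cpu_policy=dedicated \
    --property hw:numa_nodes=1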

For more details on these features and performance tuning for NFV, see NFV Tuning for Performance.

Chapter 7. Technical Support

The following table includes additional Red Hat documentation for reference:

The Red Hat OpenStack Platform documentation suite can be found here: Red Hat OpenStack Platform 10 Documentation Suite.

Table 7.1. List of Available Documentation


Red Hat Enterprise Linux

Red Hat OpenStack Platform is supported on Red Hat Enterprise Linux 7.3. For information on installing Red Hat Enterprise Linux, see the corresponding installation guide at: Red Hat Enterprise Linux.

Red Hat OpenStack Platform

To install OpenStack components and their dependencies, use the Red Hat OpenStack Platform director. The director uses a basic OpenStack installation as the undercloud to install, configure and manage the OpenStack nodes in the final overcloud. Be aware that you will need one extra host machine for the installation of the undercloud, in addition to the environment necessary for the deployed overcloud. For detailed instructions, see Red Hat OpenStack Platform director Installation and Usage.

For information on configuring advanced features for a Red Hat OpenStack Platform enterprise environment using the Red Hat OpenStack Platform director such as network isolation, storage configuration, SSL communication, and general configuration method, see Advanced Overcloud Customization.

You can also manually install the Red Hat OpenStack Platform components; see Manual Installation Procedures.

NFV Documentation

For a high level overview of the NFV concepts, see the Network Functions Virtualization Product Guide.

For information on configuring SR-IOV and OVS-DPDK with Red Hat OpenStack Platform 10 director, see the Network Functions Virtualization Configuration Guide.

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.