Deploying mobile networks using Red Hat OpenStack Platform 10
Abstract
The purpose of this document is to serve as a reference architecture for deploying mobile networks using Red Hat OpenStack Platform 10.
Comments and Feedback
In the spirit of open source, we invite anyone to provide feedback and comments on any reference architecture. Although we review our papers internally, issues or typographical errors are sometimes encountered. Feedback not only allows us to improve the quality of the papers we produce, but also allows readers to share their thoughts on potential improvements and topic expansion. Feedback on the papers can be provided by emailing refarch-feedback@redhat.com. Please include the document title in the email.
Chapter 1. Executive Summary
The reference architecture titled “Deploying mobile networks using network function virtualization” (https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_mobile_networks_using_network_functions_virtualization/), released in December 2016, discussed the conceptual aspects of designing and deploying mobile networks on NFV. This reference architecture describes how to deploy mobile networks using network functions virtualization (NFV) on Red Hat® OpenStack® Platform 10.
This document walks through the hardware summary, software summary, and solution design in logical order, with specific emphasis on high availability and performance, as these are key to NFV deployments.
The document is organized as follows:
- Chapter 1 provides the executive summary
- Chapter 2 describes the hardware that was used to validate the architecture in the NFV lab.
- Chapter 3 contains the software prerequisites for the undercloud and managing hardware with Ironic.
- Chapter 4 provides an overview of networking
- Chapter 5 describes the storage architecture used in the NFV lab.
- Chapter 6 describes how to deploy the Undercloud and Overcloud using Red Hat OpenStack director
- Chapter 7 deals with setting up and configuration details of SR-IOV and OVS-DPDK
- Chapter 8 details the additional steps required for tuning the servers for peak performance using OVS-DPDK
- Chapter 9 discusses how to achieve high availability with SR-IOV and OVS-DPDK
- Appendix A covers SR-IOV failure test
- Appendix B covers DPDK performance tests for various packet sizes
- Appendix C provides the GitHub location where the templates used in this setup are shared
- Appendix D lists the key contributors who made this document possible
- Appendix E shows the revision history
This reference architecture has been completed with Red Hat Enterprise Linux 7.3, Red Hat OpenStack Platform 10, and Red Hat OpenStack Platform director 10. All of the steps listed were performed by the Red Hat Systems team.
The complete use case was deployed in the NFV lab on bare metal servers, except where otherwise noted.
Chapter 2. Hardware
Two key requirements of Telcos for any application are performance and high availability (HA). In the multi-layered world of NFV, HA cannot be achieved at the infrastructure layer alone. HA needs to be expansive and overarching across all aspects of design: hardware support, layer 2 and layer 3 best practices in the underlay network, the OpenStack layer, and, last but not least, the application layer. ETSI NFV (ETSI GS NFV-REL 001 V1.1.1 (2015-01)) defines “Service Availability” rather than speaking in terms of five 9s: “At a minimum, the Service Availability requirements for NFV should be the same as those for legacy systems (for the same service)”. This refers to the end-to-end service (VNFs and infrastructure components).
Red Hat recommends a full HA deployment of Red Hat OpenStack Platform using Red Hat OpenStack director.
The lab used to validate this architecture consisted of:
- 12 Servers
- 2 Switches
This is shown in Table 1:
Role | Servers |
---|---|
Controllers + Ceph Mon | 3 for full HA deployment |
Ceph OSDs | 3 for full HA deployment |
Compute Nodes | 5 used in validation lab |
Undercloud Server | 1 |
Table 1: NFV validation lab components for HA deployment using Red Hat OpenStack director
2.1. Servers
While both blade servers and rack mount servers are used to deploy virtual mobile networks and NFV in general, rack mount servers were chosen here because they provide more flexibility:
- Plenty of drive bays for the storage used by the Ceph OSDs
- Direct access to physical Network Interface Cards (NICs)
- Some telcos prefer direct access to the management port (IPMI) so the entire server can be shut down in the event it is compromised by a security attack.
2.2. Server Specifications
Physical CPU model is a moving target. Newer, better and faster CPUs are released by Intel and other vendors constantly. For validating this architecture the following specifications were used for server hardware:
- 2 Intel Xeon Processor E5-2650v4 12-Core
- 128GB of DRAM
Intel X540 10G NICs, with ports assigned as follows:
- NIC 0: IPMI
- NIC 1: External OOB (Not used for OpenStack)
- NIC 2: PXE/Provisioning
- NIC 3: Network Isolation Bond
- NIC 4: Network Isolation Bond
- NIC 5: Dataplane Port (SR-IOV or DPDK)
- NIC 6: Dataplane Port (SR-IOV or DPDK)
It is important to note that in order to use fast datapath features such as SR-IOV and OVS-DPDK, NICs that support these features are required. NICs that support SR-IOV are listed at http://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000005722.html.
DPDK-supported NICs are listed at http://dpdk.org/doc/nics
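Whether a given NIC actually exposes SR-IOV can also be checked directly on the host. The following check is a minimal sketch, assuming the interface name used later in this lab (ens6f0); it is not part of the deployment procedure itself:

# Find the PCI address of the interface and confirm the SR-IOV capability is present
lspci -vvv -s $(ethtool -i ens6f0 | awk '/bus-info/ {print $2}') | grep -i "Single Root I/O"
# Maximum number of virtual functions the device can expose
cat /sys/class/net/ens6f0/device/sriov_totalvfs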
Chapter 3. Software
3.1. Red Hat OpenStack Platform 10
Red Hat OpenStack Platform 10 is based on the Newton OpenStack release. Newton is the 14th OpenStack release. This release improves reliability and provides new features and functionality.
3.2. New Features in Red Hat OpenStack Platform 10
There are many new features available in Red Hat OpenStack Platform 10. A few of these features are listed below:
- Monitoring – A Sensu client is installed on each of the nodes to allow for monitoring of the individual OpenStack nodes and services.
- Graphical Installer UI – The RHOSP overcloud deployment can now be performed using a Graphical User Interface.
- Composable Roles – OpenStack services can now be combined or segregated to create customized roles for OpenStack nodes.
- Director as a virtual machine – With this release, the Red Hat OpenStack Platform 10 director is supported when deployed on a virtual machine.
3.3. Red Hat Enterprise Linux
The Red Hat Enterprise Linux version bundled with Red Hat OpenStack Platform 10 is 7.3:
[root@overcloud-compute-0 ~]# uname -a
Linux overcloud-compute-0.localdomain 3.10.0-514.10.2.el7.x86_64 #1 SMP Mon Feb 20 02:37:52 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@overcloud-compute-0 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
Chapter 4. Networking
4.1. Logical Networks
A typical Red Hat OpenStack Platform deployment using Red Hat OpenStack director requires creating a number of logical networks, some meant for internal traffic, such as the storage network or OpenStack internal API traffic, and others for external and tenant traffic. For the validation lab, the following types of networks have been created, as shown in Table 2:
Network | Description | VLAN | Interface |
---|---|---|---|
External | Hosts the OpenStack Dashboard (horizon) for graphical system management, the public APIs for OpenStack services, and performs SNAT for incoming traffic destined for instances. Floating IP as well. Allows incoming traffic to reach instances using 1-to-1 IP address mapping between the floating IP address, and the IP address actually assigned to the instance in the tenant network. It should be noted that for convenience in the NFV lab, VLAN 100 combines both the public API and external networks, for production deployments it is a best practice to segregate public API and Floating IP into their own VLANs. | 100 | NIC 3/4 Bond |
Provisioning (PXE) | The director uses this network traffic type to deploy new nodes over PXE boot and orchestrate the installation of OpenStack Platform on the Overcloud bare metal servers. This network is predefined before the installation of the Undercloud. | 1038 | NIC2 |
Internal API | The Internal API network is used for communication between the OpenStack services using API communication, RPC messages, and database communication. | 1020 | NIC 3/4 Bond |
Tenant | Neutron provides each tenant with their own networks using either VLAN segregation (where each tenant network is a network VLAN), or tunneling (through VXLAN or GRE). Network traffic is isolated within each tenant network. Each tenant network has an IP subnet associated with it, and network namespaces means that multiple tenant networks can use the same address range without causing conflicts. | 1050 | NIC 3/4 Bond |
Provider Network (SR-IOV & OVS-DPDK) | It should be noted for SR-IOV and OVS-DPDK, we will need to create several VLANs (network segments) in order to support mobile network deployment. These include dataplane, NFV control plane, NFV management. | N/A | NIC 5/6 |
Storage | Block Storage, NFS, iSCSI, and others. Ideally, this would be isolated to an entirely separate switch fabric for performance reasons. | 1030 | NIC 3/4 Bond |
Storage Management | OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph backend connect over the Storage Management network, since they do not interact with Ceph directly but rather use the frontend service. Note that the RBD driver is an exception, as this traffic connects directly to Ceph. | 1040 | NIC 3/4 Bond. |
Table 2: NFV validation lab components for HA deployment using Red Hat OpenStack director
Note that the controller nodes can be connected to the provider networks if DHCP functionality is required on the provider networks.
When bonding two or more ports, best practice is to avoid bonding ports that are on different NUMA nodes as this will force lookups across the Quick Path Interconnect (QPI) bus resulting in sub-optimal performance. This is generally true regardless of whether the bond is being created for control plane, management, internal API traffic (NIC 3/4 bond) or for data plane traffic (NIC 5/6 bond).
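To follow this guidance, the NUMA node that each candidate bond member is attached to can be read from sysfs before deciding on the bond layout. A small sketch, assuming the interface names used in this lab:

# Print the NUMA node of each NIC considered for bonding
# (-1 means the platform does not report NUMA locality for that device)
for nic in ens3f0 ens3f1 ens6f0 ens6f1; do
    echo -n "$nic -> NUMA node "
    cat /sys/class/net/$nic/device/numa_node
done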
Some networks used for deploying mobile VNFs require larger Maximum Transmit Unit (MTU) sizes to be configured on the interfaces. Since this document focuses on generic infrastructure deployment, the configuration templates (yaml files) required for setting the MTU sizes are shared in the GitHub listed in the appendix.

Figure 1: Network connectivity overview
Chapter 5. Storage
5.1. Red Hat Ceph Storage
OpenStack private clouds leverage both block and object storage. Ceph is the preferred storage platform to provide these services.
The director provides the ability to configure extra features for an Overcloud. One of these extra features includes integration with Red Hat Ceph Storage. This includes both Ceph Storage clusters created with the director or existing Ceph Storage clusters.
Red Hat Ceph Storage is a distributed data object store designed to provide excellent performance, reliability, and scalability. Distributed object stores are the future of storage, because they accommodate unstructured data, and because clients can use modern object interfaces and legacy interfaces simultaneously. At the heart of every Ceph deployment is the Ceph Storage Cluster, which consists of two types of daemons:
- Ceph Object Storage Daemon(OSD) - Ceph OSDs store data on behalf of Ceph clients. Additionally, Ceph OSDs utilize the CPU and memory of Ceph nodes to perform data replication, rebalancing, recovery, monitoring and reporting functions.
- Ceph Monitor - A Ceph monitor maintains a master copy of the Ceph storage cluster map with the current state of the storage cluster. For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.
Red Hat OpenStack Platform director provides two main methods for integrating Red Hat Ceph Storage into an Overcloud.
- Creating an Overcloud with its own Ceph Storage Cluster. The director has the ability to create a Ceph Storage Cluster during the creation on the Overcloud. The director creates a set of Ceph Storage nodes that use the Ceph OSD to store the data. In addition, the director installs the Ceph Monitor service on the Overcloud’s Controller nodes. This means if an organization creates an Overcloud with three highly available controller nodes, the Ceph Monitor also becomes a highly available service.
- Integrating an Existing Ceph Storage into an Overcloud. If you already have an existing Ceph Storage Cluster, you can integrate this during an Overcloud deployment. This means you manage and scale the cluster outside of the Overcloud configuration.
In the NFV mobile validation lab, the first approach is used. The overcloud is created with its own Ceph storage cluster.
5.1.1. Ceph Node Details
To create a Ceph cluster using Red Hat OpenStack Platform director, the following requirements should be met for the Ceph nodes:
Processor: 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory: Memory requirements depend on the amount of storage space. Red Hat recommends a baseline of 16 GB of RAM plus an additional 2 GB of RAM per OSD.
Disk Space: Storage requirements depend on the OpenStack requirements: ephemeral storage, image storage, and volumes.
Disk Layout: The recommended Red Hat Ceph Storage node configuration requires a disk layout similar to the following:
- /dev/sda - The root disk. The director copies the main Overcloud image to the disk.
- /dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For example, /dev/sdb1, /dev/sdb2, /dev/sdb3, and onward. The journal disk is usually a solid state drive (SSD) to aid with system performance.
- /dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage requirements.
The journal-to-OSD ratio depends on the number of OSDs and the type of drive used for the journal (SSD, enterprise SSD, or NVMe).
If you need to set the root_device, Ceph journals/OSDs, and so on for your deployment, the nodes’ introspected data is now available for consumption. For instance:
[stack@undercloud-nfv-vepc2 ~]$ cd swift-data/ | [stack@undercloud-nfv-vepc2 swift-data]$ for node in $(ironic node-list | awk '!/UUID/ {print $2}'); do echo "NODE: $node" ; cat inspector_data-$node | jq '.inventory.disks' ; echo "-----" ; done NODE: 2a16ae03-ae92-41ba-937b-b31163f8536e [ { "size": 1000204886016, "rotational": true, "vendor": "SEAGATE", "name": "/dev/sda", "wwn_vendor_extension": null, "wwn_with_extension": "0x5000c5008f32caab", "model": "ST1000NX0333", "wwn": "0x5000c5008f32caab", "serial": "5000c5008f32caab" }, { "size": 1000204886016, "rotational": true, "vendor": "SEAGATE", "name": "/dev/sdb", "wwn_vendor_extension": null, "wwn_with_extension": "0x5000c5008f33ca5f", "model": "ST1000NX0333", "wwn": "0x5000c5008f33ca5f", "serial": "5000c5008f33ca5f" }, { "size": 1000204886016, "rotational": true, "vendor": "SEAGATE", "name": "/dev/sdc", "wwn_vendor_extension": null, "wwn_with_extension": "0x5000c5008f32a15f", "model": "ST1000NX0333", "wwn": "0x5000c5008f32a15f", "serial": "5000c5008f32a15f" }, { "size": 1000204886016, "rotational": true, "vendor": "SEAGATE", "name": "/dev/sdd", "wwn_vendor_extension": null, "wwn_with_extension": "0x5000c5008f2e2e9b", "model": "ST1000NX0333", "wwn": "0x5000c5008f2e2e9b", "serial": "5000c5008f2e2e9b" }, { "size": 1000204886016, "rotational": true, "vendor": "SEAGATE", "name": "/dev/sde", "wwn_vendor_extension": null, "wwn_with_extension": "0x5000c5008f2be95b", "model": "ST1000NX0333", "wwn": "0x5000c5008f2be95b", "serial": "5000c5008f2be95b" }, { "size": 1000204886016, "rotational": true, "vendor": "SEAGATE", "name": "/dev/sdf", "wwn_vendor_extension": null, "wwn_with_extension": "0x5000c5008f337f83", "model": "ST1000NX0333", "wwn": "0x5000c5008f337f83", "serial": "5000c5008f337f83" }, { "size": 599013720064, "rotational": true, "vendor": "LSI", "name": "/dev/sdg", "wwn_vendor_extension": "0x200150182f9945c7", "wwn_with_extension": "0x6001636001aa98c0200150182f9945c7", "model": "MRROMB", "wwn": "0x6001636001aa98c0", "serial": "6001636001aa98c0200150182f9945c7" }, { "size": 400088457216, "rotational": false, "vendor": null, "name": "/dev/nvme0n1", "wwn_vendor_extension": null, "wwn_with_extension": null, "model": "INTEL SSDPE2MD400G4", "wwn": null, "serial": "CVFT5481009D400GGN" }, { "size": 400088457216, "rotational": false, "vendor": null, "name": "/dev/nvme1n1", "wwn_vendor_extension": null, "wwn_with_extension": null, "model": "INTEL SSDPE2MD400G4", "wwn": null, "serial": "CVFT548100AP400GGN" } ]
You can then use this data to set root devices on nodes (in this case, it is being set to /dev/sdg):
openstack baremetal node set --property root_device='{"serial": "6001636001aa98c0200150182f9945c7"}' 2a16ae03-ae92-41ba-937b-b31163f8536e
Additional configuration is set in the storage-environment.yaml file as follows:
#### CEPH SETTINGS ####

## Whether to deploy Ceph OSDs on the controller nodes. By default
## OSDs are deployed on dedicated ceph-storage nodes only.
# ControllerEnableCephStorage: false

## When deploying Ceph Nodes through the oscplugin CLI, the following
## parameters are set automatically by the CLI. When deploying via
## heat stack-create or ceph on the controller nodes only,
## they need to be provided manually.

## Number of Ceph storage nodes to deploy
# CephStorageCount: 0
## Ceph FSID, e.g. '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
#CephClusterFSID: 'ab47eae6-83a4-4e2b-a630-9f3a4bb7f055'
## Ceph monitor key, e.g. 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
# CephMonKey: ''
## Ceph admin key, e.g. 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
# CephAdminKey: ''
# ExtraConfig:
ceph::profile::params::osds:
  '/dev/sda':
    journal: '/dev/nvme0n1'
  '/dev/sdb':
    journal: '/dev/nvme0n1'
  '/dev/sdc':
    journal: '/dev/nvme0n1'
  '/dev/sdd':
    journal: '/dev/nvme1n1'
  '/dev/sde':
    journal: '/dev/nvme1n1'
  '/dev/sdf':
    journal: '/dev/nvme1n1'
Erase all existing partitions on the disks targeted for journaling and OSDs before deploying the Overcloud. In addition, the Ceph Storage OSDs and journal disks require GPT disk labels, which can be configured as a part of the deployment.
Wiping the disks is performed through first-boot.yaml using the wipe_disks script as shown below:
resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      …
      …
      - config: {get_resource: wipe_disks}
  wipe_disks:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        if [[ `hostname` = *"ceph"* ]]
        then
          echo "Number of disks detected: $(lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}' | wc -l)"
          for DEVICE in `lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}'`
          do
            ROOTFOUND=0
            echo "Checking /dev/$DEVICE..."
            echo "Number of partitions on /dev/$DEVICE: $(expr $(lsblk -n /dev/$DEVICE | awk '{print $7}' | wc -l) - 1)"
            for MOUNTS in `lsblk -n /dev/$DEVICE | awk '{print $7}'`
            do
              if [ "$MOUNTS" = "/" ]
              then
                ROOTFOUND=1
              fi
            done
            if [ $ROOTFOUND = 0 ]
            then
              echo "Root not found in /dev/${DEVICE}"
              echo "Wiping disk /dev/${DEVICE}"
              sgdisk -Z /dev/${DEVICE}
              sgdisk -g /dev/${DEVICE}
            else
              echo "Root found in /dev/${DEVICE}"
            fi
          done
        fi
Things to note about Ceph storage deployment:
- SSDs should be used for Ceph journals. This is done for speed, to support small writes as well as bursty workloads; the journals provide both speed and consistency.
- Ceph monitors will run as daemons on the controller nodes.
- OSDs are made up of six 1 TB disks
- No RAID is used here (“just a bunch of disks” or “JBODs”)
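Once the overcloud is up, a quick way to confirm that the director-deployed Ceph cluster is healthy is to query it from one of the controller nodes, which host the Ceph monitors in this architecture. A minimal sketch (it assumes the Ceph CLI and admin keyring that director places on the controllers):

sudo ceph -s        # overall health, monitor quorum and OSD count
sudo ceph osd tree  # confirm all OSDs are up and mapped to the Ceph nodes
sudo ceph df        # raw and per-pool capacity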
Chapter 6. Deployment using Red Hat OpenStack director
For the mobile NFV lab, Red Hat OpenStack director is used to install, configure, and manage the OpenStack cloud. This document does not discuss the basics of how to install Red Hat OpenStack Platform using director. However, details of how the undercloud and overcloud are set up in the lab, in particular using a virtualized undercloud (VM), are covered.
6.1. Undercloud Deployment
Requirements for the undercloud server are:
- An 8-core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
- A minimum of 16 GB of RAM.
- A minimum of 40 GB of available disk space on the root disk. Make sure to leave at least 10 GB free space before attempting an overcloud deployment or update. This free space accommodates image conversion and caching during the node provisioning process.
- A minimum of 2 x 1 Gbps Network Interface Cards. However, it is recommended to use a 10 Gbps interface for Provisioning network traffic, especially if provisioning a large number of nodes in your overcloud environment.
- Red Hat Enterprise Linux 7.3 installed as the host operating system.
- SELinux is enabled on the host.
More details are available at https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/chap-requirements.
In the validation lab, the undercloud is set up in a virtual environment running on the top of Kernel-based Virtual Machine (KVM).
For the sake of deploying Red Hat OpenStack Platform using director, the undercloud can run either on bare metal or as a VM. We prefer to run it as a VM, as this has several advantages: the VM can be snapshotted and backed up periodically, and multiple undercloud instances can be run for different OpenStack clouds at the same time.
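As an example of the snapshot advantage, the undercloud VM can be snapshotted from the KVM host before major operations such as an overcloud deployment. A minimal sketch using the VM name from this lab, assuming the VM disk is in qcow2 format (required for libvirt internal snapshots):

# Shut the undercloud down for a consistent snapshot, snapshot it, then start it again
virsh shutdown undercloud-nfv-vepc
virsh snapshot-create-as undercloud-nfv-vepc pre-overcloud-deploy "Before overcloud deployment"
virsh snapshot-list undercloud-nfv-vepc
virsh start undercloud-nfv-vepc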
6.1.1. Network Considerations
The following network considerations are required for a virtualized undercloud deployment:
Power Management: The undercloud VM requires access to the overcloud nodes' power management devices. This is the IP address set for the pm_addr parameter when registering nodes. This is shown as the IPMI network in Figure 2.
External network: The external network (eth1 on the undercloud VM) provides outside access for the undercloud VM.
Provisioning network: The NIC used for the provisioning (ctlplane) network requires the ability to broadcast and serve DHCP requests to the NICs of the overcloud’s bare metal nodes. As a recommendation, create a bridge that connects the VM’s NIC to the same network as the bare metal NICs.
A common problem occurs when the hypervisor technology blocks the undercloud from transmitting traffic from an unknown address. - If using Red Hat Virtualization, disable anti-mac-spoofing to prevent this. - If using VMware ESX or ESXi, allow forged transmits to prevent this.
The KVM host uses two Linux bridges:
br0 (ens255f0)
- Provides outside access to the undercloud
- DHCP server on outside network assigns network configuration to undercloud using the virtual NIC (eth1)
- Provides access for the undercloud to access the power management interfaces for the bare metal servers
br-1038 (Control plane, ens255f1.1038 )
- Connects to the same network as the bare metal overcloud nodes
- Undercloud fulfills DHCP and PXE boot requests through virtual NIC (eth0)
- Bare metal servers for the overcloud boot through PXE over this network
On the host where the undercloud VM lives, there are several underclouds running as VMs, each having its own VLAN instance. For the NFV validation lab used in this document, the undercloud VM using VLAN 1038 is used.
[root@se-nfv-srv12 ~]# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.2c600c597fd2       no              ens255f0
                                                        vnet1
                                                        vnet2
[root@se-nfv-srv12 ~]# brctl show br-1038
bridge name     bridge id               STP enabled     interfaces
br-1038         8000.2c600c597fd3       no              ens255f1.1038
                                                        vnet2
The undercloud virtual machine status can be observed on the host below:
[root@se-nfv-srv12 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 6     undercloud-nfv-vepc            running
VLAN 1038 is used as the provisioning network on which the OpenStack controllers, compute and Ceph nodes boot using preboot execution environment (PXE).
VLAN tag 1038 is applied by the host for traffic from the “undercloud-nfv-vepc” VM towards the layer2 switches. The interface configuration file /etc/sysconfig/network-scripts/ifcfg-ens255f1.1038 looks like this:
DEVICE=ens255f1.1038
ONBOOT=yes
BRIDGE=br-1038
VLAN=yes
On the switch towards each OpenStack node (controller, Ceph, and compute), the VLAN tag is removed and traffic to and from the OpenStack nodes is untagged. Switch configurations for the ports connecting to two of the OpenStack nodes, servers 7 and 8, are shown below. They connect to ports 25 and 26 respectively on the layer 2 switch. Because the ports are configured as bridge-access, packets towards the server 7 and 8 PXE ports will be untagged.
auto swp25
iface swp25
    bridge-access 1038
    alias swp25 server7_pxe

auto swp26
iface swp26
    bridge-access 1038
    alias swp26 server8_pxe
The connection towards server12 which hosts the undercloud VMs is a trunk port carrying VLANs 1038 through 1045:
# port 30 is attached to server12's pxe interface
# server 12 is the Director node and can support
# multiple clouds therefore the pxe interface must
# be a trunk.
# PXE vlans are number 1038, 1039, ..., 1045
auto swp30
iface swp30
    bridge-vids 1038-1045
    #bridge-pvid 1038
    alias swp30 server12_pxe
instack.json contains information about the overcloud nodes:
{ "nodes":[ { "name": "overcloud-compute-0", "mac":[ "2C:60:0C:CA:AB:37" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.193" }, { "name": "overcloud-compute-1", "mac":[ "2C:60:0C:CD:09:2B" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.195" }, { "name": "overcloud-compute-2", "mac":[ "2C:60:0C:CA:AA:56" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.197" }, { "name": "overcloud-compute-3", "mac":[ "2C:60:0C:CA:37:3A" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.199" }, { "name": "overcloud-compute-4", "mac":[ "2C:60:0C:CA:A8:46" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.201" }, { "name": "overcloud-ceph-0", "mac":[ "2C:60:0C:CA:78:8F" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.203" }, { "name": "overcloud-ceph-1", "mac":[ "2C:60:0C:CA:35:78" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.205" }, { "name": "overcloud-ceph-2", "mac":[ "2C:60:0C:CA:34:FD" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.207" }, { "name": "overcloud-controller-0", "mac":[ "2C:60:0C:CA:A9:06" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.209" }, { "name": "overcloud-controller-1", "mac":[ "2C:60:0C:83:71:D0" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.211" }, { "name": "overcloud-controller-2", "mac":[ "2C:60:0C:CD:12:46" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"admin", "pm_addr":"10.19.109.213" } ] }
6.2. Overcloud Deployment
As mentioned in the previous section, Red Hat OpenStack director runs on the undercloud. It interacts with the OpenStack nodes (controllers, computes, and Ceph) using two networks:
- IPMI network
- Provisioning network

Figure 2: IPMI and Provisioning networks used for deploying overcloud
Figure 2 shows the IPMI and provisioning networks.
The services on the overcloud used in this lab follow the documentation provided at https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/chap-introduction#sect-Overcloud and are listed in Table 3:
Service | Node(s) |
---|---|
OpenStack Dashboard (horizon) | Controllers |
OpenStack Identity (keystone) | Controllers |
OpenStack Compute (nova) API | Controllers |
OpenStack Networking (neutron) | Controllers |
OpenStack Image Service (glance) | Controllers |
OpenStack Block Storage (cinder) | Controllers |
OpenStack Object Storage (swift) | Controllers |
OpenStack Orchestration (heat) | Controllers |
OpenStack Telemetry (ceilometer) | Controllers |
OpenStack Telemetry Metrics (gnocchi) | Controllers |
OpenStack Telemetry Alarming (aodh) | Controllers |
OpenStack Clustering (sahara) | Controllers |
OpenStack Shared File Systems (manila) | Controllers |
OpenStack Bare Metal (ironic) | Controllers |
MariaDB | Controllers |
Ceph monitor (mon) | Controllers |
Open vSwitch | Controllers, computes, Ceph |
Pacemaker and Galera for high availability services | Controllers |
OpenStack Compute (nova) | Computes |
KVM | Computes |
OpenStack Telemetry (ceilometer) agent | Computes, Ceph |
OpenStack Block Storage (cinder) volume | Ceph |
Table 3: Services running on different nodes in validation lab
Chapter 7. Performance Optimization
The fundamental business model for Communication Service Providers (CSPs) and Telcos is based on providing mission critical applications to a large pool of subscribers with the least amount of service disruption. Two key requirements for NFV are performance and high availability. The need for high performance for CSPs and Telcos stems from the fact that they need to support the highest number of subscribers using the lowest amount of resources so as to maximize profits. Without being able to achieve very high throughputs that are comparable to what is achievable with purpose-built hardware solutions, this whole business model breaks down. Detailed explanations of performance and optimization are covered under the “Performance and Optimization” section of the “Deploying mobile networks using network function virtualization” reference architecture document. This section specifically covers the following:
- Interface queue tuning
- Fast datapath using:
  - Single root input/output virtualization (SR-IOV)
  - Data plane development kit (DPDK) accelerated Open vSwitch (OVS-DPDK)
- NFV performance tuning:
  - CPU pinning
  - NUMA
  - Huge pages
The NFV product guide and the NFV configuration guide can be found on the Red Hat customer portal for Red Hat OpenStack Platform 10: http://access.redhat.com/documentation/en/red-hat-openstack-platform/10/
7.1. Fast datapath
For deploying mobile networking over a virtualized cloud, NEPs and Telcos currently use SR-IOV and are transitioning to OVS-DPDK.
7.1.1. SR-IOV
To date, SR-IOV is a great way to achieve high network throughput while sharing a physical NIC among multiple VMs. It is currently used in virtual mobile network deployments while vEPC vendors evaluate and test OVS-DPDK. This section covers the configuration of SR-IOV on Red Hat OpenStack Platform 10 using director.
With SR-IOV it is possible to use Distributed Virtual Routing (DVR), enabling placement of L3 routers directly on compute nodes. As a result, instance traffic is directed between the compute nodes (East-West) without first requiring routing through a network node (controller). In addition, the floating IP namespace is replicated between all participating compute nodes, meaning that instances with assigned floating IPs can send traffic externally (North-South) without routing through the network node. Instances without floating IP addresses still route SNAT traffic through the network node (https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/8/html/networking_guide/sec-dvr).
Figure 3 shows SR-IOV configured on NICs 5 & 6 of the compute nodes. Ports eth0 and eth1 on the VNFs 1 & 2 (VMs) use the virtual function (VF) drivers.

Figure 3: Fast datapath using SR-IOV
The set of logical networks used in OpenStack that was explained in the Networking chapter is referred to as “network isolation”. Network isolation in a Red Hat OpenStack Platform 10 installation is configured using template files. The specific template files relevant to networking are:
- network-environment.yaml
- compute.yaml
- controller.yaml
The following template files need to be created in a local directory under /home/stack. In the NFV validation lab, templates are stored in /home/stack/templates. A full copy of the working template files that were used to deploy the overcloud with SR-IOV can be found on GitHub; details of the GitHub repository are shared in the appendix section. This includes:
- network-environment.yaml
- first-boot.yaml
- post-install.yaml
- compute.yaml
- controller.yaml
- ceph-storage.yaml
- overcloud-deploy.sh
For the servers used in the validation lab, NIC 5 (ens6f0) and NIC 6 (ens6f1) are used for SR-IOV and are only relevant on compute nodes.
The template (YAML) files that need to be customized for configuring SR-IOV before deploying the overcloud are described below.
- The network-environment.yaml file should contain the following configuration to enable SR-IOV.
Add first-boot.yaml to set the kernel arguments. In the validation lab environment, first-boot.yaml is also configured to wipe the disks and start clean:
# first boot script for compute (set kernel args) and ceph nodes (wipe disks)
OS::TripleO::NodeUserData: /home/stack/templates/first-boot.yaml
Add the ComputeKernelArgs parameters to the default grub file. Add this to the parameter_defaults section:
parameter_defaults:
  # This section is where deployment-specific configuration is done
  …
  …
  # Compute node kernel args for SR-IOV
  ComputeKernelArgs: "iommu=pt intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12"
Note: You will need to add “hw:mem_page_size=1GB” to the flavor that will be used, along with adding default_hugepagesz=1GB to ComputeKernelArgs. This is how huge pages are enabled for achieving high performance.
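As a sketch of how the flavor property mentioned in the note can be applied (the flavor name m1.sriov_vnf is only an example and is not created elsewhere in this document):

# Require 1 GB huge pages for guests booted from this flavor
openstack flavor set --property hw:mem_page_size=1GB m1.sriov_vnf
openstack flavor show m1.sriov_vnf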
Enable the SR-IOV mechanism driver (sriovnicswitch):
NeutronMechanismDrivers: "openvswitch,sriovnicswitch"
Note: We need to enable openvswitch as it will be used for the tenant control and management plane traffic, while SR-IOV is used for the dataplane.
Configure the compute pci_passthrough_whitelist parameter, and set devname as the SR-IOV interface:
NovaPCIPassthrough:
  - devname: "ens6f0"
    physical_network: "phy-sriov1"
  - devname: "ens6f1"
    physical_network: "phy-sriov2"
List the available scheduler filters. Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.
NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter","nova.scheduler.filters.numa_topology_filter.NUMATopologyFilter","nova.scheduler.filters.aggregate_instance_extra_specs.AggregateInstanceExtraSpecsFilter"]
The Nova scheduler uses an array of filters to filter a node. These filters are applied in the order they are listed.
NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter','AggregateInstanceExtraSpecsFilter']
Set a list or range of physical CPU cores to reserve for virtual machine processes. Add a comma-separated list, or use “-” to specify a range of CPU cores:
NovaVcpuPinSet: ['16-23']
List the supported PCI vendor devices in the format VendorID:ProductID:
# Make sure this is for the VFs, not the PF
NeutronSupportedPCIVendorDevs: ['8086:1515']
Note: This information can be obtained from one of the compute nodes using the lspci command. We are only interested in the VFs.
root@overcloud-compute-0 ~]# lspci -nn | grep -i 'network\|ether' 01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01) 01:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01) 04:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01) 04:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01) 83:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01) 83:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01) 83:10.0 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:10.1 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:10.2 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:10.3 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:10.4 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:10.5 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:10.6 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:10.7 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:11.0 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01) 83:11.1 Ethernet controller [0200]: Intel Corporation X540 Ethernet Controller Virtual Function [8086:1515] (rev 01)
Note: In order to obtain the NeutronSupportedPCIVendorDevs value in Red Hat OpenStack Platform 10 using director, you would have to configure a node manually (outside director), grab this value, and add it back to network-environment.yaml before deploying. This configuration parameter is deprecated; later releases of OpenStack will not require it. More details are available at https://bugs.launchpad.net/neutron/+bug/1611302
Specify the physical network and SR-IOV interface in the format PHYSICAL_NETWORK:PHYSICAL_DEVICE. All physical networks listed in network_vlan_ranges on the server should have mappings to the appropriate interfaces on each agent.
# set them to in NovaPCIPassthrough up above
NeutronPhysicalDevMappings: "phy-sriov1:ens6f0,phy-sriov2:ens6f1"
Provide a list of Virtual Functions (VFs) to be reserved for each SR-IOV interface:
# Number of VFs that needs to be configured for a physical interface
NeutronSriovNumVFs: "ens6f0:5,ens6f1:5"
In the NFV validation lab, 5 VFs per interface was selected arbitrarily, since this setup was not scale tested.
Set the tunnel type for the tenant network (vxlan or gre). To disable the tunnel type parameter, set the value to "":
# The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
NeutronTunnelTypes: "vxlan"
Note: We add vxlan to the tunnel types because we want to use VXLAN for tenant networking instead of VLANs.
Specify the tenant network type for OpenStack Networking. The options available are vlan or vxlan. By default, the value is set to vxlan:
NeutronNetworkType: 'vxlan,vlan'
Here we choose both vxlan and vlan to enable both types. vxlan is what Neutron defaults to if you do not specify the network type.
Set the Open vSwitch logical to physical bridge mappings:
NeutronBridgeMappings: 'datacentre:br-ex,phy-sriov1:br-isolated1,phy-sriov2:br-isolated2'
Set the OpenStack networking ML2 and Open vSwitch VLAN mapping range:
NeutronNetworkVLANRanges: 'datacentre:4000:4070,phy-sriov1:4071:4093,phy-sriov2:4071:4093'
datacentre is the physical network used for the VLAN provider network, for example:
openstack network create --external --provider-network-type vlan --provider-physical-network datacentre
With SR-IOV, VMs passing traffic between each other will have to share a VLAN. Alternatively, it is possible to route the traffic between VLANs using a router.
The physnet (physical network) is the value that links the bridge from NeutronBridgeMappings with the VLAN ID range from NeutronNetworkVLANRanges, and is used later in order to create the network on the overcloud. In this case the physical networks are phy-sriov1 and phy-sriov2.
Set a list or range of physical CPU cores to be tuned: The given argument will be appended to the tuned (pronounced “tune-d”) cpu-partitioning profile.
HostCpusList: "'0,1,2,3,4,5,6,7,8,9,10,11,24,25,26,27,28,29,30,31,32,33,34,35'"
This reserves CPUs 0 through 11 and the corresponding hyperthread siblings 24 through 35 for the host to use. The remaining CPUs will be available for the VMs to use. Tuned is executed through the first-boot.yaml file.
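After deployment, the result can be spot-checked on a compute node. A small sketch, assuming the cpu-partitioning profile keeps its core list in /etc/tuned/cpu-partitioning-variables.conf (the default location for that profile):

tuned-adm active                                                 # should report the cpu-partitioning profile
grep isolated_cores /etc/tuned/cpu-partitioning-variables.conf   # cores appended from HostCpusList
cat /proc/cmdline                                                # confirm the huge page and IOMMU kernel args took effect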
Modify the compute.yaml file. Set the SR-IOV interfaces by adding the following:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            …
            …
            - type: interface
              name: ens6f0
              use_dhcp: false
              defroute: false
            - type: interface
              name: ens6f1
              use_dhcp: false
              defroute: false
At this point you are ready to run the overcloud-deploy.sh script which looks like this:
openstack overcloud deploy \
  --templates \
  -e /home/stack/templates/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/storage-environment.yaml \
  -e /home/stack/templates/ips-from-pool-all.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
  -e /home/stack/templates/neutron-sriov.yaml \
  -r /home/stack/templates/roles_data.yaml
Creating networks, subnets, and launching a VM using SR-IOV ports
After deploying Red Hat OpenStack Platform and validating that everything was deployed correctly, the networks must be created in order to use SR-IOV. The following commands show the creation of two networks, their subnets, and a port on each for SR-IOV:
neutron net-create --provider:network_type vlan --provider:physical_network phy-sriov1 --provider:segmentation_id 4071 sriov1
net_id=$(neutron net-list | grep sriov1 | awk '{print $2}')
neutron subnet-create $net_id --allocation_pool start=192.21.0.2,end=192.21.1.0 --name sriov1-sub 192.21.0.0/16
sub_id=$(neutron subnet-list | grep sriov1-sub | awk '{print $2}')
neutron port-create --name sriov1-port --fixed-ip subnet_id=$sub_id,ip_address=192.21.0.10 --vnic-type direct $net_id

neutron net-create --provider:network_type vlan --provider:physical_network phy-sriov2 --provider:segmentation_id 4072 sriov2
net_id=$(neutron net-list | grep sriov2 | awk '{print $2}')
neutron subnet-create $net_id --allocation_pool start=192.22.0.2,end=192.22.1.0 --name sriov2-sub 192.22.0.0/16
sub_id=$(neutron subnet-list | grep sriov2-sub | awk '{print $2}')
neutron port-create --name sriov2-port --fixed-ip subnet_id=$sub_id,ip_address=192.22.0.10 --vnic-type direct $net_id
Note: While using neutron port-create for SR-IOV ports to achieve SR-IOV VF (also known as “passthrough”), the vnic-type has to be “direct”, not “normal”.
Output of neutron net-list shows the SR-IOV networks created:
[stack@undercloud-nfv-vepc ~]$ neutron net-list +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ | 3e184953-6df2-45c0-a0d4-7ca837110740 | default | 4c3c2bc2-cf62-4519-87e9-d8b9a584a459 192.20.0.0/16 | | b918b2ce-ef92-444d-80a0-9bf1f93406f3 | sriov2 | cda8a86a-dcc5-4d6b-813e-f6a8a5e06c38 192.22.0.0/16 | | ec0bad75-3466-43fb-b240-2b7d53a1d2c7 | sriov1 | d05ffdcd-16c5-48bd-b1bd-eaec54c699fb 192.21.0.0/16 | | f76fcc81-0b57-4258-9e1a-04aa0c786428 | external | 4f539e6a-a0cd-4108-a43b-76d9cd970939 10.19.108.0/24 | +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
Ports created can be seen from the following output:
[stack@undercloud-nfv-vepc ~]$ neutron port-list | grep sriov | 0aa80c2f-f758-40e8-a533-aec7c34d2c5b | sriov-port3 | fa:16:3e:28:ec:b1 | {"subnet_id": "11d79cf8-aba3-42c3-902e-fb0c16908df4", "ip_address": "192.30.0.12"} | | 15bee61e-e892-4e78-a5b3-61b6dc42a5ec | sriov-port2 | fa:16:3e:89:a8:bd | {"subnet_id": "ae859232-4dbd-45a0-a128-022923ee6e12", "ip_address": "192.30.0.11"} | | 8d166ba1-cddf-4d6e-9775-04747942a0e7 | sriov-port4 | fa:16:3e:9b:48:17 | {"subnet_id": "ae859232-4dbd-45a0-a128-022923ee6e12", "ip_address": "192.30.0.13"} | | c52c292e-b86e-42e0-a5aa-1536197728b7 | sriov-port | fa:16:3e:fd:ac:20 | {"subnet_id": "11d79cf8-aba3-42c3-902e-fb0c16908df4", "ip_address": "192.30.0.10"} |
After validating network, subnet and port creation, VM instances can be launched to use the SR-IOV ports as follows:
nova boot --flavor large.1 --image 9ef2c0b5-51b1-410c-aebc-39d7e3bc8263 --nic port-id=c52c292e-b86e-42e0-a5aa-1536197728b7 --nic port-id=15bee61e-e892-4e78-a5b3-61b6dc42a5ec --nic net-id=3e184953-6df2-45c0-a0d4-7ca837110740 --security-groups all-access Test-sriov
Note: It is possible to set up a dedicated flavor that uses host aggregates to launch VNFs that use SR-IOV with the performance settings.
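A quick way to confirm that the SR-IOV ports were bound as VF passthrough devices and that the instance is up (a sketch using names created earlier in this section):

# binding:vnic_type should be "direct" for a VF passthrough port
neutron port-show sriov1-port | grep binding
# The instance should reach ACTIVE once the VFs are attached
nova show Test-sriov | grep status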
On the compute hosts the mapping of the VF to VLAN can be observed in the following output:
[root@overcloud-compute-0 ~]# ip l show ens6f0 6: ens6f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000 link/ether a0:36:9f:47:e4:70 brd ff:ff:ff:ff:ff:ff vf 0 MAC 9e:f1:2f:a1:8f:ac, spoof checking on, link-state auto, trust off vf 1 MAC 96:4d:7e:1f:a5:38, spoof checking on, link-state auto, trust off vf 2 MAC b6:1e:0f:00:8d:87, spoof checking on, link-state auto, trust off vf 3 MAC 2e:51:7a:71:9c:c9, spoof checking on, link-state auto, trust off vf 4 MAC fa:16:3e:28:ec:b1, vlan 4075, spoof checking on, link-state auto, trust off [root@overcloud-compute-0 ~]# ip l show ens6f1 7: ens6f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether a0:36:9f:47:e4:72 brd ff:ff:ff:ff:ff:ff vf 0 MAC a6:5e:b0:68:1b:f9, spoof checking on, link-state auto, trust off vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto, trust off vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto, trust off vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto, trust off vf 4 MAC fa:16:3e:9b:48:17, vlan 4075, spoof checking on, link-state auto, trust off
7.1.2. OVS-DPDK
Although SR-IOV provides very high performance (network I/O), the VNFs using SR-IOV need to have VF drivers that support the specific physical NIC (PNIC) deployed in the OpenStack cloud. Network Equipment Providers (NEPs) who provide VNFs are trying to move away from SR-IOV for this reason; however, they still require high network throughput. OVS-DPDK is an attractive alternative to SR-IOV moving forward. For OVS-DPDK, the cloud provider has to make sure they are using PNICs that have DPDK support, but the VNFs do not need any drivers that are aware of the PNICs being used. Figure 4 shows OVS-DPDK on compute nodes 1 and 2 with bonds on NIC5 and NIC6.

Figure 4: Fast Datapath using OVS-DPDK
OVS-DPDK version 2.6 has the following enhancements that are useful for mobile VNFs and NFV in general:
- MTU above 1500 support (jumbo frames)
- Multi-queue support
- Full NIC support
- Based on the DPDK Long Term Support (LTS) release (DPDK 16.11)
- Security groups support
- Live migration support
For these reasons, it was decided to use OVS-DPDK 2.6 for this setup. Please note that not all of the features above are supported by Red Hat OpenStack director yet.
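Once the overcloud is deployed, it can be verified on a compute node that the installed Open vSwitch is a DPDK-enabled 2.6 build and that DPDK was initialized. A minimal sketch:

ovs-vsctl get Open_vSwitch . ovs_version                 # expect a 2.6.x version
ovs-vsctl get Open_vSwitch . other_config:dpdk-init      # expect "true"
ovs-appctl dpif-netdev/pmd-stats-show | head             # PMD threads appear once DPDK ports exist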
The following template files need to be created in a local directory under /home/stack, in this case /home/stack/templates. A full copy of the working template files that were used to deploy the overcloud with OVS-DPDK can be found on GitHub; details are in the appendix section:
- network-environment.yaml
- first-boot.yaml
- post-install.yaml
- compute.yaml
- controller.yaml
- ceph-storage.yaml
- overcloud-deploy.sh
For the servers used in the validation lab, NIC 5 (ens6f0) and NIC 6 (ens6f1) are used for OVS-DPDK and are only relevant on the compute nodes.
The YAML files that need to be customized for configuring OVS-DPDK before deploying the overcloud are described below.
- Modify the network-environment.yaml file:
Add the first-boot.yaml to set the kernel parameters. Add the following line under resource_registry:
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/templates/first-boot.yaml
Add the ComputeKernelArgs parameters to the default grub file. Add this to the parameter_defaults section:
parameter_defaults:
  ComputeKernelArgs: "iommu=pt intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12"
Note: The resulting huge pages will be consumed by the virtual machines, and also by OVS-DPDK through the NeutronDpdkSocketMemory parameter, as shown in this procedure. The huge pages available for the virtual machines are the boot-time allocation minus NeutronDpdkSocketMemory. hw:mem_page_size=1GB needs to be added to the flavor you associate with the DPDK instance; not doing this will result in the instance not getting a DHCP allocation.
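The boot-time huge page allocation, and how much of it remains free, can be checked on a compute node, for example:

grep Huge /proc/meminfo
# Per-NUMA-node view of the 1 GB pages
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages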
Add post-install.yaml to perform customization such as setting the root password:
OS::TripleO::NodeExtraConfigPost: /home/stack/templates/post-install.yaml
Provide a list of cores that can be used as DPDK PMDs in the format - [allowed_pattern: "'[0-9,-]+'"]:
# List of cores to be used for DPDK Poll Mode Driver
NeutronDpdkCoreList: "'2,3,14,15'"
Provide the number of memory channels in the format - [allowed_pattern: "[0-9]+"]:
NeutronDpdkMemoryChannels: "4"
One way to determine how many memory channels are present is:
[root@overcloud-compute-0 ~]# dmidecode -t memory | grep -i channel Bank Locator: _Node0_Channel0_Dimm0 Bank Locator: _Node0_Channel0_Dimm1 Bank Locator: _Node0_Channel0_Dimm2 Bank Locator: _Node0_Channel1_Dimm0 Bank Locator: _Node0_Channel1_Dimm1 Bank Locator: _Node0_Channel1_Dimm2 Bank Locator: _Node0_Channel2_Dimm0 Bank Locator: _Node0_Channel2_Dimm1 Bank Locator: _Node0_Channel2_Dimm2 Bank Locator: _Node0_Channel3_Dimm0 Bank Locator: _Node0_Channel3_Dimm1 Bank Locator: _Node0_Channel3_Dimm2 Bank Locator: _Node1_Channel0_Dimm0 Bank Locator: _Node1_Channel0_Dimm1 Bank Locator: _Node1_Channel0_Dimm2 Bank Locator: _Node1_Channel1_Dimm0 Bank Locator: _Node1_Channel1_Dimm1 Bank Locator: _Node1_Channel1_Dimm2 Bank Locator: _Node1_Channel2_Dimm0 Bank Locator: _Node1_Channel2_Dimm1 Bank Locator: _Node1_Channel2_Dimm2 Bank Locator: _Node1_Channel3_Dimm0 Bank Locator: _Node1_Channel3_Dimm1 Bank Locator: _Node1_Channel3_Dimm2
Set the memory allocated for each socket:
NeutronDpdkSocketMemory: "'1024,1024'"
Note: In the NFV validation lab the servers have two CPU sockets; hence NeutronDpdkSocketMemory has to be specified for each socket. It is important not to have any spaces between the two comma-separated values, as that could result in an error deploying DPDK properly.
Set the DPDK driver type; the default is the vfio-pci module:
NeutronDpdkDriverType: "vfio-pci"
Reserve the RAM for the host processes:
NovaReservedHostMemory: "4096"
Add a list or range of physical CPU cores to be reserved for virtual machine processes:
NovaVcpuPinSet: ['16-23']
Note: For optimal performance, the CPU cores specified for NovaVcpuPinSet should be from the same NUMA node as NeutronDpdkCoreList (the PMD threads). Not doing so will result in CPU cores being allocated from socket 1 for the VNFs (VMs) to use, which in turn results in crossing the QPI bus and taking a performance penalty.
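The CPU-to-NUMA-node layout used to choose NovaVcpuPinSet and NeutronDpdkCoreList can be listed on the compute node, for example:

# Show which CPU IDs belong to each NUMA node so that both parameters
# can be taken from the same node as the DPDK NICs
lscpu | grep -i "numa node"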
List the most restrictive filters first to make the filtering process for the nodes more efficient.
Nova scheduler uses an array of filters to filter a node. These filters are applied in the order they are listed.
NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter"
Set the tunnel type for tenant network (for example, vxlan or gre). To disable the tunnel type parameter, set the value to "":
NeutronTunnelTypes: ""
Note: For the OVS-DPDK setup and testing, only VLANs were used; hence this parameter is set to “”.
Set the tenant network type for OpenStack Networking. The options available are vlan or vxlan:
NeutronNetworkType: 'vlan'
Set the Open vSwitch logical to physical bridge mappings:
NeutronBridgeMappings: 'datacentre:br-isolated, dpdk:br-link'
Set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range:
NeutronNetworkVLANRanges: 'datacentre:4000:4070, dpdk:4071:4071'
Here, the datacentre:4000:4070 and dpdk:4071:4071 values represent the VLAN ID ranges that Neutron will use to create tenant networks for each corresponding physical device. The physical devices are not directly listed; instead we use their logical labels defined in NeutronBridgeMappings (datacentre and dpdk), each associated with a given OVS bridge defined in compute.yaml.
Set a list or range of physical CPU cores for non-DPDK use.
The given argument will be appended to the tuned cpu-partitioning profile.
HostCpusList: "'0,1,2,3,4,5,6,7,8,9,10,11,24,25,26,27,28,29,30,31,32,33,34,35'"
This can be specified as a range using <start>-<end> syntax. In the above example, CPUs 0-11 and 24-35 will be set aside for the host to use for housekeeping tasks.
Set the datapath type for OVS bridges:
NeutronDatapathType: "netdev"
Set the vhost-user socket directory for OVS:
NeutronVhostuserSocketDir: "/var/run/openvswitch"
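After deployment, it can be confirmed on a compute node that the DPDK bridge uses the userspace datapath and that vhost-user sockets appear in the configured directory once instances boot. A small sketch using the bridge name defined later in compute.yaml (br-link):

ovs-vsctl get Bridge br-link datapath_type   # expect "netdev"
ls /var/run/openvswitch/ | grep vhu          # one vhu* socket per vhost-user port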
- Modify the compute.yaml file:
Add the following lines to the compute.yaml file to set up a bridge with DPDK ports on the desired interfaces.
- type: ovs_user_bridge
  name: br-link
  use_dhcp: false
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      ovs_options: {get_param: DpdkBondInterfaceOvsOptions}
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: interface
              name: ens6f0
        - type: ovs_dpdk_port
          name: dpdk1
          members:
            - type: interface
              name: ens6f1
Note: DPDK ports may be deployed in bonded mode as shown above or without bonds. Working sample files are provided in GitHub for both.
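When the bonded layout is used, the state of the DPDK bond and its member links can be checked on the compute node, for example:

ovs-appctl bond/show dpdkbond0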
For OpenStack network isolation traffic (InternalAPI, Storage, and Storage Management) in the NFV validation lab, Linux bonds are used on the compute nodes. On the controller nodes, ovs_bridge is used for network isolation traffic. It is not recommended to use ovs_bridge (type: system) and ovs_user_bridge (type: netdev) on the same node. Since the compute node needs ovs_user_bridge for the higher-performance OVS-DPDK, network isolation is done via a Linux bond as follows:
- type: linux_bond
  name: bond1
  bonding_options: {get_param: LinuxBondInterfaceOptions}
  members:
    - type: interface
      name: ens3f0
    - type: interface
      name: ens3f1
- Controller template files:
The controller.yaml definition for br-link looks like this:
- type: ovs_bridge
  name: br-link
  use_dhcp: false
  members:
    - type: linux_bond
      name: bond2
      bonding_options: {get_param: LinuxBondInterfaceOptions}
      members:
        - type: interface
          name: ens6f0
        - type: interface
          name: ens6f1
The controller.yaml definition for the network isolation bridge br-isolated is as follows:
- type: ovs_bridge
  name: br-isolated
  use_dhcp: false
  members:
    - type: linux_bond
      name: bond1
      bonding_options: {get_param: LinuxBondInterfaceOptions}
      members:
        - type: interface
          name: ens3f0
        - type: interface
          name: ens3f1
Run the overcloud-deploy.sh script to deploy your overcloud with OVS-DPDK:
openstack overcloud deploy \
  --templates \
  -e /home/stack/templates/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/storage-environment.yaml \
  -e /home/stack/templates/ips-from-pool-all.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml
Note: Some of the environment files included in the above deploy command need to be applied after others. It is important to maintain the order to avoid issues.
Creating networks, subnets, and launching a VM using DPDK ports
First, source the overcloud credentials and add security group rules allowing SSH and ICMP traffic:
source /home/stack/overcloudrc nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 (openstack) security group list +--------------------------------------+------------+------------------------+----------------------------------+ | ID | Name | Description | Project | +--------------------------------------+------------+------------------------+----------------------------------+ | 023f335b-cd84-452e-a268-68dc7b86662b | default | Default security group | | | 9d221043-9522-45c7-a24c-cb08ccef2338 | all-access | all-access | ff8dbcfd418943a3ba0b07aa46a44d43 | | a65150a4-6f6c-4081-bc52-a535222235b2 | default | Default security group | b295258c2ac44ec992a6c0095f7cb2f9 | | cdd33430-7a41-4f52-9ba0-23fe5eec3a20 | default | Default security group | ff8dbcfd418943a3ba0b07aa46a44d43 | +--------------------------------------+------------+------------------------+----------------------------------+ (openstack) security group show cdd33430-7a41-4f52-9ba0-23fe5eec3a20 +-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | created_at | 2017-04-11T17:57:45Z | | description | Default security group | | id | cdd33430-7a41-4f52-9ba0-23fe5eec3a20 | | name | default | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | revision_number | 3 | | rules | created_at='2017-04-11T17:57:46Z', direction='ingress', ethertype='IPv4', id='1ca35981-48a1-42ed-affa-76e659c7c04f', port_range_max='22', port_range_min='22', | | | project_id='ff8dbcfd418943a3ba0b07aa46a44d43', protocol='tcp', remote_ip_prefix='0.0.0.0/0', revision_number='1', updated_at='2017-04-11T17:57:46Z' | | | created_at='2017-04-11T17:57:53Z', direction='ingress', ethertype='IPv4', id='43c7460b-ed6c-48a0-9e54-5521e4832031', project_id='ff8dbcfd418943a3ba0b07aa46a44d43', | | | protocol='icmp', remote_ip_prefix='0.0.0.0/0', revision_number='1', updated_at='2017-04-11T17:57:53Z' | | | created_at='2017-04-11T17:57:45Z', direction='ingress', ethertype='IPv4', id='4bc7af90-b33b-45ef-8b59-85bb08ef6a81', project_id='ff8dbcfd418943a3ba0b07aa46a44d43', | | | remote_group_id='cdd33430-7a41-4f52-9ba0-23fe5eec3a20', revision_number='1', updated_at='2017-04-11T17:57:45Z' | | | created_at='2017-04-11T17:57:45Z', direction='ingress', ethertype='IPv6', id='82c0c79e-8bb1-438c-9343-8e8b40ec0b23', project_id='ff8dbcfd418943a3ba0b07aa46a44d43', | | | remote_group_id='cdd33430-7a41-4f52-9ba0-23fe5eec3a20', revision_number='1', updated_at='2017-04-11T17:57:45Z' | | | created_at='2017-04-11T17:57:45Z', direction='egress', ethertype='IPv4', id='b017697b-d23b-49ea-9091-3d1364655ce3', project_id='ff8dbcfd418943a3ba0b07aa46a44d43', | | | revision_number='1', updated_at='2017-04-11T17:57:45Z' | | | created_at='2017-04-11T17:57:45Z', direction='egress', ethertype='IPv6', id='c5a6febd-b5c9-48a9-b9bc-fcfa6d499a42', project_id='ff8dbcfd418943a3ba0b07aa46a44d43', | | | revision_number='1', updated_at='2017-04-11T17:57:45Z' | | updated_at | 2017-04-11T17:57:53Z | +-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create a DPDK flavor for the VNF to use:
openstack flavor create m1.dpdk_vnf --ram 4096 --disk 150 --vcpus 8
Here, m1.dpdk_vnf is the flavor name, 4096 is the memory size in MB, 150 is the disk size in GB (default 0G), and 8 is the number of vCPUs.
openstack flavor set --property hw:cpu_policy=dedicated --property hw:cpu_thread_policy=require --property hw:mem_page_size=large --property hw:numa_nodes=1 --property hw:numa_mempolicy=preferred --property hw:numa_cpus.1='0,1,2,3,4,5,6,7' --property hw:numa_mem.1=4096 m1.dpdk_vnf
Here, the extra specs (properties) for CPU pinning, huge pages, and NUMA placement are set on the m1.dpdk_vnf flavor created above.
Note: It is important to create the flavor and assign the correct attributes to it as shown above. The extra specs hw:cpu_policy=dedicated and hw:cpu_thread_policy=require are important. This is documented at https://specs.openstack.org/openstack/nova-specs/specs/juno/approved/virt-driver-cpu-pinning.html
(openstack) flavor create m1.dpdk_vnf --ram 4096 --disk 150 --vcpus 8 +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 150 | | id | f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa | | name | m1.dpdk_vnf | | os-flavor-access:is_public | True | | properties | | | ram | 4096 | | rxtx_factor | 1.0 | | swap | | | vcpus | 8 | +----------------------------+--------------------------------------+ (openstack) flavor set --property hw:cpu_policy=dedicated --property hw:mem_page_size=large --property hw:numa_nodes=1 --property hw:numa_mempolicy=preferred --property hw:numa_cpus.1=0,1,2,3,4,5,6,7 --property hw:numa_mem.1=4096 m1.dpdk_vnf (openstack) flavor show m1.dpdk_vnf +----------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +----------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | access_project_ids | None | | disk | 150 | | id | f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa | | name | m1.dpdk_vnf | | os-flavor-access:is_public | True | | properties | hw:cpu_policy='dedicated', hw:mem_page_size='large', hw:numa_cpus.1='0,1,2,3,4,5,6,7', hw:numa_mem.1='4096', hw:numa_mempolicy='preferred', hw:numa_nodes='1' | | ram | 4096 | | rxtx_factor | 1.0 | | swap | | | vcpus | 8 | +----------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
Note: hw:numa_cpus.1='0,1,2,3,4,5,6,7' in the above output corresponds to the guest's 8 vCPUs, which are pinned to the host cores reserved for VNFs (guest VMs) when NovaVcpuPinSet: ['16-23'] is set.
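For the CPU pinning and NUMA extra specs above to take effect, the Nova scheduler must have the NUMATopologyFilter enabled. The check below is a sketch only; it assumes scheduler_default_filters is set explicitly in nova.conf on the controller (if it is not set, the deployment defaults apply and the grep returns nothing):
# Confirm that NUMATopologyFilter is part of the scheduler filter list
grep ^scheduler_default_filters /etc/nova/nova.conf | tr ',' '\n' | grep NUMATopologyFilter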
Create an external provider network using physical network datacentre with segmentation ID 100:
neutron net-create --provider:network_type vlan --provider:segmentation_id 100 --provider:physical_network datacentre public --router:external (openstack) network list +--------------------------------------+----------------------------------------------------+--------------------------------------+ | ID | Name | Subnets | +--------------------------------------+----------------------------------------------------+--------------------------------------+ | 6451fda7-d444-4733-90ed-63587821a080 | HA network tenant ff8dbcfd418943a3ba0b07aa46a44d43 | 7a0b9b90-2ca1-446d-bc8b-143778dd48f8 | | 8062ef1e-a9ab-4b86-872d-cd13e835155e | public | fd6c8f66-e533-4286-b5ef-13075fede572 | | dd8e0237-4358-4fcb-9df4-773ec7b1b8be | dpdk0 | 2790eea7-ada8-4c4d-833a-0bf9b309ce3d | +--------------------------------------+----------------------------------------------------+--------------------------------------+ (openstack) network show public +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | nova | | created_at | 2017-04-11T17:58:01Z | | description | | | id | 8062ef1e-a9ab-4b86-872d-cd13e835155e | | ipv4_address_scope | None | | ipv6_address_scope | None | | is_default | False | | mtu | 1496 | | name | public | | port_security_enabled | True | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | provider:network_type | vlan | | provider:physical_network | datacentre | | provider:segmentation_id | 100 | | qos_policy_id | None | | revision_number | 6 | | router:external | External | | shared | False | | status | ACTIVE | | subnets | fd6c8f66-e533-4286-b5ef-13075fede572 | | tags | [] | | updated_at | 2017-04-11T17:58:03Z | +---------------------------+--------------------------------------+
Create an externally routable subnet with an allocation pool from 10.19.108.150 to 10.19.108.200:
neutron subnet-create --name public --allocation-pool start=10.19.108.150,end=10.19.108.200 --dns-nameserver 10.19.5.19 --gateway 10.19.108.254 --disable-dhcp public 10.19.108.0/24 (openstack) subnet list +--------------------------------------+---------------------------------------------------+--------------------------------------+------------------+ | ID | Name | Network | Subnet | +--------------------------------------+---------------------------------------------------+--------------------------------------+------------------+ | 2790eea7-ada8-4c4d-833a-0bf9b309ce3d | dpdk0 | dd8e0237-4358-4fcb-9df4-773ec7b1b8be | 20.35.185.48/28 | | 7a0b9b90-2ca1-446d-bc8b-143778dd48f8 | HA subnet tenant ff8dbcfd418943a3ba0b07aa46a44d43 | 6451fda7-d444-4733-90ed-63587821a080 | 169.254.192.0/18 | | fd6c8f66-e533-4286-b5ef-13075fede572 | public | 8062ef1e-a9ab-4b86-872d-cd13e835155e | 10.19.108.0/24 | +--------------------------------------+---------------------------------------------------+--------------------------------------+------------------+ (openstack) subnet show public +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 10.19.108.150-10.19.108.200 | | cidr | 10.19.108.0/24 | | created_at | 2017-04-11T17:58:03Z | | description | | | dns_nameservers | 10.19.5.19 | | enable_dhcp | False | | gateway_ip | 10.19.108.254 | | host_routes | | | id | fd6c8f66-e533-4286-b5ef-13075fede572 | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | public | | network_id | 8062ef1e-a9ab-4b86-872d-cd13e835155e | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | revision_number | 2 | | service_types | [] | | subnetpool_id | None | | updated_at | 2017-04-11T17:58:03Z | +-------------------+--------------------------------------+
Create a provider network on physical network dpdk (the name given in the Neutron bridge mappings) with segmentation ID 4071:
neutron net-create dpdk0 --provider:network_type vlan --provider:segmentation_id 4071 --provider:physical_network dpdk #(name given in Neutron bridgeMappings) (openstack) network list +--------------------------------------+----------------------------------------------------+--------------------------------------+ | ID | Name | Subnets | +--------------------------------------+----------------------------------------------------+--------------------------------------+ | 6451fda7-d444-4733-90ed-63587821a080 | HA network tenant ff8dbcfd418943a3ba0b07aa46a44d43 | 7a0b9b90-2ca1-446d-bc8b-143778dd48f8 | | 8062ef1e-a9ab-4b86-872d-cd13e835155e | public | fd6c8f66-e533-4286-b5ef-13075fede572 | | dd8e0237-4358-4fcb-9df4-773ec7b1b8be | dpdk0 | 2790eea7-ada8-4c4d-833a-0bf9b309ce3d | +--------------------------------------+----------------------------------------------------+--------------------------------------+ (openstack) network show dpdk0 +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | nova | | created_at | 2017-04-11T17:58:04Z | | description | | | id | dd8e0237-4358-4fcb-9df4-773ec7b1b8be | | ipv4_address_scope | None | | ipv6_address_scope | None | | mtu | 1496 | | name | dpdk0 | | port_security_enabled | True | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | provider:network_type | vlan | | provider:physical_network | dpdk | | provider:segmentation_id | 4071 | | qos_policy_id | None | | revision_number | 5 | | router:external | Internal | | shared | False | | status | ACTIVE | | subnets | 2790eea7-ada8-4c4d-833a-0bf9b309ce3d | | tags | [] | | updated_at | 2017-04-11T17:58:05Z | +---------------------------+--------------------------------------+
Create a private subnet with DHCP allocation pool starting from 20.35.185.49 and ending at 20.35.185.61:
neutron subnet-create --name dpdk0 --allocation-pool start=20.35.185.49,end=20.35.185.61 --dns-nameserver 8.8.8.8 --gateway 20.35.185.62 dpdk0 20.35.185.48/28 dpdk0_net=$(neutron net-list | awk ' /dpdk0/ {print $2;}') (openstack) subnet list +--------------------------------------+---------------------------------------------------+--------------------------------------+------------------+ | ID | Name | Network | Subnet | +--------------------------------------+---------------------------------------------------+--------------------------------------+------------------+ | 2790eea7-ada8-4c4d-833a-0bf9b309ce3d | dpdk0 | dd8e0237-4358-4fcb-9df4-773ec7b1b8be | 20.35.185.48/28 | | 7a0b9b90-2ca1-446d-bc8b-143778dd48f8 | HA subnet tenant ff8dbcfd418943a3ba0b07aa46a44d43 | 6451fda7-d444-4733-90ed-63587821a080 | 169.254.192.0/18 | | fd6c8f66-e533-4286-b5ef-13075fede572 | public | 8062ef1e-a9ab-4b86-872d-cd13e835155e | 10.19.108.0/24 | +--------------------------------------+---------------------------------------------------+--------------------------------------+------------------+ (openstack) subnet show dpdk0 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 20.35.185.49-20.35.185.61 | | cidr | 20.35.185.48/28 | | created_at | 2017-04-11T17:58:05Z | | description | | | dns_nameservers | 8.8.8.8 | | enable_dhcp | True | | gateway_ip | 20.35.185.62 | | host_routes | | | id | 2790eea7-ada8-4c4d-833a-0bf9b309ce3d | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | dpdk0 | | network_id | dd8e0237-4358-4fcb-9df4-773ec7b1b8be | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | revision_number | 2 | | service_types | [] | | subnetpool_id | None | | updated_at | 2017-04-11T17:58:05Z | +-------------------+--------------------------------------+
Create a router, set its gateway, and add an interface for the dpdk0 subnet:
openstack router create router0 neutron router-gateway-set router0 public neutron router-interface-add router0 dpdk0 (openstack) router list +--------------------------------------+---------+--------+-------+-------------+------+----------------------------------+ | ID | Name | Status | State | Distributed | HA | Project | +--------------------------------------+---------+--------+-------+-------------+------+----------------------------------+ | 1a39d943-661f-4991-a6e0-2c0607850f32 | router0 | ACTIVE | UP | False | True | ff8dbcfd418943a3ba0b07aa46a44d43 | +--------------------------------------+---------+--------+-------+-------------+------+----------------------------------+ (openstack) router show router0 +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | nova | | created_at | 2017-04-11T17:58:09Z | | description | | | distributed | False | | external_gateway_info | {"network_id": "8062ef1e-a9ab-4b86-872d-cd13e835155e", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "fd6c8f66-e533-4286-b5ef-13075fede572", "ip_address": | | | "10.19.108.152"}]} | | flavor_id | None | | ha | True | | id | 1a39d943-661f-4991-a6e0-2c0607850f32 | | name | router0 | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | project_id | ff8dbcfd418943a3ba0b07aa46a44d43 | | revision_number | 12 | | routes | | | status | ACTIVE | | updated_at | 2017-04-11T17:58:17Z | +-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ (openstack)
Create floating IPs for accessing the VNFs (VMs) from outside:
openstack floating ip create public (openstack) floating ip list +--------------------------------------+---------------------+------------------+--------------------------------------+ | ID | Floating IP Address | Fixed IP Address | Port | +--------------------------------------+---------------------+------------------+--------------------------------------+ | 3ddca3eb-04a0-4ebb-a3de-5320dea15465 | 10.19.108.151 | 20.35.185.52 | 4a0221f6-19b5-439b-b921-2b9f4fb3dd38 | | 4f1d313c-a36e-4a33-a83a-2ca28895bca7 | 10.19.108.166 | None | None | | 67530493-5dab-4491-b827-d931db3987ef | 10.19.108.161 | None | None | | 8fdbd654-7fb1-416c-be5a-6aa8bc0198a1 | 10.19.108.160 | None | None | | 9132bb60-0d0a-495a-8d9c-310ca7f8ac76 | 10.19.108.154 | None | None | | b45ff3e7-6045-472d-b45b-8f84b1e8bd8d | 10.19.108.150 | 20.35.185.57 | 086f620e-143d-4472-9b83-819e515b158c | | d968460c-595e-4d83-9963-4beda9bcd9f1 | 10.19.108.155 | None | None | +--------------------------------------+---------------------+------------------+--------------------------------------+
Create VMs vnf1 and vnf2:
(openstack) server create --flavor m1.dpdk_vnf --image centos-password --nic net-id=dpdk0 vnf1 +--------------------------------------+--------------------------------------------------------+ | Field | Value | +--------------------------------------+--------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | NyBhYZp3tdLT | | config_drive | | | created | 2017-05-26T20:21:02Z | | flavor | m1.dpdk_vnf (f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa) | | hostId | | | id | 2d4d7f88-9fac-48f0-86f8-744c80c8d61b | | image | centos-password (ab44f4f8-877e-4746-879c-771123325894) | | key_name | None | | name | vnf1 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | dc1ff67888f649ab94b370e7ff8a0a48 | | properties | | | security_groups | [{u'name': u'default'}] | | status | BUILD | | updated | 2017-05-26T20:21:02Z | | user_id | 283669bb1a4e4b44ae202921d2400728 | +--------------------------------------+--------------------------------------------------------+ (openstack) server show vnf1 +--------------------------------------+----------------------------------------------------------+ | Field | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | overcloud-compute-1.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-compute-1.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-00000009 | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2017-05-26T20:21:11.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | dpdk0=20.35.185.50 | | config_drive | | | created | 2017-05-26T20:21:02Z | | flavor | m1.dpdk_vnf (f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa) | | hostId | 6b2cc089e739555146c38be6d91c1c7dbd07de03d300ca3ef94c4ac3 | | id | 2d4d7f88-9fac-48f0-86f8-744c80c8d61b | | image | centos-password (ab44f4f8-877e-4746-879c-771123325894) | | key_name | None | | name | vnf1 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | dc1ff67888f649ab94b370e7ff8a0a48 | | properties | | | security_groups | [{u'name': u'default'}] | | status | ACTIVE | | updated | 2017-05-26T20:21:11Z | | user_id | 283669bb1a4e4b44ae202921d2400728 | +--------------------------------------+----------------------------------------------------------+ (openstack) server create --flavor m1.dpdk_vnf --image centos-password --nic net-id=dpdk0 vnf2 +--------------------------------------+--------------------------------------------------------+ | Field | Value | +--------------------------------------+--------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None 
| | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | hC5xAKYAgNBt | | config_drive | | | created | 2017-05-26T20:24:57Z | | flavor | m1.dpdk_vnf (f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa) | | hostId | | | id | e4e913d6-c331-4250-84a9-33b0e13ba732 | | image | centos-password (ab44f4f8-877e-4746-879c-771123325894) | | key_name | None | | name | vnf2 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | dc1ff67888f649ab94b370e7ff8a0a48 | | properties | | | security_groups | [{u'name': u'default'}] | | status | BUILD | | updated | 2017-05-26T20:24:57Z | | user_id | 283669bb1a4e4b44ae202921d2400728 | +--------------------------------------+--------------------------------------------------------+ (openstack) server show vnf2 +--------------------------------------+----------------------------------------------------------+ | Field | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | overcloud-compute-0.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-compute-0.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-0000000a | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2017-05-26T20:25:06.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | dpdk0=20.35.185.60 | | config_drive | | | created | 2017-05-26T20:24:57Z | | flavor | m1.dpdk_vnf (f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa) | | hostId | ce625bfad71affa307817aca90be7b45f01e0d6048de08eb2c1dfd45 | | id | e4e913d6-c331-4250-84a9-33b0e13ba732 | | image | centos-password (ab44f4f8-877e-4746-879c-771123325894) | | key_name | None | | name | vnf2 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | dc1ff67888f649ab94b370e7ff8a0a48 | | properties | | | security_groups | [{u'name': u'default'}] | | status | ACTIVE | | updated | 2017-05-26T20:25:06Z | | user_id | 283669bb1a4e4b44ae202921d2400728 | +--------------------------------------+----------------------------------------------------------+ (openstack) floating ip list +--------------------------------------+---------------------+------------------+------+ | ID | Floating IP Address | Fixed IP Address | Port | +--------------------------------------+---------------------+------------------+------+ | 25da467c-6962-43b1-9d66-0de24b8626f7 | 10.19.108.160 | None | None | | 2f7dd57d-8833-4553-b92d-41353c076974 | 10.19.108.159 | None | None | | 3e677d66-d491-4a19-86e9-11db14ab4c0a | 10.19.108.157 | None | None | | 55585620-8ea0-4f7d-8104-3d60e5bf27b1 | 10.19.108.163 | None | None | | 965eb4f6-312b-4a9f-a6ab-3eed8d3d55f0 | 10.19.108.153 | None | None | | c9ffe7e3-77b6-483b-9eb2-58071a445097 | 10.19.108.161 | None | None | | d359ed7b-b052-4af4-95e5-b36717498f80 | 10.19.108.151 | None | None | +--------------------------------------+---------------------+------------------+------+
Associate floating IPs with vnf1 and vnf2:
(openstack) server add floating ip vnf1 10.19.108.151 (openstack) server add floating ip vnf2 10.19.108.153 (openstack) server show vnf1 +--------------------------------------+----------------------------------------------------------+ | Field | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | overcloud-compute-1.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-compute-1.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-00000009 | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2017-05-26T20:21:11.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | dpdk0=20.35.185.50, 10.19.108.151 | | config_drive | | | created | 2017-05-26T20:21:02Z | | flavor | m1.dpdk_vnf (f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa) | | hostId | 6b2cc089e739555146c38be6d91c1c7dbd07de03d300ca3ef94c4ac3 | | id | 2d4d7f88-9fac-48f0-86f8-744c80c8d61b | | image | centos-password (ab44f4f8-877e-4746-879c-771123325894) | | key_name | None | | name | vnf1 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | dc1ff67888f649ab94b370e7ff8a0a48 | | properties | | | security_groups | [{u'name': u'default'}] | | status | ACTIVE | | updated | 2017-05-26T20:21:11Z | | user_id | 283669bb1a4e4b44ae202921d2400728 | +--------------------------------------+----------------------------------------------------------+ (openstack) server show vnf2 +--------------------------------------+----------------------------------------------------------+ | Field | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | overcloud-compute-0.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-compute-0.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-0000000a | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2017-05-26T20:25:06.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | dpdk0=20.35.185.60, 10.19.108.153 | | config_drive | | | created | 2017-05-26T20:24:57Z | | flavor | m1.dpdk_vnf (f9c1d040-b68e-4438-a8cf-9fe6f1d91dfa) | | hostId | ce625bfad71affa307817aca90be7b45f01e0d6048de08eb2c1dfd45 | | id | e4e913d6-c331-4250-84a9-33b0e13ba732 | | image | centos-password (ab44f4f8-877e-4746-879c-771123325894) | | key_name | None | | name | vnf2 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | dc1ff67888f649ab94b370e7ff8a0a48 | | properties | | | security_groups | [{u'name': u'default'}] | | status | ACTIVE | | updated | 2017-05-26T20:25:06Z | | user_id | 283669bb1a4e4b44ae202921d2400728 | +--------------------------------------+----------------------------------------------------------+
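Since SSH and ICMP were opened in the default security group earlier, basic reachability of the VNFs can be confirmed through the floating IPs assigned above. The login user is an assumption here; it depends on the guest image (centos is assumed for the centos-password image used in this lab):
# Reachability check via the floating IPs
ping -c 3 10.19.108.151
ping -c 3 10.19.108.153
ssh centos@10.19.108.151 hostname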
We can observe the flows on the compute node:
[root@overcloud-compute-0 ~]# ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0xa5e9458eecbe0b89, duration=13597.324s, table=0, n_packets=0, n_bytes=0, idle_age=45316, priority=10,icmp6,in_port=3,icmp_type=136 actions=resubmit(,24) cookie=0xa5e9458eecbe0b89, duration=13597.322s, table=0, n_packets=23427, n_bytes=983934, idle_age=243, priority=10,arp,in_port=3 actions=resubmit(,24) cookie=0xa5e9458eecbe0b89, duration=69288.422s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0xa5e9458eecbe0b89, duration=69287.525s, table=0, n_packets=28, n_bytes=2016, idle_age=326, hard_age=65534, priority=2,in_port=2 actions=drop cookie=0xa5e9458eecbe0b89, duration=13597.326s, table=0, n_packets=2948, n_bytes=1006128, idle_age=8, priority=9,in_port=3 actions=resubmit(,25) cookie=0xa5e9458eecbe0b89, duration=45316.880s, table=0, n_packets=27, n_bytes=2020, idle_age=326, priority=3,in_port=2,dl_vlan=4071 actions=mod_vlan_vid:1,NORMAL cookie=0xa5e9458eecbe0b89, duration=69289.117s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=NORMAL cookie=0xa5e9458eecbe0b89, duration=69289.118s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop cookie=0xa5e9458eecbe0b89, duration=13597.325s, table=24, n_packets=0, n_bytes=0, idle_age=13597, priority=2,icmp6,in_port=3,icmp_type=136,nd_target=fe80::f816:3eff:feaf:2a4 actions=NORMAL cookie=0xa5e9458eecbe0b89, duration=13597.323s, table=24, n_packets=0, n_bytes=0, idle_age=13597, priority=2,arp,in_port=3,arp_spa=20.35.185.56 actions=resubmit(,25) cookie=0xa5e9458eecbe0b89, duration=69289.116s, table=24, n_packets=23427, n_bytes=983934, idle_age=243, hard_age=65534, priority=0 actions=drop cookie=0xa5e9458eecbe0b89, duration=13597.328s, table=25, n_packets=2948, n_bytes=1006128, idle_age=8, priority=2,in_port=3,dl_src=fa:16:3e:af:02:a4 actions=NORMAL [root@overcloud-compute-0 ~]# ovs-ofctl show br-int OFPT_FEATURES_REPLY (xid=0x2): dpid:0000be3edae92f49 n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst 1(int-br-isolated): addr:2e:20:d9:1e:02:15 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max 2(int-br-link): addr:ae:84:2a:e7:32:91 ⇐ Input port 2 = int-br-link config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max 3(vhu8802f835-fd): addr:00:00:00:00:00:00 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max LOCAL(br-int): addr:be:3e:da:e9:2f:49 config: PORT_DOWN state: LINK_DOWN current: 10MB-FD COPPER speed: 10 Mbps now, 0 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
Now examine the flows in the other direction, on br-link:
[root@overcloud-compute-0 ~]# ovs-ofctl dump-flows br-link NXST_FLOW reply (xid=0x4): cookie=0x87e63a1d52d697f7, duration=45623.879s, table=0, n_packets=2967, n_bytes=1012626, idle_age=6, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:4071,NORMAL cookie=0x87e63a1d52d697f7, duration=69594.523s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=2 actions=drop cookie=0x87e63a1d52d697f7, duration=69594.755s, table=0, n_packets=55, n_bytes=4036, idle_age=633, hard_age=65534, priority=0 actions=NORMAL [root@overcloud-compute-0 ~]# ovs-ofctl show br-link OFPT_FEATURES_REPLY (xid=0x2): dpid:0000a0369f47e470 n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst 1(dpdk0): addr:a0:36:9f:47:e4:70 config: 0 state: 0 current: 10GB-FD speed: 10000 Mbps now, 0 Mbps max 2(phy-br-link): addr:82:29:b7:fb:82:de config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max LOCAL(br-link): addr:a0:36:9f:47:e4:70 config: 0 state: 0 current: 10MB-FD COPPER speed: 10 Mbps now, 0 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
Chapter 8. NFV performance tuning with OVS-DPDK
OVS-DPDK is based on polling the Rx queues with Poll Mode Drivers (PMDs), both in the OVS-DPDK vSwitch that resides in user space on the host and in the PMD used in the guest VM (VNF). Polling activity is mapped directly to CPU cores. If OVS-DPDK is used without any tuning, a random CPU (probably the first in the list, core 0) may get picked for all of the following activities:
- Normal non-DPDK related processes that require CPU
- PMD threads that need CPU for polling on host vSwitch (OVS-DPDK)
- VMs (VNFs) where the PMD runs as part of DPDK in the guest
To maximize throughput, enable tuning with Red Hat OpenStack director to reserve CPU cores for each OVS-DPDK activity.
8.1. Hardware requirements
Achieving zero packet loss also requires:
- The latest hardware generation
- Intel NICs, either the Niantic series (X520/X540) or the newer Fortville series (X710)
Additionally, the BIOS settings shown in Table 4 need to be applied.
Setting | Value |
---|---|
C3 Power State | Disabled |
C6 Power State | Disabled |
MLC Streamer | Enabled |
MLC Spatial Prefetcher | Enabled |
DCU Data Prefetcher | Enabled |
DCA | Enabled |
CPU Power and Performance | Performance |
Memory RAS and Performance Config → NUMA Optimized | Enabled |
Table 4: BIOS settings for OVS-DPDK
8.2. Setting OOO Heat templates based on your hardware
After configuring the BIOS settings described in the hardware section, the following parameters need to be set in the network-environment.yaml file. They are consumed by the TripleO (OOO) heat templates during overcloud deployment by Red Hat OpenStack director.
HostCpusList - List of all CPUs (cores) that are not going to be used for OVS-DPDK. Parameter set in network-environment.yaml file:
# Set a list or range of physical CPU cores that should be excluded from tuning: HostCpusList: "'0,1,2,3,4,5,6,7,8,9,10,11,24,25,26,27,28,29,30,31,32,33,34,35'"
It is good practice to also exclude the siblings of CPU cores 0 through 11 (in this case, 24 through 35) from being used for DPDK. The sibling mapping can be observed in the virsh capabilities output.
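The sibling relationship can also be read directly from sysfs on a compute node; for this lab the expectation is that cores 0-11 pair with 24-35:
# Show the hyper-thread sibling of each core excluded from DPDK use
for c in {0..11}; do echo -n "core $c siblings: "; cat /sys/devices/system/cpu/cpu$c/topology/thread_siblings_list; done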
HostIsolatedCoreList - List of CPUs used for tuning # HostIsolatedCoreList = NeutronDpdkCoreList(12-15) + NovaVcpuPinSet(16-23) HostIsolatedCoreList: "12-23"
HostIsolatedCoreList and HostCpusList are mutually exclusive. In the above example, CPUs 0-11 and 24-35 are reserved for non-DPDK functions.
NeutronDpdkCoreList - List of cpus to be given for pmd-cpu-mask # List of cores to be used for DPDK Poll Mode Driver NeutronDpdkCoreList: "'12,13,14,15'"
NovaVcpuPinSet - list of CPUs to be used for guest VMs (consumed by Nova):
NovaVcpuPinSet: ['16-23']
In this case, NeutronDpdkCoreList and NovaVcpuPinSet should be selected from NUMA node 1 because the DPDK NICs are mapped to NUMA node 1.
ComputeKernelArgs - arguments passed to the kernel:
ComputeKernelArgs: "iommu=pt intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12 isolcpus=12-23"
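After the overcloud is deployed, these parameters can be cross-checked on a compute node. This is a hedged sketch: the exact keys assume the OVS 2.6 and OSP 10 packaging used in this lab, and a pmd-cpu-mask value such as "f000" corresponds to cores 12-15.
# Kernel arguments from ComputeKernelArgs on the running kernel
grep -o 'isolcpus=[0-9,-]*' /proc/cmdline; grep -o 'hugepages=[0-9]*' /proc/cmdline
# NeutronDpdkCoreList is applied as the OVS PMD CPU mask
ovs-vsctl get Open_vSwitch . other_config:pmd-cpu-mask
# NovaVcpuPinSet ends up in nova.conf as vcpu_pin_set
grep ^vcpu_pin_set /etc/nova/nova.conf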
8.2.1. Examining the hardware for tuning
In the NFV validation lab, the last two NICs (NICs 5 & 6, or ens6f0 and ens6f1) are used for the dataplane. Start by determining which NUMA node(s) these NICs are mapped to.
8.2.1.1. Mapping DPDK NIC cards to NUMA nodes
To determine which NUMA node the NICs are mapped to, use the following command:
[root@se-nfv-srv12 ~]# cat /sys/class/net/ens6f0/device/numa_node 1 [root@se-nfv-srv12 ~]# cat /sys/class/net/ens6f1/device/numa_node 1
It can be seen from the above output that both ens6f0 and ens6f1 are mapped to NUMA node 1.
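The same check can be run in one pass for every NIC on the server, which is useful when deciding which ports to dedicate to DPDK:
# Print the NUMA node of every network device
for dev in /sys/class/net/*/device/numa_node; do echo "$dev: $(cat $dev)"; done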
The next step is to find the CPU cores mapped to socket 1 (NUMA node 1).
8.2.1.2. CPU (Processor) to Socket Mapping
The CPU-to-socket mapping can be obtained by running the commands shown below on one of the compute nodes. When selecting CPUs to assign to PMD threads, keep the sibling threads in mind: do not assign the cores of those sibling threads to NovaVcpuPinSet for use by VMs (VNFs). The mapping of CPU cores to sockets, along with sibling information, can be obtained using one of the following two methods:
- Method A: Using “virsh capabilities” command
- Method B: Using native linux commands
Typically, this information needs to be gathered on one of the compute nodes (or any server with hardware identical to the compute nodes) before the Red Hat OpenStack director based install. Both methods are covered in case virsh capabilities is not available at the time the information is gathered.
8.2.1.2.1. Method A (Using virsh capabilities):
Run the command virsh capabilities on a compute node or a host that has hardware and configuration that is identical to that of a compute node:
[root@overcloud-compute-0 ~]# virsh capabilities <capabilities> <host> <uuid>b1c41b1d-e33c-e511-a9fb-2c600ccd092c</uuid> <cpu> <arch>x86_64</arch> <model>Broadwell</model> <vendor>Intel</vendor> <topology sockets='1' cores='12' threads='2'/> <feature name='vme'/> < == SNIP === > <pages unit='KiB' size='4'/> <pages unit='KiB' size='1048576'/> </cpu> <topology> <cells num='2'> <cell id='0'> <cpus num='24'> <cpu id='0' socket_id='0' core_id='0' siblings='0,24'/> </distances> <cpus num='24'> <cpu id='0' socket_id='0' core_id='0' siblings='0,24'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,25'/> <cpu id='2' socket_id='0' core_id='2' siblings='2,26'/> <cpu id='3' socket_id='0' core_id='3' siblings='3,27'/> <cpu id='4' socket_id='0' core_id='4' siblings='4,28'/> <cpu id='5' socket_id='0' core_id='5' siblings='5,29'/> <cpu id='6' socket_id='0' core_id='8' siblings='6,30'/> <cpu id='7' socket_id='0' core_id='9' siblings='7,31'/> <cpu id='8' socket_id='0' core_id='10' siblings='8,32'/> <cpu id='9' socket_id='0' core_id='11' siblings='9,33'/> <cpu id='10' socket_id='0' core_id='12' siblings='10,34'/> <cpu id='11' socket_id='0' core_id='13' siblings='11,35'/> < ==== SNIP ==== > </cell> <cpus num='24'> <cpu id='12' socket_id='1' core_id='0' siblings='12,36'/> <cpu id='13' socket_id='1' core_id='1' siblings='13,37'/> <cpu id='14' socket_id='1' core_id='2' siblings='14,38'/> <cpu id='15' socket_id='1' core_id='3' siblings='15,39'/> <cpu id='16' socket_id='1' core_id='4' siblings='16,40'/> <cpu id='17' socket_id='1' core_id='5' siblings='17,41'/> <cpu id='18' socket_id='1' core_id='8' siblings='18,42'/> <cpu id='19' socket_id='1' core_id='9' siblings='19,43'/> <cpu id='20' socket_id='1' core_id='10' siblings='20,44'/> <cpu id='21' socket_id='1' core_id='11' siblings='21,45'/> <cpu id='22' socket_id='1' core_id='12' siblings='22,46'/> <cpu id='23' socket_id='1' core_id='13' siblings='23,47'/> <cpu id='36' socket_id='1' core_id='0' siblings='12,36'/> <cpu id='37' socket_id='1' core_id='1' siblings='13,37'/> <cpu id='38' socket_id='1' core_id='2' siblings='14,38'/> <cpu id='39' socket_id='1' core_id='3' siblings='15,39'/> <cpu id='40' socket_id='1' core_id='4' siblings='16,40'/> <cpu id='41' socket_id='1' core_id='5' siblings='17,41'/> <cpu id='42' socket_id='1' core_id='8' siblings='18,42'/> <cpu id='43' socket_id='1' core_id='9' siblings='19,43'/> <cpu id='44' socket_id='1' core_id='10' siblings='20,44'/> <cpu id='45' socket_id='1' core_id='11' siblings='21,45'/> <cpu id='46' socket_id='1' core_id='12' siblings='22,46'/> <cpu id='47' socket_id='1' core_id='13' siblings='23,47'/> </cpus> </cell> </cells> </topology>
The CPU section of the above output shows <topology sockets='1' cores='12' threads='2'/>. The threads='2' value indicates that hyper-threading is enabled.
From the above output, cores 0-11 and their siblings 24-35 are used for HostCpusList, cores 12-15 are used for NeutronDpdkCoreList (PMD threads), and cores 16-23 are assigned to NovaVcpuPinSet.
8.2.1.2.2. Method B (Using native Linux commands)
Checking if hyper-threading has been enabled:
[root@overcloud-compute-0 ~]# dmidecode -t processor | grep HTT HTT (Multi-threading) HTT (Multi-threading)
Even though the hosts (compute nodes) used in the NFV validation lab have hyper-threading enabled, the setup and tests shown here use a single queue. In this setup, one CPU core is required to poll each physical NIC (PNIC); since there are two DPDK ports, two CPU cores are consumed polling PNICs. Additionally, one CPU core is required to service each vhost-user port (which connects the VM to the PNIC); with two VMs of one port each, that accounts for two more CPU cores. With the four PMD cores configured above, two DPDK PNICs and two VMs with one port each can therefore be accommodated.
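Once the overcloud is deployed, the assignment of physical and vhost-user Rx queues to PMD cores can be confirmed with ovs-appctl (output varies with the deployment; the command assumes the OVS 2.6 userspace datapath used in this lab):
# Show which PMD core polls which Rx queue
ovs-appctl dpif-netdev/pmd-rxq-show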
Obtain the physical socket (NUMA node) to CPU core mapping:
[root@se-nfv-srv12 ~]# egrep -e "processor.*:" -e ^physical /proc/cpuinfo|xargs -l2 echo | awk '{print "socket " $7 "\t" "core " $3}' | sort -n -k2 -k4 socket 0 core 0 socket 0 core 1 socket 0 core 2 socket 0 core 3 socket 0 core 4 socket 0 core 5 socket 0 core 6 socket 0 core 7 socket 0 core 8 socket 0 core 9 socket 0 core 10 socket 0 core 11 socket 0 core 24 socket 0 core 25 socket 0 core 26 socket 0 core 27 socket 0 core 28 socket 0 core 29 socket 0 core 30 socket 0 core 31 socket 0 core 32 socket 0 core 33 socket 0 core 34 socket 0 core 35 socket 1 core 12 socket 1 core 13 socket 1 core 14 socket 1 core 15 socket 1 core 16 socket 1 core 17 socket 1 core 18 socket 1 core 19 socket 1 core 20 socket 1 core 21 socket 1 core 22 socket 1 core 23 socket 1 core 36 socket 1 core 37 socket 1 core 38 socket 1 core 39 socket 1 core 40 socket 1 core 41 socket 1 core 42 socket 1 core 43 socket 1 core 44 socket 1 core 45 socket 1 core 46 socket 1 core 47
8.2.2. Picking CPU cores on system enabled for hyperthreading
On nodes where hyper-threading is enabled, it is important not to map any other tasks to the sibling threads of the cores chosen for PMD work.
[root@overcloud-compute-0 ~]# for cpu in {12..15}; do cat /sys/devices/system/cpu/"cpu"$cpu/topology/thread_siblings_list; done 12,36 13,37 14,38 15,39
From the above output, if cores 12-15 are assigned to PMD threads (NeutronDpdkCoreList), their siblings 36-39 must not be assigned elsewhere, for example to NovaVcpuPinSet.
8.3. Manual tuning for compute nodes
- Install tuned-2.7.1-3.el7.noarch or a later version
- Edit /etc/tuned/cpu-partitioning-variables.conf to include CPUs to be used for VM vcpus. Do not include CPUs from HostCpusList.
- Activate with "tuned-adm profile cpu-partitioning"
- Run "grub2-mkconfig -o /boot/grub2/grub.cfg" to make sure grub is updated
- Reboot the server (a consolidated sketch of these steps is shown below)
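Taken together, the manual steps above look roughly like the following on a compute node. This is a sketch only; package availability depends on the OSP 10 release (see the note on z2/z3 in section 8.5.1), and the isolated cores (12-23) are those used in this lab:
# Consolidated manual tuning sketch
yum install -y tuned tuned-profiles-cpu-partitioning
echo "isolated_cores=12-23" >> /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot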
8.3.1. Software and packages required for tuning
A minimum kernel version and specific tuned packages are required for tuning to work properly.
The kernel version used in the NFV validation lab was:
[root@overcloud-compute-0 ~]# uname -a Linux overcloud-compute-0.localdomain 3.10.0-514.10.2.el7.x86_64 #1 SMP Mon Feb 20 02:37:52 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux version 7.3 was used:
[root@overcloud-compute-0 ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.3 (Maipo)
Version 2.7.1 of the tuned packages was used:
[root@overcloud-compute-0 ~]# rpm -qa | grep tuned tuned-profiles-cpu-partitioning-2.7.1-5.el7.noarch tuned-2.7.1-3.el7_3.1.noarch
OVS-DPDK package version 2.6.1 was used:
[root@overcloud-compute-0 instance]# rpm -qa | grep openvs python-openvswitch-2.5.0-14.git20160727.el7fdp.noarch openvswitch-2.6.1-10.git20161206.el7fdb.x86_64 openstack-neutron-openvswitch-9.2.0-8.el7ost.noarch
8.3.2. Creating cpu-partitioning-variables.conf file and running tuned-adm
After ensuring the correct versions of software and packages are present, the next task is to create the cpu-partitioning-variables.conf file in /etc/tuned directory of the compute nodes:
[root@overcloud-compute-0 ~]# ls /etc/tuned/ active_profile bootcmdline cpu-partitioning-variables.conf tuned-main.conf [root@overcloud-compute-0 ~]# cat /etc/tuned/cpu-partitioning-variables.conf # Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 isolated_cores=12-23
Make the cpu-partitioning profile active:
root@overcloud-compute-0 ~]# tuned-adm profile cpu-partitioning [root@overcloud-compute-0 ~]# tuned-adm active Current active profile: cpu-partitioning
Reboot the compute node for the configuration to take effect.
8.3.3. Relocating the emulator threads on compute nodes
The compute node (hypervisor) uses CPU cores to run the emulator threads for each virtual machine (VNF), and it is possible that CPUs reserved for PMD threads get picked for this purpose. It is important to relocate the emulator threads to CPU cores that were not set aside for PMD threads or for NovaVcpuPinSet. The following commands show how this can be done.
First get the instance ID:
[root@overcloud-compute-0 ~]# virsh list Id Name State ---------------------------------------------------- 1 instance-00000002 running
Next check which CPUs are currently mapped for this purpose:
[root@overcloud-compute-0 ~]# virsh emulatorpin 1 emulator: CPU Affinity ---------------------------------- *: 16-23
Relocate the emulator threads to CPU cores 0-5, which were set aside for housekeeping (non-DPDK) tasks:
[root@overcloud-compute-0 ~]# virsh emulatorpin 1 0-5 [root@overcloud-compute-0 ~]# virsh emulatorpin 1 emulator: CPU Affinity ---------------------------------- *: 0-5
This relocation of emulator pins has to be repeated every time a guest VM (VNF) is restarted or a new one is spawned.
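To avoid missing an instance, the relocation can be applied to every running domain in one pass (a sketch; cores 0-5 are the housekeeping cores chosen above):
# Re-pin emulator threads of all running instances to the housekeeping cores
for dom in $(virsh list --name); do virsh emulatorpin "$dom" 0-5 --live; done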
8.4. Ensuring compute nodes are properly tuned
The next step is to verify that the manual tuning performed on the compute nodes has taken effect, using the checks below.
8.4.1. Visually inspecting files on the compute nodes
cat /etc/tuned/bootcmdline should have the following options:
TUNED_BOOT_CMDLINE="nohz=on nohz_full=12-23 rcu_nocbs=12-23 intel_pstate=disable nosoftlockup"
cat /proc/cmdline:
[root@overcloud-compute-0 ~]# cat /proc/cmdline BOOT_IMAGE=/boot/vmlinuz-3.10.0-514.10.2.el7.x86_64 root=UUID=8b30335f-2e9c-4453-8de6-f810f71dfc5e ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet iommu=pt intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12 isolcpus=12-23 nohz=on nohz_full=12-23 rcu_nocbs=12-23 intel_pstate=disable nosoftlockup
Check for CPUAffinity:
cat /etc/systemd/system.conf shows the following CPUAffinity setting:
CPUAffinity=0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
The above list is as expected: it omits CPU cores 12-23 (HostIsolatedCoreList). Next, look at the irqbalance configuration.
cat /etc/sysconfig/irqbalance:
IRQBALANCE_BANNED_CPUS=00fff000
This mask corresponds to CPUs 12-23, which are banned from being used by irqbalance for interrupt handling.
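The banned-CPU value is simply a hexadecimal bitmap of the isolated cores. For cores 12-23 it can be derived as follows (illustrative arithmetic only):
# bits 12 through 23 set -> 00fff000
printf 'IRQBALANCE_BANNED_CPUS=%08x\n' $(( ((1<<24)-1) & ~((1<<12)-1) ))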
8.4.2. Checking for local interrupts (LOC)
On the compute nodes, the CPU cores mapped to PMD threads should observe no more than one local interrupt (LOC) per second. The following command samples the local interrupt count for a core twice, 10 seconds apart; it is shown here for CPU cores 12-15.
[root@overcloud-compute-0 ~]# grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 12"; sleep 10; grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 12" core 12 250126 core 12 250136; [root@overcloud-compute-0 ~]# grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 13"; sleep 10; grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 13" core 13 248473 core 13 248483 [root@overcloud-compute-0 ~]# grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 14"; sleep 10; grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 14" core 14 248396 core 14 248406 [root@overcloud-compute-0 ~]# grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 15"; sleep 10; grep LOC /proc/interrupts | xargs -L1 | cut -d ':' -f 2 | cut -d 'L' -f 1 | sed s/\\s// | sed s/\\s$// | sed s/\\s/\\n/g | awk '{print "core " NR-1, $0}' | grep "core 15" core 15 248407 core 15 248417
As can be seen from the above output, the cores tuned to service PMD threads receive only 10 local interrupts in 10 seconds, or 1 interrupt per second.
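An equivalent but more readable way to sample the counters is to loop over the four PMD cores; each core's count is the corresponding per-CPU column of the LOC line in /proc/interrupts:
# Count local timer interrupts per PMD core over a 10 second window
for core in 12 13 14 15; do
  before=$(awk -v f=$((core+2)) '$1=="LOC:" {print $f}' /proc/interrupts)
  sleep 10
  after=$(awk -v f=$((core+2)) '$1=="LOC:" {print $f}' /proc/interrupts)
  echo "core $core: $((after - before)) local interrupts in 10s"
done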
8.5. Tuning on the VNF (guest VM)
8.5.1. CPU partitioning
Since DPDK runs on the guest VM (VNF), we need to ensure that we set aside CPUs for PMD threads for optimal performance.
Firstly, make sure you have the right version of kernel and tuned packages:
[root@vnf1 ~]# uname -a Linux vnf1.novalocal 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux root@vnf1 ~]# rpm -qa | grep tuned tuned-2.7.1-3.el7_3.2.noarch tuned-profiles-cpu-partitioning-2.7.1-5.el7.noarch
Note that the tuned cpu-partitioning profile is not included in the Red Hat OpenStack Platform 10 z2 release; it must be consumed through the FDBeta channel or installed manually on the overcloud images. As of the z3 release, tuned and the cpu-partitioning profile are included.
[root@vnf1 ~]# tuned-adm active Current active profile: virtual-guest [root@vnf1 ~]# cd /etc/tuned [root@vnf1 tuned]# [root@vnf1 tuned]# ls active_profile bootcmdline cpu-partitioning-variables.conf tuned-main.conf
Add the cpu-partitioning-variables.conf file with the cores to isolate inside the guest:
[root@vnf1 tuned]# cat /etc/tuned/cpu-partitioning-variables.conf # Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 isolated_cores=1-7
Activate the profile (tuned-adm profile cpu-partitioning, as on the compute nodes) and reboot the VM for this to take effect.
After reboot, make sure /proc/cmdline has the right parameters for hugepages and isolcpus:
[root@vnf1 ~]# cat /proc/cmdline BOOT_IMAGE=/boot/vmlinuz-3.10.0-327.22.2.el7.x86_64 root=UUID=2dfbb5e5-1e8c-4d69-bae2-3199b999d800 ro console=tty0 console=ttyS0,115200n8 crashkernel=auto console=ttyS0,115200 LANG=en_US.UTF-8 iommu=pt intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12 nohz=on nohz_full= rcu_nocbs= intel_pstate=disable nosoftlockup isolcpus=1-7
If it does not have the required parameters, use the grubby command to add them to the default kernel entry: grubby --update-kernel=$(grubby --default-kernel) --args="isolcpus=1-7 default_hugepagesz=1GB hugepagesz=1G hugepages=12"
After running grubby, reboot is required.
8.6. Checking CPU utilization
PMD threads poll continuously, so the CPU cores allocated to them run at 100% utilization. This is true both on the compute nodes and in the VNFs (VMs), and it can be used as an indicator that tuning was performed correctly: the CPU cores allocated to PMD threads should show 100% utilization (or close to it) constantly.
8.6.1. CPU utilization on the VNF
On the VNF (VM), run the top command and press "1" to show per-CPU/core usage. It can be observed that vCPU1 is used for PMD polling and runs at 100% as expected. This is shown in Figure 5.

Figure 5: Top Output on the VNF(VM)
8.6.2. CPU utilization on the compute node
Now run the top command on the compute node and press "1" to show per-CPU/core usage. Cores 12, 13, 14 and 15, which are tuned to run PMD threads, show 100% user (not system) utilization as expected, as shown in Figure 6. Note that CPU 17 also shows 100%; this is because CPU 17 hosts vCPU1 of the guest VM.
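Beyond top, per-PMD usage can be cross-checked with ovs-appctl; clearing the statistics first gives a clean sample (assumes the OVS 2.6 userspace datapath):
# Sample PMD processing statistics over 10 seconds
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 10
ovs-appctl dpif-netdev/pmd-stats-show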

Figure 6: Top output on compute node.
8.7. Troubleshooting and useful commands
8.7.1. OVS-User-Bridge
Use the ovs-vsctl show command to display details of the bridges and ports:
root@overcloud-compute-0 ~]# ovs-vsctl show 9ef8aab3-0afa-4fca-9a49-7ca97d3e2ffa Manager "ptcp:6640:127.0.0.1" is_connected: true Bridge br-link Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port br-link Interface br-link type: internal Port phy-br-link Interface phy-br-link type: patch options: {peer=int-br-link} Port "dpdkbond0" Interface "dpdk1" type: dpdk Interface "dpdk0" type: dpdk
8.7.2. Checking if physical NICs are bound to DPDK
By default, the ports are bound to the ixgbe driver. During installation, Red Hat OpenStack director binds the DPDK ports to the vfio-pci driver. Use the dpdk-devbind -s command to check the status of the DPDK binding:
[root@overcloud-compute-0 ~]# dpdk-devbind -s Network devices using DPDK-compatible driver ============================================ 0000:83:00.0 'Ethernet Controller 10-Gigabit X540-AT2' drv=vfio-pci unused=ixgbe 0000:83:00.1 'Ethernet Controller 10-Gigabit X540-AT2' drv=vfio-pci unused=ixgbe Network devices using kernel driver =================================== 0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f0 drv=ixgbe unused=vfio-pci 0000:01:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f1 drv=ixgbe unused=vfio-pci 0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f0 drv=ixgbe unused=vfio-pci 0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f1 drv=ixgbe unused=vfio-pci Other network devices ===================== <none> Crypto devices using DPDK-compatible driver =========================================== <none> Crypto devices using kernel driver ================================== <none> Other crypto devices ==================== <none>
From the above output it can be seen that the NICs ens6f0 (0000:83:00.0) and ens6f1 (0000:83:00.1) are bound to the vfio-pci driver (DPDK). If they were still bound to ixgbe, it would indicate the NICs are not bound to DPDK.
8.7.3. Verifying DPDK bond
Since OVS-DPDK bonds are being used, use ovs-appctl bond/show <bond-name> to check the status of the bond:
[root@overcloud-compute-0 ~]# ovs-appctl bond/show dpdkbond0 ---- dpdkbond0 ---- bond_mode: balance-tcp bond may use recirculation: yes, Recirc-ID : 1 bond-hash-basis: 0 updelay: 0 ms downdelay: 0 ms next rebalance: 94 ms lacp_status: negotiated active slave mac: a0:36:9f:47:e0:62(dpdk1) slave dpdk0: enabled may_enable: true slave dpdk1: enabled active slave may_enable: true
It can be seen from the above output that dpdk0 and dpdk1 slaves are enabled on the bond dpdkbond0.
8.7.4. Checking OVS-DPDK counters
Periodically check the counters for receive (rx), transmit (tx), drops and errors (errs) using the ovs-ofctl dump-ports <bridge-name> command:
[root@overcloud-compute-0 ~]# ovs-ofctl dump-ports br-link OFPST_PORT reply (xid=0x2): 4 ports port LOCAL: rx pkts=542285, bytes=67242760, drop=0, errs=0, frame=0, over=0, crc=0 tx pkts=1, bytes=70, drop=0, errs=0, coll=0 port 1: rx pkts=153700, bytes=11011978, drop=0, errs=0, frame=?, over=?, crc=? tx pkts=271137, bytes=33620934, drop=0, errs=0, coll=? port 2: rx pkts=18084, bytes=2875028, drop=0, errs=0, frame=?, over=?, crc=? tx pkts=271135, bytes=33620740, drop=0, errs=0, coll=? port 3: rx pkts=0, bytes=0, drop=?, errs=?, frame=?, over=?, crc=? tx pkts=0, bytes=0, drop=?, errs=?, coll=?
For each port it is possible to examine transmitted and received packets as well as drops. For more detail, including a breakdown per packet size, the following command may be used:
[root@overcloud-compute-0 ~]# ovs-vsctl list Interface|grep -E "^(statistics|name)" | grep -a2 dpdk name : "dpdk0" statistics : {"rx_128_to_255_packets"=18136, "rx_1_to_64_packets"=1, "rx_256_to_511_packets"=0, "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=1, rx_broadcast_packets=0, rx_bytes=2883614, rx_dropped=0, rx_errors=0, rx_jabber_errors=0, rx_packets=18138, "tx_128_to_255_packets"=271940, "tx_1_to_64_packets"=0, "tx_256_to_511_packets"=0, "tx_512_to_1023_packets"=0, "tx_65_to_127_packets"=0, tx_broadcast_packets=0, tx_bytes=33720560, tx_dropped=0, tx_errors=0, tx_multicast_packets=271940, tx_packets=271940} name : "dpdk1" < ======= SNIP ============ >
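For a quick health check it is often enough to read just the drop and error counters of the DPDK ports:
# Spot-check drop and error counters on the DPDK interfaces
for p in dpdk0 dpdk1; do echo "$p:"; ovs-vsctl get Interface "$p" statistics:rx_dropped statistics:tx_dropped statistics:rx_errors statistics:tx_errors; done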
Chapter 9. High Availability
9.1. Internal Traffic
NICs 3 & 4 are dedicated on each host to carry all internal traffic. This includes:
- Internal API traffic (VLAN 1020)
- Storage (VLAN 1030)
- Storage management (VLAN 1040)
To achieve high availability, NICs 3 & 4 are bonded. This is achieved via the following entries in the network-environment.yaml file:
network-environment.yaml file # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 miimon=100" BondInterfaceOvsOptions: "mode=802.3ad"
After OpenStack is deployed, bonds are created on all the nodes. Here is an example of what it looks like on overcloud-compute-0:
[root@overcloud-compute-0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens3f0 # This file is autogenerated by os-net-config DEVICE=ens3f0 ONBOOT=yes HOTPLUG=no NM_CONTROLLED=no PEERDNS=no MASTER=bond1 SLAVE=yes BOOTPROTO=none [root@overcloud-compute-0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens3f1 # This file is autogenerated by os-net-config DEVICE=ens3f1 ONBOOT=yes HOTPLUG=no NM_CONTROLLED=no PEERDNS=no MASTER=bond1 SLAVE=yes BOOTPROTO=none [root@overcloud-compute-0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond1 # This file is autogenerated by os-net-config DEVICE=bond1 ONBOOT=yes HOTPLUG=no NM_CONTROLLED=no PEERDNS=no DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-ex BONDING_OPTS="mode=802.3ad" [root@overcloud-compute-0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-vlan1020 # This file is autogenerated by os-net-config DEVICE=vlan1020 ONBOOT=yes HOTPLUG=no NM_CONTROLLED=no PEERDNS=no DEVICETYPE=ovs TYPE=OVSIntPort OVS_BRIDGE=br-ex OVS_OPTIONS="tag=1020" BOOTPROTO=static IPADDR=172.17.0.20 NETMASK=255.255.255.0 [root@overcloud-compute-0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-vlan1030 # This file is autogenerated by os-net-config DEVICE=vlan1030 ONBOOT=yes HOTPLUG=no NM_CONTROLLED=no PEERDNS=no DEVICETYPE=ovs TYPE=OVSIntPort OVS_BRIDGE=br-ex OVS_OPTIONS="tag=1030" BOOTPROTO=static IPADDR=172.18.0.20 NETMASK=255.255.255.0 [root@overcloud-compute-0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-vlan1040 # This file is autogenerated by os-net-config DEVICE=vlan1040 ONBOOT=yes HOTPLUG=no NM_CONTROLLED=no PEERDNS=no DEVICETYPE=ovs TYPE=OVSIntPort OVS_BRIDGE=br-ex OVS_OPTIONS="tag=1040" BOOTPROTO=static IPADDR=192.0.2.26 NETMASK=255.255.255.0 [root@overcloud-compute-0 ~]# cat /proc/net/bonding/bond1 Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: IEEE 802.3ad Dynamic link aggregation Transmit Hash Policy: layer2 (0) MII Status: up MII Polling Interval (ms): 0 Up Delay (ms): 0 Down Delay (ms): 0 802.3ad info LACP rate: slow Min links: 0 Aggregator selection policy (ad_select): stable System priority: 65535 System MAC address: 2c:60:0c:84:32:79 Active Aggregator Info: Aggregator ID: 1 Number of ports: 2 Actor Key: 13 Partner Key: 13 Partner Mac Address: 44:38:39:ff:01:02 Slave Interface: ens3f0 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 2c:60:0c:84:32:79 Slave queue ID: 0 Aggregator ID: 1 Actor Churn State: none Partner Churn State: none Actor Churned Count: 0 Partner Churned Count: 0 details actor lacp pdu: system priority: 65535 system mac address: 2c:60:0c:84:32:79 port key: 13 port priority: 255 port number: 1 port state: 61 details partner lacp pdu: system priority: 65535 system mac address: 44:38:39:ff:01:02 oper key: 13 port priority: 255 port number: 1 port state: 63 Slave Interface: ens3f1 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 2c:60:0c:84:32:7a Slave queue ID: 0 Aggregator ID: 1 Actor Churn State: none Partner Churn State: none Actor Churned Count: 0 Partner Churned Count: 0 details actor lacp pdu: system priority: 65535 system mac address: 2c:60:0c:84:32:79 port key: 13 port priority: 255 port number: 2 port state: 61 details partner lacp pdu: system priority: 65535 system mac address: 44:38:39:ff:01:02 oper key: 13 port priority: 255 port number: 1 port state: 63
VLANs 1020, 1030 and 1040 as well as bond1 are bridged as follows:
[root@overcloud-compute-0 ~]# ovs-vsctl list-ports br-ex bond1 phy-br-ex vlan100 vlan1020 vlan1030 vlan1040 vlan1050
ens3f0 and ens3f1 are connected to switch3 and switch4 ports (one to each). This protects against failure of either of the uplink leaf switches.
9.2. HA with SR-IOV
9.2.1. Bonding and SR-IOV
The compute servers have two physical NICs 5 and 6 that are dedicated to the dataplane traffic of the mobile network. These two NICs are connected to different leaf switches so that failure of a single switch will not result in total isolation of the server.
When using SR-IOV, it is not possible to perform NIC bonding at the host level. The vNICs that correspond to the VFs can instead be bonded at the VNF (VM) level. Again, it is important to pick two vNICs whose VFs are mapped to different PNICs.
For example on VM1 we have:
[root@test-sriov ~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446 qdisc pfifo_fast state UP qlen 1000 link/ether fa:16:3e:55:bf:02 brd ff:ff:ff:ff:ff:ff inet 192.20.1.12/16 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe55:bf02/64 scope link valid_lft forever preferred_lft forever 3: ens5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether fa:16:3e:fd:ac:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::f816:3eff:fefd:ac20/64 scope link valid_lft forever preferred_lft forever 4: ens6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether fa:16:3e:89:a8:bd brd ff:ff:ff:ff:ff:ff inet6 fe80::f816:3eff:fe89:a8bd/64 scope link valid_lft forever preferred_lft forever 5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether fa:16:3e:fd:ac:20 brd ff:ff:ff:ff:ff:ff inet 192.30.0.20/24 brd 192.30.0.255 scope global bond0 valid_lft forever preferred_lft forever inet6 2620:52:0:136c:f816:3eff:fefd:ac20/64 scope global mngtmpaddr dynamic valid_lft 2591940sec preferred_lft 604740sec inet6 fe80::f816:3eff:fefd:ac20/64 scope link valid_lft forever preferred_lft forever
The bond bond0 assumes the MAC address of the active slave interface, in this case ens5 (fa:16:3e:fd:ac:20). This is necessary because, if the active link of the bond fails, traffic must be sent on the backup link, which becomes active upon failure.
The ens5 and ens6 ports on the VM are enslaved to bond0. The configurations for ens5, ens6, and bond0 are as follows:
[root@test-sriov ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens5
NAME=bond0-slave0
DEVICE=ens5
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@test-sriov ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens6
NAME=bond0-slave0
DEVICE=ens6
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@test-sriov ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
NAME=bond0
DEVICE=bond0
BONDING_MASTER=yes
TYPE=Bond
IPADDR=192.30.0.20
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100 fail_over_mac=active"
In the bond configuration, it is critical to use "fail_over_mac=active". Without it, the bond does not adopt the MAC address of the backup slave when the active link fails, and traffic is dropped after failover. Another important point is that the host tags SR-IOV packets towards the switch with a VLAN tag (4075 in this case). Because of this, unless the layer 2 switch supports LACP on VLAN-tagged interfaces, it is not possible to form an LACP adjacency between the VM ports and the switch. Since the layer 2 switches in the validation lab do not support LACP on VLAN-tagged interfaces, active-backup mode with fail_over_mac=active had to be used.
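The bonding driver also exposes these options through sysfs, which gives a quick way to confirm that the running bond matches the configuration above. The values in the comments are what would be expected for this setup:

cat /sys/class/net/bond0/bonding/mode            # expected: active-backup 1
cat /sys/class/net/bond0/bonding/fail_over_mac   # expected: active 1
cat /sys/class/net/bond0/bonding/active_slave    # expected: ens5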
The actual state of the bond can be examined as follows:
[root@test-sriov ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active)
Primary Slave: None
Currently Active Slave: ens5
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens5
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: fa:16:3e:fd:ac:20
Slave queue ID: 0

Slave Interface: ens6
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: fa:16:3e:89:a8:bd
Slave queue ID: 0
[root@test-sriov ~]#
On the host, the VFs backing ens5 and ens6 belong to the two physical ports ens6f0 and ens6f1:
[root@overcloud-compute-0 ~]# ip l show ens6f0
6: ens6f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether a0:36:9f:47:e4:70 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 9e:f1:2f:a1:8f:ac, spoof checking on, link-state auto, trust off
    vf 1 MAC 96:4d:7e:1f:a5:38, spoof checking on, link-state auto, trust off
    vf 2 MAC b6:1e:0f:00:8d:87, spoof checking on, link-state auto, trust off
    vf 3 MAC 2e:51:7a:71:9c:c9, spoof checking on, link-state auto, trust off
    vf 4 MAC fa:16:3e:28:ec:b1, vlan 4075, spoof checking on, link-state auto, trust off
[root@overcloud-compute-0 ~]# ip l show ens6f1
7: ens6f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether a0:36:9f:47:e4:72 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC a6:5e:b0:68:1b:f9, spoof checking on, link-state auto, trust off
    vf 1 MAC 22:54:b2:fb:82:6a, spoof checking on, link-state auto, trust off
    vf 2 MAC 1e:09:5c:a7:83:b1, spoof checking on, link-state auto, trust off
    vf 3 MAC 2e:ad:e3:cb:14:f8, spoof checking on, link-state auto, trust off
    vf 4 MAC fa:16:3e:9b:48:17, vlan 4075, spoof checking on, link-state auto, trust off
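Per-VF state can also be inspected, or adjusted for troubleshooting, directly with ip link. Note that in normal operation Nova and Neutron manage the VF MAC addresses and VLAN tags, so the commands below are illustrative only and are not part of the deployment procedure:

# Show only the VF used by the VNF (VF 4 in this example)
ip link show ens6f0 | grep 'vf 4'

# Troubleshooting examples only
ip link set dev ens6f0 vf 4 spoofchk off   # disable MAC spoof checking on the VF
ip link set dev ens6f0 vf 4 trust on       # allow the guest to change the VF MAC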
Looking at compute node1 in Figure 7, ens6f0 (NIC5) connects to switch3 and ens6f1 (NIC6) connects to switch4 (two different switches). Because the bond is created inside the VM, data plane traffic is protected even if one of these two switches fails.

Figure 7: Traffic flow from VNF1 to VNF2 during steady state for SR-IOV setup
With two VMs (running CentOS 7) configured with bonding, when the switch port connected to the active link of the bond was shut down, a small amount of packet loss was observed before traffic resumed on the backup link. The traffic flow after failure is shown in Figure 8.

Figure 8: Traffic flow from VNF1 to VNF2 after failure of primary link switch port on switch3 with SR-IOV setup
9.3. HA with OVS-DPDK
9.3.1. Bonding and OVS-DPDK
The compute servers have two physical NICs (NIC5 and NIC6) dedicated to the data plane traffic of the mobile network. These NICs are connected to different leaf switches (switch3 and switch4) so that the failure of a single switch does not completely isolate the server.
On the compute node, the OVS-DPDK bond (dpdkbond0) looks like this:
[root@overcloud-compute-0 ~]# ovs-appctl bond/show dpdkbond0
---- dpdkbond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: negotiated
active slave mac: a0:36:9f:47:e4:70(dpdk0)

slave dpdk0: enabled
    active slave
    may_enable: true

slave dpdk1: enabled
    may_enable: true
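This bond is created by the director templates as part of the OVS-DPDK configuration. For reference, a manual equivalent is sketched below; this is illustrative only, the bridge name br-link is an assumption, and with the OVS 2.6 release used here ports named dpdkN are bound to the DPDK-enabled NICs in probe order:

ovs-vsctl add-bond br-link dpdkbond0 dpdk0 dpdk1 \
    -- set Interface dpdk0 type=dpdk \
    -- set Interface dpdk1 type=dpdk
ovs-vsctl set port dpdkbond0 bond_mode=active-backup
ovs-appctl bond/show dpdkbond0   # verify the bond state as shown above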
During steady state, traffic from VM1 to VM2 goes over the DPDK bond: out of NIC5 (the active link in the bond) to switch3, over the MLAG connection to switch4, and then into NIC6 of node2 and on to eth0 of VM2. This is shown in Figure 9.

Figure 9: Traffic flow from VNF1 to VNF2 during steady state for OVS-DPDK setup
With two VMs (running CentOS 7) configured with bonding, when the switch port connected to the active link of the bond was shut down, a small amount of packet loss was observed before traffic resumed on the backup link. The traffic flow after failure is shown in Figure 10.

Figure 10: Traffic flow from VNF1 to VNF2 after failure of primary link switch port on switch3 with OVS-DPDK setup
Appendix A. SR-IOV failover testing
To measure packet loss upon failure of the active link of the bond, the iperf tool was used in the validation lab.
iperf was chosen for this test because it reports jitter and packet loss with millisecond granularity. However, iperf is not the tool of choice for throughput testing on these VMs, as it uses kernel networking and does not take advantage of DPDK.
The client runs on VM2, sending traffic to VM1 (172.30.0.20 on port 8282):
[root@test-sriov2 network-scripts]# iperf3 -c 172.30.0.20 -p 8282 -u -l 16 -b 128000 -t 20 Connecting to host 172.30.0.20, port 8282 [ 4] local 172.30.0.21 port 54765 connected to 172.30.0.20 port 8282 [ ID] Interval Transfer Bandwidth Total Datagrams [ 4] 0.00-1.00 sec 14.1 KBytes 115 Kbits/sec 902 [ 4] 1.00-2.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 2.00-3.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 3.00-4.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 4.00-5.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 5.00-6.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 6.00-7.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 7.00-8.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 8.00-9.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 9.00-10.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 10.00-11.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 11.00-12.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 12.00-13.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 13.00-14.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 14.00-15.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 15.00-16.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 16.00-17.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 17.00-18.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 18.00-19.00 sec 15.6 KBytes 128 Kbits/sec 1000 [ 4] 19.00-20.00 sec 15.6 KBytes 128 Kbits/sec 1000 - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [ 4] 0.00-20.00 sec 311 KBytes 127 Kbits/sec 0.001 ms 300/19901 (1.5%) [ 4] Sent 19901 datagrams iperf Done.
VM1 acts as the server listening on port 8282:
Server: [root@test-sriov ~]# iperf3 -s -p 8282 ----------------------------------------------------------- Server listening on 8282 ----------------------------------------------------------- Accepted connection from 172.30.0.21, port 34606 [ 5] local 172.30.0.20 port 8282 connected to 172.30.0.21 port 54765 [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [ 5] 0.00-1.00 sec 14.1 KBytes 115 Kbits/sec 0.001 ms 0/901 (0%) [ 5] 1.00-2.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 2.00-3.00 sec 15.6 KBytes 128 Kbits/sec 0.002 ms 0/1000 (0%) [ 5] 3.00-4.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 4.00-5.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 5.00-6.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 6.00-7.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 7.00-8.00 sec 10.9 KBytes 89.6 Kbits/sec 0.002 ms 300/1000 (30%) ⇐ Shutdown switchport [ 5] 8.00-9.00 sec 15.6 KBytes 128 Kbits/sec 0.002 ms 0/1000 (0%) [ 5] 9.00-10.00 sec 15.6 KBytes 128 Kbits/sec 0.002 ms 0/1000 (0%) [ 5] 10.00-11.00 sec 15.6 KBytes 128 Kbits/sec 0.002 ms 0/1000 (0%) [ 5] 11.00-12.00 sec 15.6 KBytes 128 Kbits/sec 0.002 ms 0/1000 (0%) [ 5] 12.00-13.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 13.00-14.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 14.00-15.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 15.00-16.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 16.00-17.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 17.00-18.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 18.00-19.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 19.00-20.00 sec 15.6 KBytes 128 Kbits/sec 0.001 ms 0/1000 (0%) [ 5] 20.00-20.04 sec 0.00 Bytes 0.00 bits/sec 0.001 ms 0/0 (0%) - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [ 5] 0.00-20.04 sec 0.00 Bytes 0.00 bits/sec 0.001 ms 300/19901 (1.5%)
Traffic switches over to the backup link. Since the client sends 1,000 datagrams per second, the 300 lost datagrams reported above correspond to roughly 300 ms of traffic interruption during failover.
Appendix B. DPDK performance testing
For testing DPDK performance, MoonGen was used as the traffic generator. On the VNF, testpmd was used.
MoonGen runs on the undercloud server (bare metal):
[root@se-nfv-srv12 lua-trafficgen]# dpdk-devbind -s

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f0 drv=ixgbe unused=vfio-pci
0000:01:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f1 drv=ixgbe unused=vfio-pci
0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f0 drv=ixgbe unused=vfio-pci
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f1 drv=ixgbe unused=vfio-pci
0000:83:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens6f0 drv=ixgbe unused=vfio-pci
0000:83:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens6f1 drv=ixgbe unused=vfio-pci

Other network devices
=====================
<none>

Crypto devices using DPDK-compatible driver
===========================================
<none>

Crypto devices using kernel driver
==================================
<none>

Other crypto devices
====================
<none>

[root@se-nfv-srv12 lua-trafficgen]# dpdk-devbind --bind=vfio-pci 0000:83:00.0
[root@se-nfv-srv12 lua-trafficgen]# dpdk-devbind -s

Network devices using DPDK-compatible driver
============================================
0000:83:00.0 'Ethernet Controller 10-Gigabit X540-AT2' drv=vfio-pci unused=ixgbe

Network devices using kernel driver
===================================
0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f0 drv=ixgbe unused=vfio-pci
0000:01:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f1 drv=ixgbe unused=vfio-pci
0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f0 drv=ixgbe unused=vfio-pci
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens3f1 drv=ixgbe unused=vfio-pci
0000:83:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens6f1 drv=ixgbe unused=vfio-pci
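On the VNF side, testpmd forwards the traffic generated by MoonGen back towards the generator. An illustrative invocation is shown below; the core list, memory channels, hugepage memory, and forwarding mode are placeholders and are not the exact values used in the lab:

# Placeholder values for cores, memory channels and hugepage memory
testpmd -l 1-3 -n 4 --socket-mem 1024 -- \
    --nb-cores=2 --forward-mode=macswap --auto-start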
B.1. DPDK throughput test results
B.1.1. 64 Byte Packet Size
B.1.1.1. 0% Packet Loss
[Device: id=0] RX: 2.29 Mpps, 1246 Mbit/s (1612 Mbit/s with framing) [Device: id=0] TX: 2.29 Mpps, 1246 Mbit/s (1612 Mbit/s with framing) [Device: id=0] RX: 2.29 Mpps, 1246 Mbit/s (1612 Mbit/s with framing) [Device: id=0] TX: 2.29 Mpps, 1246 Mbit/s (1612 Mbit/s with framing) [Device: id=0] RX: 2.29 Mpps, 1246 Mbit/s (1612 Mbit/s with framing) [Device: id=0] TX: 2.29 Mpps, 1246 Mbit/s (1612 Mbit/s with framing) [Device: id=0] RX: 2.29 Mpps, 1246 Mbit/s (1612 Mbit/s with framing) [Device: id=0] TX: 2.29 (StdDev 0.00) Mpps, 1246 (StdDev 0) Mbit/s (1612 Mbit/s with framing), total 686859894 packets with 46706472792 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.04 Mpps, 22 Mbit/s (28 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 2.26 (StdDev 0.26) Mpps, 1229 (StdDev 142) Mbit/s (1591 Mbit/s with framing), total 686859894 packets with 46706472792 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00000000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 64 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.000000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 686859894 Rx Frames: 686859894 frame loss: 0, 0.000000% Rx Mpps: 2.289533 [REPORT] total: Tx frames: 686859894 Rx Frames: 686859894 frame loss: 0, 0.000000% Tx Mpps: 2.289533 Rx Mpps: 2.289533
B.1.1.2. 0.001% Packet Loss
[Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] RX: 3.09 Mpps, 1683 Mbit/s (2178 Mbit/s with framing) [Device: id=0] TX: 3.09 (StdDev 0.00) Mpps, 1683 (StdDev 0) Mbit/s (2178 Mbit/s with framing), total 92825964 packets with 6312165552 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.08 Mpps, 42 Mbit/s (54 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 2.73 (StdDev 1.00) Mpps, 1485 (StdDev 547) Mbit/s (1922 Mbit/s with framing), total 92825115 packets with 6312107820 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (849, 0.00091461%) is less than or equal to the maximum (0.00100000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 8.000000 nrFlows: 256 frameSize: 64 runBidirec: false searchRunTime: 30 validationRunTime: 30 acceptableLossPct: 0.001000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 92825964 Rx Frames: 92825115 frame loss: 849, 0.000915% Rx Mpps: 3.094246 [REPORT] total: Tx frames: 92825964 Rx Frames: 92825115 frame loss: 849, 0.000915% Tx Mpps: 3.094274 Rx Mpps: 3.094246
B.1.1.3. 0.002% Packet Loss
[Device: id=0] RX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] TX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] RX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] TX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] RX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] TX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] RX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] TX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] RX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] TX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] RX: 3.59 Mpps, 1954 Mbit/s (2529 Mbit/s with framing) [Device: id=0] TX: 3.59 (StdDev 0.00) Mpps, 1954 (StdDev 0) Mbit/s (2529 Mbit/s with framing), total 1077656076 packets with 73280613168 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.07 Mpps, 40 Mbit/s (52 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 3.54 (StdDev 0.41) Mpps, 1928 (StdDev 222) Mbit/s (2496 Mbit/s with framing), total 1077636582 packets with 73279287576 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (19494, 0.00180893%) is less than or equal to the maximum (0.00200000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 64 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.002000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 1077656076 Rx Frames: 1077636582 frame loss: 19494, 0.001809% Rx Mpps: 3.592134 [REPORT] total: Tx frames: 1077656076 Rx Frames: 1077636582 frame loss: 19494, 0.001809% Tx Mpps: 3.592199 Rx Mpps: 3.592134
B.1.2. 256 Byte Packet Size
B.1.2.1. 0% Packet Loss
[Device: id=0] RX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] TX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] RX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] TX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] RX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] TX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] RX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] TX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] RX: 2.06 Mpps, 4284 Mbit/s (4613 Mbit/s with framing) [Device: id=0] TX: 2.06 (StdDev 0.00) Mpps, 4284 (StdDev 0) Mbit/s (4613 Mbit/s with framing), total 617834700 packets with 160637022000 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.05 Mpps, 96 Mbit/s (104 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 2.03 (StdDev 0.23) Mpps, 4227 (StdDev 486) Mbit/s (4552 Mbit/s with framing), total 617834700 packets with 160637022000 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00000000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 256 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.000000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 617834700 Rx Frames: 617834700 frame loss: 0, 0.000000% Rx Mpps: 2.059450 [REPORT] total: Tx frames: 617834700 Rx Frames: 617834700 frame loss: 0, 0.000000% Tx Mpps: 2.059450 Rx Mpps: 2.059450
B.1.2.2. 0.001% Packet Loss
[Device: id=0] RX: 2.74 Mpps, 5706 Mbit/s (6145 Mbit/s with framing) [Device: id=0] TX: 2.74 Mpps, 5706 Mbit/s (6145 Mbit/s with framing) [Device: id=0] RX: 2.74 Mpps, 5707 Mbit/s (6146 Mbit/s with framing) [Device: id=0] TX: 2.74 Mpps, 5707 Mbit/s (6146 Mbit/s with framing) [Device: id=0] RX: 2.74 Mpps, 5706 Mbit/s (6145 Mbit/s with framing) [Device: id=0] TX: 2.75 (StdDev 0.00) Mpps, 5714 (StdDev 1) Mbit/s (6153 Mbit/s with framing), total 824122089 packets with 214271743140 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.05 Mpps, 110 Mbit/s (119 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 2.71 (StdDev 0.31) Mpps, 5639 (StdDev 649) Mbit/s (6072 Mbit/s with framing), total 824114944 packets with 214269885440 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (7145, 0.00086698%) is less than or equal to the maximum (0.00100000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 256 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.001000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 824122089 Rx Frames: 824114944 frame loss: 7145, 0.000867% Rx Mpps: 2.747064 [REPORT] total: Tx frames: 824122089 Rx Frames: 824114944 frame loss: 7145, 0.000867% Tx Mpps: 2.747087 Rx Mpps: 2.747064
B.1.2.3. 0.002% Packet Loss
[Device: id=0] TX: 3.18 Mpps, 6615 Mbit/s (7124 Mbit/s with framing) [Device: id=0] RX: 3.18 Mpps, 6614 Mbit/s (7123 Mbit/s with framing) [Device: id=0] TX: 3.18 Mpps, 6614 Mbit/s (7123 Mbit/s with framing) [Device: id=0] RX: 3.18 Mpps, 6613 Mbit/s (7122 Mbit/s with framing) [Device: id=0] TX: 3.18 (StdDev 0.00) Mpps, 6614 (StdDev 1) Mbit/s (7123 Mbit/s with framing), total 953965467 packets with 248031021420 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.06 Mpps, 127 Mbit/s (137 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 3.14 (StdDev 0.36) Mpps, 6527 (StdDev 751) Mbit/s (7029 Mbit/s with framing), total 953948680 packets with 248026656800 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (16787, 0.00175971%) is less than or equal to the maximum (0.00200000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 256 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.002000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 953965467 Rx Frames: 953948680 frame loss: 16787, 0.001760% Rx Mpps: 3.179834 [REPORT] total: Tx frames: 953965467 Rx Frames: 953948680 frame loss: 16787, 0.001760% Tx Mpps: 3.179890 Rx Mpps: 3.179834
B.1.3. 512 Byte Packet Size
B.1.3.1. 0% Packet Loss
[Device: id=0] RX: 1.72 Mpps, 7097 Mbit/s (7372 Mbit/s with framing) [Device: id=0] TX: 1.72 Mpps, 7097 Mbit/s (7372 Mbit/s with framing) [Device: id=0] RX: 1.72 Mpps, 7097 Mbit/s (7372 Mbit/s with framing) [Device: id=0] TX: 1.72 Mpps, 7097 Mbit/s (7372 Mbit/s with framing) [Device: id=0] RX: 1.72 Mpps, 7095 Mbit/s (7370 Mbit/s with framing) [Device: id=0] TX: 1.72 (StdDev 0.00) Mpps, 7095 (StdDev 9) Mbit/s (7370 Mbit/s with framing), total 515589228 packets with 266044041648 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.04 Mpps, 180 Mbit/s (187 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 1.70 (StdDev 0.19) Mpps, 7001 (StdDev 805) Mbit/s (7273 Mbit/s with framing), total 515589228 packets with 266044041648 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00000000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 512 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.000000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 515589228 Rx Frames: 515589228 frame loss: 0, 0.000000% Rx Mpps: 1.718665 [REPORT] total: Tx frames: 515589228 Rx Frames: 515589228 frame loss: 0, 0.000000% Tx Mpps: 1.718665 Rx Mpps: 1.718665 [root@se-nfv-srv12 lua-trafficgen]#
B.1.3.2. 0.001% Packet Loss
[Device: id=0] RX: 2.31 Mpps, 9538 Mbit/s (9908 Mbit/s with framing) [Device: id=0] TX: 2.31 Mpps, 9538 Mbit/s (9908 Mbit/s with framing) [Device: id=0] RX: 2.31 Mpps, 9535 Mbit/s (9904 Mbit/s with framing) [Device: id=0] TX: 2.31 (StdDev 0.00) Mpps, 9536 (StdDev 2) Mbit/s (9906 Mbit/s with framing), total 693058086 packets with 357617972376 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.04 Mpps, 175 Mbit/s (182 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 2.28 (StdDev 0.26) Mpps, 9411 (StdDev 1083) Mbit/s (9776 Mbit/s with framing), total 693051841 packets with 357614749956 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (6245, 0.00090108%) is less than or equal to the maximum (0.00100000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 512 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.001000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 693058086 Rx Frames: 693051841 frame loss: 6245, 0.000901% Rx Mpps: 2.310175 [REPORT] total: Tx frames: 693058086 Rx Frames: 693051841 frame loss: 6245, 0.000901% Tx Mpps: 2.310196 Rx Mpps: 2.310175
B.1.3.3. 0.002% Packet Loss
[Device: id=0] RX: 2.33 Mpps, 9627 Mbit/s (10001 Mbit/s with framing) [Device: id=0] TX: 2.33 Mpps, 9628 Mbit/s (10001 Mbit/s with framing) [Device: id=0] RX: 2.33 Mpps, 9628 Mbit/s (10001 Mbit/s with framing) [Device: id=0] TX: 2.33 (StdDev 0.00) Mpps, 9627 (StdDev 1) Mbit/s (10001 Mbit/s with framing), total 699665904 packets with 361027606464 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.04 Mpps, 182 Mbit/s (189 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 2.30 (StdDev 0.26) Mpps, 9501 (StdDev 1094) Mbit/s (9869 Mbit/s with framing), total 699659295 packets with 361024196220 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (6609, 0.00094459%) is less than or equal to the maximum (0.00200000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 512 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.002000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 699665904 Rx Frames: 699659295 frame loss: 6609, 0.000945% Rx Mpps: 2.332200 [REPORT] total: Tx frames: 699665904 Rx Frames: 699659295 frame loss: 6609, 0.000945% Tx Mpps: 2.332222 Rx Mpps: 2.332200
B.1.4. 1024 Byte Packet Size
B.1.4.1. 0% Packet Loss
[Device: id=0] RX: 1.19 Mpps, 9807 Mbit/s (9998 Mbit/s with framing) [Device: id=0] TX: 1.19 Mpps, 9807 Mbit/s (9997 Mbit/s with framing) [Device: id=0] RX: 1.19 Mpps, 9808 Mbit/s (9999 Mbit/s with framing) [Device: id=0] TX: 1.19 (StdDev 0.00) Mpps, 9808 (StdDev 1) Mbit/s (9999 Mbit/s with framing), total 357778575 packets with 367796375100 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.02 Mpps, 174 Mbit/s (177 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 1.18 (StdDev 0.14) Mpps, 9679 (StdDev 1115) Mbit/s (9867 Mbit/s with framing), total 357778575 packets with 367796375100 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00000000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 1024 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.000000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 357778575 Rx Frames: 357778575 frame loss: 0, 0.000000% Rx Mpps: 1.192593 [REPORT] total: Tx frames: 357778575 Rx Frames: 357778575 frame loss: 0, 0.000000% Tx Mpps: 1.192593 Rx Mpps: 1.192593
B.1.4.2. 0.001% Packet Loss
[Device: id=0] RX: 1.18 Mpps, 9737 Mbit/s (9926 Mbit/s with framing) [Device: id=0] TX: 1.18 Mpps, 9736 Mbit/s (9925 Mbit/s with framing) [Device: id=0] RX: 1.18 Mpps, 9714 Mbit/s (9903 Mbit/s with framing) [Device: id=0] TX: 1.18 (StdDev 0.00) Mpps, 9727 (StdDev 33) Mbit/s (9916 Mbit/s with framing), total 354832506 packets with 364767816168 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.03 Mpps, 206 Mbit/s (210 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 1.17 (StdDev 0.13) Mpps, 9599 (StdDev 1105) Mbit/s (9786 Mbit/s with framing), total 354832506 packets with 364767816168 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00100000%) [INFO] Test Result: [PARAMETERS] startRate: 4.000000 nrFlows: 256 frameSize: 1024 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.001000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 354832506 Rx Frames: 354832506 frame loss: 0, 0.000000% Rx Mpps: 1.182761 [REPORT] total: Tx frames: 354832506 Rx Frames: 354832506 frame loss: 0, 0.000000% Tx Mpps: 1.182761 Rx Mpps: 1.182761
B.1.4.3. 0.002% Packet Loss
[Device: id=0] TX: 1.18 Mpps, 9701 Mbit/s (9890 Mbit/s with framing) [Device: id=0] RX: 1.18 Mpps, 9690 Mbit/s (9878 Mbit/s with framing) [Device: id=0] TX: 1.18 (StdDev 0.00) Mpps, 9690 (StdDev 25) Mbit/s (9879 Mbit/s with framing), total 353506356 packets with 363404533968 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.03 Mpps, 216 Mbit/s (220 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 1.16 (StdDev 0.13) Mpps, 9563 (StdDev 1100) Mbit/s (9749 Mbit/s with framing), total 353506356 packets with 363404533968 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00200000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 3.000000 nrFlows: 256 frameSize: 1024 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.002000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 353506356 Rx Frames: 353506356 frame loss: 0, 0.000000% Rx Mpps: 1.178306 [REPORT] total: Tx frames: 353506356 Rx Frames: 353506356 frame loss: 0, 0.000000% Tx Mpps: 1.178306 Rx Mpps: 1.178306
B.1.5. 1500 Byte Packet Size
B.1.5.1. 0% Packet Loss
[Device: id=0] TX: 0.82 Mpps, 9838 Mbit/s (9969 Mbit/s with framing) [Device: id=0] RX: 0.82 Mpps, 9840 Mbit/s (9970 Mbit/s with framing) [Device: id=0] TX: 0.82 (StdDev 0.00) Mpps, 9851 (StdDev 7) Mbit/s (9982 Mbit/s with framing), total 245612493 packets with 369401189472 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.02 Mpps, 236 Mbit/s (239 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 0.81 (StdDev 0.09) Mpps, 9721 (StdDev 1118) Mbit/s (9850 Mbit/s with framing), total 245612493 packets with 369401189472 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00000000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 3.000000 nrFlows: 256 frameSize: 1500 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.000000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 245612493 Rx Frames: 245612493 frame loss: 0, 0.000000% Rx Mpps: 0.818712 [REPORT] total: Tx frames: 245612493 Rx Frames: 245612493 frame loss: 0, 0.000000% Tx Mpps: 0.818712 Rx Mpps: 0.818712
B.1.5.2. 0.001% Packet Loss
[Device: id=0] TX: 0.82 Mpps, 9833 Mbit/s (9964 Mbit/s with framing) [Device: id=0] RX: 0.82 Mpps, 9834 Mbit/s (9965 Mbit/s with framing) [Device: id=0] TX: 0.82 (StdDev 0.00) Mpps, 9846 (StdDev 10) Mbit/s (9977 Mbit/s with framing), total 245492919 packets with 369221350176 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.02 Mpps, 240 Mbit/s (243 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 0.81 (StdDev 0.09) Mpps, 9716 (StdDev 1117) Mbit/s (9846 Mbit/s with framing), total 245492919 packets with 369221350176 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00100000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 3.000000 nrFlows: 256 frameSize: 1500 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.001000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 245492919 Rx Frames: 245492919 frame loss: 0, 0.000000% Rx Mpps: 0.818316 [REPORT] total: Tx frames: 245492919 Rx Frames: 245492919 frame loss: 0, 0.000000% Tx Mpps: 0.818316 Rx Mpps: 0.818316
B.1.5.3. 0.002% Packet Loss
[Device: id=0] TX: 0.82 Mpps, 9864 Mbit/s (9995 Mbit/s with framing) [Device: id=0] RX: 0.82 Mpps, 9864 Mbit/s (9995 Mbit/s with framing) [Device: id=0] TX: 0.82 (StdDev 0.00) Mpps, 9851 (StdDev 8) Mbit/s (9983 Mbit/s with framing), total 245634291 packets with 369433973664 bytes (incl. CRC) [INFO] Stopping final validation [Device: id=0] RX: 0.02 Mpps, 237 Mbit/s (240 Mbit/s with framing) [Device: id=0] RX: 0.00 Mpps, 0 Mbit/s (0 Mbit/s with framing) [Device: id=0] RX: 0.81 (StdDev 0.09) Mpps, 9722 (StdDev 1118) Mbit/s (9851 Mbit/s with framing), total 245634291 packets with 369433973664 bytes (incl. CRC) [INFO] Device 0->0: PASSED - frame loss (0, 0.00000000%) is less than or equal to the maximum (0.00100000%) [INFO] Test Result: PASSED [PARAMETERS] startRate: 3.000000 nrFlows: 256 frameSize: 1500 runBidirec: false searchRunTime: 30 validationRunTime: 300 acceptableLossPct: 0.001000 ports: 1,2 [REPORT]Device 0->0: Tx frames: 245634291 Rx Frames: 245634291 frame loss: 0, 0.000000% Rx Mpps: 0.818775 [REPORT] total: Tx frames: 245634291 Rx Frames: 245634291 frame loss: 0, 0.000000% Tx Mpps: 0.818775 Rx Mpps: 0.818775
Appendix C. GitHub
The relevant templates (YAML files) and scripts used in the NFV validation lab can be found here:
Appendix D. Contributors
We would like to thank the following individuals for their time and patience as we collaborated on this process. This document would not have been possible without their many contributions.
Contributor | Title | Contribution |
---|---|---|
Rimma Iontel | Senior Architect | SME Telco/NFV & Review |
Franck Baudin | Principal Product Manager | SME NFV, Technical Content Review |
Vijay Chundry | Sr. Manager Software | SME OpenStack/NFV, Technical Content Review |
Aaron Smith | Sr. Principal Software Engineer | SME Telco, NFV & HA, Design, Testing & Review |
David Cain | Sr. Solutions Architect | SME OpenStack & Datacenter, Review |
Joe Antkowiak | Sr. Solutions Architect OpenStack | SME OpenStack & Review |
Julio Villarreal Pelgrino | Principal Architect Cloud | SME Cloud, OpenStack & Review |
John Fulton | Sr. Software Engineer | SME OpenStack, HCI & Review |
Andrew Bays | Senior Software Engineer | SME HA, Deployment, Testing & Review |
Abhishek Bandarupalle | Systems Engineering Intern - NFV | Testing & Documentation |
Jess Schaefer | Graphic Designer | Diagrams |
Emma Eble | Multimedia Designer | Diagrams |
Appendix E. Revision History
Revision | Date | Author |
---|---|---|
Revision 1.0-0 | 2019-01-02 | AS |