Network Functions Virtualization Configuration Guide

Red Hat OpenStack Platform 10

Configuring the Network Functions Virtualization (NFV) OpenStack Deployment

OpenStack Documentation Team

Abstract

This guide describes the configuration procedures for SR-IOV and OVS-DPDK in your Red Hat OpenStack Platform 10 with NFV deployment.

Preface

Red Hat OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads.

This guide describes the steps to configure SR-IOV and DPDK-accelerated Open vSwitch (OVS) using the Red Hat OpenStack Platform 10 director for NFV deployments.

Chapter 1. Overview

Network Functions Virtualization (NFV) is a software-based solution that virtualizes network functions on general-purpose, cloud-based infrastructure. NFV provides agility, flexibility, simplicity, efficiency, and scalability while reducing cost and allowing greater innovation. It allows Communication Service Providers (CSPs) to move away from traditional hardware.

Note

This guide describes the procedures to configure SR-IOV and OVS-DPDK for single ports only.

Red Hat OpenStack Platform 10 director allows you to isolate the overcloud networks. Using this feature, you can separate specific network types (for example, external, tenant, and internal API) into isolated networks. You can deploy a network on a single network interface or distribute it over multiple host network interfaces. Open vSwitch allows you to create bonds by assigning multiple interfaces to a single bridge. Network isolation in a Red Hat OpenStack Platform 10 installation is configured using template files. If you do not provide template files, all the service networks are deployed on the provisioning network. There are three types of template configuration files:

  • network-environment.yaml - this file contains the network details, such as subnets and IP address ranges, used for the network configuration on the overcloud nodes. It also contains settings that override the default parameter values for various scenarios.
  • Host templates (for example, compute.yaml, controller.yaml, and so on) - these files define the network interface configuration for the overcloud nodes. The network details are taken from the network-environment.yaml file.
  • Post install configuration files (first-boot.yaml, post-install.yaml) - these files provide various post configuration steps, for example:

    • Grub argument configurations.
    • DPDK configurations that are not yet supported by the Red Hat OpenStack Platform director heat templates.
    • Tuned installation and configuration. The tuned package contains the tuned daemon that monitors the use of system components and dynamically tunes system settings based on that monitoring information. To provide proper CPU affinity configuration in OVS-DPDK and SR-IOV deployments, you should use the cpu-partitioning tuned profile.

      Note

      To install and activate the tuned profile, you must provide the yum repository within the first-boot.yaml file. The tuna package is installed as a dependency of the tuned-profiles-cpu-partitioning package.

      You need to define the repository base URLs within the first-boot.yaml file. Run the following command to get the required base URLs for the repositories:

      TUNA_REPO=`yum info tuna | grep Repo | awk '{print $3}' | sed 's#/.*##g'` && yum repolist -v $TUNA_REPO | grep Repo-baseurl && TUNED_PROFILE_REPO=`yum info tuned-profiles-cpu-partitioning | grep Repo | awk '{print $3}' | sed 's#/.*##g'` && yum repolist -v $TUNED_PROFILE_REPO | grep Repo-baseurl

      Add the resulting base URLs to the repository definitions in the first-boot.yaml file.
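
      A repository entry in the first-boot.yaml file then takes the following form (a minimal sketch using placeholder names, matching the sample in Appendix A):

      yum_repos:
        <repo-file-name>:
          name: <repo-name>
          baseurl: <relevant-baserepo-url>
          enabled: 1
          gpgcheck: 0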

Both the network-environment.yaml and the host template files are located in the /usr/share/openstack-tripleo-heat-templates/ directory on the undercloud node.

For samples of the reference files mentioned above, see Sample YAML Files.

The following sections provide more details on how to configure the network-environment.yaml, the host template files and the post install configuration files for the SR-IOV and OVS-DPDK features for NFV using the Red Hat OpenStack Platform director.

Note

The procedures in this guide can be used to configure SR-IOV and OVS-DPDK as stand-alone configurations with the Red Hat OpenStack Platform director for NFV. To configure and use composable roles for your Red Hat OpenStack Platform deployment with NFV, see Managing Deployments.

Chapter 2. Configure SR-IOV Support for Virtual Networking

This chapter covers the configuration of Single Root Input/Output Virtualization (SR-IOV) within the Red Hat OpenStack Platform 10 environment using the director.

In the following procedure, you need to update the network-environment.yaml file to include parameters for kernel arguments, SR-IOV driver, PCI passthrough and so on. You must also update the compute.yaml file to include the SR-IOV interface parameters, and run the overcloud_deploy.sh script to deploy the overcloud with the SR-IOV parameters.

This section describes the YAML files you need to modify and update for configuring SR-IOV before deploying the overcloud.

  1. Modify the network-environment.yaml file:

    1. Add first-boot.yaml to set the kernel arguments. Add the following line under resource_registry:

      resource_registry:
      OS::TripleO::NodeUserData: /home/stack/templates/first-boot.yaml
    2. Add the ComputeKernelArgs parameters to the default grub file. Add this to the parameter_defaults section:

      parameter_defaults:
      ComputeKernelArgs: "intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12"
      Note

      You need to add hw:mem_page_size=1GB to the flavor you will use to launch an SR-IOV instance.

    3. Enable the SR-IOV mechanism driver (sriovnicswitch):

      NeutronMechanismDrivers: "openvswitch,sriovnicswitch"
    4. Configure the Compute pci_passthrough_whitelist parameter, and set devname as the SR-IOV interface:

      NovaPCIPassthrough:
        - devname: "p6p1"
          physical_network: "tenant"
    5. Alternatively, for the Red Hat OpenStack Platform 10 release, set the vendor and product ID of the Physical Functions (PFs) instead of using devname:

      - vendor_id: "8086"
        product_id: "154d"
        physical_network: "tenant"

      This example uses tenant as the physical_network name.

    6. List the available scheduler filters:

      NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter"]
    7. List the most restrictive filters first to make the filtering process for the nodes more efficient:

      Compute uses an array of filters to filter a node. These filters are applied in the order they are listed.

      NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter', 'PciPassthroughFilter' ]
    8. Set a list or range of physical CPU cores to reserve for virtual machine processes:

      Add a comma-separated list of CPU cores.

      NovaVcpuPinSet: ['4-12','^8']

      This example reserves cores from 4-12 excluding 8.

    9. List the supported PCI vendor devices in the format VendorID:ProductID:

      By default, Intel and Mellanox SR-IOV capable NICs are supported:

      NeutronSupportedPCIVendorDevs: ['15b3:1004','8086:10ca']
    10. Specify the physical network and SR-IOV interface in the format <physical_network>:<physical_device>.

      All physical networks listed in the network_vlan_ranges on the server should have mappings to the appropriate interfaces on each agent.

      NeutronPhysicalDevMappings: "tenant:p6p1"
    11. Provide a list of Virtual Functions (VFs) to be reserved for each SR-IOV interface:

      NeutronSriovNumVFs: "p6p1:5"
      Note

      Currently, the Red Hat OpenStack Platform with NFV supports 30 or fewer VFs.

    12. Set the tunnel type for the tenant network (vxlan or gre). To disable the tunnel type parameter, set the value to "":

      NeutronTunnelTypes: ""
    13. Set the tenant network type for OpenStack Networking. The options available are vlan or vxlan. By default, the value is set to vxlan:

      NeutronNetworkType: 'vlan'
    14. Set the Open vSwitch logical to physical bridge mappings:

      NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-isolated'

      Here, datacentre:br-ex and tenant:br-isolated associate each physical network name with its bridge.

    15. Set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range:

      NeutronNetworkVLANRanges: 'datacentre:398:399,tenant:405:406'

      Here, datacentre:398:399 and tenant:405:406 represent the VLAN ranges for each physical network.

      The physnet (physical network) name links the bridge from NeutronBridgeMappings with the VLAN range from NeutronNetworkVLANRanges, and is used later to create the network on the overcloud (see the example after this procedure).

    16. Set a list or range of physical CPU cores to be tuned:

      The given argument will be appended to the tuned cpu-partitioning profile.

      HostCpusList: '4-8'

      This example tunes cores 4 through 8.

  2. Modify the compute.yaml file:

    Set the SR-IOV interface by adding the following to the compute.yaml file:

    -
                type: interface
                name: p6p1
                use_dhcp: false
                defroute: false
  3. Run the overcloud_deploy.sh script:

    The following example defines the openstack overcloud deploy command for the VLAN environment:

    # openstack overcloud deploy --debug \
    --templates \
    --environment-file "$HOME/extra_env.yaml" \
    --libvirt-type kvm \
    --ntp-server clock.redhat.com \
    -e /home/stack/<relative-directory>/network-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
    --log-file overcloud_install.log &> overcloud_install.log
    • /home/stack/<relative-directory>/network-environment.yaml is the path for the network-environment.yaml file.
    • /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml is the location of the default neutron-sriov.yaml file.
    • The default neutron-sriov.yaml values can be overridden in the network-environment.yaml file.
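
After the overcloud deployment completes, you can create the SR-IOV provider network and a direct port on the overcloud using the physnet name and VLAN range defined above. The following is a minimal sketch using the neutron CLI; the network name sriov_net, the VLAN ID 405, and the subnet range are placeholder values, and the resulting port ID is what you pass as <direct-port-id> in Section 2.1:

    # neutron net-create sriov_net --provider:network_type vlan --provider:physical_network tenant --provider:segmentation_id 405
    # neutron subnet-create sriov_net 10.10.5.0/24 --name sriov_subnet
    # neutron port-create sriov_net --vnic-type direct --name sriov_port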

2.1. Create a Flavor and Deploy an Instance for SR-IOV

After you have completed configuring SR-IOV for your Red Hat OpenStack Platform deployment with NFV, you need to create a flavor and deploy an instance by performing the following steps:

  1. Create a flavor:

    # openstack flavor create  m1.medium_huge_4cpu --ram 4096 --disk 150 --vcpus 4

    Here, m1.medium_huge_4cpu is the flavor name, 4096 is the memory size in MB, 150 is the disk size in GB (default 0G), and 4 is the number of VCPUs.

  2. Set additional flavor properties:

    # openstack  flavor set --property hw:cpu_policy=dedicated --property  hw:mem_page_size=large --property hw:numa_nodes=1 --property hw:numa_mempolicy=preferred --property  hw:numa_cpus.0=0,1,2,3 --property hw:numa_mem.0=4096 m1.medium_huge_4cpu

    Here, m1.medium_huge_4cpu is the flavor name and the remaining parameters set the other properties for the flavor.

  3. Deploy an instance:

    # openstack server create --flavor m1.medium_huge_4cpu --availability-zone sriov --image rhel_7.3 --nic  port-id=<direct-port-id>

    Here, m1.medium_huge_4cpu is the flavor name or ID, sriov is the availability zone for the server, rhel_7.3 is the image (name or ID) used to create the instance, and <direct-port-id> is the ID of the direct port attached to the instance.

You have now deployed an instance for the SR-IOV with NFV use case.
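
To spot-check the SR-IOV configuration on the Compute node (a quick verification, not part of the original procedure), you can list the Virtual Functions created on the physical function; the interface name p6p1 matches the earlier examples. Each VF appears as a vf <N> line with its MAC address:

    # ip link show p6p1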

Chapter 3. Configure DPDK-accelerated Open vSwitch (OVS) for Networking

This chapter covers DPDK with Open vSwitch installation and tuning within the Red Hat OpenStack Platform environment.

In the following procedure, you update the network-environment.yaml file to include parameters for kernel arguments and DPDK arguments, update the compute.yaml file to set the bridge for the DPDK interface, update the controller.yaml file to set the same bridge details for the DPDK interface, and then run the overcloud_deploy.sh script to deploy the overcloud with the DPDK parameters.

Before you begin the procedure, ensure you have the following:

  • Red Hat OpenStack Platform 10 with Red Hat Enterprise Linux 7.3
  • OVS-DPDK 2.5.0-14
  • NIC - Dual port Intel x520
Note

OVS-DPDK depends on the Red Hat Enterprise Linux version. With Red Hat Enterprise Linux 7.3, you need OVS version 2.5.*.
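
To confirm the OVS version installed on a node (a quick check, not part of the original procedure), you can query the package:

    # rpm -q openvswitch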

Before you begin configuring OVS-DPDK, see the Managing Deployments section of the Network Functions Virtualization Planning Guide.

3.1. Configure OVS-DPDK in Red Hat OpenStack Platform

This section covers the procedures to configure and deploy OVS-DPDK for your OpenStack environment.

Note

NeutronDPDKCoreList and NeutronDPDKMemoryChannels are the required settings for this procedure. Attempting to deploy DPDK without appropriate values will cause the deployment to fail, or lead to unstable deployments.

  1. Modify the network-environment.yaml file:

    1. Add the first-boot.yaml to set the kernel parameters. Add the following line under resource_registry:

      resource_registry:
      OS::TripleO::NodeUserData: /home/stack/templates/first-boot.yaml
    2. Add the ComputeKernelArgs parameters to the default grub file. Add this to the parameter_defaults section:

      parameter_defaults:
      ComputeKernelArgs: "intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12"
      Note

      The huge pages are consumed both by the virtual machines and by OVS-DPDK through the NeutronDpdkSocketMemory parameter, as shown in this procedure. The number of huge pages available for virtual machines is the boot parameter minus the NeutronDpdkSocketMemory value. For example, with hugepages=12 (12 x 1 GB) and NeutronDpdkSocketMemory of 1024 MB, 11 GB of huge pages remain for the instances.

      You need to add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance. If you do not do this, the instance does not get a DHCP allocation.

    3. Add the post-install.yaml file to set the DPDK arguments:

      OS::TripleO::NodeExtraConfigPost: post-install.yaml
    4. Provide a list of cores that can be used as DPDK PMDs in the format - [allowed_pattern: "'[0-9,-]+'"]:

      NeutronDpdkCoreList: "'2'"
    5. Provide the number of memory channels in the format - [allowed_pattern: "[0-9]+"]:

      NeutronDpdkMemoryChannels: "'2'"
    6. Set the memory allocated for each socket:

      NeutronDpdkSocketMemory: "'1024'"
    7. Set the DPDK driver type. The default value is the vfio-pci module:

      NeutronDpdkDriverType: "vfio-pci"
    8. Reserve the RAM for the host processes:

      NovaReservedHostMemory: "4096"
    9. Add a list or range of physical CPU cores to be reserved for virtual machine processes:

      NovaVcpuPinSet: ['4-12','^8']

      This will reserve cores from 4-12 excluding 8.

    10. List the most restrictive filters first to make the filtering process for the nodes more efficient:

      Compute uses an array of filters to filter a node. These filters are applied in the order they are listed.

      NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter"
    11. Set the tunnel type for tenant network (for example, vxlan or gre). To disable the tunnel type parameter, set the value to "":

      NeutronTunnelTypes: ""
    12. Set the tenant network type for OpenStack Networking. The options available are vlan or vxlan:

      NeutronNetworkType: 'vlan'
    13. Set the Open vSwitch logical to physical bridge mappings:

      NeutronBridgeMappings: 'datacentre:br-ex,dpdk:br-link'

      Here, datacentre:br-ex and dpdk:br-link associate each physical network name with its bridge.

    14. Set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range:

      NeutronNetworkVLANRanges: 'datacentre:22:22,dpdk:25:25'

      Here, datacentre:22:22 and dpdk:25:25 represent the VLAN ranges for each physical network.

      The physnet (physical network) name links the bridge from NeutronBridgeMappings with the VLAN range from NeutronNetworkVLANRanges, and is used later to create the network on the overcloud (see the example after this procedure).

    15. Set a list or range of physical CPU cores to be tuned:

      The given argument will be appended to the tuned cpu-partitioning profile.

      HostCpusList: '4-8'

      This example tunes cores 4 through 8.

  2. Modify the compute.yaml file:

    1. Add the following lines to the compute.yaml file to set up a bridge with a DPDK port on the desired interface.

      -
        type: ovs_user_bridge
        name: br-link                 #<BRIDGE_NAME>
        use_dhcp: false
        members:
              -
                type: ovs_dpdk_port
                name: dpdk0
                members:
                     -
                       type: interface
                       name: nic3       # <SUPPORTED_INTERFACE_NAME>
      Note

      To include multiple DPDK devices, repeat the ovs_dpdk_port section for each DPDK device you want to add.

    2. Optionally, change any ovs_bridge type to linux_bridge if you enable network isolation.

      network_config:
                 -
                   type: interface
                   name: nic1
                   use_dhcp: false
                   defroute: false
                 -
                   type: linux_bridge
                   name: br-isolated
      Note

      When using OVS-DPDK, all bridges should be of type ovs_user_bridge on the Compute node. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge with network isolation configured. To use network isolation, set the bridge type to linux_bridge.

  3. Modify the controller.yaml file:

    1. Add the following lines to the controller.yaml file to set up a bridge with the same name, including an interface configured with the same VLAN.

      -
        type: ovs_bridge
        name: br-link                 #<BRIDGE_NAME>
        use_dhcp: false
        members:
             -
               type: interface
               name: nic4       # <SUPPORTED_INTERFACE_NAME>
  4. Run the overcloud_deploy.sh script to deploy your overcloud with OVS-DPDK:

    # openstack overcloud deploy --debug \
    --templates \
    --environment-file "$HOME/extra_env.yaml" \
    --libvirt-type kvm \
    --ntp-server clock.redhat.com \
    -e /home/stack/<relative-directory>/network-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
    --log-file overcloud_install.log &> overcloud_install.log
    • /home/stack/<relative-directory>/network-environment.yaml sets the path for the network-environment.yaml file.
    • /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml sets the location of the default neutron-ovs-dpdk.yaml file.
    • The default neutron-ovs-dpdk.yaml values can be overridden in the network-environment.yaml file.
Note

This OVS-DPDK configuration does not support security groups or live migration.
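
After the overcloud deployment completes, you can create the tenant network on the dpdk physnet using the bridge mapping and VLAN range defined above. The following is a minimal sketch using the neutron CLI; the network name dpdk_net, the VLAN ID 25, and the subnet range are placeholder values:

    # neutron net-create dpdk_net --provider:network_type vlan --provider:physical_network dpdk --provider:segmentation_id 25
    # neutron subnet-create dpdk_net 10.10.7.0/24 --name dpdk_subnet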

3.2. Known Limitations

There are certain limitations when configuring OVS-DPDK with Red Hat OpenStack Platform 10 for the NFV use case:

  • Huge pages are required for every instance running on the hosts with OVS-DPDK. If huge pages are not present in the guest, the interface will appear but not function.
  • There is a performance degradation of services that use tap devices, because these devices do not support DPDK. For example, services like DVR, FWaaS, and LBaaS use tap devices.

    • With OVS-DPDK, you can enable DVR with the netdev datapath, but this has poor performance and is not suitable for a production environment. DVR uses kernel namespaces and tap devices to perform the routing.
    • To ensure the DVR routing performs well with OVS-DPDK, you need to use a controller such as ODL which implements routing as OpenFlow rules. With OVS-DPDK, OpenFlow routing removes the bottleneck introduced by the Linux kernel interfaces so that the full performance of datapath is maintained.
  • When using OVS-DPDK, all bridges should be of type ovs_user_bridge on the Compute node. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge with network isolation configured. To use network isolation, set the bridge type to linux_bridge.

3.3. Create a Flavor and Deploy an Instance for OVS-DPDK

After you have completed configuring OVS-DPDK for your Red Hat OpenStack Platform deployment with NFV, you can create a flavor and deploy an instance with the following steps:

  1. Create a flavor:

    # openstack flavor create  m1.medium_huge_4cpu --ram 4096 --disk 150 --vcpus 4

    Here, m1.medium_huge_4cpu is the flavor name, 4096 is the memory size in MB, 150 is the disk size in GB (default 0G), and 4 is the number of VCPUs.

  2. Set additional flavor properties:

    # openstack  flavor set  --property hw:cpu_policy=dedicated --property hw:cpu_thread_policy=require --property hw:mem_page_size=large --property hw:numa_nodes=1 --property hw:numa_mempolicy=preferred --property  hw:numa_cpus.0=0,1,2,3 --property hw:numa_mem.0=4096 m1.medium_huge_4cpu

    Here, m1.medium_huge_4cpu is the flavor name and the remaining parameters set the other properties for the flavor.

  3. Deploy an instance:

    # openstack server create --flavor m1.medium_huge_4cpu --availability-zone dpdk --image rhel_7.3 --nic  net-id=<net-id>

    Here, m1.medium_huge_4cpu is the flavor name or ID, dpdk is the availability zone for the server, rhel_7.3 is the image (name or ID) used to create the instance, and <net-id> is the ID of the network to attach the instance to.

You have now deployed an instance for the OVS-DPDK with NFV use case.

To use multiqueue with OVS-DPDK, you need to include additional steps in the above procedure. Before you create a flavor, perform the following steps:

  1. Set the image properties:

    # openstack image set --property hw_vif_multiqueue_enabled=true <image-id>

    Here, hw_vif_multiqueue_enabled=true is a property on this image to enable multiqueue, <image-id> is the name or ID of the image to modify.

  2. Set additional flavor properties:

    # openstack flavor set --property hw:vif_multiqueue_enabled=true m1.vm_mq

    Here, m1.vm_mq is the flavor ID or name, and the remaining options enable multiqueue for the flavor.
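
To verify that multiqueue is in effect (a quick check inside the guest, not part of the original procedure), you can display the channel configuration of the instance vNIC; eth0 is a placeholder interface name:

    # ethtool -l eth0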

3.4. Troubleshooting the Configuration

This section describes the steps to troubleshoot the OVS-DPDK configuration.

  1. Review the bridge configuration and confirm that the bridge was created with the datapath_type=netdev. For example:

    # ovs-vsctl list bridge br0
    _uuid               : bdce0825-e263-4d15-b256-f01222df96f3
    auto_attach         : []
    controller          : []
    datapath_id         : "00002608cebd154d"
    datapath_type       : netdev
    datapath_version    : "<built-in>"
    external_ids        : {}
    fail_mode           : []
    flood_vlans         : []
    flow_tables         : {}
    ipfix               : []
    mcast_snooping_enable: false
    mirrors             : []
    name                : "br0"
    netflow             : []
    other_config        : {}
    ports               : [52725b91-de7f-41e7-bb49-3b7e50354138]
    protocols           : []
    rstp_enable         : false
    rstp_status         : {}
    sflow               : []
    status              : {}
    stp_enable          : false
  2. Review the OVS service by confirming that the neutron-openvswitch-agent service is configured to start automatically:

    # systemctl status neutron-openvswitch-agent.service
    neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
    Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2015-11-23 14:49:31 AEST; 25min ago

    If the service is having trouble starting, you can view any related messages:

    # journalctl -t neutron-openvswitch-agent.service
  3. Confirm that the OVS-DPDK PMD threads are pinned to the intended CPUs through the PMD CPU mask. With hyper-threading (HT), use sibling CPUs.

    For example, take CPU4:

    # cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list
    4,20

    So, to use CPUs 4 and 20, set a mask with bits 4 and 20 enabled (0x10 + 0x100000 = 0x100010); a mask calculation sketch follows this procedure:

    # ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100010

    Display their status:

    # tuna -t ovs-vswitchd -CP
                          thread       ctxt_switches
        pid SCHED_ rtpri affinity voluntary nonvoluntary       cmd
    3161	OTHER 	0    	6	765023      	614	ovs-vswitchd
    3219   OTHER 	0    	6     	1        	0   	handler24
    3220   OTHER 	0    	6     	1        	0   	handler21
    3221   OTHER 	0    	6     	1        	0   	handler22
    3222   OTHER 	0    	6     	1        	0   	handler23
    3223   OTHER 	0    	6     	1        	0   	handler25
    3224   OTHER 	0    	6     	1        	0   	handler26
    3225   OTHER 	0    	6     	1        	0   	handler27
    3226   OTHER 	0    	6     	1        	0   	handler28
    3227   OTHER 	0    	6     	2        	0   	handler31
    3228   OTHER 	0    	6     	2        	4   	handler30
    3229   OTHER 	0    	6     	2        	5   	handler32
    3230   OTHER 	0    	6	953538      	431   revalidator29
    3231   OTHER 	0    	6   1424258      	976   revalidator33
    3232   OTHER 	0    	6   1424693      	836   revalidator34
    3233   OTHER 	0    	6	951678      	503   revalidator36
    3234   OTHER 	0    	6   1425128      	498   revalidator35
    *3235   OTHER 	0    	4	151123       	51       	pmd37*
    *3236   OTHER 	0   	20	298967       	48       	pmd38*
    3164   OTHER 	0    	6 	47575        	0  dpdk_watchdog3
    3165   OTHER 	0    	6	237634        	0   vhost_thread1
    3166   OTHER 	0    	6  	3665        	0       	urcu2
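
The pmd-cpu-mask value is a bitmask with one bit set per PMD core. If you prefer not to compute it by hand, the following one-liner (a sketch, not part of the original procedure; adjust the core list to your environment) prints the mask for cores 4 and 20:

    # python -c 'print(hex(sum(1 << core for core in [4, 20])))'
    0x100010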

Chapter 4. Finding More Information

The following table includes additional Red Hat documentation for reference:

The Red Hat OpenStack Platform documentation suite can be found here: Red Hat OpenStack Platform 10 Documentation Suite

Table 4.1. List of Available Documentation

Component        Reference

Red Hat Enterprise Linux

Red Hat OpenStack Platform is supported on Red Hat Enterprise Linux 7.3. For information on installing Red Hat Enterprise Linux, see the corresponding installation guide at: Red Hat Enterprise Linux Documentation Suite.

Red Hat OpenStack Platform

To install OpenStack components and their dependencies, use the Red Hat OpenStack Platform director. The director uses a basic OpenStack installation as the undercloud to install, configure and manage the OpenStack nodes in the final overcloud. Be aware that you will need one extra host machine for the installation of the undercloud, in addition to the environment necessary for the deployed overcloud. For detailed instructions, see Red Hat OpenStack Platform Director Installation and Usage.

For information on configuring advanced features for a Red Hat OpenStack Platform enterprise environment using the Red Hat OpenStack Platform director, such as network isolation, storage configuration, SSL communication, and general configuration methods, see Advanced Overcloud Customization.

You can also manually install the Red Hat OpenStack Platform components; see Manual Installation Procedures.

NFV Documentation

For a high level overview of the NFV concepts, see the Network Functions Virtualization Product Guide.

For more details on planning your Red Hat OpenStack Platform deployment with NFV, see the Network Functions Virtualization Planning Guide.

Appendix A. Sample YAML Files

This section provides sample YAML files as a reference.

A.1. Sample SR-IOV YAML Files

A.1.1. network-environment.yaml

resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the defaults.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  # First boot and Kernel Args
  OS::TripleO::NodeUserData: first-boot.yaml
  OS::TripleO::NodeExtraConfigPost: post-install.yaml

  # Network isolation configuration
  # Service section
  # If a service should be disabled, use the following example
  # OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::StorageMgmt: OS::Heat::None
  OS::TripleO::Network::Storage: OS::Heat::None

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for service virtual IPs for the controller role
  OS::TripleO::Controller::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml

parameter_defaults:
  # Customize all these values to match the local environment
  InternalApiNetCidr: 10.10.10.0/24
  TenantNetCidr: 10.10.2.0/24
  ExternalNetCidr: 172.20.12.112/28
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  InternalApiAllocationPools: [{'start': '10.10.10.10', 'end': '10.10.10.200'}]
  TenantAllocationPools: [{'start': '172.10.2.100', 'end': '172.10.2.200'}]
  # Use an External allocation pool which will leave room for floating IPs
  ExternalAllocationPools: [{'start': '172.20.12.114', 'end': '172.20.12.125'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 172.20.12.126
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.2.254
  # Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  InternalApiNetworkVlanID: 10
  TenantNetworkVlanID: 11
  ExternalNetworkVlanID: 12
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.4.4","8.8.8.8"]
  # May set to br-ex if using floating IPs only on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: ''
  # The tenant network type for Neutron (vlan or vxlan).
  NeutronNetworkType: 'vlan'
  # The OVS logical->physical bridge mappings to use.
  NeutronBridgeMappings: 'datacentre:br-ex,sriov:br-sriov'
  # The Neutron ML2 and OpenVSwitch vlan mapping range to support.
  NeutronNetworkVLANRanges: 'datacentre:22:22,sriov:25:25'
  # Nova flavor to use.
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  # Number of nodes to deploy.
  ControllerCount: 1
  ComputeCount: 1

  # Sets overcloud nodes custom names
  # http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#custom-hostnames
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHostnameFormat: 'compute-%index%'
  CephStorageHostnameFormat: 'ceph-%index%'
  ObjectStorageHostnameFormat: 'swift-%index%'

  #######################
  # SRIOV configuration #
  #######################
  # The mechanism drivers for the Neutron tenant network.
  NeutronMechanismDrivers: "openvswitch,sriovnicswitch"
  # List of PCI Passthrough whitelist parameters.
  # Use ONE of the following examples.
  # Example 1:
  # NovaPCIPassthrough:
  #   - vendor_id: "8086"
  #     product_id: "154c"
  #     address: "0000:05:00.0" - (optional)
  #     physical_network: "datacentre"
  #
  # Example 2:
  # NovaPCIPassthrough:
  #   - devname: "p6p1"
  #     physical_network: "tenant"
  NovaPCIPassthrough:
    - devname: "p6p1"
      physical_network: "sriov"
  # List of supported pci vendor devices in the format VendorID:ProductID.
  NeutronSupportedPCIVendorDevs: ['8086:154d', '8086:10ed']
  # List of <physical_network>:<physical device>
  # All physical networks listed in network_vlan_ranges on the server
  # should have mappings to appropriate interfaces on each agent.
  NeutronPhysicalDevMappings: "sriov:p6p1"
  # Provide the list of VFs to be reserved for each SR-IOV interface.
  # Format "<interface_name1>:<numvfs1>","<interface_name2>:<numvfs2>"
  # Example "eth1:4096","eth2:128"
  NeutronSriovNumVFs: "p6p1:5"

  # List of scheduler available filters
  NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter"]
  # An array of filters used by Nova to filter a node.These filters will be applied in the order they are listed,
  # so place your most restrictive filters first to make the filtering process more efficient.
  NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
  # Kernel arguments for Compute node
  ComputeKernelArgs: "intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12"
  # A list or range of physical CPU cores to be tuned.
  # The given args will be appended to the tuned cpu-partitioning profile.
  # Ex. HostCpusList: '4-12' will tune cores from 4-12
  HostCpusList: "2,4,6,8,10,12,14,18,20,22,24,26,28,30"

A.1.2. first-boot.yaml

heat_template_version: 2014-10-16

description: >
  This is an example showing how you can do firstboot configuration
  of the nodes via cloud-init.  To enable this, replace the default
  mapping of OS::TripleO::NodeUserData in ../overcloud_resource_registry*

parameters:
  ComputeKernelArgs:
    description: >
      Space separated list of kernel args to be appended to grub.
      The given args will be appended to existing args of GRUB_CMDLINE_LINUX in file /etc/default/grub
      Example: "intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=1"
    type: string
    default: ""
  ComputeHostnameFormat:
    type: string
    default: ""
  HostCpusList:
    description: >
      A list or range of physical CPU cores to be tuned.
      The given args will be appended to the tuned cpu-partitioning profile.
      Ex. HostCpusList: '4-12' will tune cores from 4-12
    type: string
    default: ""


resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: boot_config}
      - config: {get_resource: compute_kernel_args}

  boot_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        yum_repos:
          # Overcloud images deployed without any repos.
          # In order to install required tuned profile and activate it, we should create relevant FDP repos.
          <repo-file-name>:
            name: <repo-name>
            baseurl: <relevant-baserepo-url>
            enabled: 1
            gpgcheck: 0
          <repo-file-name>:
            name: <repo-name>
            baseurl: <relevant-baserepo-url>
            enabled: 1
            gpgcheck: 0


  # Verify the logs on /var/log/cloud-init.log on the overcloud node
  compute_kernel_args:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: only %index% and %stackname% are the variables in Host name format
              FORMAT=$(echo $FORMAT | sed  's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              # Install the tuned package
              yum install -y tuned-profiles-cpu-partitioning

              tuned_conf_path="/etc/tuned/cpu-partitioning-variables.conf"
              if [ -n "$TUNED_CORES" ]; then
                grep -q "^isolated_cores" $tuned_conf_path
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^isolated_cores=.*/isolated_cores=$TUNED_CORES/' $tuned_conf_path
                else
                  echo "isolated_cores=$TUNED_CORES" >> $tuned_conf_path
                fi
                tuned-adm profile cpu-partitioning
              fi

              sed 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 $KERNEL_ARGS"/g' -i /etc/default/grub ;
              grub2-mkconfig -o /etc/grub2.cfg

              reboot
            fi
          params:
            $KERNEL_ARGS: {get_param: ComputeKernelArgs}
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}
            $TUNED_CORES: {get_param: HostCpusList}

outputs:
  # This means get_resource from the parent template will get the userdata, see:
  # http://docs.openstack.org/developer/heat/template_guide/composition.html#making-your-template-resource-more-transparent
  # Note this is new-for-kilo, an alternative is returning a value then using
  # get_attr in the parent template instead.
  OS::stack_id:
    value: {get_resource: userdata}

A.1.3. post-install.yaml

heat_template_version: 2014-10-16

description: >
  Example extra config for post-deployment

parameters:
  servers:
    type: json
  ComputeHostnameFormat:
    type: string
    default: ""

resources:
  ExtraDeployments:
    type: OS::Heat::StructuredDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
      # Do this on CREATE/UPDATE (which is actually the default)
      actions: ['CREATE', 'UPDATE']

  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: only %index% and %stackname% are the variables in Host name format
              FORMAT=$(echo $FORMAT | sed  's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              tuned_service=/usr/lib/systemd/system/tuned.service
              grep -q "network.target" $tuned_service
              if [ "$?" -eq 0 ]; then
                sed -i '/After=.*/s/network.target//g' $tuned_service
              fi
              grep -q "Before=.*network.target" $tuned_service
              if [ ! "$?" -eq 0 ]; then
                grep -q "Before=.*" $tuned_service
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
                else
                  sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
                fi
              fi

              systemctl daemon-reload
              systemctl restart openvswitch
            fi
          params:
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}

A.1.4. controller.yaml

heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config to configure VLANs for the
  controller role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: ''
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: ''
    description: Vlan ID for the internal_api network traffic.
    type: number
  TenantNetworkVlanID:
    default: ''
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 23
    description: Vlan ID for the management network traffic.
    type: number
  ExternalInterfaceDefaultRoute:
    default: ''
    description: default route for the external network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: ovs_bridge
              name: br-isolated
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
              members:
                -
                  type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
                -
                  type: vlan
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: TenantIpSubnet}
            -
              type: ovs_bridge
              name: br-ex
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              members:
                -
                  type: interface
                  name: nic3
                  # force the MAC address of the bridge to this interface
                -
                  type: vlan
                  vlan_id: {get_param: ExternalNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: ExternalIpSubnet}
                  routes:
                    -
                      default: true
                      next_hop: {get_param: ExternalInterfaceDefaultRoute}
            -
              type: ovs_bridge
              name: br-sriov
              use_dhcp: false
              members:
                -
                  type: interface
                  name: nic4

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

A.1.5. compute.yaml

heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config to configure VLANs for the
  compute role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  InternalApiNetworkVlanID:
    default: ''
    description: Vlan ID for the internal_api network traffic.
    type: number
  TenantNetworkVlanID:
    default: ''
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 23
    description: Vlan ID for the management network traffic.
    type: number
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string
  ExternalInterfaceDefaultRoute:
    default: ''
    description: default route for the external network
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: ovs_bridge
              name: br-isolated
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
               -
                 ip_netmask:
                   list_join:
                     - '/'
                     - - {get_param: ControlPlaneIp}
                       - {get_param: ControlPlaneSubnetCidr}
              routes:
               -
                 ip_netmask: 169.254.169.254/32
                 next_hop: {get_param: EC2MetadataIp}
               -
                 next_hop: {get_param: ControlPlaneDefaultRoute}
              members:
                -
                  type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
                -
                  type: vlan
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: InternalApiIpSubnet}
                -
                  type: vlan
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    -
                      ip_netmask: {get_param: TenantIpSubnet}
            -
              type: interface
              name: nic3
              use_dhcp: false
              defroute: false

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

A.1.6. overcloud_deploy.sh

#!/bin/bash

cat >"$HOME/extra_env.yaml"<<EOF
---
parameter_defaults:
    Debug: true
EOF

openstack overcloud deploy --debug \
--templates \
--environment-file "$HOME/extra_env.yaml" \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
-e /home/stack/<relative-directory>/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
--log-file overcloud_install.log &> overcloud_install.log

A.2. Sample OVS-DPDK YAML Files

A.2.1. network-environment.yaml

resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the defaults.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeUserData: first-boot.yaml
  OS::TripleO::NodeExtraConfigPost: post-install.yaml

  # Network isolation configuration
  # Service section
  # If a service should be disabled, use the following example
  # OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::StorageMgmt: OS::Heat::None
  OS::TripleO::Network::Storage: OS::Heat::None

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for service virtual IPs for the controller role
  OS::TripleO::Controller::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml

parameter_defaults:
  # Customize all these values to match the local environment
  InternalApiNetCidr: 10.10.10.0/24
  TenantNetCidr: 10.10.2.0/24
  ExternalNetCidr: 172.20.12.112/28
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  InternalApiAllocationPools: [{'start': '10.10.10.10', 'end': '10.10.10.200'}]
  TenantAllocationPools: [{'start': '172.10.2.100', 'end': '172.10.2.200'}]
  # Use an External allocation pool which will leave room for floating IPs
  ExternalAllocationPools: [{'start': '172.20.12.114', 'end': '172.20.12.125'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 172.20.12.126
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.2.254
  # Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  InternalApiNetworkVlanID: 10
  TenantNetworkVlanID: 11
  ExternalNetworkVlanID: 12
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.4.4","8.8.8.8"]
  # May set to br-ex if using floating IPs only on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: ''
  # The tenant network type for Neutron (vlan or vxlan).
  NeutronNetworkType: 'vlan'
  # The OVS logical->physical bridge mappings to use.
  NeutronBridgeMappings: 'datacentre:br-ex,dpdk:br-link'
  # The Neutron ML2 and OpenVSwitch vlan mapping range to support.
  NeutronNetworkVLANRanges: 'datacentre:22:22,dpdk:25:25'
  # Nova flavor to use.
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  # Number of nodes to deploy.
  ControllerCount: 1
  ComputeCount: 1

  # Sets overcloud nodes custom names
  # http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#custom-hostnames
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHostnameFormat: 'compute-%index%'
  CephStorageHostnameFormat: 'ceph-%index%'
  ObjectStorageHostnameFormat: 'swift-%index%'

  ##########################
  # OVS DPDK configuration #
  ##########################
  ## NeutronDpdkCoreList and NeutronDpdkMemoryChannels are REQUIRED settings.
  ## Attempting to deploy DPDK without appropriate values will cause deployment to fail or lead to unstable deployments.
  # List of cores to be used for DPDK Poll Mode Driver
  NeutronDpdkCoreList: "'2'"
  # Number of memory channels to be used for DPDK
  NeutronDpdkMemoryChannels: "4"
  # NeutronDpdkSocketMemory
  NeutronDpdkSocketMemory: "1024"
  # NeutronDpdkDriverType
  NeutronDpdkDriverType: "vfio-pci"
  # Datapath type for ovs bridges
  NeutronDatapathType: "netdev"
  # The vhost-user socket directory for OVS
  NeutronVhostuserSocketDir: "/var/run/openvswitch"

  # Reserved RAM for host processes
  NovaReservedHostMemory: 2048
  # A list or range of physical CPU cores to reserve for virtual machine processes.
  # Example: NovaVcpuPinSet: ['4-12','^8'] will reserve cores from 4-12 excluding 8
  NovaVcpuPinSet: "3-15"
  # An array of filters used by Nova to filter a node.These filters will be applied in the order they are listed,
  # so place your most restrictive filters first to make the filtering process more efficient.
  NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter"
  # Kernel arguments for Compute node
  ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
  # A list or range of physical CPU cores to be tuned.
  # The given args will be appended to the tuned cpu-partitioning profile.
  # Ex. HostCpusList: '4-12' will tune cores from 4-12
  HostCpusList: "2,4,6,8,10,12,14,18,20,22,24,26,28,30"

A.2.2. first-boot.yaml

heat_template_version: 2014-10-16

description: >
  This is an example showing how you can do firstboot configuration
  of the nodes via cloud-init.  To enable this, replace the default
  mapping of OS::TripleO::NodeUserData in ../overcloud_resource_registry*

parameters:
  ComputeKernelArgs:
    description: >
      Space-separated list of kernel arguments to be appended to grub.
      The given args will be appended to existing args of GRUB_CMDLINE_LINUX in file /etc/default/grub
      Example: "intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=1"
    type: string
    default: ""
  ComputeHostnameFormat:
    type: string
    default: ""
  HostCpusList:
    description: >
      A list or range of physical CPU cores to be tuned.
      The given args will be appended to the tuned cpu-partitioning profile.
      Ex. HostCpusList: '4-12' will tune cores from 4-12
    type: string
    default: ""

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: boot_config}
      - config: {get_resource: compute_kernel_args}

  boot_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        yum_repos:
          # Overcloud images are deployed without any repositories.
          # To install the required tuned profile and activate it, create the relevant repositories here.
          <repo-file-name-1>:
            name: <repo-name-1>
            baseurl: <relevant-baserepo-url>
            enabled: 1
            gpgcheck: 0
          <repo-file-name-2>:
            name: <repo-name-2>
            baseurl: <relevant-baserepo-url>
            enabled: 1
            gpgcheck: 0

  # Check the logs in /var/log/cloud-init.log on the overcloud node
  compute_kernel_args:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: %index% and %stackname% are the only variables in the hostname format
              FORMAT=$(echo $FORMAT | sed  's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              # Install the tuned package
              yum install -y tuned-profiles-cpu-partitioning

              tuned_conf_path="/etc/tuned/cpu-partitioning-variables.conf"
              if [ -n "$TUNED_CORES" ]; then
                grep -q "^isolated_cores" $tuned_conf_path
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^isolated_cores=.*/isolated_cores=$TUNED_CORES/' $tuned_conf_path
                else
                  echo "isolated_cores=$TUNED_CORES" >> $tuned_conf_path
                fi
                tuned-adm profile cpu-partitioning
              fi

              sed 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 $KERNEL_ARGS"/g' -i /etc/default/grub ;
              grub2-mkconfig -o /etc/grub2.cfg

              reboot
            fi
          params:
            $KERNEL_ARGS: {get_param: ComputeKernelArgs}
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}
            $TUNED_CORES: {get_param: HostCpusList}

outputs:
  # This means get_resource from the parent template will get the userdata, see:
  # http://docs.openstack.org/developer/heat/template_guide/composition.html#making-your-template-resource-more-transparent
  # Note this is new-for-kilo, an alternative is returning a value then using
  # get_attr in the parent template instead.
  OS::stack_id:
    value: {get_resource: userdata}
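
After the first-boot script runs and the node reboots, you can confirm that the kernel arguments and the tuned profile were applied. The following commands are a minimal check, assuming the cpu-partitioning profile and the ComputeKernelArgs shown in the network-environment.yaml example; run them on the compute node:

# The kernel arguments appended to GRUB_CMDLINE_LINUX should appear here
cat /proc/cmdline
# The hugepages requested through ComputeKernelArgs should be allocated
grep Huge /proc/meminfo
# The active tuned profile should be cpu-partitioning
tuned-adm active
# The isolated cores written from HostCpusList
grep isolated_cores /etc/tuned/cpu-partitioning-variables.conf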

A.2.3. post-install.yaml

heat_template_version: 2014-10-16

description: >
  Example extra config for post-deployment

parameters:
  servers:
    type: json
  ComputeOvsDpdkHostnameFormat:
    type: string
    default: ""
  NeutronDpdkCoreList:
    type: string

resources:
  ExtraDeployments:
    type: OS::Heat::StructuredDeployments
    properties:
      servers:  {get_param: servers}
      config: {get_resource: ExtraConfig}
      # Do this on CREATE/UPDATE (which is actually the default)
      actions: ['CREATE', 'UPDATE']

  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/bash
            core_mask=''
            get_core_mask()
            {
                list=$1
                declare -a bm
                bm=(0 0 0 0 0 0 0 0 0 0)
                max_idx=0
                for core in $(echo $list | sed 's/,/ /g')
                do
                    index=$(($core/32))
                    temp=$((1<<$core))
                    bm[$index]=$((${bm[$index]} | $temp))
                    if [ $max_idx -lt $index ]; then
                       max_idx=$index
                    fi
                done

                printf -v core_mask "%x" "${bm[$max_idx]}"
                for ((i=$max_idx-1;i>=0;i--));
                do
                    printf -v hex "%08x" "${bm[$i]}"
                    core_mask+=$hex
                done
                return 0
            }

            set -x
            FORMAT=$COMPUTE_OVS_DPDK_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: %index% and %stackname% are the only variables in the hostname format
              FORMAT=$(echo $FORMAT | sed  's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              if [ -f /usr/lib/systemd/system/openvswitch-nonetwork.service ]; then
                ovs_service_path="/usr/lib/systemd/system/openvswitch-nonetwork.service"
              elif [ -f /usr/lib/systemd/system/ovs-vswitchd.service ]; then
                ovs_service_path="/usr/lib/systemd/system/ovs-vswitchd.service"
              fi
              grep -q "RuntimeDirectoryMode=.*" $ovs_service_path
              if [ "$?" -eq 0 ]; then
                sed -i 's/RuntimeDirectoryMode=.*/RuntimeDirectoryMode=0775/' $ovs_service_path
              else
                echo "RuntimeDirectoryMode=0775" >> $ovs_service_path
              fi
              grep -Fxq "Group=qemu" $ovs_service_path
              if [ ! "$?" -eq 0 ]; then
                echo "Group=qemu" >> $ovs_service_path
              fi
              grep -Fxq "UMask=0002" $ovs_service_path
              if [ ! "$?" -eq 0 ]; then
                echo "UMask=0002" >> $ovs_service_path
              fi
              ovs_ctl_path='/usr/share/openvswitch/scripts/ovs-ctl'
              grep -q "umask 0002 \&\& start_daemon \"\$OVS_VSWITCHD_PRIORITY\"" $ovs_ctl_path
              if [ ! "$?" -eq 0 ]; then
                sed -i 's/start_daemon \"\$OVS_VSWITCHD_PRIORITY.*/umask 0002 \&\& start_daemon \"$OVS_VSWITCHD_PRIORITY\" \"$OVS_VSWITCHD_WRAPPER\" \"$@\"/' $ovs_ctl_path
              fi

              tuned_service=/usr/lib/systemd/system/tuned.service
              grep -q "network.target" $tuned_service
              if [ "$?" -eq 0 ]; then
                sed -i '/After=.*/s/network.target//g' $tuned_service
              fi
              grep -q "Before=.*network.target" $tuned_service
              if [ ! "$?" -eq 0 ]; then
                grep -q "Before=.*" $tuned_service
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
                else
                  sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
                fi
              fi

              get_core_mask $DPDK_PMD_CORES
              ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=$core_mask
              sed -ri '/^DPDK_OPTIONS/s/-l [0-9\,]+ /-l 0 /' /etc/sysconfig/openvswitch

              systemctl daemon-reload
              systemctl restart openvswitch
            fi
          params:
            $DPDK_PMD_CORES: {get_param: NeutronDpdkCoreList}
            $COMPUTE_OVS_DPDK_HOSTNAME_FORMAT: {get_param: ComputeOvsDpdkHostnameFormat}
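
The get_core_mask function converts the comma-separated NeutronDpdkCoreList into the hexadecimal bitmask that OVS expects for pmd-cpu-mask: each listed core sets the corresponding bit, so a core list of '2' produces the mask 4 (binary 100), and '2,3' produces c (binary 1100). A minimal sketch to confirm the result on the compute node after this script has run:

# The PMD CPU mask written by the post-install script
ovs-vsctl get Open_vSwitch . other_config:pmd-cpu-mask
# The lcore list rewritten to '-l 0' in DPDK_OPTIONS
grep '^DPDK_OPTIONS' /etc/sysconfig/openvswitch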

A.2.4. controller.yaml

heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config to configure VLANs for the
  controller role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: ''
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: ''
    description: Vlan ID for the internal_api network traffic.
    type: number
  TenantNetworkVlanID:
    default: ''
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 23
    description: Vlan ID for the management network traffic.
    type: number
  ExternalInterfaceDefaultRoute:
    default: ''
    description: default route for the external network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              defroute: false
            -
              type: linux_bridge
              name: br-isolated
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
              members:
                -
                  type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
            -
              type: vlan
              vlan_id: {get_param: InternalApiNetworkVlanID}
              device: br-isolated
              addresses:
                -
                  ip_netmask: {get_param: InternalApiIpSubnet}
            -
              type: vlan
              vlan_id: {get_param: TenantNetworkVlanID}
              device: br-isolated
              addresses:
                -
                  ip_netmask: {get_param: TenantIpSubnet}
            -
              type: linux_bridge
              name: br-ex
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              members:
                -
                  type: interface
                  name: nic3
            -
              type: vlan
              vlan_id: {get_param: ExternalNetworkVlanID}
              device: br-ex
              addresses:
                -
                  ip_netmask: {get_param: ExternalIpSubnet}
              routes:
                -
                  default: true
                  next_hop: {get_param: ExternalInterfaceDefaultRoute}
            -
              type: ovs_bridge
              name: br-link
              use_dhcp: false
              members:
                -
                  type: interface
                  name: nic4

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}
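
Once the controller is deployed, os-net-config applies this configuration on the node. The following commands are an illustrative check; the config.json path assumes the default os-net-config location:

# The rendered network configuration consumed by os-net-config
cat /etc/os-net-config/config.json
# The Linux bridges and VLAN devices created from this template
ip -d addr show br-isolated
ip -d addr show br-ex
# The OVS bridge (br-link) mapped to the dpdk physical network in NeutronBridgeMappings
ovs-vsctl show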

A.2.5. compute.yaml

heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config to configure VLANs for the
  compute role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  InternalApiNetworkVlanID:
    default: ''
    description: Vlan ID for the internal_api network traffic.
    type: number
  TenantNetworkVlanID:
    default: ''
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 23
    description: Vlan ID for the management network traffic.
    type: number
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string
  ExternalInterfaceDefaultRoute:
    default: ''
    description: default route for the external network
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            -
              type: interface
              name: nic1
              use_dhcp: false
              defroute: false
            -
              type: linux_bridge
              name: br-isolated
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
                -
                  next_hop: {get_param: ControlPlaneDefaultRoute}
              members:
                -
                  type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
            -
              type: vlan
              vlan_id: {get_param: InternalApiNetworkVlanID}
              device: br-isolated
              addresses:
                -
                  ip_netmask: {get_param: InternalApiIpSubnet}
            -
              type: vlan
              vlan_id: {get_param: TenantNetworkVlanID}
              device: br-isolated
              addresses:
                -
                  ip_netmask: {get_param: TenantIpSubnet}
            -
              type: ovs_user_bridge
              name: br-link
              use_dhcp: false
              members:
                -
                  type: ovs_dpdk_port
                  name: dpdk0
                  members:
                    -
                      type: interface
                      name: nic3

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}
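
On the deployed compute node, the ovs_user_bridge and ovs_dpdk_port entries above become an OVS bridge with the user space (netdev) datapath and a DPDK port. A minimal sketch to confirm, assuming the names br-link and dpdk0 from this template:

# br-link should report the netdev datapath
ovs-vsctl get Bridge br-link datapath_type
# dpdk0 should be an interface of type dpdk
ovs-vsctl get Interface dpdk0 type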

A.2.6. overcloud_deploy.sh

#!/bin/bash

cat >"$HOME/extra_env.yaml"<<EOF
---
parameter_defaults:
    Debug: true
EOF

openstack overcloud deploy --debug \
--templates \
--environment-file "$HOME/extra_env.yaml" \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
-e /home/stack/<relative-directory>/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
--log-file overcloud_install.log &> overcloud_install.log
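
The deployment runs for some time; you can follow its progress from the undercloud while the script is running. The following commands are illustrative and assume the default stack name (overcloud) and the stack user's credentials file:

# Follow the log written by overcloud_deploy.sh
tail -f overcloud_install.log
# Check the Heat stack status from the undercloud
source ~/stackrc
openstack stack list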

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.