Chapter 8. Configuring an OVS-DPDK deployment

This section describes how to deploy DPDK with Open vSwitch (OVS-DPDK) within the Red Hat OpenStack Platform environment. The overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core Heat templates on the director node.

You must install and configure the undercloud before you can deploy the overcloud. See the Director Installation and Usage Guide for details.

Important

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK.

Note

Do not edit or change isolated_cores or other values in /etc/tuned/cpu-partitioning-variables.conf that are modified by these director heat templates.

8.1. Deriving DPDK parameters with workflows

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

See Section 7.2, “Overview of workflows and derived parameters” for an overview of the Mistral workflow for DPDK.

Prerequisites

You must have Bare Metal introspection, including hardware inspection extras (inspection_extras), enabled to provide the data retrieved by this workflow. Hardware inspection extras are enabled by default. See Inspecting the Hardware of Nodes.

Define the Workflows and Input Parameters for DPDK

The following lists the input parameters you can provide to the OVS-DPDK workflows:

num_phy_cores_per_numa_node_for_pmd
This input parameter specifies the minimum number of physical cores required for the NUMA node associated with the DPDK NIC. One physical core is assigned to each NUMA node that is not associated with a DPDK NIC. Set this parameter to 1.
huge_page_allocation_percentage
This input parameter specifies the required percentage of total memory (excluding NovaReservedHostMemory) that can be configured as huge pages. The KernelArgs parameter is derived from the calculated huge pages, based on the specified huge_page_allocation_percentage. Set this parameter to 50.

The workflows use these input parameters along with the bare-metal introspection details to calculate appropriate DPDK parameter values.
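
For example, on a hypothetical Compute node with 65536 MB of RAM and NovaReservedHostMemory set to 4096, a huge_page_allocation_percentage of 50 yields (65536 - 4096) * 0.5 = 30720 MB, which corresponds to 30 huge pages of 1 GB in the derived KernelArgs. These numbers are illustrative only; the workflow uses the actual introspection data from your nodes.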

To define the workflows and input parameters for DPDK:

  1. Copy the tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml file to a local directory and set the input parameters to suit your environment.

      workflow_parameters:
        tripleo.derive_params.v1.derive_parameters:
          # DPDK Parameters #
          # Specifies the minimum number of CPU physical cores to be allocated for DPDK
          # PMD threads. The actual allocation is based on the network configuration: if
          # a DPDK port is associated with a NUMA node, this configuration
          # is used; otherwise, 1 core is assigned.
          num_phy_cores_per_numa_node_for_pmd: 1
          # Amount of memory to be configured as huge pages, as a percentage. Of the
          # total available memory (excluding the NovaReservedHostMemory), the
          # specified percentage of the remainder is configured as huge pages.
          huge_page_allocation_percentage: 50
  2. Deploy the overcloud with the update-plan-only parameter to calculate the derived parameters.

      $ openstack overcloud deploy --templates --update-plan-only \
        -r /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/roles_data.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
        -e /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/docker-images.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
        -e /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/network-environment.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
        -p /home/stack/plan-environment-derived-params.yaml

The output of this command shows the derived results, which are also merged into the plan-environment.yaml file.

Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: 55ba73f2-2ef4-4da1-94e9-eae2fdc35535
Waiting for messages on queue 472a4180-e91b-4f9e-bd4c-1fbdfbcf414f with no timeout.
Removing the current plan files
Uploading new plan files
Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan. Execution ID: 7fa995f3-7e0f-4c9e-9234-dd5292e8c722
Plan updated.
Processing templates in the directory /tmp/tripleoclient-SY6RcY/tripleo-heat-templates
Invoking workflow (tripleo.derive_params.v1.derive_parameters) specified in plan-environment file
Started Mistral Workflow tripleo.derive_params.v1.derive_parameters. Execution ID: 2d4572bf-4c5b-41f8-8981-c84a363dd95b
Workflow execution is completed. result:
ComputeOvsDpdkParameters:
 IsolCpusList: 1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31
 KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on
   isolcpus=1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31
 NovaReservedHostMemory: 4096
 NovaVcpuPinSet: 2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31
 OvsDpdkCoreList: 0,16,8,24
 OvsDpdkMemoryChannels: 4
 OvsDpdkSocketMemory: 1024,1024
 OvsPmdCoreList: 1,17,9,25
Note

The OvsDpdkMemoryChannels parameter cannot be derived from introspection details. In most cases, set this value to 4.
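
The number of memory channels depends on the CPU and the DIMM population of the host. As an illustrative check, not an authoritative channel count, you can review the DIMM layout on the Compute node with dmidecode and compare it with your hardware documentation:

    # dmidecode -t memory | grep -E "Locator|Size"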

Deploy the Overcloud with the Derived Parameters

To deploy the overcloud with these derived parameters:

  1. Copy the derived parameters from the plan-environment.yaml to the network-environment.yaml file.

      # DPDK compute node.
      ComputeOvsDpdkParameters:
        KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
        NovaVcpuPinSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31']
        NovaReservedHostMemory: 4096
        OvsDpdkSocketMemory: "1024,1024"
        OvsDpdkMemoryChannels: "4"
        OvsDpdkCoreList: "0,16,8,24"
        OvsPmdCoreList: "1,17,9,25"
    Note

    You must assign at least one CPU, with its sibling thread, on each NUMA node, whether or not a DPDK NIC is present on that node, to the DPDK PMD to avoid failures when creating guest instances.

    Note

    These parameters apply to the specific role (ComputeOvsDpdk). You can apply these parameters globally, but role-specific parameters override any global values.

  2. Deploy the overcloud.
      #!/bin/bash

      openstack overcloud deploy \
      --templates \
      -r /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/roles_data.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
      -e /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/network-environment.yaml
Note

In a cluster with Compute, ComputeOvsDpdk, and ComputeSriov roles, the existing derive parameters workflow applies the formula only to the ComputeOvsDpdk role; the other roles are not affected.
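
If you want to inspect the merged plan-environment.yaml file directly at any point, one way, assuming the default plan name of overcloud, is to download it from the plan container on the undercloud:

    $ swift download overcloud plan-environment.yaml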

8.2. OVS-DPDK topology

With Red Hat OpenStack Platform, you can create custom deployment roles with the composable roles feature, adding or removing services from each role. For more information on composable roles, see Composable Roles and Services.

This image shows a sample OVS-DPDK topology with two bonded ports for the control plane and data plane:

[Figure: OVS-DPDK topology with two bonded ports for the control plane and data plane]

Configuring OVS-DPDK comprises the following tasks:

  • If you use composable roles, copy and modify the roles_data.yaml file to add the custom role for OVS-DPDK.
  • Update the appropriate network-environment.yaml file to include parameters for kernel arguments and DPDK arguments.
  • Update the compute.yaml file to include the bridge for DPDK interface parameters.
  • Update the controller.yaml file to include the same bridge details for DPDK interface parameters.
  • Run the overcloud_deploy.sh script to deploy the overcloud with the DPDK parameters.
Note

This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case. See the Network Functions Virtualization Product Guide and Chapter 2, Hardware requirements to understand the hardware and configuration options.

Before you begin the procedure, ensure that you have the following:

  • Red Hat OpenStack Platform 12 with Red Hat Enterprise Linux 7.4
  • OVS 2.7
  • DPDK 16.11
  • Tested NIC. For a list of tested NICs for NFV, see Section 2.1, “Tested NICs”.
Note

Red Hat OpenStack Platform 12 operates in OVS client mode for OVS-DPDK deployments.

8.3. Configuring two-port OVS-DPDK data plane bonding with VLAN tunnelling

This section describes how to configure OVS-DPDK with two data plane ports in an OVS-DPDK bond.

Important

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.

8.3.1. Generating the ComputeOvsDpdk composable role

Generate roles_data.yaml for the ComputeOvsDpdk role.

# openstack overcloud roles generate --roles-path templates/openstack-tripleo-heat-templates/roles -o roles_data.yaml Controller ComputeOvsDpdk
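
If you want to confirm which roles are available in your roles path before generating the file, the roles list subcommand can help; the exact subcommand and options depend on your python-tripleoclient version, so treat this as an optional check and verify with --help:

    # openstack overcloud roles list --roles-path templates/openstack-tripleo-heat-templates/roles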

8.3.2. Configuring the OVS-DPDK parameters

This example uses the sample network-environment.yaml file.

Important

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.

  1. Add the custom resources for OVS-DPDK under resource_registry:

      resource_registry:
        # Specify the relative or absolute path to the config files that you want to use to override the defaults.
        OS::TripleO::ComputeOvsDpdk::Net::SoftwareConfig: nic-configs/computeovsdpdk.yaml
        OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  2. Under parameter_defaults, disable the tunnel type (set the value to ""), and set the network type to vlan:

    NeutronTunnelTypes: ''
    NeutronNetworkType: 'vlan'
  3. Under parameter_defaults, map the physical network to the virtual bridge:

    NeutronBridgeMappings: 'tenant:br-link0'
  4. Under parameter_defaults, set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range:

    NeutronNetworkVLANRanges: 'tenant:22:22,tenant:25:25'
  5. Under parameter_defaults, set the role-specific parameters for the ComputeOvsDpdk role:

      ##########################
      # OVS DPDK configuration #
      ##########################
      ComputeOvsDpdkParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39"
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "2-19,22-39"
        NovaVcpuPinSet: ['4-19,24-39']
        NovaReservedHostMemory: 4096
        OvsDpdkSocketMemory: "3072,1024"
        OvsDpdkMemoryChannels: "4"
        OvsDpdkCoreList: "0,20,1,21"
        OvsPmdCoreList: "2,22,3,23"
    Note

    You must assign at least one CPU, with its sibling thread, on each NUMA node, whether or not a DPDK NIC is present on that node, to the DPDK PMD to avoid failures when creating guest instances.

Note

These huge pages are consumed by the virtual machines, and also by OVS-DPDK, as set with the OvsDpdkSocketMemory parameter in this procedure. The number of huge pages available for the virtual machines is the total huge pages (set with the hugepages boot parameter) minus OvsDpdkSocketMemory.

You must also add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance.
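
For example, where m1.medium_huge_4cpu is a placeholder flavor name (Section 8.9 walks through the full flavor procedure):

    # openstack flavor set --property hw:mem_page_size=1GB m1.medium_huge_4cpu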

Note

OvsDpdkCoreList and OvsDpdkMemoryChannels are required settings for this procedure. Attempting to deploy DPDK without appropriate values can cause the deployment to fail or result in an unstable deployment.

8.3.3. Configuring the Controller node

This example uses the sample controller.yaml file.

  1. Create the control plane Linux bond for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
  2. Assign VLANs to this Linux bond.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: TenantNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: TenantIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageMgmtNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageMgmtIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: ExternalNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: ExternalIpSubnet
        routes:
        - default: true
          next_hop:
            get_param: ExternalInterfaceDefaultRoute
  3. Create an OVS bridge for access from the floating IPs into the cloud networks.

      - type: ovs_bridge
        name: br-link0
        use_dhcp: false
        mtu: 9000
        members:
        - type: ovs_bond
          name: bond0
          use_dhcp: true
          members:
          - type: interface
            name: nic7
            mtu: 9000
          - type: interface
            name: nic8
            mtu: 9000

8.3.4. Configuring the Compute node for DPDK interfaces

This example uses the sample compute.yaml file.

  1. Create the control plane Linux bond for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
  2. Assign VLANs to this Linux bond.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: TenantNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: TenantIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
  3. Set a bridge with two DPDK ports in an OVS-DPDK bond to link to the controller.

      - type: ovs_user_bridge
        name: br-link0
        use_dhcp: false
        members:
          - type: ovs_dpdk_bond
            name: dpdkbond0
            mtu: 9000
            rx_queue: 2
            members:
              - type: ovs_dpdk_port
                name: dpdk0
                members:
                  - type: interface
                    name: nic7
              - type: ovs_dpdk_port
                name: dpdk1
                members:
                  - type: interface
                    name: nic8
    Note

    To include multiple DPDK devices, repeat the ovs_dpdk_port section for each DPDK device that you want to add.

    Note

    When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.
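
    After deployment, you can confirm that a bridge was created in DPDK (netdev) mode; br-link0 matches the bridge name used in this example:

      # ovs-vsctl get bridge br-link0 datapath_type
      netdev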

8.3.5. Deploying the overcloud

Run the overcloud_deploy.sh script to deploy the overcloud.

#!/bin/bash

openstack overcloud deploy \
--templates \
-r /home/stack/ospd-12-vlan-dpdk-two-ports-ctlplane-dataplane-bonding/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-dpdk-permissions.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
-e /home/stack/ospd-12-vlan-dpdk-two-ports-ctlplane-dataplane-bonding/docker-images.yaml \
-e /home/stack/ospd-12-vlan-dpdk-two-ports-ctlplane-dataplane-bonding/network-environment.yaml \
--log-file overcloud_install.log &> overcloud_install.log

8.4. Configuring OVS-DPDK with VXLAN tunnelling

This section describes how to configure OVS-DPDK with VXLAN tunnelling.

Important

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.

8.4.1. Generating the ComputeOvsDpdk composable role

Generate roles_data.yaml for the ComputeOvsDpdk role.

# openstack overcloud roles generate --roles-path templates/openstack-tripleo-heat-templates/roles -o roles_data.yaml Controller ComputeOvsDpdk

8.4.2. Configuring OVS-DPDK parameters

This example uses the sample network-environment.yaml file.

Important

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.

  1. Add the custom resources for OVS-DPDK under resource_registry:

      resource_registry:
        # Specify the relative or absolute path to the config files that you want to use to override the defaults.
        OS::TripleO::ComputeOvsDpdk::Net::SoftwareConfig: nic-configs/compute-ovs-dpdk.yaml
        OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  2. Under parameter_defaults, set the tunnel type and the tenant type to vxlan:

    NeutronTunnelTypes: 'vxlan'
    NeutronNetworkType: 'vxlan'
  3. Under parameter_defaults, set the role-specific parameters for the ComputeOvsDpdk role:

      ComputeOvsDpdkParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39"
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "2-19,22-39"
        NovaVcpuPinSet: ['4-19,24-39']
        NovaReservedHostMemory: 4096
        OvsDpdkSocketMemory: "3072,1024"
        OvsDpdkMemoryChannels: "4"
        OvsDpdkCoreList: "0,20,1,21"
        OvsPmdCoreList: "2,22,3,23"
    Note

    You must assign at least one CPU, with its sibling thread, on each NUMA node, whether or not a DPDK NIC is present on that node, to the DPDK PMD to avoid failures when creating guest instances.

    Note

    These huge pages are consumed by the virtual machines, and also by OVS-DPDK, as set with the OvsDpdkSocketMemory parameter in this procedure. The number of huge pages available for the virtual machines is the total huge pages (set with the hugepages boot parameter) minus OvsDpdkSocketMemory.

    You must also add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance.

    Note

    OvsDpdkCoreList and OvsDpdkMemoryChannels are required settings for this procedure. Attempting to deploy DPDK without appropriate values can cause the deployment to fail or result in an unstable deployment.

8.4.3. Configuring the Controller node

This example uses the sample controller.yaml file.

  1. Create the control plane Linux bond for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
  2. Assign VLANs to this Linux bond.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageMgmtNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageMgmtIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: ExternalNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: ExternalIpSubnet
        routes:
        - default: true
          next_hop:
            get_param: ExternalInterfaceDefaultRoute
  3. Create an OVS bridge for access from the floating IPs into the cloud networks.

    - type: ovs_bridge
      name: br-link0
      use_dhcp: false
      mtu: 9000
      members:
      - type: ovs_bond
        name: bond0
        use_dhcp: true
        members:
        - type: interface
          name: nic7
          mtu: 9000
        - type: interface
          name: nic8
          mtu: 9000
      - type: vlan
        vlan_id:
          get_param: TenantNetworkVlanID
        mtu: 9000
        addresses:
        - ip_netmask:
            get_param: TenantIpSubnet

8.4.4. Configuring the Compute node for DPDK interfaces

This example uses the sample compute.yaml file.

Create the compute-ovs-dpdk.yaml file from the default compute.yaml file and make the following changes:

  1. Create the control plane Linux bond for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
  2. Assign VLANs to this Linux bond.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
  3. Set a bridge with a DPDK port to link to the controller.

      - type: ovs_user_bridge
        name: br-link0
        use_dhcp: false
        ovs_extra:
          - str_replace:
              template: set port br-link0 tag=_VLAN_TAG_
              params:
                _VLAN_TAG_:
                   get_param: TenantNetworkVlanID
        addresses:
          - ip_netmask:
              get_param: TenantIpSubnet
        members:
          - type: ovs_dpdk_bond
            name: dpdkbond0
            mtu: 9000
            rx_queue: 2
            members:
              - type: ovs_dpdk_port
                name: dpdk0
                members:
                  - type: interface
                    name: nic7
              - type: ovs_dpdk_port
                name: dpdk1
                members:
                  - type: interface
                    name: nic8
    Note

    To include multiple DPDK devices, repeat the ovs_dpdk_port section for each DPDK device that you want to add.

    Note

    When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.

8.4.5. Deploying the overcloud

Run the overcloud_deploy.sh script to deploy the overcloud.

#!/bin/bash

openstack overcloud deploy \
--templates \
-r /home/stack/ospd-12-vxlan-dpdk-single-port-ctlplane-bonding/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-dpdk-permissions.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
-e /home/stack/ospd-12-vxlan-dpdk-single-port-ctlplane-bonding/docker-images.yaml \
-e /home/stack/ospd-12-vxlan-dpdk-single-port-ctlplane-bonding/network-environment.yaml \
--log-file overcloud_install.log &> overcloud_install.log

8.5. Setting the MTU value for OVS-DPDK interfaces

Red Hat OpenStack Platform supports jumbo frames for OVS-DPDK. To set the MTU value for jumbo frames, you must:

  • Set the global MTU value for networking in the network-environment.yaml file.
  • Set the physical DPDK port MTU value in the compute.yaml file. This value is also used by the vhost user interface.
  • Set the MTU value within any guest instances on the Compute node to ensure that you have a comparable MTU value from end to end in your configuration.
Note

VXLAN packets include an extra 50 bytes in the header. Calculate your MTU requirements based on these additional header bytes. For example, an MTU value of 9000 means the VXLAN tunnel MTU value is 8950 to account for these extra bytes.
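
For example, to apply a matching MTU inside a guest attached to a VXLAN tenant network, where eth0 is a placeholder for the guest interface name:

    # ip link set dev eth0 mtu 8950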

Note

You do not need any special configuration for the physical NIC since the NIC is controlled by the DPDK PMD and has the same MTU value set by the compute.yaml file. You cannot set an MTU value larger than the maximum value supported by the physical NIC.

To set the MTU value for OVS-DPDK interfaces:

  1. Set the NeutronGlobalPhysnetMtu parameter in the network-environment.yaml file.

    parameter_defaults:
      # MTU global configuration
      NeutronGlobalPhysnetMtu: 9000
    Note

    Ensure that the OvsDpdkSocketMemory value in the network-environment.yaml file is large enough to support jumbo frames. See Section 7.4.2, “Memory parameters” for details.

  2. Set the MTU value on the bridge that connects to the Compute node in the controller.yaml file.

      - type: ovs_bridge
        name: br-link0
        use_dhcp: false
        members:
        - type: interface
          name: nic3
          mtu: 9000
  3. Set the MTU values for an OVS-DPDK bond in the compute.yaml file:

    - type: ovs_user_bridge
      name: br-link0
      use_dhcp: false
      members:
        - type: ovs_dpdk_bond
          name: dpdkbond0
          mtu: 9000
          rx_queue: 2
          members:
            - type: ovs_dpdk_port
              name: dpdk0
              mtu: 9000
              members:
                - type: interface
                  name: nic4
            - type: ovs_dpdk_port
              name: dpdk1
              mtu: 9000
              members:
                - type: interface
                  name: nic5

8.6. Configuring security groups

You can configure security groups to use the OVS firewall driver in Red Hat OpenStack Platform director. The NeutronOVSFirewallDriver parameter allows you to control which firewall driver is used:

  • iptables_hybrid - Configures Networking to use the iptables/hybrid-based implementation.
  • openvswitch - Configures Networking to use the OVS flow-based firewall driver.

The openvswitch OVS firewall driver provides higher performance and reduces the number of interfaces and bridges used to connect guests to the project network.

Note

The iptables_hybrid option is not compatible with OVS-DPDK.

Configure the NeutronOVSFirewallDriver parameter in the network-environment.yaml file:

# Configure the classname of the firewall driver to use for implementing security groups.
NeutronOVSFirewallDriver: openvswitch
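
After deployment, you can confirm the driver in use by checking the Networking agent configuration on the Compute node; the file path shown is typical for an ML2/OVS deployment and may differ in containerized environments:

    # grep firewall_driver /etc/neutron/plugins/ml2/openvswitch_agent.ini
    firewall_driver = openvswitch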

See Basic security group operations for details on enabling and disabling security groups.

8.7. Setting multiqueue for OVS-DPDK interfaces

To set the same number of queues for interfaces in OVS-DPDK on the Compute node, modify the compute.yaml file as follows:

- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      mtu: 9000
      rx_queue: 2
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          mtu: 9000
          members:
            - type: interface
              name: nic4
        - type: ovs_dpdk_port
          name: dpdk1
          mtu: 9000
          members:
            - type: interface
              name: nic5
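
After deployment, you can verify the queue count applied to a DPDK port, where dpdk0 matches the port name in this example:

    # ovs-vsctl get Interface dpdk0 options:n_rxq
    "2"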

8.8. Known limitations

There are certain limitations when configuring OVS-DPDK with Red Hat OpenStack Platform for the NFV use case:

  • Use Linux bonds for control plane networks. Ensure both PCI devices used in the bond are on the same NUMA node for optimum performance. Neutron Linux bridge configuration is not supported by Red Hat.
  • Huge pages are required for every instance running on the hosts with OVS-DPDK. If huge pages are not present in the guest, the interface appears but does not function.
  • There is a performance degradation of services that use tap devices, because these devices do not support DPDK. For example, services such as DVR, FWaaS, and LBaaS use tap devices.

    • With OVS-DPDK, you can enable DVR with the netdev datapath, but this has poor performance and is not suitable for a production environment. DVR uses kernel namespaces and tap devices to perform the routing.
    • To ensure that DVR routing performs well with OVS-DPDK, you need to use a controller such as ODL, which implements routing as OpenFlow rules. With OVS-DPDK, OpenFlow routing removes the bottleneck introduced by the Linux kernel interfaces, so that the full performance of the datapath is maintained.
  • When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.

8.9. Creating a flavor and deploying an instance for OVS-DPDK

After you have completed configuring OVS-DPDK for your Red Hat OpenStack Platform deployment with NFV, you can create a flavor and deploy an instance with the following steps:

  1. Create an aggregate group and add a host to it for OVS-DPDK.

     # openstack aggregate create --zone=dpdk dpdk
     # openstack aggregate add host dpdk compute-ovs-dpdk-0.localdomain
    Note

    You should use host aggregates to separate CPU-pinned instances from unpinned instances. Instances that do not use CPU pinning do not respect the resourcing requirements of instances that use CPU pinning.

  2. Create a flavor.

    # openstack flavor create m1.medium_huge_4cpu --ram 4096 --disk 150 --vcpus 4

    Here, m1.medium_huge_4cpu is the flavor name, 4096 is the memory size in MB, 150 is the disk size in GB (default 0G), and 4 is the number of vCPUs.

  3. Set additional flavor properties.

    # openstack flavor set --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB m1.medium_huge_4cpu --property hw:emulator_threads_policy=isolate

    Here, m1.medium_huge_4cpu is the flavor name and the remaining parameters set the other properties for the flavor.

See Configure Emulator Threads to run on a Dedicated Physical CPU for details on the emulator threads policy for performance improvements.

  4. Create the network.

    # openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>
  5. Deploy an instance.

    # openstack server create --flavor m1.medium_huge_4cpu --availability-zone dpdk --image rhel_7.3 --nic net-id=net1

    Where:

    • m1.medium_huge_4cpu is the flavor name or ID.
    • dpdk is the availability zone for the server.
    • rhel_7.3 is the image (name or ID) used to create an instance.
    • net1 is the NIC on the server.

You have now deployed an instance for the OVS-DPDK with NFV use case.

To use multiqueue with OVS-DPDK, you must include additional steps in the above procedure. Before you create a flavor, perform the following steps:

  1. Set the image properties.

    # openstack image set --property hw_vif_multiqueue_enabled=true <image-id>

    Here, hw_vif_multiqueue_enabled=true is the property that enables multiqueue on this image, and <image-id> is the name or ID of the image to modify.

  2. Set additional flavor properties.

    # openstack flavor set m1.vm_mq --property hw:vif_multiqueue_enabled=true

    Here, m1.vm_mq is the flavor ID or name, and the remaining options enable multiqueue for the flavor.
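
After the instance boots, you can confirm multiqueue from inside the guest, where eth0 is a placeholder for the guest interface. Enable the additional queues with ethtool, up to the number of vCPUs, and then display the channel counts:

    # ethtool -L eth0 combined 4
    # ethtool -l eth0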

8.10. Troubleshooting the configuration

This section describes the steps to troubleshoot the OVS-DPDK configuration.

  1. Review the bridge configuration and confirm that the bridge was created with datapath_type set to netdev.

    # ovs-vsctl list bridge br0
    _uuid               : bdce0825-e263-4d15-b256-f01222df96f3
    auto_attach         : []
    controller          : []
    datapath_id         : "00002608cebd154d"
    datapath_type       : netdev
    datapath_version    : "<built-in>"
    external_ids        : {}
    fail_mode           : []
    flood_vlans         : []
    flow_tables         : {}
    ipfix               : []
    mcast_snooping_enable: false
    mirrors             : []
    name                : "br0"
    netflow             : []
    other_config        : {}
    ports               : [52725b91-de7f-41e7-bb49-3b7e50354138]
    protocols           : []
    rstp_enable         : false
    rstp_status         : {}
    sflow               : []
    status              : {}
    stp_enable          : false
  2. Review the OVS service by confirming that the neutron-openvswitch-agent service is configured to start automatically.

    # systemctl status neutron-openvswitch-agent.service
    neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
    Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2015-11-23 14:49:31 AEST; 25min ago

    If the service is having trouble starting, you can view any related messages.

    # journalctl -t neutron-openvswitch-agent.service
  3. Confirm that the CPUs in the OVS-DPDK PMD CPU mask are pinned correctly. With hyper-threading, use sibling CPUs.

    For example, take CPU4:

    # cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list
    4,20

    So, to use CPUs 4 and 20 (bit 4 gives 0x10 and bit 20 gives 0x100000, for a combined mask of 0x100010):

    # ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100010

    Display their status:

    # tuna -t ovs-vswitchd -CP
              thread       ctxt_switches
    pid   SCHED_ rtpri affinity voluntary nonvoluntary            cmd
    3161	OTHER 	0    	6	765023      	614	ovs-vswitchd
    3219   OTHER 	0    	6     	1        	0   	handler24
    3220   OTHER 	0    	6     	1        	0   	handler21
    3221   OTHER 	0    	6     	1        	0   	handler22
    3222   OTHER 	0    	6     	1        	0   	handler23
    3223   OTHER 	0    	6     	1        	0   	handler25
    3224   OTHER 	0    	6     	1        	0   	handler26
    3225   OTHER 	0    	6     	1        	0   	handler27
    3226   OTHER 	0    	6     	1        	0   	handler28
    3227   OTHER 	0    	6     	2        	0   	handler31
    3228   OTHER 	0    	6     	2        	4   	handler30
    3229   OTHER 	0    	6     	2        	5   	handler32
    3230   OTHER 	0    	6	953538      	431   revalidator29
    3231   OTHER 	0    	6   1424258      	976   revalidator33
    3232   OTHER 	0    	6   1424693      	836   revalidator34
    3233   OTHER 	0    	6	951678      	503   revalidator36
    3234   OTHER 	0    	6   1425128      	498   revalidator35
    3235   OTHER    0     4   151123        51          pmd37
    3236   OTHER    0    20   298967        48          pmd38
    3164   OTHER 	0    	6 	47575        	0  dpdk_watchdog3
    3165   OTHER 	0    	6	237634        	0   vhost_thread1
    3166   OTHER 	0    	6  	3665        	0       	urcu2
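
In addition to tuna, OVS can report its PMD threads and their port and queue assignments directly (available through the ovs-appctl interface in OVS 2.6 and later):

    # ovs-appctl dpif-netdev/pmd-rxq-show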