Chapter 10. Configuring SR-IOV and DPDK interfaces on the same compute node

This section describes how to deploy SR-IOV and DPDK interfaces on the same Compute node. This deployment uses a custom role for OVS-DPDK and SR-IOV with role-specific parameters defined in the network-environment.yaml file. The process to create and deploy a custom role includes:

  • Define the custom role to support SR-IOV and DPDK interfaces on the same Compute node.
  • Set the role-specific parameters for SR-IOV and OVS-DPDK in the network-environment.yaml file.
  • Configure the compute.yaml file with an SR-IOV interface and a DPDK interface.
  • Deploy the overcloud with this updated set of roles.
  • Create the appropriate OpenStack flavor, networks, and ports to support these interface types.

You must install and configure the undercloud before you can deploy the compute node in the overcloud. See the Director Installation and Usage Guide for details.

Note

Ensure that you create an OpenStack flavor that matches this custom role.
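
For example, on the undercloud you can tie a deployment flavor to this role through a node profile, and then reference that flavor with the Overcloud<RoleName>Flavor parameter shown in Section 10.2. The following is a minimal sketch that assumes a flavor and profile named compute:

# openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 compute
# openstack flavor set --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="compute" compute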

Important

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.

In this example, ComputeOvsDpdkSriov is a custom role for Compute nodes that enables DPDK and SR-IOV only on the nodes that have the appropriate NICs. The existing set of default roles provided by Red Hat OpenStack Platform is stored in the /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file.

10.1. Creating the ComputeOvsDpdkSriov composable role

Red Hat OpenStack Platform provides a set of default roles in the roles_data.yaml file. You can create your own roles_data.yaml file to support the roles you need.

To create the ComputeOvsDpdkSriov composable role to support SR-IOV and DPDK interfaces on the same Compute node:

  1. Create the ComputeOvsDpdkSriov.yaml file in a local directory and add the definition of this role:

    ###############################################################################
    # Role: ComputeOvsDpdkSriov                                                   #
    ###############################################################################
    - name: ComputeOvsDpdkSriov
      description: |
        Compute OvS DPDK Sriov Role
      CountDefault: 1
      networks:
        - InternalApi
        - Tenant
        - Storage
      HostnameFormatDefault: 'computeovsdpdksriov-%index%'
      disable_upgrade_deployment: True
      ServicesDefault:
        - OS::TripleO::Services::AuditD
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CephClient
        - OS::TripleO::Services::CephExternal
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Collectd
        - OS::TripleO::Services::ComputeCeilometerAgent
        - OS::TripleO::Services::ComputeNeutronCorePlugin
        - OS::TripleO::Services::ComputeNeutronL3Agent
        - OS::TripleO::Services::ComputeNeutronMetadataAgent
        - OS::TripleO::Services::ComputeNeutronOvsDpdk
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::FluentdClient
        - OS::TripleO::Services::Iscsid
        - OS::TripleO::Services::Kernel
        - OS::TripleO::Services::MySQLClient
        - OS::TripleO::Services::NovaCompute
        - OS::TripleO::Services::NovaLibvirt
        - OS::TripleO::Services::NovaMigrationTarget
        - OS::TripleO::Services::NeutronLinuxbridgeAgent
        - OS::TripleO::Services::NeutronSriovAgent
        - OS::TripleO::Services::NeutronVppAgent
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::ContainersLogrotateCrond
        - OS::TripleO::Services::OpenDaylightOvs
        - OS::TripleO::Services::Securetty
        - OS::TripleO::Services::SensuClient
        - OS::TripleO::Services::Snmp
        - OS::TripleO::Services::Sshd
        - OS::TripleO::Services::Timezone
        - OS::TripleO::Services::TripleoFirewall
        - OS::TripleO::Services::TripleoPackages
        - OS::TripleO::Services::Tuned
        - OS::TripleO::Services::Vpp
        - OS::TripleO::Services::OVNController
  2. Generate roles_data.yaml for the ComputeOvsDpdkSriov role and any other roles you need for your deployment:

    # openstack overcloud roles generate --roles-path templates/openstack-tripleo-heat-templates/roles -o roles_data.yaml Controller ComputeOvsDpdkSriov
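
    The roles generate command reads the <RoleName>.yaml files from the directory that you pass with --roles-path, so ensure that ComputeOvsDpdkSriov.yaml is in that directory. You can then confirm that the generated file contains the new role:

    # grep -A 2 'name: ComputeOvsDpdkSriov' roles_data.yaml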

10.2. Defining the SR-IOV and OVS-DPDK role-specific parameters

Modify the network-environment.yaml file to configure SR-IOV and OVS-DPDK role-specific parameters:

  1. Add the resource mapping for the OVS-DPDK and SR-IOV services to the network-environment.yaml file along with the network configuration for these nodes:

      resource_registry:
        # Specify the relative/absolute path to the config files you want to use to override the defaults.
        OS::TripleO::ComputeOvsDpdkSriov::Net::SoftwareConfig: nic-configs/computeovsdpdksriov.yaml
        OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  2. Define the flavors:

      OvercloudControllerFlavor: controller
      OvercloudComputeOvsDpdkSriovFlavor: compute
  3. Configure the role-specific parameters for SR-IOV:

      #SR-IOV params
      NeutronMechanismDrivers: ['openvswitch','sriovnicswitch']
      NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']
      NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter"]
      NeutronSupportedPCIVendorDevs: ['8086:154d', '8086:10ed']
      NovaPCIPassthrough:
        - devname: "ens2f1"
          physical_network: "tenant"
    
      NeutronPhysicalDevMappings: "tenant:ens2f1"
      NeutronSriovNumVFs: "ens2f1:5"
  4. Configure the role-specific parameters for OVS-DPDK (a sketch of the complete network-environment.yaml layout follows this procedure):

      ##########################
      # OVS DPDK configuration #
      ##########################
      ComputeOvsDpdkSriovParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39"
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "2-19,22-39"
        NovaVcpuPinSet: ['4-19,24-39']
        NovaReservedHostMemory: 4096
        OvsDpdkSocketMemory: "3072,1024"
        OvsDpdkMemoryChannels: "4"
        OvsDpdkCoreList: "0,20,1,21"
        OvsPmdCoreList: "2,22,3,23"
    Note

    To avoid failures when creating guest instances, you must assign at least one CPU (with its sibling thread) on each NUMA node to the DPDK Poll Mode Driver (PMD), including NUMA nodes that do not have a DPDK NIC present.

  5. Configure the remainder of the network-environment.yaml file to override the default parameters from the neutron-ovs-dpdk-agent.yaml and neutron-sriov-agent.yaml files as needed for your OpenStack deployment.
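
The settings from steps 2 to 4 belong under the parameter_defaults section of network-environment.yaml, alongside the resource_registry section from step 1. The following is a minimal sketch of the resulting file layout that repeats example values from the steps above; it is not a complete file:

resource_registry:
  OS::TripleO::ComputeOvsDpdkSriov::Net::SoftwareConfig: nic-configs/computeovsdpdksriov.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml

parameter_defaults:
  # Flavors and global SR-IOV parameters (steps 2 and 3)
  OvercloudControllerFlavor: controller
  OvercloudComputeOvsDpdkSriovFlavor: compute
  NeutronMechanismDrivers: ['openvswitch','sriovnicswitch']
  NeutronSriovNumVFs: "ens2f1:5"
  # ... remaining SR-IOV parameters from step 3 ...
  # Role-specific OVS-DPDK parameters (step 4)
  ComputeOvsDpdkSriovParameters:
    KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39"
    OvsPmdCoreList: "2,22,3,23"
    # ... remaining OVS-DPDK parameters from step 4 ...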

See Planning your OVS-DPDK Deployment for details on how to determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK.

10.3. Configuring the Compute node for SR-IOV and DPDK interfaces

This example uses the sample compute.yaml file to create the computeovsdpdksriov.yaml file to support SR-IOV and DPDK interfaces on the same Compute node. A sketch of the surrounding template structure follows the steps below.

  1. Create the control plane Linux bond for an isolated network:

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
  2. Assign VLANs to this Linux bond:

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: TenantNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: TenantIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
  3. Set a bridge with a DPDK port to link to the controller:

      - type: ovs_user_bridge
        name: br-link0
        use_dhcp: false
        members:
        - type: ovs_dpdk_port
          name: dpdk0
          mtu: 9000
          rx_queue: 2
          members:
          - type: interface
            name: nic5
    Note

    To include multiple DPDK devices, repeat the ovs_dpdk_port section for each DPDK device that you want to add, as shown in the sketch after this note.
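
    For example, a bridge with two DPDK ports might look like the following sketch, in which dpdk1 and nic6 are assumed names used for illustration:

      - type: ovs_user_bridge
        name: br-link0
        use_dhcp: false
        members:
        - type: ovs_dpdk_port
          name: dpdk0
          mtu: 9000
          rx_queue: 2
          members:
          - type: interface
            name: nic5
        - type: ovs_dpdk_port
          name: dpdk1
          mtu: 9000
          rx_queue: 2
          members:
          - type: interface
            name: nic6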

    Note

    When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.

  4. Create the SR-IOV interface to the Controller:

      - type: interface
        name: ens2f1
        mtu: 9000
        use_dhcp: false
        defroute: false
        nm_controlled: true
        hotplug: true
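
The interface definitions from steps 1 to 4 are entries in the network_config list of the computeovsdpdksriov.yaml heat template. The following is a minimal sketch of the surrounding template structure, modeled on the Pike-based sample files; the parameters section (DnsServers, the IP subnet and VLAN ID parameters, and so on) is elided, and the path to run-os-net-config.sh is an assumption that you may need to adjust for your template layout:

heat_template_version: pike

resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: linux_bond
                name: bond_api
                # ... remaining entries from steps 1 to 4 ...

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl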

10.4. Deploying the overcloud

The following example defines the openstack overcloud deploy Bash script that uses composable roles:

#!/bin/bash

openstack overcloud deploy \
--templates \
-r /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
-e /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/docker-images.yaml \
-e /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/network-environment.yaml

Where:

  • /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/roles_data.yaml is the location of the updated roles_data.yaml file, which defines the ComputeOvsDpdkSriov custom role.
  • /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/network-environment.yaml contains the role-specific parameters for the SR-IOV and OVS-DPDK interfaces.
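
After the deployment completes, you can confirm from the undercloud that the nodes were deployed with the custom role. The node names follow the HostnameFormatDefault value from the role definition (computeovsdpdksriov-0, computeovsdpdksriov-1, and so on):

# source ~/stackrc
# openstack server list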

10.5. Creating a flavor and deploying an instance with SR-IOV and DPDK interfaces

After you configure SR-IOV and DPDK interfaces on the same Compute node, create a flavor and deploy an instance by performing the following steps:

Note

You should use host aggregates to separate CPU pinned instances from unpinned instances. Instances that do not use CPU pinning do not respect the resourcing requirements of instances that use CPU pinning.
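
One way to enforce this separation is a host aggregate for the CPU-pinned Compute nodes, matched by a flavor property. The following is a minimal sketch that assumes an aggregate named cpu-pinned, a host named computeovsdpdksriov-0.localdomain, and the compute flavor created in step 1 below; matching on aggregate_instance_extra_specs also requires the AggregateInstanceExtraSpecsFilter scheduler filter, which is not part of the filter list shown in Section 10.2:

# openstack aggregate create --property pinned=true cpu-pinned
# openstack aggregate add host cpu-pinned computeovsdpdksriov-0.localdomain
# openstack flavor set compute --property hw:cpu_policy=dedicated --property aggregate_instance_extra_specs:pinned=true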

  1. Create a flavor:

    # openstack flavor create --vcpus 6 --ram 4096 --disk 40 compute

    Where:

    • compute is the flavor name.
    • 4096 is the memory size in MB.
    • 40 is the disk size in GB (default 0G).
    • 6 is the number of vCPUs.
  2. Set the flavor for large pages:

    # openstack flavor set compute --property hw:mem_page_size=1GB
  3. Create the networks for SR-IOV and DPDK:

    # openstack network create net-dpdk
    # openstack network create net-sriov
    # openstack subnet create --subnet-range <cidr/prefix> --network net-dpdk --gateway <gateway> net-dpdk-subnet
    # openstack subnet create --subnet-range <cidr/prefix> --network net-sriov --gateway <gateway> net-sriov-subnet
  4. Create the SR-IOV port.

    1. Use vnic-type direct to create an SR-IOV VF port:

      # openstack port create --network net-sriov --vnic-type direct sriov_port
    2. Use vnic-type direct-physical to create an SR-IOV PF port:

      # openstack port create --network net-sriov --vnic-type direct-physical sriov_port
  5. Deploy an instance:

    # openstack server create --flavor compute --image <IMAGE_ID> --nic port-id=sriov_port --nic net-id=net-dpdk vm1

    Where:

    • compute is the flavor name or ID.
    • <IMAGE_ID> is the image ID used to create an instance.
    • sriov_port is the SR-IOV NIC on the Compute node.
    • net-dpdk is the DPDK network.
    • vm1 is the name of the instance.

You have now deployed an instance that uses an SR-IOV interface and a DPDK interface on the same Compute node.
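
As a final check, you can list the ports attached to the instance to confirm that both the SR-IOV port and the port on the DPDK network are present:

# openstack port list --server vm1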