Chapter 6. Deploying SR-IOV technologies

Networking performance requirements can exceed what is possible through virtual networking alone. SR-IOV provides near bare-metal performance by giving OpenStack instances direct access to a shared PCIe resource through virtual functions.

6.1. Prerequisites


Do not manually edit values in /etc/tuned/cpu-partitioning-variables.conf that are modified by Director heat templates.
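For reference, the tuned cpu-partitioning profile reads its isolated core list from this file. A rendered file typically contains a single isolated_cores line; the value below is illustrative and is derived from the IsolCpusList parameter you set later in this chapter:

```
# /etc/tuned/cpu-partitioning-variables.conf (managed by director)
isolated_cores=1-19,21-39
```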

6.2. Configuring SR-IOV


The CPU assignments, memory allocation, and NIC configurations in the following examples may differ from your topology and use case.

  1. Generate the built-in ComputeSriov role to define nodes in the OpenStack cluster that run the NeutronSriovAgent and NeutronSriovHostConfig services in addition to the default compute services.

    # openstack overcloud roles generate \
    -o /home/stack/templates/roles_data.yaml \
    Controller ComputeSriov
  2. Include the neutron-sriov.yaml and roles_data.yaml files when generating overcloud_images.yaml so that SR-IOV containers are prepared.

    openstack overcloud container image prepare \
    --push-destination= \
    --prefix=openstack- \
    --tag-from-label {version}-{release} \
    -e ${SERVICES}/neutron-sriov.yaml \
    --roles-file /home/stack/templates/roles_data.yaml \
    --output-env-file=/home/stack/templates/overcloud_images.yaml

    For more information on container image preparation, see Director Installation and Usage.

  3. To apply the KernelArgs and TunedProfile parameters, include the host-config-and-reboot.yaml file from /usr/share/openstack-tripleo-heat-templates/environments in your deployment script.

    openstack overcloud deploy --templates \
    … \
    -e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml
  4. Configure the parameters for the SR-IOV nodes under parameter_defaults in accordance with the needs of your cluster, and the configuration of your hardware. These settings are typically added to the network-environment.yaml file.

      NeutronBridgeMappings: 'tenant:br-link0'
      NeutronNetworkType: 'vlan'
      NeutronNetworkVLANRanges: 'tenant:22:22,tenant:25:25'
      NovaPCIPassthrough:
        - devname: "p7p1"
          physical_network: "tenant"
        - devname: "p7p2"
          physical_network: "tenant"
      NeutronPhysicalDevMappings: "tenant:p7p1,tenant:p7p2"
      NeutronSriovNumVFs: "p7p1:5,p7p2:5"
      NeutronTunnelTypes: ''
  5. In the same file, configure role specific parameters for SR-IOV compute nodes.

      ComputeSriovParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-19,21-39"
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "1-19,21-39"
        NovaVcpuPinSet: '1-19,21-39'
        NovaReservedHostMemory: 4096
  6. Apply the following network configuration changes to the SR-IOV interface:

    - type: interface
      name: <interface_name>
      addresses:
        - ip_netmask:
            get_param: TenantIpSubnet

    A Mellanox network interface cannot be configured as a nic-config interface type 'vlan', as tunnel endpoints such as VXLAN will not pass traffic due to driver limitations. Use a Mellanox port with nic-config interface type 'interface' only.

  7. Ensure that the list of default filters includes the value AggregateInstanceExtraSpecsFilter.

    NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','AggregateInstanceExtraSpecsFilter']
  8. Deploy the overcloud.

openstack overcloud deploy --templates \
  -r ${CUSTOM_TEMPLATES}/roles_data.yaml \
  -e ${TEMPLATES_HOME}/environments/host-config-and-reboot.yaml \
  -e ${TEMPLATES_HOME}/neutron-sriov.yaml \
  -e ${CUSTOM_TEMPLATES}/network-environment.yaml
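The IsolCpusList and NovaVcpuPinSet parameters in step 5 share a comma-separated range syntax. As a sanity check before deploying, you can expand such a list with a small bash helper; this is a sketch, and the function name is illustrative:

```shell
# Expand a Nova-style CPU list such as "1-19,21-39" into individual CPU IDs.
expand_cpu_list() {
  local ranges="$1" out="" part start end cpu
  IFS=',' read -ra parts <<< "$ranges"
  for part in "${parts[@]}"; do
    if [[ "$part" == *-* ]]; then
      # A range such as "1-19": emit every CPU ID in it.
      start="${part%-*}"; end="${part#*-}"
      for ((cpu=start; cpu<=end; cpu++)); do out+="$cpu "; done
    else
      # A single CPU ID.
      out+="$part "
    fi
  done
  echo "$out"
}

# "1-19,21-39" isolates 38 host CPUs for instance use, leaving
# CPUs 0 and 20 (one per NUMA node in this example) for the host.
expand_cpu_list "1-19,21-39" | wc -w
```

Counting the expanded list confirms that the isolated set matches the number of host CPUs you intend to dedicate to instances.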

6.3. Configuring SR-IOV with OVS Hardware Offload (Technology Preview)

The following procedure describes the additional settings required to configure OVS hardware offload. Configuring SR-IOV with OVS hardware offload does not require changes to the network configuration templates.

As of Red Hat OpenStack Platform 13, OVS hardware offload is a Technology Preview and not recommended for production deployments. For more information about Technology Preview features, see Scope of Coverage Details.

6.3.1. Procedure

To enable Open vSwitch (OVS) hardware offload in the ComputeSriov role, complete these steps.

  1. Include the ovs-hw-offload.yaml environment file from /usr/share/openstack-tripleo-heat-templates/environments in your deployment script. This is in addition to the host-config-and-reboot.yaml environment file noted in Configuring SR-IOV.

    openstack overcloud deploy --templates \
      -r ${CUSTOM_TEMPLATES}/roles_data.yaml \
      -e ${TEMPLATES_HOME}/environments/ovs-hw-offload.yaml \
      -e ${TEMPLATES_HOME}/environments/host-config-and-reboot.yaml \
      -e ${CUSTOM_TEMPLATES}/network-environment.yaml
  2. Append the switchdev option to the NeutronSriovNumVFs parameter for the devices that you configure for OVS hardware offload.

    NeutronSriovNumVFs: "p7p1:5,p7p2:5:switchdev"
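The NeutronSriovNumVFs format is device:count per entry, optionally suffixed with :switchdev. The following bash sketch shows how such a value breaks down; the helper name is illustrative, and on the host the VF count is ultimately applied through each device's sriov_numvfs sysfs attribute:

```shell
# Parse a NeutronSriovNumVFs-style string into per-device VF settings.
# Format: "<dev>:<numvfs>[:switchdev][,<dev>:<numvfs>[:switchdev]...]"
parse_numvfs() {
  local entry dev vfs mode
  IFS=',' read -ra entries <<< "$1"
  for entry in "${entries[@]}"; do
    IFS=':' read -r dev vfs mode <<< "$entry"
    # Devices without the switchdev suffix stay in the default (legacy) mode.
    echo "$dev $vfs ${mode:-legacy}"
  done
}

parse_numvfs "p7p1:5,p7p2:5:switchdev"
```

In this example, p7p1 gets 5 VFs in legacy mode and p7p2 gets 5 VFs with its eSwitch in switchdev mode, which is what the devlink verification in the next section confirms.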

6.3.2. Verification

  1. Confirm that a PCI device is in switchdev mode.

    # devlink dev eswitch show pci/0000:03:00.0
    pci/0000:03:00.0: mode switchdev inline-mode none encap enable
  2. Confirm offload is enabled in OVS.

    # ovs-vsctl get Open_vSwitch . other_config:hw-offload

6.4. Deploying an instance for SR-IOV

It is recommended to use host aggregates to separate high performance compute hosts. For information on creating host aggregates and associated flavors for scheduling, see Creating host aggregates.


You should use host aggregates to separate CPU pinned instances from unpinned instances. Instances that do not use CPU pinning do not respect the resource requirements of instances that use CPU pinning.

Deploy an instance for SR-IOV by performing the following steps:

  1. Create a flavor.

    # openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>
  2. Create the network.

    # openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>
  3. Create the port.

    • Use vnic-type direct to create an SR-IOV VF port.

      # openstack port create --network net1 --vnic-type direct sriov_port
    • Use vnic-type direct-physical to create an SR-IOV PF port.

      # openstack port create --network net1 --vnic-type direct-physical sriov_port
  4. Deploy an instance.

    # openstack server create --flavor <flavor> --image <image> --nic port-id=<id> <instance name>

6.5. Creating host aggregates

It is recommended to deploy guests with CPU pinning and huge pages for increased performance. You can schedule high performance instances on a subset of hosts by matching aggregate metadata with flavor metadata.

6.5.1. Procedure

  1. Ensure that the AggregateInstanceExtraSpecsFilter value is added to the scheduler_default_filters parameter in the nova.conf configuration file. If you add AggregateInstanceExtraSpecsFilter to satisfy this requirement, restart the nova container.
  2. Create an aggregate group for SR-IOV, and add relevant hosts. Define metadata, for example, sriov=true, that matches defined flavor metadata.

    # openstack aggregate create sriov_group
    # openstack aggregate add host sriov_group compute-sriov-0.localdomain
    # openstack aggregate set sriov_group sriov=true
  3. Create a flavor.

    # openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>
  4. Set additional flavor properties. Note that the defined metadata, sriov=true, matches the defined metadata on the SR-IOV aggregate.

    # openstack flavor set --property sriov=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB <flavor>
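Conceptually, AggregateInstanceExtraSpecsFilter admits a host only when every extra spec on the flavor matches the metadata of an aggregate containing that host. The following is a toy bash model of that matching rule, illustrative only; the real filter runs inside nova-scheduler and also understands scoped aggregate_instance_extra_specs: keys:

```shell
# Toy model of AggregateInstanceExtraSpecsFilter: a host in an aggregate
# passes only if every flavor extra spec appears in the aggregate metadata.
# Specs and metadata are space-separated key=value pairs (illustrative).
matches_aggregate() {
  local flavor_specs="$1" aggregate_meta="$2" spec
  for spec in $flavor_specs; do
    case " $aggregate_meta " in
      *" $spec "*) ;;   # spec found in aggregate metadata: keep checking
      *) return 1 ;;    # spec missing or mismatched: host is filtered out
    esac
  done
  return 0
}

# A flavor with sriov=true schedules onto the sriov=true aggregate...
matches_aggregate "sriov=true" "sriov=true pinned=true" && echo "scheduled"
# ...but not onto an aggregate that lacks that metadata.
matches_aggregate "sriov=true" "pinned=true" || echo "filtered out"
```

This is why the sriov=true property set on the flavor in step 4 must exactly match the sriov=true metadata set on the aggregate in step 2.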