Chapter 6. Configuring an SR-IOV deployment

This section describes how to configure Single Root Input/Output Virtualization (SR-IOV) for Red Hat OpenStack.

You must install and configure the undercloud before you can deploy the overcloud. See the Director Installation and Usage Guide for details.

Note

Do not edit or change isolated_cores or other values in /etc/tuned/cpu-partitioning-variables.conf that the director heat templates modify.

6.1. Overview of SR-IOV configurable parameters

You need to update the network-environment.yaml file to include parameters for kernel arguments, the SR-IOV driver, PCI passthrough, and so on. You must also update the compute.yaml file to include the SR-IOV interface parameters, and run the overcloud_deploy.sh script to deploy the overcloud with the SR-IOV parameters.
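
As a minimal sketch, the SR-IOV additions to network-environment.yaml might look like the following. The parameter names appear in the procedures later in this chapter; the interface name p5p1, the VF count, and the kernel argument values are hypothetical placeholders that you must adapt to your hardware and use case.

parameter_defaults:
    # Hypothetical kernel arguments for IOMMU support and 1 GB huge pages,
    # applied to the ComputeSriov role by host-config-and-reboot.yaml.
    ComputeSriovParameters:
        KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=32"
    # Number of VFs to create on the SR-IOV interface.
    NeutronSriovNumVFs:
        - p5p1:5
    # Make the PF available for PCI passthrough on the tenant physical network.
    NovaPCIPassthrough:
        - devname: p5p1
          physical_network: tenant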

[Figure: SR-IOV topology for an OpenStack NFV deployment]
Note

This guide provides examples for CPU assignments, memory allocation, and NIC configurations that might differ from your topology and use case.

6.2. OVS Hardware Offload

Configuring SR-IOV with OVS hardware offload does not require any specific changes to the network configuration templates. This section describes changes specific to OVS hardware offload deployment. These procedures do not describe the other parameters and network configuration template files that you need for your general network isolation deployment. See Chapter 5, Planning an SR-IOV deployment for details.

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

6.2.1. Configuring SR-IOV with OVS Hardware Offload with VLAN

To enable Open vSwitch (OVS) hardware offload in the ComputeSriov role:

  1. Generate the ComputeSriov role:

    # openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
  2. Add the environment files to the openstack overcloud deploy command to enable OVS hardware offload and its dependencies:

    # Enables OVS Hardware Offload in the ComputeSriov role
    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
    
    # Applies the KernelArgs and TunedProfile with a reboot
    /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml
  3. Add the following environment file to the openstack overcloud deploy command to enable ML2-ODL:

    # Enables ML2-ODL with the SR-IOV deployment
    /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml
    Note

    The OVS hardware offload environment file is common across neutron ML2 plugins. Although the default ML2 plugin for deployment is ML2-OVS, use ML2-ODL for NFV deployments.

  4. Configure the parameters for the SR-IOV nodes in an environment file named sriov-environment.yaml:

    parameter_defaults:
        NeutronTunnelTypes: ''
        NeutronNetworkType: 'vlan'
        NeutronBridgeMappings:
          - <network_name>:<bridge_name>
        NeutronNetworkVLANRanges:
          - <network_name>:<vlan_ranges>
        NeutronSriovNumVFs:
          - <interface_name>:<number_of_vfs>:switchdev
        NeutronPhysicalDevMappings:
          - <network_name>:<interface_name>
        NovaPCIPassthrough:
          - devname: <interface_name>
            physical_network: <network_name>
    Note

    Configure <network_name>, <interface_name>, and <number_of_vfs> according to the configuration and requirements of the SR-IOV node. See the example that follows this procedure.
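
For example, with a hypothetical physical network named tenant, a bridge named br-tenant, a VLAN range of 22:22, and an interface p5p1 that hosts 5 VFs, sriov-environment.yaml might read:

parameter_defaults:
    NeutronTunnelTypes: ''
    NeutronNetworkType: 'vlan'
    NeutronBridgeMappings:
      - tenant:br-tenant
    NeutronNetworkVLANRanges:
      - tenant:22:22
    NeutronSriovNumVFs:
      - p5p1:5:switchdev
    NeutronPhysicalDevMappings:
      - tenant:p5p1
    NovaPCIPassthrough:
      - devname: p5p1
        physical_network: tenant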

To deploy with ML2-ODL:

THT_ROOT="/usr/share/openstack-tripleo-heat-templates"
openstack overcloud deploy --templates \
  -r roles_data.yaml \
  -e $THT_ROOT/environments/ovs-hw-offload.yaml \
  -e $THT_ROOT/environments/services-docker/neutron-opendaylight.yaml \
  -e $THT_ROOT/environments/host-config-and-reboot.yaml \
  -e sriov-environment.yaml \
  -e <<network isolation and network config environment files>>

To deploy with ML2-OVS:

THT_ROOT="/usr/share/openstack-tripleo-heat-templates"
openstack overcloud deploy --templates \
  -r roles_data.yaml \
  -e $THT_ROOT/environments/ovs-hw-offload.yaml \
  -e $THT_ROOT/environments/host-config-and-reboot.yaml \
  -e sriov-environment.yaml \
  -e <<network isolation and network config environment files>>
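
After the deployment completes, you can sanity-check the offload configuration on a ComputeSriov node. The PCI address below is a placeholder for the address of your PF; the first command reports the e-switch mode (expect switchdev), and the second returns "true" when OVS hardware offload is active:

# devlink dev eswitch show pci/0000:03:00.0
# ovs-vsctl get Open_vSwitch . other_config:hw-offload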

6.2.2. Configuring SR-IOV with OVS Hardware Offload with VXLAN

To enable Open vSwitch (OVS) hardware offload in the ComputeSriov role:

  1. Generate the ComputeSriov role:

    # openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
  2. Add the environment files to the openstack overcloud deploy command to enable OVS hardware offload and its dependencies:

    # Enables OVS Hardware Offload in the ComputeSriov role
    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
    
    # Applies the KernelArgs and TunedProfile with a reboot
    /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml
  3. Apply the following network configuration changes to the SR-IOV interface:

    - type: interface
      name: <interface_name>
      addresses:
        - ip_netmask:
            get_param: TenantIpSubnet
    Note

    The Mellanox driver has an issue with VXLAN tunnels over a VLAN interface in an OVS bridge (similar to the VXLAN setup with the single-nic-vlans network configuration). Until this issue is resolved in the Mellanox driver, place the VXLAN network on the interface or directly on the OVS bridge, instead of on a VLAN interface on an OVS bridge.

  4. Add the following environment file to the openstack overcloud deploy command to enable ML2-ODL:

    # Enables ML2-ODL with the SR-IOV deployment
    /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml
    Note

    The OVS hardware offload environment file is common across neutron ML2 plugins. Although the default ML2 plugin for deployment is ML2-OVS, use ML2-ODL for NFV deployments.

  5. Configure the parameters for the SR-IOV nodes in an environment file named sriov-environment.yaml for the VXLAN deployment:

    parameter_defaults:
        NeutronSriovNumVFs:
          - <interface_name>:<number_of_vfs>:switchdev
        NovaPCIPassthrough:
          - devname: <interface_name>
            physical_network: null
    Note

    Configure <interface_name> and <number_of_vfs> according to the configuration and requirements of the SR-IOV node. For VXLAN networks, leave physical_network set to null. See the example that follows this procedure.
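
For example, with a hypothetical interface p5p1 that hosts 5 VFs, the VXLAN version of sriov-environment.yaml might read:

parameter_defaults:
    NeutronSriovNumVFs:
      - p5p1:5:switchdev
    NovaPCIPassthrough:
      - devname: p5p1
        physical_network: null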

To deploy with ML2-ODL:

THT_ROOT="/usr/share/openstack-tripleo-heat-templates"
openstack overcloud deploy --templates \
  -r roles_data.yaml \
  -e $THT_ROOT/environments/ovs-hw-offload.yaml \
  -e $THT_ROOT/environments/services-docker/neutron-opendaylight.yaml \
  -e $THT_ROOT/environments/host-config-and-reboot.yaml \
  -e sriov-environment.yaml \
  -e <<network isolation and network config environment files>>

To deploy with ML2-OVS:

THT_ROOT="/usr/share/openstack-tripleo-heat-templates"
openstack overcloud deploy --templates \
  -r roles_data.yaml \
  -e $THT_ROOT/environments/ovs-hw-offload.yaml \
  -e $THT_ROOT/environments/host-config-and-reboot.yaml \
  -e sriov-environment.yaml \
  -e <<network isolation and network config environment files>>

6.3. Creating a flavor and deploying an instance for SR-IOV

After you configure SR-IOV for your Red Hat OpenStack Platform deployment with NFV, create a flavor and deploy an instance by completing the following steps:

  1. Create an aggregate group for SR-IOV and add a host to it.

     # openstack aggregate create --zone=sriov sriov
     # openstack aggregate add host sriov compute-sriov-0.localdomain
    Note

    Use host aggregates to separate CPU-pinned instances from unpinned instances. Instances that do not use CPU pinning do not respect the resourcing requirements of instances that use CPU pinning. One way to bind the flavor to the aggregate is shown in the sketch after this procedure.

  2. Create a flavor.

    # openstack flavor create compute --ram 4096 --disk 150 --vcpus 4

    compute is the flavor name, 4096 is the memory size in MB, 150 is the disk size in GB (the default is 0 GB), and 4 is the number of vCPUs.

  3. Set additional flavor properties.

    # openstack flavor set --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB compute

    compute is the flavor name and the remaining parameters set the other properties for the flavor.

  4. Create the network.

    # openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>
  5. Create the port.

    1. Use vnic-type direct to create an SR-IOV VF port:

      # openstack port create --network net1 --vnic-type direct sriov_port
    2. Use vnic-type direct-physical to create an SR-IOV PF port:

      # openstack port create --network net1 --vnic-type direct-physical sriov_port
  6. Deploy an instance.

    # openstack server create --flavor compute --availability-zone sriov --image rhel_7.3 --nic port-id=sriov_port sriov_vm

    Where:

    • compute is the flavor name or ID.
    • sriov is the availability zone for the server.
    • rhel_7.3 is the image (name or ID) used to create an instance.
    • sriov_port is the port that attaches the SR-IOV NIC to the instance.
    • sriov_vm is the name of the instance.
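
The flavor from step 2 is not bound to the sriov aggregate automatically. One way to bind them, sketched here with a hypothetical pinned property, is to tag the aggregate and the flavor with a matching property; this approach requires the AggregateInstanceExtraSpecsFilter scheduler filter to be enabled:

# openstack aggregate set --property pinned=true sriov
# openstack flavor set --property aggregate_instance_extra_specs:pinned=true compute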

You have now deployed an instance for the SR-IOV with NFV use case.