Chapter 6. Configuring an SR-IOV deployment

This section describes how to configure Single Root Input/Output Virtualization (SR-IOV) for Red Hat OpenStack.

You must install and configure the undercloud before you can deploy the overcloud. See the Director Installation and Usage Guide for details.

Note

Do not edit or change isolated_cores or other values in /etc/tuned/cpu-partitioning-variables.conf that are modified by these director heat templates.

6.1. Overview of SR-IOV configurable parameters

You need to update the network-environment.yaml file to include parameters for kernel arguments, the SR-IOV driver, PCI passthrough, and so on. You must also update the compute.yaml file to include the SR-IOV interface parameters, and run the overcloud_deploy.sh script to deploy the overcloud with the SR-IOV parameters.

[Figure: OpenStack NFV configuration guide SR-IOV topology]
Note

This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case.

6.2. Configuring SR-IOV with control plane bonding

This section describes how to configure a composable role for SR-IOV with two port control plane bonding for your OpenStack environment. The process to create and deploy a composable role includes:

  • Define the new role in a local copy of the roles_data.yaml file.
  • Modify the network-environment.yaml file to include this new role.
  • Deploy the overcloud with this updated set of roles.

In this example, ComputeSriov is a composable role for Compute nodes that enables SR-IOV only on the nodes that have SR-IOV NICs. The existing set of default roles provided by the Red Hat OpenStack Platform is stored in the /home/stack/roles_data.yaml file.

6.2.1. Creating the ComputeSriov composable role

Modify roles_data.yaml to create an SR-IOV composable role.

Copy the roles_data.yaml file to your /home/stack/templates directory and add the new ComputeSriov role.

  - name: ComputeSriov
    description: |
      Compute Sriov Node role
    CountDefault: 1
    networks:
      - InternalApi
      - Tenant
      - Storage
    HostnameFormatDefault: computesriov-%index%
    ServicesDefault:
      - OS::TripleO::Services::AuditD
      - OS::TripleO::Services::CACerts
      - OS::TripleO::Services::CephClient
      - OS::TripleO::Services::CephExternal
      - OS::TripleO::Services::CertmongerUser
      - OS::TripleO::Services::Collectd
      - OS::TripleO::Services::ComputeCeilometerAgent
      - OS::TripleO::Services::ComputeNeutronCorePlugin
      - OS::TripleO::Services::ComputeNeutronL3Agent
      - OS::TripleO::Services::ComputeNeutronMetadataAgent
      - OS::TripleO::Services::ComputeNeutronOvsAgent
      - OS::TripleO::Services::Docker
      - OS::TripleO::Services::FluentdClient
      - OS::TripleO::Services::Iscsid
      - OS::TripleO::Services::Kernel
      - OS::TripleO::Services::MySQLClient
      - OS::TripleO::Services::NeutronLinuxbridgeAgent
      - OS::TripleO::Services::NeutronSriovAgent
      - OS::TripleO::Services::NeutronSriovHostConfig
      - OS::TripleO::Services::NeutronVppAgent
      - OS::TripleO::Services::NovaCompute
      - OS::TripleO::Services::NovaLibvirt
      - OS::TripleO::Services::NovaMigrationTarget
      - OS::TripleO::Services::Ntp
      - OS::TripleO::Services::ContainersLogrotateCrond
      - OS::TripleO::Services::OpenDaylightOvs
      - OS::TripleO::Services::Securetty
      - OS::TripleO::Services::SensuClient
      - OS::TripleO::Services::Snmp
      - OS::TripleO::Services::Sshd
      - OS::TripleO::Services::Timezone
      - OS::TripleO::Services::TripleoFirewall
      - OS::TripleO::Services::TripleoPackages
      - OS::TripleO::Services::Tuned
      - OS::TripleO::Services::Vpp
      - OS::TripleO::Services::OVNController
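Before deploying, it can help to confirm that your local copy actually contains the new role. The following is only a sketch; the path reflects the /home/stack/templates copy described above:

```shell
# Sketch: confirm the ComputeSriov role exists in the local roles file.
# The path follows the /home/stack/templates copy described above.
roles=/home/stack/templates/roles_data.yaml
if [ -f "$roles" ]; then
  grep -q 'name: ComputeSriov' "$roles" \
    && echo "ComputeSriov role found in $roles" \
    || echo "ComputeSriov role missing from $roles"
else
  echo "roles file not found at $roles"
fi
```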

6.2.2. Configuring SR-IOV parameters

You must set the SR-IOV parameters to match your OpenStack deployment.

This example uses the sample network-environment.yaml.

  1. Add the custom resources for SR-IOV including the ComputeSriov role under resource_registry.

      resource_registry:
        # Specify the relative/absolute path to the config files you want to use to override the defaults.
        OS::TripleO::ComputeSriov::Net::SoftwareConfig: nic-configs/compute-sriov.yaml
        OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  2. Under parameter_defaults, disable the tunnel type (set the value to ""), and set network type to vlan.

    NeutronTunnelTypes: ''
    NeutronNetworkType: 'vlan'
  3. Optional: Under parameter_defaults, map the physical network to the logical bridge. Consider your OVS/OVS-DPDK configuration when determining whether your deployment requires this step.

    NeutronBridgeMappings: 'tenant:br-link0'
  4. Under parameter_defaults, set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range.

    NeutronNetworkVLANRanges: 'tenant:22:22,tenant:25:25'

    This example sets the VLAN ranges on the physical network (tenant).

  5. Under parameter_defaults, set the SR-IOV configuration parameters.

    1. Enable the SR-IOV mechanism driver (sriovnicswitch).

      NeutronMechanismDrivers: "openvswitch,sriovnicswitch"
    2. Configure the Compute pci_passthrough_whitelist parameter, and set devname for the SR-IOV interface. The whitelist sets the PCI devices available to instances.

        NovaPCIPassthrough:
          - devname: "p7p1"
            physical_network: "tenant"
          - devname: "p7p2"
            physical_network: "tenant"
    3. Specify the physical network and SR-IOV interface in the format PHYSICAL_NETWORK:PHYSICAL_DEVICE.

      All physical networks listed in the network_vlan_ranges on the server should have mappings to the appropriate interfaces on each agent.

        NeutronPhysicalDevMappings: "tenant:p7p1,tenant:p7p2"

      This example uses tenant as the physical_network name.

    4. Provide the number of Virtual Functions (VFs) to be reserved for each SR-IOV interface.

        NeutronSriovNumVFs: "p7p1:5,p7p2:5"

      This example reserves 5 VFs for the SR-IOV interfaces.

      Note

      Red Hat OpenStack Platform supports up to the number of VFs that the NIC vendor supports. See Deployment Limits for Red Hat OpenStack Platform for other related details.

  6. Under parameter_defaults, set the role-specific parameters for the ComputeSriov role.

      # SR-IOV compute node.
      ComputeSriovParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-19,21-39"
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "1-19,21-39"
        NovaVcpuPinSet: ['1-19,21-39']
        NovaReservedHostMemory: 4096
    Note

    You should also add hw:mem_page_size=1GB to the flavor you associate with the SR-IOV instance.

  7. Under parameter_defaults, list the applicable filters.

    Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.

      NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
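The KernelArgs, IsolCpusList, and NovaReservedHostMemory values above must be sized together against the host's memory. The arithmetic can be sketched in shell; the 64 GB host RAM figure below is an assumed example value, not one taken from this guide:

```shell
# Sketch: sanity-check the memory split implied by the parameters above.
hugepages=32            # from KernelArgs: hugepages=32
hugepage_size_gb=1      # from KernelArgs: hugepagesz=1G
reserved_host_mb=4096   # from NovaReservedHostMemory
total_ram_gb=64         # ASSUMED host RAM for illustration; substitute your own

hugepage_gb=$((hugepages * hugepage_size_gb))
remaining_gb=$((total_ram_gb - hugepage_gb - reserved_host_mb / 1024))
echo "hugepages reserve ${hugepage_gb} GB; ${remaining_gb} GB remain for other use"
```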

6.2.3. Configuring the Controller node

This example uses the sample controller.yaml file.

  1. Create interfaces for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
  2. Assign VLANs to these interfaces.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: TenantNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: TenantIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageMgmtNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageMgmtIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: ExternalNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: ExternalIpSubnet
        routes:
        - default: true
          next_hop:
            get_param: ExternalInterfaceDefaultRoute
  3. Create the OVS bridge for access to the floating IPs into cloud networks.

        - type: ovs_bridge
          name: br-link0
          use_dhcp: false
          mtu: 9000
          members:
          - type: ovs_bond
            name: bond0
            use_dhcp: true
            members:
            - type: interface
              name: nic7
              mtu: 9000
            - type: interface
              name: nic8
              mtu: 9000
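Because br-link0 and its bond members are configured with an MTU of 9000, the underlying NICs and the connected switch ports must support jumbo frames. A small check of the current MTU on the members, using the example interface names from above, might look like:

```shell
# Sketch: report the current MTU on the bond member NICs.
# nic7/nic8 are the example names used above; adjust for your host.
for dev in nic7 nic8; do
  if [ -r "/sys/class/net/$dev/mtu" ]; then
    echo "$dev current mtu: $(cat /sys/class/net/$dev/mtu)"
  else
    echo "$dev is not present on this host"
  fi
done
```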

6.2.4. Configuring the Compute node for SR-IOV interfaces

This example uses the sample compute-sriov.yaml file.

Create compute-sriov.yaml from the default compute.yaml file. This is the file that controls the parameters for the Compute nodes that use the ComputeSriov composable role.

  1. Create the interface for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
  2. Assign VLANs to this interface.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: TenantNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: TenantIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
  3. Create the SR-IOV interfaces.

      - type: interface
        name: p7p1
        mtu: 9000
        use_dhcp: false
        defroute: false
        nm_controlled: true
        hotplug: true
    
      - type: interface
        name: p7p2
        mtu: 9000
        use_dhcp: false
        defroute: false
        nm_controlled: true
        hotplug: true
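The NeutronSriovNumVFs value set earlier (p7p1:5,p7p2:5) must not exceed what each NIC can provide. On a deployed Compute node, the kernel reports this limit in sysfs; a check using the example interface names could be:

```shell
# Sketch: compare requested VFs against the NIC's reported maximum.
# p7p1/p7p2 are the example interface names from the whitelist above.
for dev in p7p1 p7p2; do
  path=/sys/class/net/$dev/device/sriov_totalvfs
  if [ -r "$path" ]; then
    echo "$dev supports up to $(cat "$path") VFs"
  else
    echo "$dev reports no SR-IOV capability on this host"
  fi
done
```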

6.2.5. Deploying the overcloud

Run the overcloud_deploy.sh script to deploy the overcloud.

#!/bin/bash

openstack overcloud deploy \
--templates \
-r /home/stack/ospd-12-vlan-sriov-two-ports-ctlplane-bonding/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
-e /home/stack/ospd-12-vlan-sriov-two-ports-ctlplane-bonding/docker-images.yaml \
-e /home/stack/ospd-12-vlan-sriov-two-ports-ctlplane-bonding/network-environment.yaml \
--log-file overcloud_install.log &> overcloud_install.log
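Deployment output is captured in overcloud_install.log. One simple way to check progress from the same directory is to scan the log for Heat stack status strings; this is only a sketch:

```shell
# Sketch: look for the most recent Heat stack status in the deployment log.
log=overcloud_install.log
if [ -f "$log" ]; then
  grep -E 'CREATE_(COMPLETE|FAILED)' "$log" | tail -n 1
else
  echo "deployment log $log not found yet"
fi
```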

6.3. Creating a flavor and deploying an instance for SR-IOV

After you complete the SR-IOV configuration for your Red Hat OpenStack Platform deployment with NFV, create a flavor and deploy an instance by performing the following steps:

  1. Create an aggregate group and add a host to it for SR-IOV.

     # openstack aggregate create --zone=sriov sriov
     # openstack aggregate add host sriov compute-sriov-0.localdomain
    Note

    You should use host aggregates to separate CPU pinned instances from unpinned instances. Instances that do not use CPU pinning do not respect the resourcing requirements of instances that use CPU pinning.

  2. Create a flavor.

    # openstack flavor create compute --ram 4096 --disk 150 --vcpus 4

    compute is the flavor name, 4096 is the memory size in MB, 150 is the disk size in GB (default 0G), and 4 is the number of vCPUs.

  3. Set additional flavor properties.

    # openstack flavor set --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB compute

    compute is the flavor name and the remaining parameters set the other properties for the flavor.

  4. Create the network.

    # openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>
  5. Create the port.

    1. Use vnic-type direct to create an SR-IOV VF port:

      # openstack port create --network net1 --vnic-type direct sriov_port
    2. Use vnic-type direct-physical to create an SR-IOV PF port.

      # openstack port create --network net1 --vnic-type direct-physical sriov_port
  6. Deploy an instance.

    # openstack server create --flavor compute --availability-zone sriov --image rhel_7.3 --nic port-id=sriov_port sriov_vm

    Where:

    • compute is the flavor name or ID.
    • sriov is the availability zone for the server.
    • rhel_7.3 is the image (name or ID) used to create an instance.
    • sriov_port is the NIC on the server.
    • sriov_vm is the name of the instance.

You have now deployed an instance for the SR-IOV with NFV use case.
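To confirm the placement and state of the new instance, you can query it afterwards. The following is a sketch; it assumes the openstack CLI is available and uses the instance name from the steps above:

```shell
# Sketch: verify the instance's status and host after deployment.
vm=sriov_vm   # instance name from the steps above
if command -v openstack >/dev/null 2>&1; then
  openstack server show "$vm" -c status -c OS-EXT-SRV-ATTR:host
else
  echo "openstack CLI is not available in this environment"
fi
```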