Chapter 10. Configuring SR-IOV and DPDK interfaces on the same compute node
This section describes how to deploy SR-IOV and DPDK interfaces on the same Compute node. This deployment uses a custom role for OVS-DPDK and SR-IOV with role-specific parameters defined in the network-environment.yaml file. The process to create and deploy a custom role includes:
- Define the custom role to support SR-IOV and DPDK interfaces on the same Compute node.
- Set the role-specific parameters for the SR-IOV and OVS-DPDK roles in the network-environment.yaml file.
- Configure the compute.yaml file with an SR-IOV interface and a DPDK interface.
- Deploy the overcloud with this updated set of roles.
- Create the appropriate OpenStack flavor, networks, and ports to support these interface types.
You must install and configure the undercloud before you can deploy the Compute node in the overcloud. See the Director Installation and Usage Guide for details.
Ensure that you create an OpenStack flavor that matches this custom role.
You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.
In this example, ComputeOvsDpdkSriov is a custom role for Compute nodes that enables DPDK and SR-IOV only on the nodes that have the appropriate NICs. The existing set of default roles provided by Red Hat OpenStack Platform is stored in the /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file.
10.1. Creating the ComputeOvsDpdkSriov composable role
Red Hat OpenStack Platform provides a set of default roles in the roles_data.yaml file. You can create your own roles_data.yaml file to support the roles you need.
To create the ComputeOvsDpdkSriov composable role to support SR-IOV and DPDK interfaces on the same Compute node:
Create the ComputeOvsDpdkSriov.yaml file in a local directory and add the definition of this role:

###############################################################################
# Role: ComputeOvsDpdkSriov                                                   #
###############################################################################
- name: ComputeOvsDpdkSriov
  description: |
    Compute OvS DPDK Sriov Role
  CountDefault: 1
  networks:
    - InternalApi
    - Tenant
    - Storage
  HostnameFormatDefault: 'computeovsdpdksriov-%index%'
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::Collectd
    - OS::TripleO::Services::ComputeCeilometerAgent
    - OS::TripleO::Services::ComputeNeutronCorePlugin
    - OS::TripleO::Services::ComputeNeutronL3Agent
    - OS::TripleO::Services::ComputeNeutronMetadataAgent
    - OS::TripleO::Services::ComputeNeutronOvsDpdk
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::Iscsid
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::NovaMigrationTarget
    - OS::TripleO::Services::NeutronLinuxbridgeAgent
    - OS::TripleO::Services::NeutronSriovAgent
    - OS::TripleO::Services::NeutronVppAgent
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::ContainersLogrotateCrond
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::Securetty
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::Tuned
    - OS::TripleO::Services::Vpp
    - OS::TripleO::Services::OVNController

Generate roles_data.yaml for the ComputeOvsDpdkSriov role and any other roles you need for your deployment:

# openstack overcloud roles generate --roles-path templates/openstack-tripleo-heat-templates/roles -o roles_data.yaml Controller ComputeOvsDpdkSriov
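To confirm that the generated roles_data.yaml file contains the new role, you can search for it. This quick check is illustrative, not part of the formal procedure:

# grep -A2 'name: ComputeOvsDpdkSriov' roles_data.yaml
- name: ComputeOvsDpdkSriov
  description: |
    Compute OvS DPDK Sriov Role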
10.2. Configuring tuned for CPU affinity
This example uses the sample post-install.yaml file.
Set the tuned configuration to enable CPU affinity:

ExtraDeployments:
  type: OS::Heat::StructuredDeployments
  properties:
    servers: {get_param: servers}
    config: {get_resource: ExtraConfig}
    actions: ['CREATE','UPDATE']

ExtraConfig:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    config: |
      #!/bin/bash
      set -x
      function tuned_service_dependency() {
        tuned_service=/usr/lib/systemd/system/tuned.service
        grep -q "network.target" $tuned_service
        if [ "$?" -eq 0 ]; then
          sed -i '/After=.*/s/network.target//g' $tuned_service
        fi
        grep -q "Before=.*network.target" $tuned_service
        if [ ! "$?" -eq 0 ]; then
          grep -q "Before=.*" $tuned_service
          if [ "$?" -eq 0 ]; then
            sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
          else
            sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
          fi
        fi
      }
      if hiera -c /etc/puppet/hiera.yaml service_names | grep -q neutron_ovs_dpdk_agent; then
        tuned_service_dependency
      fi
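On nodes that run the neutron_ovs_dpdk_agent service, this script edits the tuned.service unit so that tuned starts before the network and Open vSwitch services. After the deployment, you can confirm the resulting ordering on the Compute node; the output shown here is illustrative:

# grep '^Before=' /usr/lib/systemd/system/tuned.service
Before=network.target openvswitch.service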
10.3. Defining the SR-IOV and OVS-DPDK role-specific parameters
Modify the network-environment.yaml file to configure SR-IOV and OVS-DPDK role-specific parameters:
Add the resource mapping for the OVS-DPDK and SR-IOV services to the network-environment.yaml file, along with the network configuration for these nodes:

resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the defaults.
  OS::TripleO::ComputeOvsDpdkSriov::Net::SoftwareConfig: nic-configs/computeovsdpdksriov.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeExtraConfigPost: post-install.yaml

Define the flavors:

OvercloudControllerFlavor: controller
OvercloudComputeOvsDpdkSriovFlavor: compute
Configure the role-specific parameters for SR-IOV:
#SR-IOV params
NeutronMechanismDrivers: ['openvswitch','sriovnicswitch']
NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']
NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter"]
NeutronSupportedPCIVendorDevs: ['8086:154d', '8086:10ed']
NovaPCIPassthrough:
  - devname: "ens2f1"
    physical_network: "tenant"
NeutronPhysicalDevMappings: "tenant:ens2f1"
NeutronSriovNumVFs: "ens2f1:5"

Configure the role-specific parameters for OVS-DPDK:
##########################
# OVS DPDK configuration #
##########################
ComputeOvsDpdkSriovParameters:
  KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
  TunedProfileName: "cpu-partitioning"
  IsolCpusList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
  NovaVcpuPinSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31']
  NovaReservedHostMemory: 4096
  OvsDpdkSocketMemory: "1024,1024"
  OvsDpdkMemoryChannels: "4"
  OvsDpdkCoreList: "0,16,8,24"
  OvsPmdCoreList: "1,17,9,25"

# MTU global configuration
NeutronGlobalPhysnetMtu: 9000
# DHCP provides a metadata route to the VM.
NeutronEnableIsolatedMetadata: true
# DHCP always provides a metadata route to the VM.
NeutronEnableForceMetadata: true
# Configure the class name of the firewall driver to use for implementing security groups.
NeutronOVSFirewallDriver: openvswitch

Note: To avoid failures when creating guest instances, you must assign at least one CPU (with its sibling thread) on each NUMA node to the DPDK PMD, whether or not a DPDK NIC is present on that NUMA node.
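When you choose the CPUs for OvsPmdCoreList, pair each physical core with its sibling thread. In the host topology assumed in this example, CPUs 1 and 17, and CPUs 9 and 25, are sibling pairs, which is why both members of each pair appear in OvsPmdCoreList. You can confirm sibling pairs through sysfs:

# cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list
1,17
# cat /sys/devices/system/cpu/cpu9/topology/thread_siblings_list
9,25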
Configure the remainder of the network-environment.yaml file to override the default parameters from the neutron-ovs-dpdk-agent.yaml and neutron-sriov-agent.yaml files, as needed for your OpenStack deployment.
See Planning your OVS-DPDK Deployment for details on how to determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK.
10.4. Configuring the Compute node for SR-IOV and DPDK interfaces
This example uses the sample compute.yaml file to create the computeovsdpdksriov.yaml file to support SR-IOV and DPDK interfaces.
Create the control plane Linux bond for an isolated network:
- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  members:
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4

Assign VLANs to this Linux bond:
- type: vlan
  vlan_id:
    get_param: InternalApiNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet
- type: vlan
  vlan_id:
    get_param: TenantNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet
- type: vlan
  vlan_id:
    get_param: StorageNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: StorageIpSubnet

Set a bridge with a DPDK port to link to the controller:
- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
    - type: ovs_dpdk_port
      name: dpdk0
      mtu: 9000
      rx_queue: 2
      members:
        - type: interface
          name: nic5

Note: To include multiple DPDK devices, repeat the type code section for each DPDK device you want to add.

Note: When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.

Create the SR-IOV interface to the Controller:
- type: interface
  name: ens2f1
  mtu: 9000
  use_dhcp: false
  defroute: false
  nm_controlled: true
  hotplug: true
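After the overcloud deployment completes, you can log in to the Compute node and confirm that the number of VFs defined by NeutronSriovNumVFs (five in this example) exists on the SR-IOV interface. This is an illustrative sanity check, not part of the formal procedure:

# cat /sys/class/net/ens2f1/device/sriov_numvfs
5
# ip link show ens2f1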
10.5. Deploying the overcloud
The following example defines a Bash script that runs the openstack overcloud deploy command with the composable roles:
#!/bin/bash
openstack overcloud deploy \
--templates \
-r /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
-e /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/docker-images.yaml \
-e /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/network-environment.yaml
Where:
- /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/roles_data.yaml is the location of the updated roles_data.yaml file, which defines the ComputeOvsDpdkSriov custom role.
- /home/stack/ospd-12-vlan-dpdk-sriov-two-ports-ctlplane-bonding/network-environment.yaml contains the role-specific parameters for the SR-IOV and OVS-DPDK interfaces.
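Save the script to a file, make it executable, and run it as the stack user on the undercloud. The file name overcloud-deploy.sh is only an example:

$ chmod +x overcloud-deploy.sh
$ ./overcloud-deploy.sh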
10.6. Creating a flavor and deploying an instance with SR-IOV and DPDK interfaces
After you have configured SR-IOV and DPDK interfaces on the same Compute node, create a flavor and deploy an instance by performing the following steps:
You should use host aggregates to separate CPU-pinned instances from unpinned instances. Instances that do not use CPU pinning do not respect the resource requirements of instances that use CPU pinning.
Create a flavor:
# openstack flavor create --vcpus 6 --ram 4096 --disk 40 compute
Where:

- compute is the flavor name.
- 6 is the number of vCPUs.
- 4096 is the memory size in MB.
- 40 is the disk size in GB (default: 0 GB).
Set the flavor for large pages:
# openstack flavor set compute --property hw:mem_page_size=1GB
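If you want the instance vCPUs pinned to dedicated host CPUs, which works best with the host-aggregate separation recommended above, you can also set the CPU policy on the flavor and group the pinned hosts in an aggregate. This is a sketch only: the aggregate name agg-pinned, the pinned property, and the host name are illustrative, and matching on aggregate_instance_extra_specs also requires the AggregateInstanceExtraSpecsFilter scheduler filter, which is not in the filter list shown earlier:

# openstack aggregate create --property pinned=true agg-pinned
# openstack aggregate add host agg-pinned overcloud-computeovsdpdksriov-0
# openstack flavor set compute --property hw:cpu_policy=dedicated --property aggregate_instance_extra_specs:pinned=true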
Create the networks for SR-IOV and DPDK:
# openstack network create --name net-dpdk
# openstack network create --name net-sriov
# openstack subnet create --subnet-range <cidr/prefix> --network net-dpdk --gateway <gateway> net-dpdk-subnet
# openstack subnet create --subnet-range <cidr/prefix> --network net-sriov --gateway <gateway> net-sriov-subnet
Create the SR-IOV port.
Use vnic-type direct to create an SR-IOV VF port:

# openstack port create --network net-sriov --vnic-type direct sriov_port

Use vnic-type direct-physical to create an SR-IOV PF port:

# openstack port create --network net-sriov --vnic-type direct-physical sriov_port
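To confirm that the port was created with the intended VNIC type, you can inspect its binding details and check the binding_vnic_type field (direct or direct-physical). This is an illustrative check, and field names can vary slightly between client versions:

# openstack port show sriov_port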
Deploy an instance:
# openstack server create --flavor compute --image rhel_7.3 --nic port-id=sriov_port --nic net-id=net-dpdk vm1
Where:
- compute is the flavor name or ID.
- rhel_7.3 is the image (name or ID) used to create the instance.
- sriov_port is the SR-IOV NIC on the Compute node.
- net-dpdk is the DPDK network.
- vm1 is the name of the instance.
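To verify the deployment, you can confirm that the instance reaches the ACTIVE state and is scheduled onto the ComputeOvsDpdkSriov node. These checks are illustrative rather than part of the formal procedure:

# openstack server show vm1
# openstack server list --long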
You have now deployed an instance that uses an SR-IOV interface and a DPDK interface on the same Compute node.
