Chapter 9. Configuring a heterogeneous compute cluster
This section describes how to deploy an SR-IOV Compute node and an OVS-DPDK Compute node in the same environment. This deployment uses custom roles for OVS-DPDK and SR-IOV with role-specific parameters defined in the network-environment.yaml file. The process to create and deploy a composable role includes:

- Define the SR-IOV and OVS-DPDK custom roles in a local copy of the roles_data.yaml file.
- Set the role-specific parameters for the SR-IOV role and the OVS-DPDK role in the network-environment.yaml file.
- Deploy the overcloud with this updated set of roles.
You must install and configure the undercloud before you can deploy a heterogeneous compute cluster in the overcloud. See the Director Installation and Usage Guide for details.
Ensure that you create OpenStack flavors that match these custom roles.
You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.
9.1. Naming conventions
We recommend that you follow a consistent naming convention when you use custom roles in your OpenStack deployment, especially with a heterogeneous compute cluster. This naming convention can assist you when creating the following files and configurations:
- instackenv.json - To differentiate between the compute nodes that use SR-IOV interfaces and the compute nodes that use DPDK interfaces:

  `"name": "computeovsdpdk-0"`

- roles_data.yaml - To differentiate between compute-based roles that support SR-IOV and compute-based roles that support DPDK:

  `ComputeOvsDpdk`

- network-environment.yaml - To match the custom role to the correct flavor name:

  `OvercloudComputeOvsDpdkFlavor: computeovsdpdk`

  and to include the correct custom role name for any role-specific parameters:

  `ComputeOvsDpdkParameters`

- nic-config file names - To differentiate the NIC yaml files for compute nodes that support SR-IOV from those for compute nodes that support DPDK interfaces.

- Flavor creation - To help you match a flavor and its capabilities:profile value to the appropriate bare metal node and custom role:

  ```
  # openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 4 computeovsdpdk
  # openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="computeovsdpdk" computeovsdpdk
  ```

- Bare metal node - To ensure that you match the bare metal node with the appropriate hardware and capabilities:profile value:

  ```
  # openstack baremetal node update computeovsdpdk-0 add properties/capabilities='profile:computeovsdpdk,boot_option:local'
  ```

The flavor name does not have to match the capabilities:profile value of the flavor, but the flavor's capabilities:profile value must match the profile value in the bare metal node's properties/capabilities. All three use computeovsdpdk in this example.

Ensure that all nodes used for a custom role and profile have the same CPU, RAM, and PCI hardware topology.

In this example, ComputeOvsDpdk and ComputeSriov are custom roles for compute nodes that enable DPDK or SR-IOV only on the nodes that have the appropriate NICs. The existing set of default roles provided by Red Hat OpenStack Platform is stored in the /home/stack/roles_data.yaml file.
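The name matching described above can be checked with a short script. This is a minimal sketch with the example values hard-coded; in a real deployment you would read them from `openstack flavor show` and `openstack baremetal node show`:

```shell
# Example values copied from this section; in practice, query them with
# `openstack flavor show` and `openstack baremetal node show`.
flavor_profile="computeovsdpdk"
node_caps="profile:computeovsdpdk,boot_option:local"

# The flavor's capabilities:profile value must appear in the node's
# properties/capabilities string.
case "$node_caps" in
  *"profile:${flavor_profile}"*) result="match" ;;
  *) result="mismatch" ;;
esac
echo "$result"
```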
9.2. Creating the SR-IOV and OVS-DPDK custom roles
Red Hat OpenStack Platform provides a set of default roles in the roles_data.yaml file. You can create your own roles_data.yaml file to support the roles that you need.
To create the custom roles to support SR-IOV and OVS-DPDK:
1. Create the ComputeSriov.yaml file in a local directory and add the definition of this role:

   ```yaml
   ###############################################################################
   # Role: ComputeSriov                                                          #
   ###############################################################################
   - name: ComputeSriov
     description: |
       Compute SR-IOV role
     CountDefault: 1
     networks:
       - InternalApi
       - Tenant
       - Storage
     HostnameFormatDefault: 'computesriov-%index%'
     disable_upgrade_deployment: True
     ServicesDefault:
       - OS::TripleO::Services::AuditD
       - OS::TripleO::Services::CACerts
       - OS::TripleO::Services::CephClient
       - OS::TripleO::Services::CephExternal
       - OS::TripleO::Services::CertmongerUser
       - OS::TripleO::Services::Collectd
       - OS::TripleO::Services::ComputeCeilometerAgent
       - OS::TripleO::Services::ComputeNeutronCorePlugin
       - OS::TripleO::Services::ComputeNeutronL3Agent
       - OS::TripleO::Services::ComputeNeutronMetadataAgent
       - OS::TripleO::Services::ComputeNeutronOvsAgent
       - OS::TripleO::Services::Docker
       - OS::TripleO::Services::FluentdClient
       - OS::TripleO::Services::Iscsid
       - OS::TripleO::Services::Kernel
       - OS::TripleO::Services::MySQLClient
       - OS::TripleO::Services::NeutronLinuxbridgeAgent
       - OS::TripleO::Services::NeutronSriovAgent
       - OS::TripleO::Services::NeutronVppAgent
       - OS::TripleO::Services::NovaCompute
       - OS::TripleO::Services::NovaLibvirt
       - OS::TripleO::Services::NovaMigrationTarget
       - OS::TripleO::Services::Ntp
       - OS::TripleO::Services::OpenDaylightOvs
       - OS::TripleO::Services::Securetty
       - OS::TripleO::Services::SensuClient
       - OS::TripleO::Services::Snmp
       - OS::TripleO::Services::Sshd
       - OS::TripleO::Services::Timezone
       - OS::TripleO::Services::TripleoFirewall
       - OS::TripleO::Services::TripleoPackages
       - OS::TripleO::Services::Tuned
       - OS::TripleO::Services::Vpp
       - OS::TripleO::Services::OVNController
   ```

2. Create the ComputeOvsDpdk.yaml file in a local directory and add the definition of this role:

   ```yaml
   ###############################################################################
   # Role: ComputeOvsDpdk                                                        #
   ###############################################################################
   - name: ComputeOvsDpdk
     description: |
       Compute OVS DPDK Role
     CountDefault: 1
     networks:
       - InternalApi
       - Tenant
       - Storage
     HostnameFormatDefault: 'computeovsdpdk-%index%'
     disable_upgrade_deployment: True
     ServicesDefault:
       - OS::TripleO::Services::AuditD
       - OS::TripleO::Services::CACerts
       - OS::TripleO::Services::CephClient
       - OS::TripleO::Services::CephExternal
       - OS::TripleO::Services::CertmongerUser
       - OS::TripleO::Services::Collectd
       - OS::TripleO::Services::ComputeCeilometerAgent
       - OS::TripleO::Services::ComputeNeutronCorePlugin
       - OS::TripleO::Services::ComputeNeutronL3Agent
       - OS::TripleO::Services::ComputeNeutronMetadataAgent
       - OS::TripleO::Services::ComputeNeutronOvsDpdk
       - OS::TripleO::Services::Docker
       - OS::TripleO::Services::FluentdClient
       - OS::TripleO::Services::Iscsid
       - OS::TripleO::Services::Kernel
       - OS::TripleO::Services::MySQLClient
       - OS::TripleO::Services::NovaCompute
       - OS::TripleO::Services::NovaLibvirt
       - OS::TripleO::Services::Ntp
       - OS::TripleO::Services::OpenDaylightOvs
       - OS::TripleO::Services::Securetty
       - OS::TripleO::Services::SensuClient
       - OS::TripleO::Services::Snmp
       - OS::TripleO::Services::Sshd
       - OS::TripleO::Services::Timezone
       - OS::TripleO::Services::TripleoFirewall
       - OS::TripleO::Services::TripleoPackages
   ```

3. Generate roles_data.yaml for the ComputeSriov and ComputeOvsDpdk roles, along with any other roles you need for your deployment:

   ```
   # openstack overcloud roles generate --roles-path templates/openstack-tripleo-heat-templates/roles -o roles_data.yaml Controller ComputeSriov ComputeOvsDpdk
   ```
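Before deploying, it can be worth confirming that both custom role names made it into the generated roles_data.yaml. A minimal sketch, where a stand-in file takes the place of the generated one:

```shell
# Stand-in for the generated roles_data.yaml; only the top-level role
# names matter for this check.
cat > /tmp/roles_data.yaml <<'EOF'
- name: Controller
  description: Controller role
- name: ComputeSriov
  description: Compute SR-IOV role
- name: ComputeOvsDpdk
  description: Compute OVS DPDK Role
EOF

# Extract the role names and verify the two custom roles are present.
roles=$(sed -n 's/^- name: //p' /tmp/roles_data.yaml)
echo "$roles"
```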
9.3. Configuring tuned for CPU affinity
This example uses the sample post-install.yaml file.
Set the tuned configuration to enable CPU affinity:

```yaml
resources:
  ExtraDeployments:
    type: OS::Heat::StructuredDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
      actions: ['CREATE','UPDATE']

  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        set -x
        function tuned_service_dependency() {
          tuned_service=/usr/lib/systemd/system/tuned.service
          grep -q "network.target" $tuned_service
          if [ "$?" -eq 0 ]; then
            sed -i '/After=.*/s/network.target//g' $tuned_service
          fi
          grep -q "Before=.*network.target" $tuned_service
          if [ ! "$?" -eq 0 ]; then
            grep -q "Before=.*" $tuned_service
            if [ "$?" -eq 0 ]; then
              sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
            else
              sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
            fi
          fi
        }

        if hiera -c /etc/puppet/hiera.yaml service_names | grep -q neutron_ovs_dpdk_agent; then
          tuned_service_dependency
        fi
```
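The effect of the tuned_service_dependency function is easiest to see by running the same sed edits against a throwaway copy of the unit file. In this sketch, the After= line is an assumption about the stock unit's contents:

```shell
# Throwaway copy of a tuned.service unit; the post-install script edits
# /usr/lib/systemd/system/tuned.service in place instead.
tuned_service=/tmp/tuned.service
cat > "$tuned_service" <<'EOF'
[Unit]
Description=Dynamic System Tuning Daemon
After=systemd-sysctl.service network.target

[Service]
ExecStart=/usr/sbin/tuned
EOF

# Same edits as the post-install script: drop network.target from After=,
# then order tuned Before= network.target and openvswitch.service.
grep -q "network.target" "$tuned_service" && \
  sed -i '/After=.*/s/network.target//g' "$tuned_service"
if ! grep -q "Before=.*network.target" "$tuned_service"; then
  if grep -q "Before=.*" "$tuned_service"; then
    sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' "$tuned_service"
  else
    sed -i '/After/i Before=network.target openvswitch.service' "$tuned_service"
  fi
fi
before_line=$(grep '^Before=' "$tuned_service")
echo "$before_line"
```

The result is that tuned starts before the network and Open vSwitch, so CPU partitioning is in place before PMD threads spawn.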
9.4. Defining the SR-IOV and OVS-DPDK role-specific parameters
You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Planning your OVS-DPDK Deployment and Section 8.1, “Deriving DPDK parameters with workflows” for details.
To configure SR-IOV and OVS-DPDK role-specific parameters in the network-environment.yaml file:
Add the resource mapping for the OVS-DPDK and SR-IOV services to the network-environment.yaml file, along with the network configuration for these nodes:

```yaml
resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::ComputeSriov::Net::SoftwareConfig: nic-configs/compute-sriov.yaml
  OS::TripleO::ComputeOvsDpdk::Net::SoftwareConfig: nic-configs/compute-ovs-dpdk.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeExtraConfigPost: post-install.yaml
```

Specify the flavors for each role:
```yaml
OvercloudControlFlavor: controller
OvercloudComputeOvsDpdkFlavor: computeovsdpdk
OvercloudComputeSriovFlavor: computesriov
```
Specify the number of nodes to deploy for each role:
```yaml
# Number of nodes to deploy.
ControllerCount: 1
ComputeOvsDpdkCount: 1
ComputeSriovCount: 1
```
Configure the SR-IOV parameters:
Configure the Compute pci_passthrough_whitelist parameter, and set devname for the SR-IOV interface. The whitelist sets the PCI devices available to instances.

```yaml
NovaPCIPassthrough:
  - devname: "ens2f0"
    physical_network: "tenant"
  - devname: "ens2f1"
    physical_network: "tenant"
```

Specify the physical network and SR-IOV interface mappings in the format PHYSICAL_NETWORK:PHYSICAL_DEVICE. All physical networks listed in network_vlan_ranges on the server should have mappings to the appropriate interfaces on each agent.

```yaml
NeutronPhysicalDevMappings: "tenant:ens2f0,tenant:ens2f1"
```
This example uses tenant as the physical_network name.

Provide the number of virtual functions (VFs) to reserve for each SR-IOV interface:

```yaml
NeutronSriovNumVFs: "ens2f0:5,ens2f1:5"
```

This example reserves 5 VFs for each SR-IOV interface.
Note: Red Hat OpenStack Platform supports the number of VFs that the NIC vendor supports. See Deployment Limits for Red Hat OpenStack Platform for other related details.
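The whitelist and the device mappings above have to stay consistent with each other. A small sketch of that cross-check, using the example values from this section:

```shell
# Example values from this section.
whitelist_physnets="tenant tenant"              # physical_network entries in NovaPCIPassthrough
dev_mappings="tenant:ens2f0,tenant:ens2f1"      # NeutronPhysicalDevMappings

# Every physical_network named in the whitelist needs a device mapping.
missing=""
for net in $whitelist_physnets; do
  case "$dev_mappings" in
    *"$net:"*) ;;                               # mapped to at least one device
    *) missing="$missing $net" ;;
  esac
done
[ -z "$missing" ] && echo "all physical networks mapped"
```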
Configure the role-specific parameters for SR-IOV:

```yaml
# SR-IOV compute node.
ComputeSriovParameters:
  KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on
  TunedProfileName: "cpu-partitioning"
  IsolCpusList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
  NovaVcpuPinSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31']
  NovaReservedHostMemory: 4096
```

Configure the role-specific parameters for OVS-DPDK:
```yaml
# DPDK compute node.
ComputeOvsDpdkParameters:
  KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on
  TunedProfileName: "cpu-partitioning"
  IsolCpusList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
  NovaVcpuPinSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31']
  NovaReservedHostMemory: 4096
  OvsDpdkSocketMemory: "1024,1024"
  OvsDpdkMemoryChannels: "4"
  OvsDpdkCoreList: "0,16,8,24"
  OvsPmdCoreList: "1,17,9,25"
```

Note: To avoid failures when creating guest instances, you must assign at least one CPU, with its sibling thread, on each NUMA node to the DPDK PMD, whether or not a DPDK NIC is present on that NUMA node.
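The PMD, DPDK host, and vCPU pinning lists above are typically kept disjoint, so that PMD threads get dedicated cores. A sketch of that check against the example values:

```shell
# Example CPU lists from ComputeOvsDpdkParameters above.
pmd_cores="1,17,9,25"     # OvsPmdCoreList
host_cores="0,16,8,24"    # OvsDpdkCoreList
vcpu_cores="2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31"  # NovaVcpuPinSet

# Report any PMD core that also appears in the host or vCPU pinning lists.
overlap=""
for c in $(echo "$pmd_cores" | tr ',' ' '); do
  case ",$host_cores," in *,"$c",*) overlap="$overlap $c" ;; esac
  case ",$vcpu_cores," in *,"$c",*) overlap="$overlap $c" ;; esac
done
[ -z "$overlap" ] && echo "PMD cores are dedicated" || echo "overlapping cores:$overlap"
```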
Set the DHCP Metadata parameters:
```yaml
# DHCP provides a metadata route to the VM.
NeutronEnableIsolatedMetadata: true
# DHCP always provides a metadata route to the VM.
NeutronEnableForceMetadata: true
```
Configure the remainder of the network-environment.yaml file to override the default parameters from the neutron-ovs-dpdk-agent.yaml and neutron-sriov-agent.yaml files, as needed for your OpenStack deployment.
9.5. Configuring SR-IOV and DPDK compute nodes
To support an SR-IOV Compute node and an OVS-DPDK Compute node:
Create the compute-sriov.yaml file to support SR-IOV interfaces.

Create the control plane Linux bond for an isolated network:

```yaml
- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  members:
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4
```

Assign VLANs to this Linux bond:

```yaml
- type: vlan
  vlan_id:
    get_param: InternalApiNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet
- type: vlan
  vlan_id:
    get_param: TenantNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet
- type: vlan
  vlan_id:
    get_param: StorageNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: StorageIpSubnet
```

Create the SR-IOV interfaces to the Controller:

```yaml
- type: interface
  name: ens2f0
  mtu: 9000
  use_dhcp: false
  defroute: false
  nm_controlled: true
  hotplug: true
- type: interface
  name: ens2f1
  mtu: 9000
  use_dhcp: false
  defroute: false
  nm_controlled: true
  hotplug: true
```
Create the compute-ovsdpdk.yaml file to support DPDK interfaces.

Create the control plane Linux bond for an isolated network:

```yaml
- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  members:
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4
```

Assign VLANs to this Linux bond:

```yaml
- type: vlan
  vlan_id:
    get_param: InternalApiNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet
- type: vlan
  vlan_id:
    get_param: TenantNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet
- type: vlan
  vlan_id:
    get_param: StorageNetworkVlanID
  device: bond_api
  addresses:
    - ip_netmask:
        get_param: StorageIpSubnet
```

Set the bridge with an OVS-DPDK bond to link to the Controller:

```yaml
- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      mtu: 9000
      rx_queue: 2
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          mtu: 9000
          members:
            - type: interface
              name: nic5
        - type: ovs_dpdk_port
          name: dpdk1
          mtu: 9000
          members:
            - type: interface
              name: nic6
```

Note: To include multiple DPDK devices, repeat the type section for each DPDK device you want to add.

Note: When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.
9.6. Deploying the overcloud
The following example defines the openstack overcloud deploy Bash script that uses composable roles:
```bash
#!/bin/bash
openstack overcloud deploy \
  --templates \
  -r /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
  -e /home/stack/ospd-12-vlan-sriov-two-ports-ctlplane-bonding/docker-images.yaml \
  -e /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/network-environment.yaml
```
Where:

- /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/roles_data.yaml is the location of the updated roles_data.yaml file, which defines the OVS-DPDK and SR-IOV composable roles.
- /home/stack/ospd-12-sriov-dpdk-heterogeneous-cluster/network-environment.yaml contains the role-specific parameters for the SR-IOV Compute node and the OVS-DPDK Compute node.
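A missing -e file makes the deploy fail after it has already started, so a pre-flight check can be wrapped around the script. A sketch, where the helper name is made up and the demonstration uses throwaway files standing in for the real templates:

```shell
# Hypothetical helper: verify every template/environment file exists before
# invoking `openstack overcloud deploy`.
check_env_files() {
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f" >&2; return 1; }
  done
  echo "all environment files present"
}

# Demonstration against throwaway files standing in for the real templates.
demo_dir=$(mktemp -d)
touch "$demo_dir/roles_data.yaml" "$demo_dir/network-environment.yaml"
check_env_files "$demo_dir/roles_data.yaml" "$demo_dir/network-environment.yaml"
```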
