Chapter 3. Configure SR-IOV Support for Virtual Networking
This section describes how to configure Single Root Input/Output Virtualization (SR-IOV) for Red Hat OpenStack Platform.
3.1. Understanding SR-IOV Configurable Parameters
You need to update the network-environment.yaml file to include parameters for kernel arguments, the SR-IOV driver, PCI passthrough, and so on. You must also update the compute.yaml file to include the SR-IOV interface parameters, and run the overcloud_deploy.sh script to deploy the overcloud with the SR-IOV parameters.

This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case. See the Network Functions Virtualization Product Guide and the Network Functions Virtualization Planning and Prerequisite Guide to understand the hardware and configuration options.
3.2. Configure Single-NIC SR-IOV Composable Role with VLAN Tunnelling
This section describes how to configure a composable role for SR-IOV with VLAN tunnelling for your OpenStack environment. The process to create and deploy a composable role includes:
- Define the new role in a local copy of the roles_data.yaml file.
- Modify the network-environment.yaml file to include this new role.
- Deploy the overcloud with this updated set of roles.
In this example, ComputeSriov is a composable role for Compute nodes that enables SR-IOV only on the nodes that have the SR-IOV NICs. The existing set of default roles provided by Red Hat OpenStack Platform is stored in the /home/stack/roles_data.yaml file.
3.2.1. Modify roles_data.yaml to Create an SR-IOV Composable Role
Copy the roles_data.yaml file to your /home/stack/templates directory and add the new ComputeSriov role.
- name: ComputeSriov
  CountDefault: 1
  HostnameFormatDefault: compute-sriov-%index%
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::ComputeNeutronCorePlugin
    - OS::TripleO::Services::ComputeNeutronOvsAgent
    - OS::TripleO::Services::ComputeCeilometerAgent
    - OS::TripleO::Services::ComputeNeutronL3Agent
    - OS::TripleO::Services::ComputeNeutronMetadataAgent
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::NeutronSriovAgent
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::Collectd
3.2.2. Modify first-boot.yaml
Set the configuration to install tuned for CPU affinity.

  compute_kernel_args:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: only %index% and %stackname% are the variables in Host name format
              FORMAT=$(echo $FORMAT | sed 's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              tuned_conf_path="/etc/tuned/cpu-partitioning-variables.conf"
              if [ -n "$TUNED_CORES" ]; then
                grep -q "^isolated_cores" $tuned_conf_path
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^isolated_cores=.*/isolated_cores=$TUNED_CORES/' $tuned_conf_path
                else
                  echo "isolated_cores=$TUNED_CORES" >> $tuned_conf_path
                fi
                tuned-adm profile cpu-partitioning
              fi
              sed 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 $KERNEL_ARGS isolcpus=$TUNED_CORES"/g' -i /etc/default/grub ;
              grub2-mkconfig -o /etc/grub2.cfg
              reboot
            fi
          params:
            $KERNEL_ARGS: {get_param: ComputeKernelArgs}
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}
            $TUNED_CORES: {get_param: HostIsolatedCoreList}
3.2.3. Modify post-install.yaml
Set the tuned configuration to enable CPU affinity.

  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: only %index% and %stackname% are the variables in Host name format
              FORMAT=$(echo $FORMAT | sed 's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              tuned_service=/usr/lib/systemd/system/tuned.service
              grep -q "network.target" $tuned_service
              if [ "$?" -eq 0 ]; then
                sed -i '/After=.*/s/network.target//g' $tuned_service
              fi
              grep -q "Before=.*network.target" $tuned_service
              if [ ! "$?" -eq 0 ]; then
                grep -q "Before=.*" $tuned_service
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
                else
                  sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
                fi
              fi
              systemctl daemon-reload
            fi
          params:
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}
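After the overcloud is deployed, you can check that this unit reordering took effect on a Compute node. A minimal check, assuming SSH access to the node (the service path is the one edited by the script above):

  # Confirm tuned is now ordered before the network and Open vSwitch services
  grep -E '^(Before|After)=' /usr/lib/systemd/system/tuned.service
  # A Before= line listing network.target and openvswitch.service indicates
  # that the post-install script ran as expected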
3.2.4. Modify network-environment.yaml
Add the custom resources for SR-IOV, including the ComputeSriov role, under resource_registry.

  resource_registry:
    # Specify the relative/absolute path to the config files you want to use to override the defaults.
    OS::TripleO::ComputeSriov::Net::SoftwareConfig: nic-configs/compute-sriov.yaml
    OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
    OS::TripleO::Services::NeutronSriovAgent: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-sriov-agent.yaml
    OS::TripleO::NodeUserData: first-boot.yaml
    OS::TripleO::NodeExtraConfigPost: post-install.yaml
Under parameter_defaults, disable the tunnel type (set the value to ""), and set the network type to vlan.

  NeutronTunnelTypes: ''
  NeutronNetworkType: 'vlan'
Under parameter_defaults, map the physical network to the logical bridge.

  NeutronBridgeMappings: 'tenant:br-link'
Under parameter_defaults, set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range.

  NeutronNetworkVLANRanges: 'tenant:420:420,tenant:421:421'
This example sets the VLAN ranges on the physical network (tenant).

Under parameter_defaults, set the SR-IOV configuration parameters.

Enable the SR-IOV mechanism driver (sriovnicswitch).

  NeutronMechanismDrivers: "openvswitch,sriovnicswitch"
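After deployment, you can confirm that the driver list reached the controllers. A quick check, assuming the default (non-containerized) ML2 configuration path on a controller node:

  # Both mechanism drivers should appear in the ML2 plugin configuration
  grep ^mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
  # Expected to list both openvswitch and sriovnicswitch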
Configure the Compute pci_passthrough_whitelist parameter, and set devname for the SR-IOV interface. The whitelist sets the PCI devices available to instances.

  NovaPCIPassthrough:
    - devname: "ens2f1"
      physical_network: "tenant"

Specify the physical network and SR-IOV interface in the format PHYSICAL_NETWORK:PHYSICAL_DEVICE.

All physical networks listed in network_vlan_ranges on the server should have mappings to the appropriate interfaces on each agent.

  NeutronPhysicalDevMappings: "tenant:ens2f1"
This example uses tenant as the physical_network name.

Provide the number of Virtual Functions (VFs) to be reserved for each SR-IOV interface.

  NeutronSriovNumVFs: "ens2f1:5"

This example reserves 5 VFs for the SR-IOV interface.

Note: Currently, Red Hat OpenStack Platform with NFV supports 30 or fewer VFs.
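After deployment, you can confirm the whitelist and the VF allocation on the Compute node. A minimal sketch, assuming the interface name from this example (ens2f1):

  # Confirm the PCI whitelist rendered into nova.conf
  grep pci_passthrough_whitelist /etc/nova/nova.conf

  # VFs currently configured on the PF, and the maximum the NIC supports
  cat /sys/class/net/ens2f1/device/sriov_numvfs
  cat /sys/class/net/ens2f1/device/sriov_totalvfs

  # List the PF together with the MAC and VLAN state of each VF
  ip link show ens2f1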
Under parameter_defaults, list the applicable filters. The Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.

  NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']
Under parameter_defaults, reserve the RAM for the host processes.

  NovaReservedHostMemory: 2048
Under parameter_defaults, set a comma-separated list or range of physical CPU cores to reserve for virtual machine processes.

  NovaVcpuPinSet: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
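After deployment, you can verify that both reservations rendered into the Compute node's nova.conf. A quick check, assuming the values above:

  # CPU cores available for instance vCPUs, and RAM reserved for the host
  grep ^vcpu_pin_set /etc/nova/nova.conf
  grep ^reserved_host_memory_mb /etc/nova/nova.conf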
Under parameter_defaults, define the ComputeKernelArgs parameters to be included in the default grub file at first boot.

  ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"

Note: You need to add hw:mem_page_size=1GB to the flavor you associate with the SR-IOV instance. If you do not do this, the instance does not get a DHCP allocation.

Under parameter_defaults, set a list or range of physical CPU cores to be appended to the tuned cpu-partitioning profile and isolated from the host.

  HostIsolatedCoreList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
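After the first-boot script reboots the node, you can confirm that the kernel arguments and core isolation took effect. A minimal check on the Compute node:

  # The huge page and IOMMU arguments, plus isolcpus, should appear here
  cat /proc/cmdline

  # Confirm the 1 GB huge pages were allocated
  grep Huge /proc/meminfo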
3.2.5. Modify controller.yaml
Create the interface for an isolated network.
  - type: interface
    name: ens1f1
    use_dhcp: false
    dns_servers: {get_param: DnsServers}

Assign VLANs to this interface.

  - type: vlan
    vlan_id: {get_param: InternalApiNetworkVlanID}
    device: ens1f1
    addresses:
      - ip_netmask: {get_param: InternalApiIpSubnet}
  - type: vlan
    vlan_id: {get_param: TenantNetworkVlanID}
    device: ens1f1
    addresses:
      - ip_netmask: {get_param: TenantIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageNetworkVlanID}
    device: bond_api
    addresses:
      - ip_netmask: {get_param: StorageIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageMgmtNetworkVlanID}
    device: bond_api
    addresses:
      - ip_netmask: {get_param: StorageMgmtIpSubnet}
  - type: vlan
    vlan_id: {get_param: ExternalNetworkVlanID}
    device: ens1f1
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}
    routes:
      - default: true
        next_hop: {get_param: ExternalInterfaceDefaultRoute}

Create the OVS bridge to the Compute node.

  - type: ovs_bridge
    name: br-link
    use_dhcp: false
    members:
      - type: interface
        name: ens2f1
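After deployment, you can verify the bridge wiring on the controller. A quick check, assuming the bridge and interface names above:

  # br-link should exist with ens2f1 attached as a port
  sudo ovs-vsctl list-ports br-link
  sudo ovs-vsctl show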
3.2.6. Modify compute-sriov.yaml
Create compute-sriov.yaml from the default compute.yaml file. This is the file that controls the parameters for the Compute nodes that use the ComputeSriov composable role.
Create the interface for an isolated network.
  - type: interface
    name: ens1f1
    use_dhcp: false
    dns_servers: {get_param: DnsServers}

Assign VLANs to this interface.

  - type: vlan
    vlan_id: {get_param: InternalApiNetworkVlanID}
    device: ens1f1
    addresses:
      - ip_netmask: {get_param: InternalApiIpSubnet}
  - type: vlan
    vlan_id: {get_param: TenantNetworkVlanID}
    device: ens1f1
    addresses:
      - ip_netmask: {get_param: TenantIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageNetworkVlanID}
    device: ens1f1
    addresses:
      - ip_netmask: {get_param: StorageIpSubnet}

Create an interface to the controller node.

  - type: interface
    name: ens2f1
    use_dhcp: false
    defroute: false
3.2.7. Run the overcloud_deploy.sh Script
The following example defines the openstack overcloud deploy Bash script that uses composable roles:
#!/bin/bash

openstack overcloud deploy \
--templates \
-r /home/stack/ospd-11-vlan-sriov-single-port-composable-roles/roles-data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/ospd-11-vlan-sriov-single-port-composable-roles/network-environment.yaml
/home/stack/ospd-11-vlan-sriov-single-port-composable-roles/roles-data.yaml is the location of the updated roles_data.yaml file, which defines the SR-IOV composable role.
Reboot the Compute nodes to enforce the tuned profile after the overcloud is deployed.
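You can then confirm that the new role deployed and the tuned profile activated. A minimal sketch, run from the undercloud and from a Compute node respectively, assuming the hostname format from this example:

  # On the undercloud: the compute-sriov nodes should be listed as ACTIVE
  source ~/stackrc
  openstack server list

  # On a compute-sriov node, after the reboot:
  tuned-adm active
  # Expected: Current active profile: cpu-partitioning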
3.3. Configure Two-Port SR-IOV with HCI
This section describes how to configure a composable role for SR-IOV with VLAN tunnelling for your OpenStack HCI environment. The process includes:
- Defining the new role in a local copy of the roles_data.yaml file.
- Defining the OpenStack flavor to use this new role.
- Modifying the network-environment.yaml file to include this new role.
- Deploying the overcloud with this updated set of roles.
In this example, the Compute role is modified to enable SR-IOV and Ceph OSD only on the nodes that have the appropriate hardware and NICs. The existing set of default roles provided by Red Hat OpenStack Platform is stored in the /home/stack/roles_data.yaml file.
See the Hyper-Converged Infrastructure Guide for complete details on configuring HCI. The SR-IOV configuration described here is complementary to the HCI configuration in that guide.
3.3.1. Modify custom-roles.yaml to Create an SR-IOV Composable Role
Copy the roles_data.yaml file to your /home/stack/templates directory as custom-roles.yaml and modify the Compute role.
- name: Compute
  CountDefault: 1
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::CephOSD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::ComputeNeutronCorePlugin
    - OS::TripleO::Services::ComputeNeutronOvsAgent
    - OS::TripleO::Services::ComputeCeilometerAgent
    - OS::TripleO::Services::ComputeNeutronL3Agent
    - OS::TripleO::Services::ComputeNeutronMetadataAgent
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::NeutronSriovAgent
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::Collectd
3.3.2. Modify first-boot.yaml
Add the NFV tuned configuration to the HCI first-boot.yaml file to enable CPU affinity.

  tuned_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: only %index% and %stackname% are the variables in Host name format
              FORMAT=$(echo $FORMAT | sed 's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              tuned_conf_path="/etc/tuned/cpu-partitioning-variables.conf"
              if [ -n "$TUNED_CORES" ]; then
                grep -q "^isolated_cores" $tuned_conf_path
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^isolated_cores=.*/isolated_cores=$TUNED_CORES/' $tuned_conf_path
                else
                  echo "isolated_cores=$TUNED_CORES" >> $tuned_conf_path
                fi
                tuned-adm profile cpu-partitioning
              fi
            fi
          params:
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}
            $TUNED_CORES: {get_param: HostIsolatedCoreList}

Set the initial boot parameters.

  compute_kernel_args:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: only %index% and %stackname% are the variables in Host name format
              FORMAT=$(echo $FORMAT | sed 's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              sed 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 $KERNEL_ARGS isolcpus=$TUNED_CORES"/g' -i /etc/default/grub ;
              grub2-mkconfig -o /etc/grub2.cfg
              reboot
            fi
          params:
            $KERNEL_ARGS: {get_param: ComputeKernelArgs}
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}
            $TUNED_CORES: {get_param: HostIsolatedCoreList}
3.3.3. Modify post-install.yaml
Set the tuned configuration to enable CPU affinity.

  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/bash
            set -x
            FORMAT=$COMPUTE_HOSTNAME_FORMAT
            if [[ -z $FORMAT ]] ; then
              FORMAT="compute" ;
            else
              # Assumption: only %index% and %stackname% are the variables in Host name format
              FORMAT=$(echo $FORMAT | sed 's/\%index\%//g' | sed 's/\%stackname\%//g') ;
            fi
            if [[ $(hostname) == *$FORMAT* ]] ; then
              tuned_service=/usr/lib/systemd/system/tuned.service
              grep -q "network.target" $tuned_service
              if [ "$?" -eq 0 ]; then
                sed -i '/After=.*/s/network.target//g' $tuned_service
              fi
              grep -q "Before=.*network.target" $tuned_service
              if [ ! "$?" -eq 0 ]; then
                grep -q "Before=.*" $tuned_service
                if [ "$?" -eq 0 ]; then
                  sed -i 's/^\(Before=.*\)/\1 network.target openvswitch.service/g' $tuned_service
                else
                  sed -i '/After/i Before=network.target openvswitch.service' $tuned_service
                fi
              fi
              systemctl daemon-reload
            fi
          params:
            $COMPUTE_HOSTNAME_FORMAT: {get_param: ComputeHostnameFormat}
If you have the appropriate hardware, configure Ceph NUMA pinning as well for optimal performance. See Configure Ceph NUMA Pinning.
3.3.4. Modify network-environment.yaml
Add the custom resources for SR-IOV under resource_registry. This is in addition to any resources configured for HCI. See the Hyper-Converged Infrastructure Guide for complete details on configuring HCI.

  resource_registry:
    # Specify the relative/absolute path to the config files you want to use to override the defaults.
    OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
    OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
    # First boot and Kernel Args
    OS::TripleO::NodeUserData: first-boot.yaml
    OS::TripleO::NodeExtraConfigPost: post-install.yaml
Under parameter_defaults, disable the tunnel type (set the value to ""), and set the network type to vlan.

  NeutronTunnelTypes: ''
  NeutronNetworkType: 'vlan'
Under parameter_defaults, map the physical networks to the logical bridges.

  NeutronBridgeMappings: 'datacentre:br-isolated,tenant:br-sriov,tenant2:br-sriov2'
Under parameter_defaults, set the OpenStack Networking ML2 and Open vSwitch VLAN mapping ranges.

  NeutronNetworkVLANRanges: 'datacentre:419:419,tenant:420:420,tenant2:421:421'
This example sets the VLAN ranges on the physical networks (datacentre, tenant, and tenant2).

Under parameter_defaults, set the SR-IOV configuration parameters.

Enable the SR-IOV mechanism driver (sriovnicswitch).

  NeutronMechanismDrivers: "openvswitch,sriovnicswitch"
Configure the Compute pci_passthrough_whitelist parameter, and set devname for each SR-IOV interface. The whitelist sets the PCI devices available to instances.

  NovaPCIPassthrough:
    - devname: "ens2f0"
      physical_network: "tenant"
    - devname: "ens2f1"
      physical_network: "tenant2"

Specify the physical network and SR-IOV interface in the format PHYSICAL_NETWORK:PHYSICAL_DEVICE.

All physical networks listed in network_vlan_ranges on the server should have mappings to the appropriate interfaces on each agent.

  NeutronPhysicalDevMappings: "tenant:ens2f0,tenant2:ens2f1"
This example uses tenant and tenant2 as the physical_network names.

Provide the number of Virtual Functions (VFs) to be reserved for each SR-IOV interface.

  NeutronSriovNumVFs: "ens2f0:7,ens2f1:7"

This example reserves 7 VFs for each of the SR-IOV interfaces.

Note: Currently, Red Hat OpenStack Platform with NFV supports 30 or fewer VFs.
Under parameter_defaults, list the applicable filters. The Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.

  NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
Under parameter_defaults, define the ComputeKernelArgs parameters to be included in the default grub file at first boot.

  ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=12 intel_iommu=on iommu=pt"

Note: You need to add hw:mem_page_size=1GB to the flavor you associate with the SR-IOV instance. If you do not do this, the instance does not get a DHCP allocation.

Under parameter_defaults, set a list or range of physical CPU cores to be appended to the tuned cpu-partitioning profile and isolated from the host.

  HostIsolatedCoreList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
3.3.5. Modify controller.yaml
Create the bridge and interface for an isolated network.
  - type: ovs_bridge
    name: br-isolated
    use_dhcp: false
    dns_servers: {get_param: DnsServers}
    members:
      - type: interface
        name: nic2
        # force the MAC address of the bridge to this interface
        primary: true

Assign VLANs to this bridge.

  - type: vlan
    vlan_id: {get_param: ExternalNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}
    routes:
      - default: true
        next_hop: {get_param: ExternalInterfaceDefaultRoute}
  - type: vlan
    vlan_id: {get_param: InternalApiNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: InternalApiIpSubnet}
  - type: vlan
    vlan_id: {get_param: TenantNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: TenantIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageMgmtNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: StorageMgmtIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: StorageIpSubnet}

Create the OVS bridges to the Compute node.

  - type: ovs_bridge
    name: br-sriov
    use_dhcp: false
    members:
      - type: interface
        name: nic3
  - type: ovs_bridge
    name: br-sriov2
    use_dhcp: false
    members:
      - type: interface
        name: nic4
3.3.6. Modify compute.yaml
Add the following parameters to the compute.yaml file you created for your HCI deployment.
HCI includes compute parameters not shown in the example YAML files in this guide. See Configuring Resource Isolation on Hyper-Converged Nodes for details.
Create the bridge and interface for an isolated network.
  - type: ovs_bridge
    name: br-isolated
    use_dhcp: false
    dns_servers: {get_param: DnsServers}
    members:
      - type: interface
        name: ens1f1
        # force the MAC address of the bridge to this interface
        primary: true

Assign VLANs to this bridge.

  - type: vlan
    vlan_id: {get_param: InternalApiNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: InternalApiIpSubnet}
  - type: vlan
    vlan_id: {get_param: TenantNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: TenantIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageMgmtNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: StorageMgmtIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageNetworkVlanID}
    addresses:
      - ip_netmask: {get_param: StorageIpSubnet}

Create interfaces to the controller node.

  - type: interface
    name: ens2f0
    use_dhcp: false
    defroute: false
  - type: interface
    name: ens2f1
    use_dhcp: false
    defroute: false
3.3.7. Run the overcloud_deploy.sh Script
The following example defines the openstack overcloud deploy Bash script that uses composable roles.
This example shows SR-IOV only. You need to add the appropriate templates and YAML files to support HCI. See Deploying Pure HCI for a sample HCI overcloud deploy command.
#!/bin/bash

openstack overcloud deploy \
--templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
-r /home/stack/ospd-11-vlan-ovs-sriov-two-ports-and-hci/custom-roles.yaml \
-e /home/stack/ospd-11-vlan-ovs-sriov-two-ports-and-hci/network-environment.yaml
Reboot the Compute nodes to enforce the tuned profile after the overcloud is deployed.
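You can also confirm that the SR-IOV NIC agents registered with OpenStack Networking. A quick check from the undercloud, assuming the overcloudrc credentials file:

  # Each SR-IOV Compute node should report an alive "NIC Switch agent"
  source ~/overcloudrc
  openstack network agent list | grep -i "NIC Switch"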
3.4. Create a Flavor and Deploy an Instance for SR-IOV
After you have completed configuring SR-IOV for your Red Hat OpenStack Platform deployment with NFV, you need to create a flavor and deploy an instance by performing the following steps:
Create an aggregate group and add a host to it for SR-IOV.
# openstack aggregate create --zone=sriov sriov
# openstack aggregate add host sriov compute-sriov-0.localdomain
Create a flavor.
# openstack flavor create compute --ram 4096 --disk 150 --vcpus 4
compute is the flavor name, 4096 is the memory size in MB, 150 is the disk size in GB (default 0 GB), and 4 is the number of vCPUs.

Set additional flavor properties.
# openstack flavor set --property hw:cpu_policy=dedicated --property hw:mem_page_size=large compute
compute is the flavor name and the remaining parameters set the other properties for the flavor.
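To confirm the properties were applied, you can inspect the flavor before launching instances:

# openstack flavor show compute

The properties field should include hw:cpu_policy='dedicated' and hw:mem_page_size='large'.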
Create the network.
# openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>
Create the port.
Use vnic-type direct to create an SR-IOV VF port:

# openstack port create --network net1 --vnic-type direct sriov_port

Use vnic-type direct-physical to create an SR-IOV PF port:

# openstack port create --network net1 --vnic-type direct-physical sriov_port
Deploy an instance.
# openstack server create --flavor compute --availability-zone sriov --image rhel_7.3 --nic port-id=sriov_port sriov_vm
Where:

- compute is the flavor name or ID.
- sriov is the availability zone for the server.
- rhel_7.3 is the image (name or ID) used to create an instance.
- sriov_port is the NIC on the server.
- sriov_vm is the name of the instance.
You have now deployed an instance for the SR-IOV with NFV use case.
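To verify that the instance received an SR-IOV port, you can check the port binding and the VF state on the hosting Compute node. A minimal check, assuming the names from this example:

# openstack port show sriov_port
# ip link show ens2f1

In the port output, binding_vnic_type should read direct and binding_host_id should name the SR-IOV Compute node; in the ip link output, one VF now carries the instance's MAC address.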
