Chapter 6. Configuring VDPA Compute nodes to enable instances that use VDPA ports

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

VIRTIO data path acceleration (VDPA) provides wire-speed data transfer over VIRTIO. A VDPA device provides a VIRTIO abstraction over an SR-IOV virtual function (VF), which enables VFs to be consumed without a vendor-specific driver on the instance.

Note

When you use a NIC as a VDPA interface it must be dedicated to the VDPA interface. You cannot use the NIC for other connections because you must configure the NIC’s physical function (PF) in switchdev mode and manage the PF by using hardware offloaded OVS.
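You can check whether a PF is already in switchdev mode by querying its eswitch mode with the devlink tool. The PCI address 0000:06:00.0 below is an example; substitute the address of your NIC:

    # devlink dev eswitch show pci/0000:06:00.0

If the PF is configured for VDPA, the output includes mode switchdev.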

To enable your cloud users to create instances that use VDPA ports, complete the following tasks:

  1. Optional: Designate Compute nodes for VDPA.
  2. Configure the Compute nodes that have the required VDPA devices and drivers.
  3. Deploy the overcloud.
Tip

If the VDPA hardware is limited, you can also configure a host aggregate to optimize scheduling on the VDPA Compute nodes. To schedule only instances that request VDPA on the VDPA Compute nodes, create a host aggregate of the Compute nodes that have the VDPA hardware, and configure the Compute scheduler to place only VDPA instances on the host aggregate. For more information, see Filtering by isolating host aggregates and Creating and managing host aggregates.
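For example, one way to implement this isolation is to set a matching property on the aggregate and on the flavors that request VDPA. The following sketch uses a hypothetical aggregate named vdpa-hosts and a hypothetical flavor named vdpa.small, assumes that the AggregateInstanceExtraSpecsFilter scheduler filter is enabled, and runs with overcloud credentials:

    $ openstack aggregate create --property vdpa=true vdpa-hosts
    $ openstack aggregate add host vdpa-hosts overcloud-computevdpa-0
    $ openstack flavor set \
     --property aggregate_instance_extra_specs:vdpa=true vdpa.small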

Prerequisites

  • Your Compute nodes have the required VDPA devices and drivers.
  • Your Compute nodes have Mellanox NICs.
  • Your overcloud is configured for OVS hardware offload. For more information, see Configuring OVS hardware offload.
  • Your overcloud is configured to use ML2/OVN.

6.1. Designating Compute nodes for VDPA

To designate Compute nodes for instances that request a VIRTIO data path acceleration (VDPA) interface, create a new role file to configure the VDPA role, and configure the bare-metal nodes with a VDPA resource class to tag the Compute nodes for VDPA.

Note

The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, scale down the overcloud to unprovision the node, then scale up the overcloud to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    [stack@director ~]$ source ~/stackrc
  3. Generate a new roles data file named roles_data_vdpa.yaml that includes the Controller, Compute, and ComputeVdpa roles:

    (undercloud)$ openstack overcloud roles generate \
     -o /home/stack/templates/roles_data_vdpa.yaml \
     ComputeVdpa Compute Controller
  4. Update the roles_data_vdpa.yaml file for the VDPA role:

    ###############################################################################
    # Role: ComputeVdpa                                                           #
    ###############################################################################
    - name: ComputeVdpa
      description: |
        VDPA Compute Node role
      CountDefault: 1
      # Create external Neutron bridge
      tags:
        - compute
        - external_bridge
      networks:
        InternalApi:
          subnet: internal_api_subnet
        Tenant:
          subnet: tenant_subnet
        Storage:
          subnet: storage_subnet
      HostnameFormatDefault: '%stackname%-computevdpa-%index%'
      deprecated_nic_config_name: compute-vdpa.yaml
  5. Register the VDPA Compute nodes for the overcloud by adding them to your node definition template: node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide.
  6. Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect \
     --all-manageable --provide

    For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide.

  7. Tag each bare-metal node that you want to designate for VDPA with a custom VDPA resource class:

    (undercloud)$ openstack baremetal node set \
     --resource-class baremetal.VDPA <node>

    Replace <node> with the name or UUID of the bare-metal node.
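    Optionally, confirm that the resource class was applied to the node:

    (undercloud)$ openstack baremetal node show <node> -f value -c resource_class

    The command prints baremetal.VDPA for a correctly tagged node.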

  8. Add the ComputeVdpa role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:

    - name: Controller
      count: 3
    - name: Compute
      count: 3
    - name: ComputeVdpa
      count: 1
      defaults:
        resource_class: baremetal.VDPA
        network_config:
          template: /home/stack/templates/nic-config/<role_topology_file>
    • Replace <role_topology_file> with the name of the topology file to use for the ComputeVdpa role, for example, myRoleTopology.j2. You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Director Installation and Usage guide. To use the default network definition settings, do not include network_config in the role definition.

    For more information about the properties that you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes. For an example node definition file, see Example node definition file.

  9. Provision the new nodes for your role:

    (undercloud)$ openstack overcloud node provision \
    [--stack <stack>] \
    [--network-config] \
    --output <deployment_file> \
    /home/stack/templates/overcloud-baremetal-deploy.yaml
    • Optional: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. Defaults to overcloud.
    • Optional: Include the --network-config argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you have not defined the network definitions in the node definition file by using the network_config property, then the default network definitions are used.
    • Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.
  10. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

    (undercloud)$ watch openstack baremetal node list
  11. If you ran the provisioning command without the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

    parameter_defaults:
       ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
       ComputeVdpaNetworkConfigTemplate: /home/stack/templates/nic-configs/<vdpa_net_top>.j2
       ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2

    Replace <vdpa_net_top> with the name of the file that contains the network topology of the ComputeVdpa role, for example, compute.yaml to use the default network topology.

6.2. Configuring a VDPA Compute node

To enable your cloud users to create instances that use VIRTIO data path acceleration (VDPA) ports, configure the Compute nodes that have the VDPA devices.

Procedure

  1. Create a new Compute environment file for configuring VDPA Compute nodes, for example, vdpa_compute.yaml.
  2. Add PciPassthroughFilter and NUMATopologyFilter to the NovaSchedulerDefaultFilters parameter in vdpa_compute.yaml:

    parameter_defaults:
      NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']
  3. Add the NovaPCIPassthrough parameter to vdpa_compute.yaml to specify the available PCIs for the VDPA devices on the Compute node. For example, to add NVIDIA® ConnectX®-6 Dx devices to the pool of PCI devices that are available for passthrough to instances, add the following configuration to vdpa_compute.yaml:

    parameter_defaults:
      ...
      ComputeVdpaParameters:
        NovaPCIPassthrough:
        - vendor_id: "15b3"
          product_id: "101d"
          address: "06:00.0"
          physical_network: "tenant"
        - vendor_id: "15b3"
          product_id: "101d"
          address: "06:00.1"
          physical_network: "tenant"

    For more information about how to configure NovaPCIPassthrough, see Guidelines for configuring NovaPCIPassthrough.
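    If you are unsure of the vendor and product IDs of your devices, you can read them from the numeric identifiers that lspci prints on the Compute node. For example:

    [root@computevdpa-0 ~]# lspci -nn | grep -i mellanox

    Each matching line ends with a bracketed pair in the form [15b3:101d], where the first value is the vendor_id and the second value is the product_id.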

  4. Enable the input/output memory management unit (IOMMU) in each Compute node BIOS by adding the KernelArgs parameter to vdpa_compute.yaml. For example, use the following KernelArgs settings to enable an Intel Corporation IOMMU:

    parameter_defaults:
      ...
      ComputeVdpaParameters:
        ...
        KernelArgs: "intel_iommu=on iommu=pt"

    To enable an AMD IOMMU, set KernelArgs to "amd_iommu=on iommu=pt".

    Note

    When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes automatically reboot during overcloud deployment. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs.
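    After the nodes reboot, you can confirm that the IOMMU is active on a VDPA Compute node by checking the kernel command line and the boot messages:

    $ cat /proc/cmdline
    $ dmesg | grep -e DMAR -e IOMMU

    The kernel command line should contain the arguments that you set in KernelArgs, for example intel_iommu=on iommu=pt.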

  5. Open your network environment file, and add the following configuration to define the physical network:

    parameter_defaults:
      ...
      NeutronBridgeMappings:
      - <bridge_map_1>
      - <bridge_map_n>
      NeutronTunnelTypes: '<tunnel_types>'
      NeutronNetworkType: '<network_types>'
      NeutronNetworkVLANRanges:
      - <network_vlan_range_1>
      - <network_vlan_range_n>
    • Replace <bridge_map_1>, and all bridge mappings up to <bridge_map_n>, with the logical to physical bridge mappings that you want to use for the VDPA bridge. For example, tenant:br-tenant.
    • Replace <tunnel_types> with a comma-separated list of the tunnel types for the project network. For example, geneve.
    • Replace <network_types> with a comma-separated list of the project network types for the Networking service (neutron). The first type that you specify is used until all available networks are exhausted, then the next type is used. For example, geneve,vlan.
    • Replace <network_vlan_range_1>, and all physical network and VLAN ranges up to <network_vlan_range_n>, with the ML2 and OVN VLAN mapping ranges that you want to support. For example, datacentre:1:1000,tenant:100:299.
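    As a populated example that uses the sample values from the list above, the configuration might look like the following. The bridge, tunnel, and VLAN values are illustrative; substitute the values for your environment:

    parameter_defaults:
      ...
      NeutronBridgeMappings:
      - tenant:br-tenant
      NeutronTunnelTypes: 'geneve'
      NeutronNetworkType: 'geneve,vlan'
      NeutronNetworkVLANRanges:
      - tenant:100:299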
  6. Open your network interface template, and add the following configuration to specify your VDPA-supported network interfaces as members of the OVS bridge:

    - type: ovs_bridge
      name: br-tenant
      members:
        - type: sriov_pf
          name: enp6s0f0
          numvfs: 8
          use_dhcp: false
          vdpa: true
          link_mode: switchdev
        - type: sriov_pf
          name: enp6s0f1
          numvfs: 8
          use_dhcp: false
          vdpa: true
          link_mode: switchdev
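    After deployment, you can confirm that the VDPA devices were created on the Compute node by using the vdpa tool from iproute2, or by listing the VDPA bus in sysfs:

    [root@computevdpa-0 ~]# vdpa dev list
    [root@computevdpa-0 ~]# ls /sys/bus/vdpa/devices

    One VDPA device is expected for each VF that you configured with vdpa: true.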
  7. Add your custom environment files to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files] \
      -r /home/stack/templates/roles_data_vdpa.yaml \
      -e /home/stack/templates/network-environment.yaml \
      -e /home/stack/templates/vdpa_compute.yaml \
      -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
      -e /home/stack/templates/node-info.yaml

Verification

  1. Create an instance with a VDPA device. For more information, see Creating an instance with a VDPA interface in the Creating and Managing Instances guide.
  2. Log in to the instance as a cloud user. For more information, see Connecting to an instance in the Creating and Managing Instances guide.
  3. Verify that the VDPA port is active and bound to the instance:

    $ openstack port show vdpa-port
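
The verification steps above might look like the following sketch, which assumes a hypothetical tenant network named tenant-net and flavor and image names that exist in your environment:

    $ openstack port create --network tenant-net --vnic-type vdpa vdpa-port
    $ openstack server create --flavor <flavor> --image <image> \
     --port vdpa-port vdpa-instance

Inside the instance, the VDPA device appears as a standard VIRTIO network device, so no vendor-specific driver is required:

    $ lspci | grep -i virtio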