Chapter 3. Install OpenDaylight on the overcloud

This document only focuses on OpenDaylight installation. Before you can deploy OpenDaylight, you must make sure that you have a working undercloud environment and that the overcloud nodes are connected to the physical network.

See Installing the Undercloud and Configuring Basic Overcloud Requirements with the CLI Tools in the Director Installation and Usage guide, which describe the procedures necessary to deploy the undercloud and overcloud.

There are several methods to install OpenDaylight in Red Hat OpenStack Platform. The following chapter introduces the most useful OpenDaylight scenarios and how to install them.

3.1. Understand the default configuration and customize settings

The recommended approach to installing OpenDaylight is to use the default environment file neutron-opendaylight.yaml and pass it as an argument to the deployment command on the undercloud. This will deploy the default installation of OpenDaylight.

Other OpenDaylight installation and configuration scenarios build on this installation method. You can deploy OpenDaylight in various scenarios simply by providing specific environment files to the deployment command.

3.1.1. Understand the default environment file

The default environment file is called neutron-opendaylight.yaml and you can find it in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ directory. The file enables or disables the services that OpenDaylight supports and uses. It can also define necessary parameters that the director sets during the deployment.

The following is an example neutron-opendaylight.yaml file that can be used for a Docker based deployment:

# A Heat environment that can be used to deploy OpenDaylight with L3 DVR using Docker containers
resource_registry:
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::Heat::None
  OS::TripleO::Services::OpenDaylightApi: ../../docker/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: ../../puppet/services/opendaylight-ovs.yaml
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None
  OS::TripleO::Docker::NeutronMl2PluginBase: ../../puppet/services/neutron-plugin-ml2-odl.yaml

parameter_defaults:
  NeutronEnableForceMetadata: true
  NeutronPluginExtensions: 'port_security'
  NeutronMechanismDrivers: 'opendaylight_v2'
  NeutronServicePlugins: 'odl-router_v2,trunk'

In Red Hat OpenStack Platform director, the resource_registry maps resources for a deployment to the corresponding resource definition yaml file. Services are one type of resource that can be mapped. To disable a particular service, map it to OS::Heat::None and it will not be used in your OpenDaylight environment. In the default file, the OpenDaylightApi and OpenDaylightOvs services are enabled, while the default neutron agents are explicitly disabled because OpenDaylight takes over their functionality.

Heat parameters configure settings for a deployment with director. You can override their default values in the parameter_defaults section of the environment file.

In the example above, the NeutronEnableForceMetadata, NeutronPluginExtensions, NeutronMechanismDrivers, and NeutronServicePlugins parameters are set to enable OpenDaylight.
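
For example, to override one of these defaults, you can create a minimal custom environment file that sets only the parameter you want to change; any parameters you do not list keep their defaults. The following is a sketch that assumes a hypothetical file name, odl-overrides.yaml:

# Hypothetical override file, for example /home/stack/templates/odl-overrides.yaml.
# Pass it to the deployment command after neutron-opendaylight.yaml so it takes precedence.
parameter_defaults:
  NeutronServicePlugins: 'odl-router_v2'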

Note

The list of other services and their configuration options is provided later in this chapter.

3.1.2. Configuring the OpenDaylight API Service

You can configure the OpenDaylight API service to suit your needs by changing the default values stored in the opendaylight-api.yaml file, located in the /usr/share/openstack-tripleo-heat-templates/puppet/services directory. However, never overwrite the settings in this file directly. Keep the file as a fallback, and instead create a new environment file in which you set the required values in the parameter_defaults section. You will later pass this new file to the deployment command.

Note

In the deployment command, settings in environment files listed later replace settings in files listed earlier. The order of the environment files therefore matters, so pay attention to it.

3.1.2.1. Configurable Options

When configuring the OpenDaylight API Service, you can set several parameters:

OpenDaylightPort

Sets the port used for Northbound communication. Defaults to 8081.

OpenDaylightUsername

Sets the login user name for OpenDaylight. Defaults to admin.

OpenDaylightPassword

Sets the login password for OpenDaylight. Defaults to admin.

OpenDaylightEnableDHCP

Enables OpenDaylight to act as the DHCP service. Defaults to false.

OpenDaylightFeatures

Comma-delimited list of features to boot in OpenDaylight. Defaults to [odl-netvirt-openstack, odl-netvirt-ui, odl-jolokia].

OpenDaylightConnectionProtocol

Sets the L7 protocol used for REST access. Defaults to http.

OpenDaylightManageRepositories

Sets whether to manage the OpenDaylight repository. Defaults to false.

OpenDaylightSNATMechanism

Sets the SNAT mechanism to be used by OpenDaylight. You can choose between conntrack and controller. The default value is conntrack.
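
For example, a minimal custom environment file might override the default password and the SNAT mechanism. This is a sketch; the file name and the password value are hypothetical:

# Hypothetical environment file, for example /home/stack/templates/odl-api-custom.yaml.
# Pass it to the deployment command after neutron-opendaylight.yaml.
parameter_defaults:
  OpenDaylightPassword: 'MyStrongPassword'   # replaces the default admin password
  OpenDaylightSNATMechanism: 'controller'    # use controller-based SNAT instead of conntrack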

3.1.3. Configuring the OpenDaylight OVS Service

You can configure the OpenDaylight OVS service by referencing the parameters and their default values in the opendaylight-ovs.yaml file, located in the /usr/share/openstack-tripleo-heat-templates/puppet/services directory. However, never overwrite the settings in this file directly. Keep the file as a fallback, and instead create a new environment file in which you set the required values in the parameter_defaults section. You will later pass this new file to the deployment command.

Note

In the deployment command, settings in environment files listed later replace settings in files listed earlier. The order of the environment files therefore matters, so pay attention to it.

3.1.3.1. Configurable options

When configuring the OpenDaylight OVS Service, you can set several parameters:

OpenDaylightPort

Sets the port used for Northbound communication to OpenDaylight. Defaults to 8081. The OVS service queries OpenDaylight over the Northbound port to ensure that OpenDaylight is fully up before connecting.

OpenDaylightConnectionProtocol

Layer 7 protocol used for REST access. Defaults to http. Currently, http is the only supported protocol in OpenDaylight.

OpenDaylightCheckURL

The URL to use to verify OpenDaylight is fully up before OVS connects. Defaults to restconf/operational/network-topology:network-topology/topology/netvirt:1

OpenDaylightProviderMappings

Comma-delimited list of mappings between logical networks and physical interfaces. This setting is required for VLAN deployments. Defaults to datacentre:br-ex.

Username

Allows you to set a custom user name for the OpenDaylight OVS service.

Password

Allows you to set a custom password for the OpenDaylight OVS service.

HostAllowedNetworkTypes

Defines the allowed tenant network types for this OVS host. They can vary per host or role to constrain the hosts to which nova instances and networks are scheduled. The default is ['local', 'vlan', 'vxlan', 'gre'].

OvsEnableDpdk

Determines whether to enable DPDK in OVS. The default value is false.

OvsVhostuserMode

Specifies the mode of OVS for vhostuser port creation. In client mode, the hypervisor is responsible for creating the vhostuser sockets. In server mode, OVS creates them. The default value is client.

VhostuserSocketDir

Specifies the directory to use for vhostuser sockets. The default value is /var/run/openvswitch.
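
For example, a VLAN deployment might override the provider mappings in a custom environment file. This is a sketch; the file name and the second bridge mapping are hypothetical:

# Hypothetical environment file, for example /home/stack/templates/odl-ovs-custom.yaml.
parameter_defaults:
  OpenDaylightProviderMappings: 'datacentre:br-ex,tenant:br-vlan'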

3.1.4. Using neutron metadata service with OpenDaylight

The OpenStack Compute service allows virtual machines to query metadata associated with them by making a web request to a special address, 169.254.169.254. OpenStack Networking proxies such requests to the nova-api, even when the requests come from isolated networks or from multiple networks with overlapping IP addresses.

The metadata service uses either the neutron L3 agent router or the DHCP agent instance to serve the metadata requests. Deploying OpenDaylight with the Layer 3 routing plug-in enabled disables the neutron L3 agent. Therefore, metadata must be configured to flow through the DHCP instance, even when a router exists in a tenant network. This functionality is enabled in the default environment file neutron-opendaylight.yaml. To disable it, set NeutronEnableForceMetadata to false.

VM instances have a static host route installed for 169.254.169.254/32, using DHCP option 121. With this static route in place, metadata requests to 169.254.169.254:80 go to the metadata name server proxy in the DHCP network namespace. The namespace proxy then adds the HTTP headers with the instance’s IP to the request, and connects it to the metadata agent through a Unix domain socket. The metadata agent queries neutron for the instance ID that corresponds to the source IP and the network ID, and proxies the request to the nova metadata service. The additional HTTP headers are required to maintain isolation between tenants and to allow overlapping IP support.
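
You can verify the host route from inside an instance. This is an illustrative sketch; the next-hop address is the DHCP namespace port and varies per network:

    $ ip route | grep 169.254.169.254
    169.254.169.254 via 192.168.0.2 dev eth0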

3.1.5. Understanding the network configuration and NIC template

In Red Hat OpenStack Platform director, the physical neutron network datacentre is mapped to an OVS bridge called br-ex by default. The mapping is the same in the OpenDaylight integration. If you use the default OpenDaylightProviderMappings and plan to create a flat or VLAN External network, you have to configure the OVS br-ex bridge in the NIC template for Compute nodes. Because the Layer 3 plug-in uses distributed routing to these nodes, it is no longer necessary to configure br-ex in the Controller role NIC template.

The br-ex bridge can be mapped to any network in network isolation, but it is typically mapped to the External network as you can see in the example.

-
  type: ovs_bridge
  name: {get_input: bridge_name}
  use_dhcp: false
  members:
    -
      type: interface
      name: nic3
      # force the MAC address of the bridge to this interface
      primary: true
  dns_servers: {get_param: DnsServers}
  addresses:
    -
      ip_netmask: {get_param: ExternalIpSubnet}
  routes:
    -
      default: true
      ip_netmask: 0.0.0.0/0
      next_hop: {get_param: ExternalInterfaceDefaultRoute}

With DPDK, you have to create another OVS bridge, typically called br-phy, and provide it with an ovs_dpdk_port. The IP address of the bridge is configured for the VXLAN overlay network tunnels.

-
  type: ovs_user_bridge
  name: br-phy
  use_dhcp: false
  addresses:
    -
      ip_netmask: {get_param: TenantIpSubnet}
  members:
    -
      type: ovs_dpdk_port
      name: dpdk0
      driver: uio_pci_generic
      members:
        -
          type: interface
          name: nic1
          # force the MAC address of the bridge to this interface
          primary: true

Note

When using network isolation, you do not need to assign an IP address or a default route to this bridge on Compute nodes.

Alternatively, it is possible to configure external network access without using the br-ex bridge at all. To use this method, you must know the interface name of the overcloud Compute node in advance. For example, if eth3 is the deterministic name of the third interface on the Compute node, you can use it to specify an interface in the NIC template for the Compute node.

-
  type: interface
  name: eth3
  use_dhcp: false

3.2. Basic installation of OpenDaylight

This section shows how to deploy OpenDaylight using the standard environment files.

3.2.1. Prepare the OpenDaylight environment files for overcloud

Before you start

  • Install the undercloud (see Installing the undercloud).
  • Optionally, create a local registry with the container images that will be used during the overcloud and OpenDaylight installation. To create it, follow Configuring registry details in the Director Installation and Usage guide.

Procedure

  1. Log onto the undercloud and load the admin credentials.

    $ source ~/stackrc
  2. Create the remote docker registry file odl-images.yaml, which contains references to the docker container images needed for the OpenStack and OpenDaylight installation.

    $ openstack overcloud container image prepare --namespace registry.access.redhat.com/rhosp12  \
     --prefix=openstack- --suffix=-docker --tag latest \
     -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
     --output-env-file /home/stack/templates/odl-images.yaml
  3. You have now successfully prepared the environment to deploy the overcloud and you are ready to start the installation described in Section 3.2.2, “Install overcloud with OpenDaylight”.

More information

The openstack overcloud container image prepare command prepares the container images environment files for the installation of the overcloud and OpenDaylight. It uses the following options:

-e
specifies the service environment file to add specific container images required by that environment, such as OpenDaylight, OVS, and so on
--output-env-file
creates a new container image environment file with the list of container images that will be used for the installation
--pull-source
sets the location of the Docker containers registry
--namespace
sets the namespace of the Docker container images
--prefix
adds a prefix to the image name
--suffix
adds a suffix to the image name
--tag
defines the release of the images

3.2.2. Install overcloud with OpenDaylight

Before you start

  • Prepare the OpenDaylight environment files (see Section 3.2.1, “Prepare the OpenDaylight environment files for overcloud”).

Procedure

  1. Log onto the undercloud and load the admin credentials.

    $ source ~/stackrc
  2. Deploy the overcloud using previously created environment files.

    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
     -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
     -e /home/stack/templates/odl-images.yaml \
     -e <other needed environment files>
Note

When the same parameters are set in multiple environment files, any later environment file overrides the earlier settings. Pay attention to the order of the environment files to avoid parameters being set incorrectly.

Tip

You can easily override some of the parameters by creating a minimal environment file that only sets the parameters you want to change and combining it with the default environment files.
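
For example, you can create a hypothetical minimal file my-overrides.yaml that changes only the OpenDaylight login password, and pass it last so that it takes precedence:

# Hypothetical file /home/stack/templates/my-overrides.yaml
parameter_defaults:
  OpenDaylightPassword: 'MyStrongPassword'

    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
     -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
     -e /home/stack/templates/odl-images.yaml \
     -e /home/stack/templates/my-overrides.yaml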

More information

The openstack overcloud deploy command above that deploys the overcloud and OpenDaylight uses the following options:

--templates
defines path to the directory where the heat templates are stored
-e
specifies the environment file to load

3.3. Install OpenDaylight in custom role

Installing OpenDaylight in a custom role results in an isolated OpenDaylightApi service that runs on a designated OpenDaylight node, different from the controller node.

If you want to use a custom role for OpenDaylight, you have to create a role file where you configure the layout of the nodes and their functions.

3.3.1. Customize the role file based on default roles

OpenStack offers the option of deploying with a user-defined list of roles, each running a user-defined list of services (where “role” means a group of nodes, for example “Controller”, and “service” refers to the individual services or configurations, for example “nova API”). Example roles are provided in openstack-tripleo-heat-templates.

You can use these roles to generate a roles_data.yaml file that contains the roles that you want for the overcloud nodes. You can also create your own custom roles by creating individual files in a directory and using them to generate a new roles_data.yaml file.

To create customized role files that include only certain OpenStack roles, follow this procedure.

Procedure

  • Load the admin credentials.

    $ source ~/stackrc
  • List the default roles that you can use to generate the roles_data.yaml file that you will use for the later deployment.

    $ openstack overcloud role list
  • If you want to use all these roles, generate the roles_data.yaml file by using the following command:

    $ openstack overcloud roles generate -o roles_data.yaml
  • If you want to customize the role file to include only some of the roles, you can pass the names of the roles as arguments to the command above. To create the roles_data.yaml file with the Controller, Compute, and Telemetry roles, use:

    $ openstack overcloud roles generate -o roles_data.yaml Controller Compute Telemetry

3.3.2. Create a custom role for OpenDaylight

Creating a custom role requires you to make a new role file in which you define the role. You then place the file in the directory with the other role files and generate the roles_data.yaml file, which will include the newly created role. For each custom role, you need a specific role file that includes only that role. The name of the file should match the role name.

Minimally, the file must define these parameters:

  • name: defines the name of the role. The name must always be a non-empty unique string.

    - name: Custom_role
  • ServicesDefault: lists the services used in this role. The variable can remain empty if there are no services used. The example format looks like this:

    ServicesDefault:
        - OS::TripleO::Services::AuditD
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Collectd
        - OS::TripleO::Services::Docker

Besides the required parameters, you can also define further settings:

  • CountDefault: defines the default number of nodes. If empty, it defaults to zero.

    CountDefault: 1
  • HostnameFormatDefault: defines the format string for a host name. The value is optional.

    HostnameFormatDefault: '%stackname%-computeovsdpdk-%index%'
  • Description: describes the role and adds information about it.

    Description:
        Compute OvS DPDK Role
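
Putting these parameters together, a minimal custom role file (a hypothetical ~/roles/Custom_role.yaml) might look like this:

    - name: Custom_role
      Description:
        Example custom role
      CountDefault: 1
      HostnameFormatDefault: '%stackname%-customrole-%index%'
      ServicesDefault:
        - OS::TripleO::Services::Kernel
        - OS::TripleO::Services::Ntp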

Procedure

  1. Copy the default role files into a new directory and keep the original files as a fallback solution.

    $ mkdir ~/roles
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles
  2. Switch off the OpenDaylightApi service on the controller node. To do so, modify the default Controller role in the Controller.yaml file in ~/roles and remove the OpenDaylightApi line from the file:

       - name: Controller
         CountDefault: 1
         ServicesDefault:
          - OS::TripleO::Services::TripleoFirewall
          - OS::TripleO::Services::OpenDaylightApi #<--Remove this
          - OS::TripleO::Services::OpenDaylightOvs
  3. Create a new OpenDaylight.yaml file in the ~/roles directory and add the OpenDaylight role description:

    - name: OpenDaylight
      CountDefault: 1
      ServicesDefault:
        - OS::TripleO::Services::Kernel
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::OpenDaylightApi
        - OS::TripleO::Services::TripleoPackages
        - OS::TripleO::Services::TripleoFirewall
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::Sshd
  4. Save the file.
  5. Generate the new role file that you will use for the deployment of the OpenStack overcloud with OpenDaylight in the custom role.

    $ openstack overcloud roles generate --roles-path ~/roles -o ~/roles_data.yaml Controller Compute OpenDaylight

3.3.3. Install overcloud with OpenDaylight in the custom role

Before you start

  • Create the custom OpenDaylight role and generate the ~/roles_data.yaml file (see Section 3.3.2, “Create a custom role for OpenDaylight”).
  • Reserve an extra ironic node for the custom OpenDaylight role.

Procedure

  1. Run the deployment command with the -r argument to override the default role definitions. This option tells the deployment command to use the roles_data.yaml file where the customized roles are set up. In this example, there are three ironic nodes in total, of which one is reserved for the custom OpenDaylight role:

    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e network-environment.yaml --compute-scale 1 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -r ~/roles_data.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml
Note

The parameters in later environment files override those set in earlier environment files. Pay attention to the order of the environment files to avoid parameters being accidentally overwritten.

Tip

You can easily override some of the parameters by creating a minimal environment file that only sets the parameters you want to change and combining it with the default environment files.

More information

  • This argument is used to override the role definitions within Red Hat OpenStack Platform director at installation time:

    -r <roles_data>.yaml
  • Using a custom role requires an extra ironic node that will be used for the custom role during the installation.

3.3.4. Verify the installation of OpenDaylight in custom role

Before you start

  • Start the deployment of the overcloud with the custom roles_data.yaml file (see Section 3.3.3, “Install overcloud with OpenDaylight in the custom role”).

Procedure

  1. List the existing instances:

    $ openstack server list
  2. Check the output and verify that the new OpenDaylight role is deployed as a dedicated instance:

    +--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
    | ID                                   | Name                     | Status | Task State | Power State | Networks           |
    +--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
    | 360fb1a6-b5f0-4385-b68a-ff19bcf11bc9 | overcloud-controller-0   | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.4 |
    | e38dde02-82da-4ba2-b5ad-d329a6ceaef1 | overcloud-novacompute-0  | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.5 |
    | c85ca64a-77f7-4c2c-a22e-b71d849a72e8 | overcloud-opendaylight-0 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.8 |
    +--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
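
Optionally, once the deployment finishes, you can confirm that the OpenDaylight API container runs on the dedicated node. This is a sketch; the container name and the node IP are assumptions based on the example output above:

    $ ssh heat-admin@192.0.2.8 "sudo docker ps --format '{{.Names}}' | grep -i opendaylight"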

3.4. Install OpenDaylight with SR-IOV support

You can deploy OpenDaylight with compute nodes that support Single Root Input/Output Virtualization (SR-IOV). In this deployment, the SR-IOV compute nodes must operate as dedicated SR-IOV only nodes and must not host OVS based nova instances. However, it is possible to deploy both OVS and SR-IOV compute nodes in a single OpenDaylight deployment.

This section follows the above scenario and makes use of a custom SR-IOV compute role to accomplish this kind of deployment.

The SR-IOV deployment requires the neutron SR-IOV agent to configure the virtual functions (VFs), which are passed directly to the Compute instance when it is deployed, where they serve as network ports. The VFs are derived from a host NIC on the Compute node, and therefore some information about the host interface is required before you start the deployment.
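
For example, you can check a candidate NIC and the number of VFs it supports on the Compute node before the deployment. This is an illustrative sketch; ens20f2 is the example interface name used later in this chapter:

    $ lspci | grep -i ethernet
    $ cat /sys/class/net/ens20f2/device/sriov_totalvfs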

3.4.1. Prepare the SR-IOV compute role

Following the same methodology as in Section 3.3, “Install OpenDaylight in custom role”, you must create a custom role for the SR-IOV compute nodes to allow creation of the SR-IOV based instances, while the default Compute role serves the OVS based nova instances.

Before you start

Procedure

  1. Copy the default role files into a new directory and keep the original files as a fallback solution.

    $ mkdir ~/roles
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles
  2. Create a new ComputeSriov.yaml file in the ~/roles directory and add the role description:

     - name: ComputeSriov
       CountDefault: 1
       ServicesDefault:
         - OS::TripleO::Services::Kernel
         - OS::TripleO::Services::Ntp
         - OS::TripleO::Services::NeutronSriovHostConfig
         - OS::TripleO::Services::NeutronSriovAgent
         - OS::TripleO::Services::TripleoPackages
         - OS::TripleO::Services::TripleoFirewall
         - OS::TripleO::Services::Sshd
         - OS::TripleO::Services::NovaCompute
         - OS::TripleO::Services::NovaLibvirt
         - OS::TripleO::Services::NovaMigrationTarget
         - OS::TripleO::Services::Timezone
         - OS::TripleO::Services::ComputeNeutronCorePlugin
         - OS::TripleO::Services::Securetty
  3. Save the file.
  4. Remove the NeutronSriovAgent and NeutronSriovHostConfig services from the default Compute role, and save the corresponding role file:

         - OS::TripleO::Services::NeutronSriovHostConfig
         - OS::TripleO::Services::NeutronSriovAgent
  5. Generate the new role file that you will use for the deployment of the OpenStack overcloud with OpenDaylight compute SR-IOV support.

    $ openstack overcloud roles generate --roles-path ~/roles -o ~/roles_data.yaml Controller Compute ComputeSriov

3.4.2. Configuring the SR-IOV agent service

In order to deploy OpenDaylight with SR-IOV support, you must override the default parameters that are set in the neutron-opendaylight.yaml file. You can use the standard SR-IOV environment file that resides in /usr/share/openstack-tripleo-heat-templates. However, it is good practice not to edit the original files. Therefore, create a copy of the original environment file and modify the required parameters in that copy.

Alternatively, you can create a new environment file in which you provide only the parameters you want to change, and use both files for the deployment. To deploy the customized OpenDaylight, pass both files to the deployment command. Because later environment files override any previous settings, you must pass them in the correct order: neutron-opendaylight.yaml first, and then the neutron-opendaylight-sriov.yaml file.

If you want to deploy OpenDaylight and SR-IOV with the default settings, you can use the neutron-opendaylight-sriov.yaml that is provided by Red Hat. If you need to change or add parameters, make a copy of the default SR-IOV environment file and edit the newly created file.

The following is an illustrative example of a customized neutron-opendaylight-sriov.yaml file:

# A Heat environment that can be used to deploy OpenDaylight with SRIOV
resource_registry:
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: ../puppet/services/neutron-plugin-ml2.yaml
  OS::TripleO::Services::NeutronCorePlugin: ../puppet/services/neutron-plugin-ml2-odl.yaml
  OS::TripleO::Services::OpenDaylightApi: ../docker/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: ../puppet/services/opendaylight-ovs.yaml
  OS::TripleO::Services::NeutronSriovAgent: ../puppet/services/neutron-sriov-agent.yaml
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None

parameter_defaults:
  NeutronEnableForceMetadata: true
  NeutronPluginExtensions: 'port_security'
  NeutronMechanismDrivers: ['sriovnicswitch','opendaylight_v2']
  NeutronServicePlugins: 'odl-router_v2,trunk'

  # Add PciPassthroughFilter to the scheduler default filters
  #NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
  #NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter"]

  #NeutronPhysicalDevMappings: "datacentre:ens20f2"

  # Number of VFs that needs to be configured for a physical interface
  #NeutronSriovNumVFs: "ens20f2:5"

  #NovaPCIPassthrough:
  #  - devname: "ens20f2"
  #    physical_network: "datacentre"

More information

You can configure the following options in the yaml file mentioned above. The list describes the individual options and the settings required to enable the SR-IOV functionality:

NovaSchedulerDefaultFilters

Allows the use of PCI Passthrough for SR-IOV. This parameter must be uncommented in the environment file and must include PciPassthroughFilter.

NovaSchedulerAvailableFilters

Enables specifying the PCI Passthrough filter among the nova default filters. This parameter must be set and must include nova.scheduler.filters.all_filters.

NeutronPhysicalDevMappings

Maps the logical neutron network to a host network interface. This must be specified so that neutron is able to bind the virtual network to a physical port.

NeutronSriovNumVFs

Number of VFs to create for a host network interface. Syntax: <Interface name>:<number of VFs>

NovaPCIPassthrough

Configures the whitelist of allowed PCI devices in nova to be used for PCI Passthrough in a list format, for example:

NovaPCIPassthrough:
    - vendor_id: "8086"
      product_id: "154c"
      address: "0000:05:00.0"
      physical_network: "datacentre"

You can also use the logical device name rather than specific hardware attributes:

NovaPCIPassthrough:
  - devname: "ens20f2"
    physical_network: "datacentre"

3.4.3. Install OpenDaylight with SR-IOV

Before you start

  • Prepare the SR-IOV compute role (see Section 3.4.1, “Prepare the SR-IOV compute role”) and configure the SR-IOV agent service (see Section 3.4.2, “Configuring the SR-IOV agent service”).

Procedure

  1. Run the deployment command using the -r argument to include your customized role file and the necessary environment files to set up the SR-IOV functionality with OpenDaylight.

    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight-sriov.yaml \
    -e network-environment.yaml --compute-scale 1 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -r my_roles_data.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml \
    -e <other needed environment files>
Note

The parameters in later environment files override those set in earlier environment files. Pay attention to the order of the environment files to avoid parameters being accidentally overwritten.

Tip

You can easily override some of the parameters by creating a minimal environment file that only sets the parameters you want to change and combining it with the default environment files.

More information

  • The -r option is used to override the role definitions at installation time.

    -r <roles_data>.yaml
  • Using a custom role requires an extra ironic node that will be used for the custom role during the installation.

3.5. Install OpenDaylight with OVS-DPDK support

OpenDaylight may be deployed with Open vSwitch Data Plane Development Kit (OVS-DPDK) acceleration with director. This deployment offers higher dataplane performance because packets are processed in user space rather than in the kernel. Deploying with OVS-DPDK requires knowledge of the physical hardware layout of each compute node in order to take advantage of potential performance gains.

In particular, you should consider the host configuration parameters described in Section 3.5.2, “Configuring the OVS-DPDK deployment”: CPU isolation, hugepage memory allocation, and the NUMA placement of the DPDK NIC and CPU cores.

3.5.1. Prepare the OVS-DPDK deployment files

In order to deploy OVS-DPDK, you use an additional environment file that overrides some of the parameters set by the neutron-opendaylight.yaml file located in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory. Do not change the original file. Instead, create a new environment file, for example neutron-opendaylight-dpdk.yaml, in which you set the necessary parameters.

If you want to deploy OpenDaylight with OVS-DPDK with the default settings, you can use the neutron-opendaylight-dpdk.yaml that is provided by Red Hat and you will find it in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory.

The default file contains these values:

# A Heat environment that can be used to deploy OpenDaylight with L3 DVR and DPDK.
# This file is to be used with neutron-opendaylight.yaml

parameter_defaults:
  NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter"
  OpenDaylightSNATMechanism: 'controller'

  ComputeOvsDpdkParameters:
    OvsEnableDpdk: True

    ## Host configuration Parameters
    #TunedProfileName: "cpu-partitioning"
    #IsolCpusList: ""               # Logical CPUs list to be isolated from the host process (applied via cpu-partitioning tuned).
                                    # It is mandatory to provide isolated cpus for tuned to achieve optimal performance.
                                    # Example: "3-8,12-15,18"
    #KernelArgs: ""                 # Space separated kernel args to configure hugepage and IOMMU.
                                    # Deploying DPDK requires enabling hugepages for the overcloud compute nodes.
                                    # It also requires enabling IOMMU when using the VFIO (vfio-pci) OvsDpdkDriverType.
                                    # This should be done by configuring parameters via host-config-and-reboot.yaml environment file.

    ## Attempting to deploy DPDK without appropriate values for the below parameters may lead to unstable deployments
    ## due to CPU contention of DPDK PMD threads.
    ## It is highly recommended to enable isolcpus (via KernelArgs) on compute overcloud nodes and set the following parameters:
    #OvsDpdkSocketMemory: ""       # Sets the amount of hugepage memory to assign per NUMA node.
                                   # It is recommended to use the socket closest to the PCIe slot used for the
                                   # desired DPDK NIC.  Format should be comma separated per socket string such as:
                                   # "<socket 0 mem MB>,<socket 1 mem MB>", for example: "1024,0".
    #OvsDpdkDriverType: "vfio-pci" # Ensure the Overcloud NIC to be used for DPDK supports this UIO/PMD driver.
    #OvsPmdCoreList: ""            # List or range of CPU cores for PMD threads to be pinned to.  Note, NIC
                                   # location to cores on socket, number of hyper-threaded logical cores, and
                                   # desired number of PMD threads can all play a role in configuring this setting.
                                   # These cores should be on the same socket where OvsDpdkSocketMemory is assigned.
                                   # If using hyperthreading then specify both logical cores that would equal the
                                   # physical core.  Also, specifying more than one core will trigger multiple PMD
                                   # threads to be spawned, which may improve dataplane performance.
    #NovaVcpuPinSet: ""            # Cores to pin Nova instances to.  For maximum performance, select cores
                                   # on the same NUMA node(s) selected for previous settings.

3.5.2. Configuring the OVS-DPDK deployment

You can configure the OVS-DPDK service by changing the values in neutron-opendaylight-dpdk.yaml.

TunedProfileName

Enables pinning of IRQs in order to isolate them from the CPU cores to be used with OVS-DPDK. The default profile is cpu-partitioning.

IsolCpusList

Specifies a list of CPU cores to isolate from the kernel scheduler so that they can be dedicated to OVS-DPDK. The format is a comma-separated list of individual cores or core ranges, for example 1,2,3,4-8,10-12.

KernelArgs

Lists arguments to be passed to the kernel at boot time. For OVS-DPDK, it is required to enable IOMMU and hugepages, for example:

    intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=60

Note that this example reserves 60 GB of RAM for hugepages. It is important to consider the available amount of RAM on compute nodes when setting this value.

OvsDpdkSocketMemory

Specifies the amount of hugepage memory (in MB) to assign to each NUMA node. For maximum performance, assign memory to the socket closest to the DPDK NIC. The format is a comma-separated list of memory per socket:

    "<socket 0 mem MB>,<socket 1 mem MB>"

For example: "1024,0"

OvsDpdkDriverType

Specifies the UIO driver type to use with PMD threads. The DPDK NIC must support the specified driver. Red Hat OpenStack Platform deployments support only the vfio-pci driver type; other UIO drivers, including uio_pci_generic and igb_uio, are not supported.

OvsPmdCoreList

Lists single cores or ranges of cores for PMD threads to be pinned to. The cores specified here should be on the same NUMA node where memory is assigned with the OvsDpdkSocketMemory setting. If hyper-threading is used, specify all the logical cores that make up a physical core on the host.

OvsDpdkMemoryChannels

Specifies the number of memory channels per socket.

NovaVcpuPinSet

Cores to pin nova instances to with libvirtd. For best performance, use cores on the same socket where the OVS PMD cores are pinned.
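
Putting these options together, a custom environment file might look like the following minimal sketch. The file name is hypothetical, and all core lists and memory values are illustrative; they depend on the NUMA layout of your compute hosts:

# Hypothetical override file, for example /home/stack/templates/odl-dpdk-custom.yaml,
# to be used together with neutron-opendaylight.yaml and neutron-opendaylight-dpdk.yaml.
parameter_defaults:
  ComputeOvsDpdkParameters:
    OvsEnableDpdk: True
    TunedProfileName: "cpu-partitioning"
    IsolCpusList: "2-19,22-39"    # isolate these cores from the kernel scheduler
    KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=60"
    OvsDpdkSocketMemory: "1024,0" # hugepage memory per NUMA socket, in MB
    OvsDpdkDriverType: "vfio-pci"
    OvsPmdCoreList: "2,3,22,23"   # PMD threads pinned to socket 0 cores
    NovaVcpuPinSet: "4-19,24-39"  # pin nova instances to the remaining isolated cores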

3.5.3. Install OpenDaylight with OVS-DPDK

Before you start

  • Prepare the OVS-DPDK deployment files (see Section 3.5.1, “Prepare the OVS-DPDK deployment files”) and configure the deployment parameters (see Section 3.5.2, “Configuring the OVS-DPDK deployment”).

Procedure

  1. Run the deployment command using the necessary environment files to set up the DPDK functionality with OpenDaylight.
    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight-dpdk.yaml \
    -e network-environment.yaml --compute-scale 1 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -r my_roles_data.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml \
    -e <other environment files>
Note

The parameters in later environment files override those set in earlier environment files. Pay attention to the order of the environment files to avoid parameters being accidentally overwritten.

Tip

You can easily override some of the parameters by creating a minimal environment file that only sets the parameters you want to change and combining it with the default environment files.

3.6. Install OpenDaylight with L2GW support

Layer 2 gateway services allow a tenant’s virtual network to be bridged to a physical network. This integration enables users to access resources on a physical server through a layer 2 network connection rather than through a routed layer 3 connection; that is, the layer 2 broadcast domain is extended instead of traversing L3 or floating IPs.

3.6.1. Prepare L2GW deployment files

In order to deploy OpenDaylight with L2GW support, use the neutron-l2gw-opendaylight.yaml file in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory. If you need to change the settings in that file, create a new copy of the environment file and set the necessary parameters there.

If you want to deploy OpenDaylight and L2GW with the default settings, you can use the neutron-l2gw-opendaylight.yaml that is provided by Red Hat and resides in /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory.

The default file contains these values:

# A Heat environment file that can be used to deploy Neutron L2 Gateway service
#
# Currently there are only two service providers for Neutron L2 Gateway
# This file enables L2GW service with OpenDaylight as driver.
#
# - OpenDaylight: L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default
resource_registry:
  OS::TripleO::Services::NeutronL2gwApi: ../puppet/services/neutron-l2gw-api.yaml

parameter_defaults:
  NeutronServicePlugins: "networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin"
  L2gwServiceProvider: ['L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default']

  # Optional
  # L2gwServiceDefaultInterfaceName: "FortyGigE1/0/1"
  # L2gwServiceDefaultDeviceName: "Switch1"
  # L2gwServiceQuotaL2Gateway: 10
  # L2gwServicePeriodicMonitoringInterval: 5

3.6.2. Configuring OpenDaylight L2GW deployment

You can configure the service by changing the values in the neutron-l2gw-opendaylight.yaml file:

NeutronServicePlugins

Comma-separated list of service plugin entrypoints to be loaded from the neutron.service_plugins namespace. Defaults to router.

L2gwServiceProvider

Defines the provider that should be used to provide this service. Defaults to L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default

L2gwServiceDefaultInterfaceName

Sets the name of the default interface.

L2gwServiceDefaultDeviceName

Sets the name of the default device.

L2gwServiceQuotaL2Gateway

Specifies the service quota for the L2 gateway. Defaults to 10.

L2gwServicePeriodicMonitoringInterval

Specifies the monitoring interval for the L2GW service.
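
For example, setting the optional values gives a custom environment file like the following sketch; the device and interface names are the illustrative defaults from the file above, and the file name is hypothetical:

# Hypothetical environment file, for example /home/stack/templates/l2gw-custom.yaml.
parameter_defaults:
  L2gwServiceDefaultInterfaceName: "FortyGigE1/0/1"
  L2gwServiceDefaultDeviceName: "Switch1"
  L2gwServiceQuotaL2Gateway: 10
  L2gwServicePeriodicMonitoringInterval: 5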

3.6.3. Install OpenDaylight with L2GW

Before you start

  • Prepare the L2GW deployment files (see Section 3.6.1, “Prepare L2GW deployment files”) and configure them as needed (see Section 3.6.2, “Configuring OpenDaylight L2GW deployment”).

Procedure

  1. Run the deployment command using the necessary environment files to set up the L2GW functionality with OpenDaylight.
    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-l2gw-opendaylight.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml \
    -e <other environment files>
Note

The parameters in later environment files override those set in earlier environment files. Pay attention to the order of the environment files to avoid parameters being accidentally overwritten.

Tip

You can easily override some of the parameters by creating a minimal environment file that only sets the parameters you want to change and combining it with the default environment files.