Chapter 3. Install OpenDaylight on the overcloud

This document focuses only on OpenDaylight installation. Before you can deploy OpenDaylight, you must ensure that you have a working undercloud environment and that the overcloud nodes are connected to the physical network.

See Installing the Undercloud and Configuring Basic Overcloud Requirements with the CLI Tools in the Director Installation and Usage guide for the procedures necessary to deploy the undercloud and overcloud.

There are several methods to install OpenDaylight in Red Hat OpenStack Platform. This chapter introduces the most useful OpenDaylight scenarios and how to install them.

3.1. Understanding the default configuration and customizing settings

The recommended approach to installing OpenDaylight is to use the default environment file neutron-opendaylight.yaml and pass it as an argument to the deployment command on the undercloud. This deploys the default installation of OpenDaylight.

Other OpenDaylight installation and configuration scenarios are based on this installation method. You can deploy OpenDaylight in various scenarios by providing specific environment files to the deployment command.
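
For example, a minimal deployment command that uses only the default environment file might look like the following sketch; combine it with any other environment files that your deployment requires, as described in Section 3.2:

$ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml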

3.1.1. Understanding the default environment file

The default environment file is neutron-opendaylight.yaml in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory. This environment file enables or disables the services that OpenDaylight supports. The environment file also defines the necessary parameters that director sets during deployment.

The following file is an example neutron-opendaylight.yaml file that you can use for a Docker based deployment:

# A Heat environment that can be used to deploy OpenDaylight with L3 DVR using Docker containers
resource_registry:
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::Heat::None
  OS::TripleO::Services::OpenDaylightApi: ../../docker/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: ../../puppet/services/opendaylight-ovs.yaml
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None
  OS::TripleO::Docker::NeutronMl2PluginBase: ../../puppet/services/neutron-plugin-ml2-odl.yaml

parameter_defaults:
  NeutronEnableForceMetadata: true
  NeutronPluginExtensions: 'port_security'
  NeutronMechanismDrivers: 'opendaylight_v2'
  NeutronServicePlugins: 'odl-router_v2,trunk'
  OpenDaylightLogMechanism: 'console'

Red Hat OpenStack Platform director uses the resource_registry to map resources for a deployment to the corresponding resource definition yaml file. Services are one type of resource that you can map. If you want to disable a particular service, set its value to OS::Heat::None. In the default file, the OpenDaylightApi and OpenDaylightOvs services are enabled, while the default neutron agents are explicitly disabled because OpenDaylight inherits their functionality.

You can use heat parameters to configure settings for a deployment with director. You can override their default values with the parameter_defaults section of the environment file.

In this example, the NeutronEnableForceMetadata, NeutronMechanismDrivers, and NeutronServicePlugins parameters are set to enable OpenDaylight.
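
For example, to change the logging mechanism back to file without editing the shipped environment file, you can create a small override file of your own (the file name is illustrative) and pass it to the deployment command after neutron-opendaylight.yaml:

parameter_defaults:
  OpenDaylightLogMechanism: 'file'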

Note

A list of other services and their configuration options is provided later in this guide.

3.1.2. Configuring the OpenDaylight API Service

You can change the default values in the /usr/share/openstack-tripleo-heat-templates/puppet/services/opendaylight-api.yaml file to suit your needs. Do not modify the settings in this file directly. Duplicate the file and retain the original as a backup. Modify only the duplicate and pass the duplicate to the deployment command.
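
For example, a minimal sketch of this workflow, assuming that you keep custom templates in /home/stack/templates:

$ cp /usr/share/openstack-tripleo-heat-templates/puppet/services/opendaylight-api.yaml \
  /home/stack/templates/opendaylight-api.yaml

You can then point the OS::TripleO::Services::OpenDaylightApi entry in the resource_registry of a custom environment file at the copy so that director uses the modified service definition.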

Note

The parameters in the latter environment files override those set in previous environment files. Ensure that you pay attention to the order of the environment files to avoid overwriting parameters accidentally.

3.1.2.1. Configurable options

When you configure the OpenDaylight API Service, you can set several parameters:

OpenDaylightPort

The port used for Northbound communication. The default value is 0. This parameter is deprecated in OSP 13.

OpenDaylightUsername

The login user name for OpenDaylight. The default value is admin.

OpenDaylightPassword

The login password for OpenDaylight. The default value is admin.

OpenDaylightEnableDHCP

Enables OpenDaylight to act as the DHCP service. The default value is false.

OpenDaylightFeatures

A comma-delimited list of features to boot in OpenDaylight. The default value is [odl-netvirt-openstack, odl-jolokia].

OpenDaylightConnectionProtocol

The L7 protocol used for REST access. The default value is http.

OpenDaylightManageRepositories

Defines whether to manage the OpenDaylight repository. The default value is false.

OpenDaylightSNATMechanism

The SNAT mechanism to be used by OpenDaylight. Select conntrack or controller. The default value is conntrack.

OpenDaylightLogMechanism

The logging mechanism for OpenDaylight. Select file or console. The default value is file.

OpenDaylightTLSKeystorePassword

The password for the OpenDaylight TLS keystore. The default value is opendaylight. Passwords must be at least 6 characters.

EnableInternalTLS

Enables or disables TLS in the internal network. You can use values true or false. The default value is false.

InternalTLSCAFile

If you enable TLS for services in the internal network, you must use the InternalTLSCAFile parameter to specify the default CA cert. The default value is /etc/ipa/ca.crt.
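
As an illustration, an environment file that overrides several of these parameters might contain the following; all values are examples only:

parameter_defaults:
  OpenDaylightUsername: 'admin'
  OpenDaylightPassword: 'examplepass'  # must be at least 6 characters
  OpenDaylightEnableDHCP: false
  OpenDaylightSNATMechanism: 'controller'
  OpenDaylightLogMechanism: 'console'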

For more information on how to deploy with TLS, see the Advanced Overcloud Customization Guide.

3.1.3. Configuring the OpenDaylight OVS Service

You can change the default values in the /usr/share/openstack-tripleo-heat-templates/puppet/services/opendaylight-ovs.yaml file to suit your needs. Do not modify the settings in this file directly. Duplicate the file and retain the original as a backup. Modify only the duplicate and pass the duplicate to the deployment command.

Note

The parameters in the latter environment files override those set in previous environment files. Ensure that you pay attention to the order of the environment files to avoid overwriting parameters accidentally.

3.1.3.1. Configurable options

When you configure the OpenDaylight OVS Service, you can set several parameters:

OpenDaylightPort

The port used for Northbound communication to OpenDaylight. The default value is 0. The OVS service uses the Northbound interface to query OpenDaylight and ensure that it is fully up before connecting. This parameter is deprecated in OSP 13.

OpenDaylightConnectionProtocol

The Layer 7 protocol used for REST access. The default value is http, which is the only supported protocol in OpenDaylight. This parameter is deprecated in OSP 13.

OpenDaylightCheckURL

The URL to verify that OpenDaylight is fully up before OVS connects. The default value is restconf/operational/network-topology:network-topology/topology/netvirt:1.

OpenDaylightProviderMappings

A comma-delimited list of mappings between logical networks and physical interfaces. This setting is required for VLAN deployments. The default value is datacentre:br-ex.

OpenDaylightUsername

The custom username for the OpenDaylight OVS service. The default value is admin.

OpenDaylightPassword

The custom password for the OpenDaylight OVS service. The default value is admin.

HostAllowedNetworkTypes

Defines the allowed tenant network types for this OVS host. The value can vary per host or role to constrain the hosts that nova instances and networks are scheduled to. The default value is ['local', 'vlan', 'vxlan', 'gre', 'flat'].

OvsEnableDpdk

Enables or disables DPDK in OVS. The default value is false.

OvsVhostuserMode

The mode for OVS with vhostuser port creation. In client mode, the hypervisor is responsible for creating vhostuser sockets. In server mode, OVS creates them. The default value is client.

VhostuserSocketDir

The directory to use for vhostuser sockets. The default value is /var/run/openvswitch.

OvsHwOffload

Enables or disables OVS Hardware Offload. You can use true or false. The default value is false. This parameter is in technical preview for this release.

EnableInternalTLS

Enables or disables TLS in the internal network. You can use values true or false. The default value is false.

InternalTLSCAFile

If you enable TLS for services in the internal network, you must use the InternalTLSCAFile parameter to specify the default CA cert. The default value is /etc/ipa/ca.crt.

ODLUpdateLevel

The OpenDaylight update level. You can use values 1 or 2. The default value is 1.

VhostuserSocketGroup

The vhost-user socket directory group. When vhostuser is in the default dpdkvhostuserclient mode, qemu creates the vhost socket. The default value for VhostuserSocketGroup is qemu.

VhostuserSocketUser

The vhost-user socket directory user name. When vhostuser is in the default dpdkvhostuserclient mode, qemu creates the vhost socket. The default value for VhostuserSocketUser is qemu.
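
As an illustration, an environment file that overrides several of the OVS service parameters might look like this; the values shown are examples only:

parameter_defaults:
  OpenDaylightProviderMappings: 'datacentre:br-ex'
  HostAllowedNetworkTypes: ['vlan', 'vxlan', 'flat']
  OvsVhostuserMode: 'client'
  VhostuserSocketDir: '/var/run/openvswitch'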

3.1.4. Using neutron metadata service with OpenDaylight

The OpenStack Compute service allows virtual machines to query metadata associated with them by making a web request to a special address, 169.254.169.254. OpenStack Networking proxies such requests to nova-api, even when the requests come from isolated networks or from multiple networks with overlapping IP addresses.

The Metadata service uses either the neutron L3 agent router or the DHCP agent instance to serve metadata requests. Deploying OpenDaylight with the Layer 3 routing plug-in enabled disables the neutron L3 agent. Therefore, you must configure the Metadata service to flow through the DHCP instance, even when a router exists in a tenant network. This functionality is enabled in the default environment file neutron-opendaylight.yaml. To disable it, set NeutronEnableForceMetadata to false.

VM instances have a static host route installed, using DHCP option 121, for 169.254.169.254/32. With this static route in place, Metadata requests to 169.254.169.254:80 go to the Metadata name server proxy in the DHCP network namespace. The namespace proxy then adds the HTTP headers with the instance’s IP to the request and connects it to the Metadata agent through a Unix domain socket. The Metadata agent queries neutron for the instance ID that corresponds to the source IP and the network ID, and proxies the request to the nova Metadata service. The additional HTTP headers are required to maintain isolation between tenants and to allow overlapping IP support.
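
As an illustration, from inside a running instance you can confirm the injected host route and query the Metadata service with standard commands:

$ ip route | grep 169.254.169.254
$ curl http://169.254.169.254/openstack/latest/meta_data.json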

3.1.5. Understanding the network configuration and NIC template

In Red Hat OpenStack Platform director, the physical neutron network datacentre is mapped by default to an OVS bridge called br-ex. This mapping remains the same with the OpenDaylight integration. If you use the default OpenDaylightProviderMappings and plan to create a flat or VLAN External network, you must configure the OVS br-ex bridge in the NIC template for Compute nodes. Because the Layer 3 plug-in uses distributed routing to these nodes, it is no longer necessary to configure br-ex in the Controller role NIC template.

The br-ex bridge can be mapped to any network in network isolation, but it is typically mapped to the External network, as shown in the example.

- type: ovs_bridge
  name: {get_input: bridge_name}
  use_dhcp: false
  members:
    -
      type: interface
      name: nic3
      # force the MAC address of the bridge to this interface
      primary: true
  dns_servers: {get_param: DnsServers}
  addresses:
    -
      ip_netmask: {get_param: ExternalIpSubnet}
  routes:
    -
      default: true
      ip_netmask: 0.0.0.0/0
      next_hop: {get_param: ExternalInterfaceDefaultRoute}

With DPDK, you must create another OVS bridge, typically called br-phy, and add an ovs_dpdk_port to it. The IP address of the bridge is configured for VXLAN overlay network tunnels.

- type: ovs_user_bridge
  name: br-phy
  use_dhcp: false
  addresses:
    -
      ip_netmask: {get_param: TenantIpSubnet}
  members:
    -
      type: ovs_dpdk_port
      name: dpdk0
      driver: uio_pci_generic
      members:
        -
          type: interface
          name: nic1
          # force the MAC address of the bridge to this interface
          primary: true
Note

When using network isolation, you do not need to place an IP address, or a default route, in this bridge on Compute nodes.

Alternatively, you can configure external network access without using the br-ex bridge. To use this method, you must know the interface name of the overcloud Compute node in advance. For example, if eth3 is the deterministic name of the third interface on the Compute node, then you can use it to specify an interface in the NIC template for the Compute node.

-
  type: interface
  name: eth3
  use_dhcp: false

3.2. Basic installation of OpenDaylight

This section shows how to deploy OpenDaylight with the standard environment files.

3.2.1. Prepare the OpenDaylight environment files for overcloud

Before you start

  • Install the undercloud. For more information, see Installing the undercloud.
  • Optionally, create a local registry with the container images that you want to use during the overcloud and OpenDaylight installation. For more information, see Configuring a container image source in the Director installation and usage guide.

Procedure

  1. Log in to the undercloud and load the admin credentials.

    $ source ~/stackrc
  2. Create an environment file, odl-images.yaml, that contains references to the Docker container images that you need for the OpenStack and OpenDaylight installation.

    $ openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml --output-env-file /home/stack/templates/odl-images.yaml

You have now prepared the environment to deploy the overcloud and are ready to start the installation described in Section 3.2.2, “Install overcloud with OpenDaylight”.

More information

The openstack overcloud container image prepare command prepares the container image environment files for the installation of the overcloud and OpenDaylight. This command uses the following options:

-e
specifies the service environment file to add specific container images required by that environment, such as OpenDaylight and OVS
--output-env-file
creates a new container image environment file with a list of container images to use for the installation
--pull-source
sets the location of the Docker containers registry
--namespace
sets the namespace of the Docker containers
--prefix
adds a prefix to the image name
--suffix
adds a suffix to the image name
--tag
defines the release of the images

3.2.2. Install overcloud with OpenDaylight

Before you start

  • Prepare the OpenDaylight environment files. See Section 3.2.1, “Prepare the OpenDaylight environment files for overcloud”.

Procedure

  1. Log in to the undercloud and load the admin credentials.

    $ source ~/stackrc
  2. Deploy the overcloud using previously created environment files.

    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
 -e <other environment files> \
     -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
     -e /home/stack/templates/odl-images.yaml
Note

Environment files present in the deployment command overwrite environment files that you include earlier in the command. You must pay attention to the order of the environment files that you include to avoid overwriting parameters accidentally.

Tip

You can override some of the parameters by creating a minimal environment file that sets only the parameters that you want to change and combining it with the default environment files.

More information

The openstack overcloud deploy command in this procedure uses the following options:

--templates
defines the path to the heat templates directory
-e
specifies an environment file

3.3. Install OpenDaylight in a custom role

Installing OpenDaylight in a custom role results in an isolated OpenDaylightApi service that runs on a designated OpenDaylight node, separate from the Controller node.

If you want to use a custom role for OpenDaylight, you must create a role file that contains the node layout and function configuration.

3.3.1. Customize the role file based on default roles

You can deploy OpenStack with a user-defined list of roles, each role running a user-defined list of services. A role is a group of nodes that contains individual services or configurations. For example, you can create a Controller role that contains the nova API service. You can view example roles in openstack-tripleo-heat-templates.

Use these roles to generate a roles_data.yaml file that contains the roles that you want for the overcloud nodes. You can also create custom roles by creating individual files in a directory and use them to generate a new roles_data.yaml.

To create customized environment files that install only specific OpenStack roles, complete the following steps:

Procedure

  • Load the admin credentials.

    $ source ~/stackrc
  • List the default roles that you can use to generate the custom roles_data.yaml file.

    $ openstack overcloud role list
  • If you want to use all of these roles, run the following command to generate a roles_data.yaml file:

    $ openstack overcloud roles generate -o roles_data.yaml
  • If you want to customize the role file to include only some of the roles, you can pass the names of the roles as arguments to the command in the previous step. For example, to create a roles_data.yaml file with the Controller, Compute and Telemetry roles, run the following command:

    $ openstack overcloud roles generate -o roles_data.yaml Controller Compute Telemetry

3.3.2. Create a custom role for OpenDaylight

To create a custom role, create a new role file in the role files directory and generate a new roles_data.yaml file. For each custom role that you create, you must create a new role file. Each custom role file must include the data only for a specific role, and the custom role file name must match the role name.

Minimally, the file must define these parameters:

  • name: defines the name of the role. The name must always be a non-empty unique string.

    - name: Custom_role
  • ServicesDefault: lists the services used in this role. The list can remain empty if the role uses no services. The example format looks like this:

    ServicesDefault:
        - OS::TripleO::Services::AuditD
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Collectd
        - OS::TripleO::Services::Docker

In addition to the required parameters, you can also define further settings:

  • CountDefault: defines the default number of nodes. If CountDefault is empty, it defaults to zero.

    CountDefault: 1
  • HostnameFormatDefault: defines the format string for a host name. The value is optional.

    HostnameFormatDefault: '%stackname%-computeovsdpdk-%index%'
  • Description: adds descriptive information about the role.

    Description:
        Compute OvS DPDK Role
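
Combining these parameters, a complete custom role file might look like the following sketch; the role name and the shortened service list are illustrative:

- name: ComputeOvsDpdk
  Description:
    Compute OvS DPDK Role
  CountDefault: 1
  HostnameFormatDefault: '%stackname%-computeovsdpdk-%index%'
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::Docker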

Procedure

  1. Copy the default role files into a new directory and keep the original files as a backup.

    $ mkdir ~/roles
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles
  2. Modify the default Controller role in the Controller.yaml file in ~/roles and remove the OpenDaylightApi line from the file to disable the OpenDaylightAPI service on the Controller node:

       - name: Controller
         CountDefault: 1
         ServicesDefault:
          - OS::TripleO::Services::TripleoFirewall
          - OS::TripleO::Services::OpenDaylightApi #<--Remove this
          - OS::TripleO::Services::OpenDaylightOvs
  3. Create a new OpenDaylight.yaml file in the ~/roles directory and add the OpenDaylight role description:

     - name: OpenDaylight
       CountDefault: 1
       ServicesDefault:
         - OS::TripleO::Services::Aide
         - OS::TripleO::Services::AuditD
         - OS::TripleO::Services::CACerts
         - OS::TripleO::Services::CertmongerUser
         - OS::TripleO::Services::Collectd
         - OS::TripleO::Services::Docker
         - OS::TripleO::Services::Fluentd
         - OS::TripleO::Services::Ipsec
         - OS::TripleO::Services::Kernel
         - OS::TripleO::Services::LoginDefs
         - OS::TripleO::Services::MySQLClient
         - OS::TripleO::Services::Ntp
         - OS::TripleO::Services::ContainersLogrotateCrond
         - OS::TripleO::Services::Rhsm
         - OS::TripleO::Services::RsyslogSidecar
         - OS::TripleO::Services::Securetty
         - OS::TripleO::Services::SensuClient
         - OS::TripleO::Services::Snmp
         - OS::TripleO::Services::Sshd
         - OS::TripleO::Services::Timezone
         - OS::TripleO::Services::TripleoFirewall
         - OS::TripleO::Services::TripleoPackages
         - OS::TripleO::Services::Tuned
         - OS::TripleO::Services::Ptp
         - OS::TripleO::Services::OpenDaylightApi
  4. Save the file.
  5. Generate the new role file to use when you deploy the OpenStack overcloud with OpenDaylight in the custom role.

    $ openstack overcloud roles generate --roles-path ~/roles -o ~/roles_data.yaml Controller Compute OpenDaylight

3.3.3. Install overcloud with OpenDaylight in the custom role

Before you start

  • Create the custom role for OpenDaylight and generate the roles_data.yaml file. See Section 3.3.2, “Create a custom role for OpenDaylight”.

Procedure

  1. Create an environment file, for example odl-composable.yaml, and set the flavor and node count for the custom role:

          parameter_defaults:
            OvercloudOpenDaylightFlavor: opendaylight
            OvercloudOpenDaylightCount: 3

    For more information, see Creating a roles_data file.

  2. Run the deployment command with the -r argument to override the default role definitions. This option tells the deployment command to use the roles_data.yaml file that contains your custom role. Pass the odl-composable.yaml environment file that you created in the previous step to this deployment command. In this example, there are three ironic nodes in total. One ironic node is reserved for the custom OpenDaylight role:

    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e network-environment.yaml --compute-scale 1 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -r ~/roles_data.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml \
    -e /home/stack/templates/odl-composable.yaml
Note

Environment files present in the deployment command overwrite environment files that you include earlier in the command. You must pay attention to the order of the environment files that you include to avoid overwriting parameters accidentally.

Tip

You can override some of the parameters by creating a minimal environment file that sets only the parameters that you want to change and combining it with the default environment files.

More information

  • The -r option overrides the role definitions at installation time.

    -r <roles_data>.yaml
  • A custom role requires an extra ironic node during the installation.
  • To override the node count for any custom composable role in Red Hat OpenStack Platform 13, use the syntax <role-name>Count: <value> in an environment file, where the role name comes from the roles_data.yaml file. For example, OpenDaylightCount: 3 sets three nodes for the OpenDaylight role.

3.3.4. Verify the installation of OpenDaylight in the custom role

Before you start

  • Install the overcloud with OpenDaylight in the custom role. See Section 3.3.3, “Install overcloud with OpenDaylight in the custom role”.

Procedure

  1. List the existing instances:

    $ openstack server list
  2. Verify that the nodes for the new OpenDaylight role are deployed as instances:

    +--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
    | ID                                   | Name                     | Status | Task State | Power State | Networks           |
    +--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
    | 360fb1a6-b5f0-4385-b68a-ff19bcf11bc9 | overcloud-controller-0   | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.4 |
    | e38dde02-82da-4ba2-b5ad-d329a6ceaef1 | overcloud-novacompute-0  | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.5 |
    | c85ca64a-77f7-4c2c-a22e-b71d849a72e8 | overcloud-opendaylight-0 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.8 |
    +--------------------------------------+--------------------------+--------+------------+-------------+--------------------+

3.4. Install OpenDaylight with SR-IOV support

OpenDaylight can be deployed with Compute nodes that support Single Root Input/Output Virtualization (SR-IOV). In this deployment, Compute nodes must operate as dedicated SR-IOV nodes and must not be mixed with OVS-based nova instances. It is possible to deploy both OVS and SR-IOV Compute nodes in a single OpenDaylight deployment.

This scenario uses a custom SR-IOV Compute role to accomplish this kind of deployment.

The SR-IOV deployment requires that you use the neutron SR-IOV agent to configure the virtual functions (VFs). These functions are then passed to the Compute instance directly when it is deployed, and they serve as a network port. The VFs derive from a host NIC on the Compute node, and therefore some information about the host interface is required before you start the deployment.

3.4.1. Prepare the SR-IOV Compute role

Following the same methodology as shown in Section 3.3, “Install OpenDaylight in a custom role”, you must create a custom role for the SR-IOV Compute nodes to allow creation of the SR-IOV based instances, while the default Compute role serves the OVS based nova instances.

Before you start

  • Review the custom role methodology in Section 3.3, “Install OpenDaylight in a custom role”.

Procedure

  1. Copy the default role files into a new directory and keep the original files as a backup.

    $ mkdir ~/roles
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles
  2. Create a new ComputeSriov.yaml file in the ~/roles directory and add the following role description:

     - name: ComputeSriov
       CountDefault: 1
       ServicesDefault:
         - OS::TripleO::Services::Kernel
         - OS::TripleO::Services::Ntp
         - OS::TripleO::Services::NeutronSriovHostConfig
         - OS::TripleO::Services::NeutronSriovAgent
         - OS::TripleO::Services::TripleoPackages
         - OS::TripleO::Services::TripleoFirewall
         - OS::TripleO::Services::Sshd
         - OS::TripleO::Services::NovaCompute
         - OS::TripleO::Services::NovaLibvirt
         - OS::TripleO::Services::NovaMigrationTarget
         - OS::TripleO::Services::Timezone
         - OS::TripleO::Services::ComputeNeutronCorePlugin
         - OS::TripleO::Services::Securetty
  3. Save the file.
  4. Remove the NeutronSriovAgent and NeutronSriovHostConfig services, shown in the following lines, from the default Compute role file and save the file:

         - OS::TripleO::Services::NeutronSriovHostConfig
         - OS::TripleO::Services::NeutronSriovAgent
  5. Generate the new role file to use to deploy the OpenStack overcloud with OpenDaylight Compute SR-IOV support.

    $ openstack overcloud roles generate --roles-path ~/roles -o ~/roles_data.yaml Controller Compute ComputeSriov
  6. Create the local registry:

    $ openstack overcloud container image prepare \
      --namespace=192.168.24.1:8787/rhosp13 --prefix=openstack- --tag=2018-05-07.2 \
      -e /home/stack/templates/environments/services-docker/neutron-opendaylight.yaml \
      -e /home/stack/templates/environments/services-docker/neutron-opendaylight-sriov.yaml \
      --output-env-file=/home/stack/templates/overcloud_images.yaml \
      --roles-file /home/stack/templates/roles_data.yaml

3.4.2. Configuring the SR-IOV agent service

To deploy OpenDaylight with SR-IOV support, you must override the default parameters in the neutron-opendaylight.yaml file. You can use a standard SR-IOV environment file that resides in /usr/share/openstack-tripleo-heat-templates together with the neutron-opendaylight.yaml environment file. However, it is a good practice not to edit the original files. Instead, duplicate the original environment file and modify the parameters in the duplicate.

Alternatively, you can create a new environment file in which you provide only the parameters that you want to change, and use both files for deployment. To deploy the customized OpenDaylight, pass both files to the deployment command. Because newer environment files override any previous settings, you must include them in the deployment command in the correct order. The correct order is neutron-opendaylight.yaml first, and then neutron-opendaylight-sriov.yaml.

If you want to deploy OpenDaylight and SR-IOV with the default settings, you can use the neutron-opendaylight-sriov.yaml that is provided by Red Hat. If you need to change or add parameters, make a copy of the default SR-IOV environment file and edit the newly created file.

The following is an illustrative example of a customized neutron-opendaylight-sriov.yaml file:

# A Heat environment that can be used to deploy OpenDaylight with SRIOV
resource_registry:
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: ../puppet/services/neutron-plugin-ml2.yaml
  OS::TripleO::Services::NeutronCorePlugin: ../puppet/services/neutron-plugin-ml2-odl.yaml
  OS::TripleO::Services::OpenDaylightApi: ../docker/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: ../puppet/services/opendaylight-ovs.yaml
  OS::TripleO::Services::NeutronSriovAgent: ../puppet/services/neutron-sriov-agent.yaml
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None

parameter_defaults:
  NeutronEnableForceMetadata: true
  NeutronPluginExtensions: 'port_security'
  NeutronMechanismDrivers: ['sriovnicswitch','opendaylight_v2']
  NeutronServicePlugins: 'odl-router_v2,trunk'

  # Add PciPassthroughFilter to the scheduler default filters
  #NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
  #NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter"]

  #NeutronPhysicalDevMappings: "datacentre:ens20f2"

  # Number of VFs that needs to be configured for a physical interface
  #NeutronSriovNumVFs: "ens20f2:5"

  #NovaPCIPassthrough:
  #  - devname: "ens20f2"
  #    physical_network: "datacentre"

More information

You can configure the following options in the neutron-opendaylight-sriov.yaml file. The following list describes the individual options and the settings required to enable SR-IOV functionality:

NovaSchedulerDefaultFilters

Allows the use of PCI Passthrough for SR-IOV. You must uncomment this parameter in the environment file and include PciPassthroughFilter in the list.

NovaSchedulerAvailableFilters

Enables specifying the PCI Passthrough filter among the nova default filters. This parameter must be set and include nova.scheduler.filters.all_filters.

NeutronPhysicalDevMappings

Maps the logical neutron network to a host network interface. This must be specified so that neutron is able to bind the virtual network to a physical port.

NeutronSriovNumVFs

Number of VFs to create for a host network interface. Syntax: <Interface name>:<number of VFs>

NovaPCIPassthrough

Configures the whitelist of allowed PCI devices in nova to be used for PCI Passthrough in a list format, for example:

NovaPCIPassthrough:
    - vendor_id: "8086"
      product_id: "154c"
      address: "0000:05:00.0"
      physical_network: "datacentre"

You can also use the logical device name rather than specific hardware attributes:

NovaPCIPassthrough:
  - devname: "ens20f2"
    physical_network: "datacentre"
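
For example, an override file that enables these settings might look like the following sketch; the interface name ens20f2 and the VF count are illustrative values taken from the commented defaults above:

parameter_defaults:
  NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
  NovaSchedulerAvailableFilters: ["nova.scheduler.filters.all_filters","nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter"]
  NeutronPhysicalDevMappings: "datacentre:ens20f2"
  NeutronSriovNumVFs: "ens20f2:5"
  NovaPCIPassthrough:
    - devname: "ens20f2"
      physical_network: "datacentre"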

3.4.3. Install OpenDaylight with SR-IOV

Before you start

  • Prepare the SR-IOV Compute role and configure the SR-IOV agent service. See Section 3.4.1, “Prepare the SR-IOV Compute role” and Section 3.4.2, “Configuring the SR-IOV agent service”.

Procedure

  1. Run the deployment command with the -r argument to include your custom role file and the necessary environment files to enable the SR-IOV functionality with OpenDaylight.

    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e <other environment files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight-sriov.yaml \
    -e network-environment.yaml --compute-scale 1 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -r my_roles_data.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml
Note

Environment files present in the deployment command overwrite environment files that you include earlier in the command. You must pay attention to the order of the environment files that you include to avoid overwriting parameters accidentally.

Tip

You can override some of the parameters by creating a minimal environment file that sets only the parameters that you want to change and combining it with the default environment files.

More information

  • The -r option overrides the role definitions at installation time.

    -r <roles_data>.yaml
  • A custom role requires an extra ironic node during the installation.

3.5. Install OpenDaylight with OVS-DPDK support

OpenDaylight can be deployed with Open vSwitch Data Plane Development Kit (DPDK) acceleration with director. This deployment offers higher dataplane performance because packets are processed in user space rather than in the kernel. Deploying with OVS-DPDK requires knowledge of the physical hardware layout of each Compute node to take advantage of potential performance gains.

You should especially consider:

  • whether the network interface on the host supports DPDK
  • the NUMA node topology of the Compute node (number of sockets, CPU cores, and memory per socket)
  • the proximity of the DPDK NIC PCI bus to each NUMA node
  • the amount of RAM available on the Compute node

For more information, consult the Network Functions Virtualization Planning and Configuration Guide.

3.5.1. Prepare the OVS-DPDK deployment files

To deploy OVS-DPDK, use an additional environment file that overrides some of the parameters set by the neutron-opendaylight.yaml environment file in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory. Do not modify the original environment file. Instead, create a new environment file that contains the necessary parameters, for example neutron-opendaylight-dpdk.yaml.

If you want to deploy OpenDaylight with OVS-DPDK with the default settings, use the default neutron-opendaylight-dpdk.yaml environment file in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory.

The default file contains the following values:

# A Heat environment that can be used to deploy OpenDaylight with L3 DVR and DPDK.
# This file is to be used with neutron-opendaylight.yaml

parameter_defaults:
  NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter"
  OpenDaylightSNATMechanism: 'controller'

  ComputeOvsDpdkParameters:
    OvsEnableDpdk: True

    ## Host configuration Parameters
    #TunedProfileName: "cpu-partitioning"
    #IsolCpusList: ""               # Logical CPUs list to be isolated from the host process (applied via cpu-partitioning tuned).
                                    # It is mandatory to provide isolated cpus for tuned to achieve optimal performance.
                                    # Example: "3-8,12-15,18"
    #KernelArgs: ""                 # Space separated kernel args to configure hugepage and IOMMU.
                                    # Deploying DPDK requires enabling hugepages for the overcloud compute nodes.
                                    # It also requires enabling IOMMU when using the VFIO (vfio-pci) OvsDpdkDriverType.
                                    # This should be done by configuring parameters via host-config-and-reboot.yaml environment file.

    ## Attempting to deploy DPDK without appropriate values for the below parameters may lead to unstable deployments
    ## due to CPU contention of DPDK PMD threads.
    ## It is highly recommended to enable isolcpus (via KernelArgs) on compute overcloud nodes and set the following parameters:
    #OvsDpdkSocketMemory: ""       # Sets the amount of hugepage memory to assign per NUMA node.
                                   # It is recommended to use the socket closest to the PCIe slot used for the
                                   # desired DPDK NIC.  Format should be comma separated per socket string such as:
                                   # "<socket 0 mem MB>,<socket 1 mem MB>", for example: "1024,0".
    #OvsDpdkDriverType: "vfio-pci" # Ensure the Overcloud NIC to be used for DPDK supports this UIO/PMD driver.
    #OvsPmdCoreList: ""            # List or range of CPU cores for PMD threads to be pinned to.  Note, NIC
                                   # location to cores on socket, number of hyper-threaded logical cores, and
                                   # desired number of PMD threads can all play a role in configuring this setting.
                                   # These cores should be on the same socket where OvsDpdkSocketMemory is assigned.
                                   # If using hyperthreading then specify both logical cores that would equal the
                                   # physical core.  Also, specifying more than one core will trigger multiple PMD
                                   # threads to be spawned, which may improve dataplane performance.
    #NovaVcpuPinSet: ""            # Cores to pin Nova instances to.  For maximum performance, select cores
                                   # on the same NUMA node(s) selected for previous settings.

3.5.2. Configuring the OVS-DPDK deployment

You can configure the OVS-DPDK service by changing the values in neutron-opendaylight-dpdk.yaml.

TunedProfileName

Enables pinning of IRQs in order to isolate them from the CPU cores to be used with OVS-DPDK. Default profile: cpu-partitioning

IsolCpusList

Specifies a list of CPU cores that the kernel scheduler must not use. These cores can instead be dedicated to OVS-DPDK. The format is a comma-separated list of individual cores or ranges, for example 1,2,3,4-8,10-12.

KernelArgs

Lists arguments to be passed to the kernel at boot time. For OVS-DPDK, you must enable IOMMU and hugepages, for example:

intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=60

Note that this example reserves 60 GB of RAM for hugepages. It is important to consider the available amount of RAM on Compute nodes when setting this value.

OvsDpdkSocketMemory

Specifies the amount of hugepage memory (in MB) to assign to each NUMA node. For maximum performance, assign memory to the socket closest to the DPDK NIC. Specify the memory per socket as a comma-separated list:

"<socket 0 mem MB>,<socket 1 mem MB>"

For example: "1024,0"

OvsDpdkDriverType

Specifies the UIO driver type to use with PMD threads. The DPDK NIC must support the driver specified. Red Hat OpenStack Platform deployments support the driver type vfio-pci. Red Hat OpenStack Platform deployments do not support UIO drivers, including uio_pci_generic and igb_uio.

OvsPmdCoreList

Lists single cores or ranges of cores for PMD threads to be pinned to. The cores specified here should be on the same NUMA node where memory was assigned with the OvsDpdkSocketMemory setting. If hyper-threading is being used, then specify the logical cores that would make up the physical core on the host.

OvsDpdkMemoryChannels

Specifies the number of memory channels per socket.

NovaVcpuPinSet

Cores to pin nova instances to with libvirtd. For best performance, use cores on the same socket as the cores to which the OVS PMD threads are pinned.

3.5.3. Install OpenDaylight with OVS-DPDK

Before you start

  • Prepare the OVS-DPDK deployment files and configure the deployment. See Section 3.5.1, “Prepare the OVS-DPDK deployment files” and Section 3.5.2, “Configuring the OVS-DPDK deployment”.

Procedure

  1. Run the deployment command with the necessary environment files to enable the DPDK functionality with OpenDaylight.
    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e <other environment files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight-dpdk.yaml \
    -e network-environment.yaml --compute-scale 1 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -r my_roles_data.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml
Note

Environment files present in the deployment command overwrite environment files that you include earlier in the command. You must pay attention to the order of the environment files that you include to avoid overwriting parameters accidentally.

Tip

You can override some of the parameters by creating a minimal environment file that sets only the parameters that you want to change and combining it with the default environment files.

3.5.4. Example: Configuring OVS-DPDK with ODL and VXLAN tunnelling

This section describes an example configuration of OVS-DPDK with ODL and VXLAN tunnelling.

Important

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. For details, see Deriving DPDK parameters with workflows (https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure#proc_derive-dpdk).

3.5.4.1. Generating the ComputeOvsDpdk composable role

Generate roles_data.yaml for the ComputeOvsDpdk role.

# openstack overcloud roles generate --roles-path templates/openstack-tripleo-heat-templates/roles -o roles_data.yaml Controller ComputeOvsDpdk

3.5.4.2. Configuring OVS-DPDK parameters

  1. Add the custom resources for OVS-DPDK under resource_registry:

      resource_registry:
        # Specify the relative/absolute path to the config files you want to use for override the default.
        OS::TripleO::ComputeOvsDpdk::Net::SoftwareConfig: nic-configs/computeovsdpdk.yaml
        OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  2. Under parameter_defaults, set the tunnel type and the tenant type to vxlan:

    NeutronTunnelTypes: 'vxlan'
    NeutronNetworkType: 'vxlan'
  3. Under parameter_defaults, set the bridge mappings:

    # The OVS logical->physical bridge mappings to use.
    NeutronBridgeMappings: 'tenant:br-link0'
    OpenDaylightProviderMappings: 'tenant:br-link0'
  4. Under parameter_defaults, set the role-specific parameters for the ComputeOvsDpdk role:

      ##########################
      # OVS DPDK configuration #
      ##########################
      ComputeOvsDpdkParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39"
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "2-19,22-39"
        NovaVcpuPinSet: ['4-19,24-39']
        NovaReservedHostMemory: 4096
        OvsDpdkSocketMemory: "4096,4096"
        OvsDpdkMemoryChannels: "4"
        OvsDpdkCoreList: "0,20,1,21"
        OvsPmdCoreList: "2,22,3,23"
        OvsEnableDpdk: true
    Note

    You must assign at least one CPU, with its sibling thread, on each NUMA node, regardless of whether DPDK NICs are present on that node, for the DPDK PMD to avoid failures in creating guest instances.

    Note

    These huge pages are consumed by the virtual machines, and also by OVS-DPDK using the OvsDpdkSocketMemory parameter as shown in this procedure. The number of huge pages available for the virtual machines is the boot parameter minus the OvsDpdkSocketMemory.

    You must also add hw:mem_page_size=1GB to the flavor that you associate with the DPDK instance, as shown in the example command after these notes.

    Note

    OvsDpdkCoreList and OvsDpdkMemoryChannels are required settings for this procedure. Attempting to deploy DPDK without appropriate values causes the deployment to fail or leads to an unstable deployment.
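
    As mentioned in the note above, you must set the page size property on the flavor that you use for DPDK instances. A minimal sketch, assuming a flavor named m1.dpdk already exists:

    $ openstack flavor set m1.dpdk --property hw:mem_page_size=1GB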

3.5.4.3. Configuring the Controller node

  1. Create the control plane Linux bond for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        addresses:
        - ip_netmask:
            list_join:
            - /
            - - get_param: ControlPlaneIp
              - get_param: ControlPlaneSubnetCidr
        routes:
        - ip_netmask: 169.254.169.254/32
          next_hop:
            get_param: EC2MetadataIp
        members:
        - type: interface
          name: eth1
          primary: true
  2. Assign VLANs to this Linux bond.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageMgmtNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageMgmtIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: ExternalNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: ExternalIpSubnet
        routes:
        - default: true
          next_hop:
            get_param: ExternalInterfaceDefaultRoute
  3. Create the OVS bridge that provides access to floating IPs in cloud networks.

      - type: ovs_bridge
        name: br-link0
        use_dhcp: false
        mtu: 9000
        members:
        - type: interface
          name: eth2
          mtu: 9000
        - type: vlan
          vlan_id:
            get_param: TenantNetworkVlanID
          mtu: 9000
          addresses:
          - ip_netmask:
              get_param: TenantIpSubnet

3.5.4.4. Configuring the Compute node for DPDK interfaces

Create the compute-ovs-dpdk.yaml file from the default compute.yaml file and make the following changes:

  1. Create the control plane Linux bond for an isolated network.

      - type: linux_bond
        name: bond_api
        bonding_options: "mode=active-backup"
        use_dhcp: false
        dns_servers:
          get_param: DnsServers
        members:
        - type: interface
          name: nic7
          primary: true
        - type: interface
          name: nic8
  2. Assign VLANs to this Linux bond.

      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
    
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        device: bond_api
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
  3. Set a bridge with a DPDK port to link to the controller.

      - type: ovs_user_bridge
        name: br-link0
        use_dhcp: false
        ovs_extra:
          - str_replace:
              template: set port br-link0 tag=_VLAN_TAG_
              params:
                _VLAN_TAG_:
                   get_param: TenantNetworkVlanID
        addresses:
          - ip_netmask:
              get_param: TenantIpSubnet
        members:
          - type: ovs_dpdk_bond
            name: dpdkbond0
            mtu: 9000
            rx_queue: 2
            members:
              - type: ovs_dpdk_port
                name: dpdk0
                members:
                  - type: interface
                    name: nic3
              - type: ovs_dpdk_port
                name: dpdk1
                members:
                  - type: interface
                    name: nic4
    Note

    To include multiple DPDK devices, repeat the type code section for each DPDK device that you want to add.

    Note

    When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.

3.5.4.5. Deploying the overcloud

Run the overcloud_deploy.sh script to deploy the overcloud.

  #!/bin/bash

  openstack overcloud deploy \
--templates \
-r /home/stack/ospd-13-vxlan-dpdk-odl-ctlplane-dataplane-bonding-hybrid/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight-dpdk.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-dpdk-permissions.yaml \
-e /home/stack/ospd-13-vxlan-dpdk-odl-ctlplane-dataplane-bonding-hybrid/docker-images.yaml \
-e /home/stack/ospd-13-vxlan-dpdk-odl-ctlplane-dataplane-bonding-hybrid/network-environment.yaml

3.6. Install OpenDaylight with L2GW support

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Layer 2 gateway services allow a tenant’s virtual network to be bridged to a physical network. This integration enables users to access resources on a physical server through a layer 2 network connection rather than through a routed layer 3 connection. This means extending the layer 2 broadcast domain instead of going through L3 or Floating IPs.

3.6.1. Prepare L2GW deployment files

To deploy OpenDaylight with L2GW support, use the neutron-l2gw-opendaylight.yaml file in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory. If you need to change the settings in that file, do not modify the existing file. Instead, create a new copy of the environment file that contains the necessary parameters.

If you want to deploy OpenDaylight and L2GW with the default settings, you can use neutron-l2gw-opendaylight.yaml in the /usr/share/openstack-tripleo-heat-templates/environments/services-docker directory.

The default file contains these values:

# A Heat environment file that can be used to deploy Neutron L2 Gateway service
#
# Currently there are only two service providers for Neutron L2 Gateway
# This file enables L2GW service with OpenDaylight as driver.
#
# - OpenDaylight: L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default
resource_registry:
  OS::TripleO::Services::NeutronL2gwApi: ../../docker/services/neutron-l2gw-api.yaml

parameter_defaults:
  NeutronServicePlugins: "odl-router_v2,trunk,l2gw"
  L2gwServiceProvider: ['L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default']

  # Optional
  # L2gwServiceDefaultInterfaceName: "FortyGigE1/0/1"
  # L2gwServiceDefaultDeviceName: "Switch1"
  # L2gwServiceQuotaL2Gateway: 10
  # L2gwServicePeriodicMonitoringInterval: 5

3.6.2. Configuring OpenDaylight L2GW deployment

You can configure the service by changing the values in the neutron-l2gw-opendaylight.yaml file:

NeutronServicePlugins

Comma-separated list of service plugin entrypoints to be loaded from the neutron.service_plugins namespace. Defaults to router.

L2gwServiceProvider

Defines the provider that should be used to provide this service. Defaults to L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default

L2gwServiceDefaultInterfaceName

Sets the name of the default interface.

L2gwServiceDefaultDeviceName

Sets the name of the default device.

L2gwServiceQuotaL2Gateway

Specifies the service quota for the L2 gateway. Defaults to 10.

L2gwServicePeriodicMonitoringInterval

Specifies the monitoring interval for the L2GW service.
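
For example, a custom environment file that overrides the optional settings might contain the following; the values are illustrative and match the commented defaults above:

parameter_defaults:
  L2gwServiceDefaultInterfaceName: "FortyGigE1/0/1"
  L2gwServiceDefaultDeviceName: "Switch1"
  L2gwServiceQuotaL2Gateway: 10
  L2gwServicePeriodicMonitoringInterval: 5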

3.6.3. Install OpenDaylight with L2GW

Before you start

  • Prepare the L2GW deployment files and configure the deployment. See Section 3.6.1, “Prepare L2GW deployment files” and Section 3.6.2, “Configuring OpenDaylight L2GW deployment”.

Procedure

  1. Run the deployment command with the necessary environment files to enable the L2GW functionality with OpenDaylight.
    $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
    -e <other environment files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-l2gw-opendaylight.yaml \
    -e /home/stack/templates/docker-images.yaml \
    -e /home/stack/templates/odl-images.yaml
Note

Environment files present in the deployment command overwrite environment files that you include earlier in the command. You must pay attention to the order of the environment files that you include to avoid overwriting parameters accidentally.

Tip

You can override some of the parameters by creating a minimal environment file that sets only the parameters that you want to change and combining it with the default environment files.