Chapter 3. Deploying OVN with director

When you deploy OVN on Red Hat OpenStack Platform, director performs the following actions:

  1. Enables the OVN ML2 plugin and generates the necessary configuration options.
  2. Deploys the OVN databases and the ovn-northd service on the controller node(s).
  3. Deploys ovn-controller on each Compute node.
  4. Deploys neutron-ovn-metadata-agent on each Compute node.
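
After the deployment completes, you can check that these services are running on the overcloud nodes. The following commands are a sketch only: they assume a containerized deployment where the container runtime is podman (docker on earlier releases), and the container names vary by release:

    # On a Controller node, look for the OVN databases and ovn-northd
    $ sudo podman ps --format '{{.Names}}' | grep ovn

    # On a Compute node, look for ovn-controller and the metadata agent
    $ sudo podman ps --format '{{.Names}}' | grep -e ovn_controller -e ovn_metadata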

3.1. Deploying ML2/OVN with DVR

To deploy and manage distributed virtual routing (DVR) in an ML2/OVN deployment, you configure settings in heat templates and environment files.

Note

The procedures in this guide deploy OVN with the default DVR configuration in an HA environment.

The default settings are provided as guidelines only. They are not expected to work in production or test environments, which might require customization for network isolation, dedicated NICs, or other variable factors.

The following example procedure shows how to configure a proof-of-concept ML2/OVN deployment with HA and DVR, using the typical defaults.

Procedure

  1. Verify that the value of OS::TripleO::Compute::Net::SoftwareConfig in the environments/services/neutron-ovn-dvr-ha.yaml file is the same as the OS::TripleO::Controller::Net::SoftwareConfig value in use. You can normally find this value in the network environment file that you used to deploy the overcloud, such as the environments/net-multiple-nics.yaml file. This setting creates the appropriate external network bridge on the Compute node. A sketch of these mappings appears after this procedure.

    Note

    If you customize the network configuration of the Compute node, you may need to add the appropriate configuration to your custom files instead.

  2. Include environments/services/neutron-ovn-dvr-ha.yaml as an environment file when deploying the overcloud. For example:

    $ openstack overcloud deploy \
        --templates /usr/share/openstack-tripleo-heat-templates \
        ...
        -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml
  3. Ensure that the Compute and Controller roles in roles_data.yaml include the external_bridge tag, and that the Compute role includes an entry for the external network. For example:

    - name: Compute
      description: |
        Basic Compute Node role
      CountDefault: 1
      # Create external Neutron bridge (unset if using ML2/OVS without DVR)
      tags:
        - external_bridge
      networks:
        External:
          subnet: external_subnet
    ...
    - name: Controller
      description: |
        Controller role that has all the controller services loaded and handles
        Database, Messaging and Network functions.
      CountDefault: 1
      tags:
        - primary
        - controller
        - external_bridge
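
For reference, the following is a minimal sketch of the resource_registry mappings that step 1 asks you to compare, assuming the multiple-NICs example templates. The paths are illustrative and depend on your own network environment files:

    resource_registry:
      # Both roles must use NIC configuration templates that create the
      # external network bridge (illustrative multiple-NICs paths)
      OS::TripleO::Controller::Net::SoftwareConfig: ../network/config/multiple-nics/controller.yaml
      OS::TripleO::Compute::Net::SoftwareConfig: ../network/config/multiple-nics/compute.yaml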

3.2. Deploying the OVN metadata agent on Compute nodes

The OVN metadata agent is configured in the tripleo-heat-templates/docker/services/ovn-metadata.yaml file and included in the default Compute role through OS::TripleO::Services::OVNMetadataAgent. As such, the OVN metadata agent with default parameters is deployed as part of the OVN deployment. See Chapter 3, Deploying OVN with director.
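
For example, the default Compute role lists the agent among its services. The following is an abbreviated excerpt of roles_data.yaml, with surrounding entries omitted:

    - name: Compute
      ...
      ServicesDefault:
        ...
        - OS::TripleO::Services::OVNMetadataAgent
        ...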

OpenStack guest instances access the Networking metadata service at the link-local IP address 169.254.169.254. Each HAProxy instance runs in a network namespace and cannot reach the host network directly. HAProxy adds the necessary headers to the metadata API request and then forwards the request over a UNIX domain socket to the neutron-ovn-metadata-agent, which has access to the host networks where the Compute metadata API exists.
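
To observe this wiring on a Compute node, you can look for the listening UNIX domain socket. This is a sketch only; the socket path and its visibility can vary by release and by containerized layout:

    # ss -xl | grep -i metadata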

The OVN Networking service creates a unique network namespace for each virtual network that enables the metadata service. Each network accessed by the instances on the Compute node has a corresponding metadata namespace (ovnmeta-<net_uuid>).
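
To list these namespaces on a Compute node, run the following command as root. The UUID shown is a placeholder:

    # ip netns list | grep ovnmeta
    ovnmeta-fd706b96-a591-409e-83be-33caea824114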

3.2.1. Troubleshooting metadata issues

You can use metadata namespaces to access local instances on the Compute node for troubleshooting. To access an instance through its metadata namespace, run the following command as root on the Compute node:

# ip netns exec ovnmeta-fd706b96-a591-409e-83be-33caea824114 ssh USER@INSTANCE_IP_ADDRESS

Replace USER and INSTANCE_IP_ADDRESS with the user name and IP address of the local instance that you want to troubleshoot.
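
You can also confirm that the metadata service address is configured inside a namespace. A minimal check, using the same placeholder namespace; expect to see 169.254.169.254 on one of the interfaces:

    # ip netns exec ovnmeta-fd706b96-a591-409e-83be-33caea824114 ip addr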

3.3. Deploying Internal DNS with OVN

To use domain names instead of IP addresses on your local network for east-west traffic, use the internal domain name service (DNS). With internal DNS, ovn-controller responds to DNS queries locally on the Compute node. Note that internal DNS overrides any custom DNS server specified in an instance’s /etc/resolv.conf file: with internal DNS deployed, the instance’s DNS queries are handled by ovn-controller instead of the custom DNS server.

Procedure

  1. Enable DNS with the NeutronPluginExtensions parameter:

    parameter_defaults:
      NeutronPluginExtensions: "dns"
  2. Set the DNS domain before you deploy the overcloud:

    parameter_defaults:
      NeutronDnsDomain: "mydns-example.org"
  3. Deploy the overcloud:

    $ openstack overcloud deploy \
        --templates /usr/share/openstack-tripleo-heat-templates \
        ...
        -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml
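
After deployment, instances on the same network can resolve one another by name, for example myinstance.mydns-example.org. You can also confirm that the DNS extension is active by checking the DNS assignment on a port. This is a sketch; the port ID is a placeholder:

    $ openstack port show <port-id> -c dns_assignment -c dns_name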