

Chapter 10. Use Open Virtual Network (OVN)

Open Virtual Network (OVN) is an Open vSwitch-based SDN for supplying network services to instances. This chapter describes the steps required to deploy OVN using director.


OVN is currently available as a Technology Preview feature. For more information on the support scope for features marked as Technology Preview, see the Red Hat Technology Preview Features Support Scope documentation.


For deployments using OVN as the ML2 mechanism driver, only nodes with connectivity to the external networks are eligible to host the router gateway ports. However, a known issue currently makes all nodes eligible, which causes problems when the Compute nodes do not have external connectivity: if a router gateway port is scheduled on a Compute node without external connectivity, ingress and egress traffic for the external networks does not work, and the router gateway port must be rescheduled to a Controller node. As a workaround, you can provide external connectivity on all your Compute nodes, or you can consider deleting NeutronBridgeMappings or setting it to datacentre:br-ex.

10.1. Deploying the OVN base profile

To deploy the base profile, pass the environments/neutron-ml2-ovn.yaml file to openstack overcloud deploy. For example:

$ openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-ovn.yaml

10.2. Deploying the OVN HA profile

To deploy the Pacemaker HA profile, pass the environments/neutron-ml2-ovn-ha.yaml file to the openstack overcloud deploy command. For example:

$ openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-ovn-ha.yaml

10.3. The OVN Components

OVN requires the following components and services:

  • OVN Northbound (NB) database server - Runs on the Controller node and listens on TCP port 6641.
  • OVN Southbound (SB) database server - Runs on the Controller node and listens on TCP port 6642.
  • ovn-northd - Runs on the Controller node.
  • ovn-controller - Runs on all Controller and Compute nodes where OS::Tripleo::Services::OVNController is defined. Connects to the OVN SB database server.
  • The ovn ML2 mechanism driver - Connects to the OVN NB and SB database servers.
Figure: OVN components
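On a deployed overcloud, you can spot-check these components from a Controller node. The commands below are a sketch only: they assume the default ports listed above and that the OVN client tools are installed locally.

```shell
# Confirm the NB (6641) and SB (6642) database servers are listening:
ss -tlnp | grep -E ':664[12] '

# List logical switches via the NB database (run where ovn-nbctl is available):
ovn-nbctl ls-list
```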

10.4. Packages and dependencies

The following packages are required for OVN:

  • openvswitch-ovn-common
  • openvswitch-ovn-central
  • openvswitch-ovn-host

These are subpackages of the main openvswitch package; the minimum version required is OVS 2.7.2. These packages should already be included with the overcloud-full.qcow2 image.
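To verify the packages on a node, or inside the overcloud image, you can query the RPM database directly (a sketch; assumes you run it on the node in question):

```shell
# Check the base OVS version (2.7.2 or later is required) and the OVN subpackages:
rpm -q openvswitch openvswitch-ovn-common openvswitch-ovn-central openvswitch-ovn-host
```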

10.5. Using director to deploy OVN

When director is used to deploy OVN, it performs the following steps:

  1. Enables the OVN ML2 mechanism driver, and generates the necessary configuration options.
  2. Deploys the OVN database servers and ovn-northd on the controller node(s).
  3. Deploys ovn-controller on each Compute node.

The following director components are used:

  • tripleo-heat-templates
  • puppet-tripleo
  • puppet-neutron
  • puppet-ovn and puppet-vswitch (to deploy the OVN services).

To use OVN, your director deployment must use geneve encapsulation, and not VXLAN.

10.6. The OVN composable service

Director has a composable service for OVN named ovn-dbs with two profiles: the base profile, and the pacemaker HA profile. The OVN northbound and southbound databases are hosted by the ovsdb-server service. Similarly, the ovsdb-server process runs alongside ovs-vswitchd to host the OVS database (conf.db).


The schema file for the NB database is located in /usr/share/openvswitch/ovn-nb.ovsschema, and the SB database schema file is in /usr/share/openvswitch/ovn-sb.ovsschema.

10.7. High Availability

The ovsdb-server service does not currently support active-active mode; however, it does support HA in master-slave mode, managed by Pacemaker using the OCF resource agent script. The ovsdb-server instance running in master mode allows write access to the database, while all other slave ovsdb-server instances replicate the database locally from the master and do not allow write access.

For this reason, both a base profile and an HA profile are supplied. With the base profile, the OVN database servers are started only on the bootstrap controller (if the deployment has multiple controllers). If the HA profile is enabled, the OVN database servers are started on all the controllers, and Pacemaker selects one to serve in the master role.

10.8. Using the base profile

The YAML file for the base profile is available in tripleo-heat-templates/puppet/services/ovn-dbs.yaml. When this service is enabled, the OVN database servers are started only on the bootstrap controller.

For example, if a deployment has 3 controllers (controller-0, controller-1, and controller-2), the OVN database servers will be started on controller-0. If controller-0 goes down, then the OVN database servers are also unavailable, and they are not started on the other controllers. This is a single point of failure.

Director creates a virtual IP address on its internal network, active on one of the controller nodes; this virtual IP is mapped to OVN_DBS_VIP. To enable the OVN ML2 driver and ovn-controller services to connect to the OVN database servers, puppet-tripleo generates the following HAProxy configuration in haproxy.cfg on each controller node:

  • OVN NB database server - front end bound to the virtual IP on TCP port 6641.
  • OVN SB database server - front end bound to the virtual IP on TCP port 6642.
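For illustration only, the generated blocks might resemble the following; the listen section names, addresses, and server lines shown here are assumptions, as puppet-tripleo generates the real values for your deployment:

```
listen ovn_nb_db
    bind 172.16.0.250:6641          # OVN_DBS_VIP (example address)
    server controller-0 172.16.0.11:6641 check

listen ovn_sb_db
    bind 172.16.0.250:6642
    server controller-0 172.16.0.11:6642 check
```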

The OVN ML2 mechanism driver is configured to connect to the OVN_DBS_VIP (in the [ovn] section of ml2_conf.ini).
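The resulting [ovn] section typically resembles the following (illustrative; ovn_nb_connection and ovn_sb_connection are the standard networking-ovn option names, and the VIP address is an example):

```ini
[ovn]
ovn_nb_connection = tcp:172.16.0.250:6641
ovn_sb_connection = tcp:172.16.0.250:6642
```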

Because the OVN database servers are not started on controller-1 and controller-2, HAProxy always directs traffic to the OVN database servers running on controller-0.

10.9. Using the Pacemaker HA profile

The YAML file for this profile is located in tripleo-heat-templates/puppet/services/pacemaker/ovn-dbs.yaml. When enabled, the OVN database servers are managed by Pacemaker, and puppet-tripleo creates a pacemaker OCF resource named ovn:ovndb-servers.

The OVN database servers are started on each controller node, and the controller owning the virtual IP address (OVN_DBS_VIP) runs the OVN database servers in master mode. The OVN ML2 mechanism driver and ovn-controller then connect to the database servers using the OVN_DBS_VIP value. In the event of a failover, Pacemaker moves the virtual IP address (OVN_DBS_VIP) to another controller and promotes the OVN database server running on that node to master.
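You can observe the resource state from any controller (a sketch; pcs output formatting varies by version, and these commands assume Pacemaker is managing the cluster):

```shell
# Show overall cluster status, including the ovn:ovndb-servers master/slave resource:
pcs status

# List configured resources to see which node currently holds the master role:
pcs resource show
```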


OVS version 2.7.2 is required for the Pacemaker HA profile. Failover capabilities are expected to be available in OVS 2.7.4.

10.10. Configuring ovn-controller

The ovn-controller service runs on each Compute node and connects to the OVN SB database server to retrieve the logical flows. It then translates these logical flows into physical OpenFlow flows and installs them in the OVS integration bridge (br-int). To communicate with ovs-vswitchd and install the OpenFlow flows, ovn-controller connects to the local ovsdb-server (which hosts conf.db) using the UNIX socket path passed when ovn-controller was started (for example, unix:/var/run/openvswitch/db.sock).

The ovn-controller service expects certain key-value pairs in the external_ids column of the Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following are the key-value pairs that puppet-vswitch configures in the external_ids column:

hostname=<HOST NAME>
ovn-encap-ip=<IP OF THE NODE>
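These settings can be inspected, or set by hand for testing, with ovs-vsctl. The commands below are illustrative: the ovn-remote and ovn-encap-type keys are standard ovn-controller settings not listed above, and the addresses are examples.

```shell
# Inspect the external_ids that ovn-controller reads:
ovs-vsctl get Open_vSwitch . external_ids

# Example of setting the values manually on a Compute node:
ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-remote="tcp:172.16.0.250:6642" \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-encap-ip=172.16.0.21
```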

10.11. Known Limitations

This section describes known issues and limitations in the Technology Preview of OVN:

  • Open vSwitch versions - Red Hat OpenStack Platform ships with OVS version 2.7.
  • Metadata API - This functionality requires OVS version 2.8.
  • L3 Gateway HA - This functionality requires OVS version 2.8. It also requires updates to networking-ovn that are expected in a future release.
  • New deployments only - OVN is only available for new deployments. There is currently no migration path to OVN for an existing deployment. This is expected in a future release.
  • IPv6 Routers - There is currently a known issue with determining the logical port status. This is expected to be addressed in OVS version 2.7.4.