Chapter 2. Planning your OVN deployment
Deploy OVN only in high availability (HA) deployments. We recommend that you deploy with distributed virtual routing (DVR) enabled.
To use OVN, your director deployment must use Generic Network Virtualization Encapsulation (Geneve), and not VXLAN. Geneve allows OVN to identify the network using the 24-bit Virtual Network Identifier (VNI) field and an additional 32-bit Type Length Value (TLV) to specify both the source and destination logical ports. You should account for this larger protocol header when you determine your MTU setting.
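For example, assuming Geneve over IPv4 with the 8-byte OVN metadata option, a typical overhead calculation looks like this (actual values can vary with your encapsulation options):

outer Ethernet (14) + IPv4 (20) + UDP (8) + Geneve base header (8) + OVN option TLV (8) = 58 bytes
1500 (physical network MTU) - 58 = 1442 (maximum tenant network MTU)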
DVR HA with OVN
Deploy OVN with DVR in an HA environment. OVN is supported only in an HA environment. DVR is enabled by default in new ML2/OVN deployments and disabled by default in new ML2/OVS deployments. The neutron-ovn-dvr-ha.yaml environment file configures the required DVR-specific parameters for deployments using OVN in an HA environment.
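For example, a deployment command that includes this environment file might look like the following; the template path shown is typical but can vary by release:

$ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml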
2.1. The ovn-controller on Compute nodes
The ovn-controller service runs on each Compute node and connects to the OVN southbound (SB) database server to retrieve the logical flows. The ovn-controller service translates these logical flows into physical OpenFlow flows and adds the flows to the OVS bridge (br-int). To communicate with ovs-vswitchd and install the OpenFlow flows, ovn-controller connects to the local ovsdb-server (which hosts conf.db) using the UNIX socket path that was passed when ovn-controller was started (for example, unix:/var/run/openvswitch/db.sock).
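To see the result of this translation, you can dump the OpenFlow flows that ovn-controller installed in br-int on a Compute node, for example:

# ovs-ofctl dump-flows br-int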
The ovn-controller service expects certain key-value pairs in the external_ids column of the Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following are the key-value pairs that puppet-vswitch configures in the external_ids column:
hostname=<HOST NAME>
ovn-encap-ip=<IP OF THE NODE>
ovn-encap-type=geneve
ovn-remote=tcp:OVN_DBS_VIP:6642
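You can verify these values on a Compute node by reading the external_ids column directly, for example:

# ovs-vsctl get Open_vSwitch . external_ids

Although puppet-vswitch normally populates these fields, you can also set a value manually; the IP address below is only an example:

# ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:172.16.0.10:6642"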
2.2. The OVN composable service
The director has a composable service for OVN named ovn-dbs with two profiles: the base profile and the pacemaker HA profile. The OVN northbound and southbound databases are hosted by the ovsdb-server service. Similarly, the ovsdb-server process runs alongside ovs-vswitchd to host the OVS database (conf.db).
The schema file for the NB database is located in /usr/share/openvswitch/ovn-nb.ovsschema, and the SB database schema file is in /usr/share/openvswitch/ovn-sb.ovsschema.
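You can confirm the schema version that each file provides with the ovsdb-tool utility, for example:

$ ovsdb-tool schema-version /usr/share/openvswitch/ovn-nb.ovsschema
$ ovsdb-tool schema-version /usr/share/openvswitch/ovn-sb.ovsschema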
2.3. High Availability with pacemaker and DVR
In addition to using the required HA profile, deploy OVN with DVR to ensure the availability of networking services. With the HA profile enabled, the OVN database servers start on all the Controllers, and Pacemaker then selects one controller to serve in the master role.
The ovsdb-server service does not currently support active-active mode. It does support HA in master-slave mode, which is managed by Pacemaker using the Open Cluster Framework (OCF) resource agent script. Having ovsdb-server run in master mode allows write access to the database, while all the other slave ovsdb-server services replicate the database locally from the master and do not allow write access.
The YAML file for this profile is the tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml file. When enabled, the OVN database servers are managed by Pacemaker, and puppet-tripleo creates a pacemaker OCF resource named ovn:ovndb-servers.
The OVN database servers are started on each Controller node, and the controller that owns the virtual IP address (OVN_DBS_VIP) runs the OVN DB servers in master mode. The OVN ML2 mechanism driver and ovn-controller then connect to the database servers using the OVN_DBS_VIP value. In the event of a failover, Pacemaker moves the virtual IP address (OVN_DBS_VIP) to another controller, and also promotes the OVN database server running on that node to master.
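To check which controller currently holds the master role, you can inspect the Pacemaker cluster status on a Controller node, and you can query a database server for its replication state; the control socket path shown is an assumption and can vary by version:

# pcs status
# ovs-appctl -t /var/run/openvswitch/ovnnb_db.ctl ovsdb-server/sync-status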
2.4. Layer 3 high availability with OVN
OVN supports Layer 3 high availability (L3 HA) without any special configuration. OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. The ovn-controller also periodically sends gratuitous ARPs for FIPs and router external addresses.
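You can list the gateway chassis scheduled for a given router port with ovn-nbctl; the port name below is hypothetical:

# ovn-nbctl lrp-get-gateway-chassis lrp-f25e5a95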
L3 HA uses OVN to balance the routers back to the original gateway nodes to avoid any node becoming a bottleneck.
BFD monitoring
OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to node.
Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and ARP responses and announcements.
Each compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other compute nodes.
Unlike an ML2/OVS configuration, OVN does not detect external network failures.
L3 HA for OVN supports the following failure modes:
- The gateway node becomes disconnected from the network (tunneling interface).
- ovs-vswitchd stops (ovs-vswitchd is responsible for BFD signaling).
- ovn-controller stops (ovn-controller removes itself as a registered node).
This BFD monitoring mechanism only works for link failures, not for routing failures.
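You can check the state of the BFD sessions on the tunnel interfaces of a node with ovs-appctl or ovs-vsctl, for example:

# ovs-appctl bfd/show
# ovs-vsctl --columns=name,bfd_status list Interface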