Chapter 1. Planning your migration of the ML2 mechanism driver from OVS to OVN

Note

Red Hat strongly recommends that you upgrade from RHOSP 16.2 to RHOSP 17.1 before you migrate from the OVS mechanism driver to the OVN mechanism driver.

RHOSP 17.1 provides a better baseline for migration in most cases. For example, in RHOSP 16.2 you cannot migrate from the OVS mechanism driver to the OVN mechanism driver if your deployment uses trunk ports or the iptables hybrid firewall driver. In RHOSP 17.1, you can.

In addition, the RHOSP 17.1 version of the OVN migration tools includes improvements that are not available in the 16.2 version. For example, the RHOSP 17.1 migration tools can revert the environment to the OVS mechanism driver if the migration fails.

Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP 15.0 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers today. Those advantages multiply with each release while we continue to enhance and improve the ML2/OVN feature set.

The ML2/OVS mechanism driver is deprecated in RHOSP 17.0. Over several releases, Red Hat is replacing ML2/OVS with ML2/OVN.

Support is available for the deprecated ML2/OVS mechanism driver through the RHOSP 17 releases. During this time, the ML2/OVS driver remains in maintenance mode, receiving bug fixes and normal support, while most new feature development happens in the ML2/OVN mechanism driver.

In RHOSP 18.0, Red Hat plans to completely remove the ML2/OVS mechanism driver and stop supporting it.

If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism driver, start now to evaluate the benefits and feasibility of replacing it with the ML2/OVN mechanism driver. Migration is supported in RHOSP 16.2 and RHOSP 17.1. Migration tools are available in RHOSP 17.0 for test purposes only.

Note

Red Hat requires that you file a proactive support case before attempting a migration from ML2/OVS to ML2/OVN. Red Hat does not support migrations without the proactive support case. See How to submit a Proactive Case.

Engage your Red Hat Technical Account Manager or Red Hat Global Professional Services early in this evaluation. In addition to helping you file the required proactive support case if you decide to migrate, Red Hat can help you plan and prepare, starting with the following basic questions.

When should you migrate?
Timing depends on many factors, including your business needs and the status of our continuing improvements to the ML2/OVN offering. See Feature support in OVN and OVS mechanism drivers and ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios.
In-place migration or parallel migration?
Depending on a variety of factors, you can choose between the following basic approaches to migration.

  • Parallel migration. Create a new, parallel deployment that uses ML2/OVN, and then move your operations to that deployment.
  • In-place migration. Use the ovn_migration.sh script as described in this document and sketched in the example after this list. Note that Red Hat supports the ovn_migration.sh script only in deployments that are managed by RHOSP director.
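
The following is a high-level sketch of the in-place workflow with ovn_migration.sh, run on the undercloud. It assumes the migration tool is installed and the environment variables it requires (for example, the location of your overcloud deploy script) are exported as described later in this document; the exact subcommands and the wait times between them depend on your release and network types.

# Run on the undercloud as the stack user, with undercloud credentials.
source ~/stackrc

# Generate an Ansible inventory of the overcloud nodes.
./ovn_migration.sh generate-inventory

# Lower the DHCP T1 timer so that instances renew their leases soon
# and learn the reduced MTU before migration (VXLAN/GRE networks).
./ovn_migration.sh setup-mtu-t1

# After the lease renewal window has elapsed, reduce the MTU of
# pre-migration VXLAN and GRE networks.
./ovn_migration.sh reduce-mtu

# Perform the migration to the OVN mechanism driver.
./ovn_migration.sh start-migration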
Warning

An ML2/OVS to ML2/OVN migration alters the environment in ways that might not be completely reversible. A failed or interrupted migration can leave the OpenStack environment inoperable. Before migrating in a production environment, file a proactive support case. Then work with your Red Hat Technical Account Manager or Red Hat Global Professional Services to create a backup and migration plan and to test the migration in a staging environment that closely resembles your production environment.

1.1. Feature support in OVN and OVS mechanism drivers

Review the availability of Red Hat OpenStack Platform (RHOSP) features as part of your OVS to OVN mechanism driver migration plan.

Provisioning Baremetal Machines with OVN DHCP
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: No | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  The built-in DHCP server in OVN cannot provision bare metal nodes or serve DHCP for the provisioning networks. Chainbooting iPXE requires tagging (--dhcp-match in dnsmasq), which the OVN DHCP server does not support. See https://bugzilla.redhat.com/show_bug.cgi?id=1622154.

North/south routing on VF (direct) ports on VLAN project (tenant) networks
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: No | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  Core OVN limitation. See https://bugs.launchpad.net/neutron/+bug/1875852.

Reverse DNS for internal DNS records
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  See https://bugzilla.redhat.com/show_bug.cgi?id=2211426.

Internal DNS resolution for isolated networks
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: No | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  OVN does not support internal DNS resolution for isolated networks because it does not allocate ports for the DNS service. This does not affect OVS deployments because OVS uses dnsmasq. See https://issues.redhat.com/browse/OSP-25661.

Security group logging
  OVN RHOSP 16.2: Tech Preview | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  RHOSP does not support security group logging with the OVS mechanism driver.

Stateless security groups
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  See Configuring security groups.

Load-balancing service distributed virtual routing (DVR)
  OVN RHOSP 16.2: Yes | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  The OVS mechanism driver routes Load-balancing service traffic through Controller or Network nodes even with DVR enabled. The OVN mechanism driver routes Load-balancing service traffic directly through the Compute nodes.

IPv6 DVR
  OVN RHOSP 16.2: Yes | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  With the OVS mechanism driver, RHOSP does not distribute IPv6 traffic to the Compute nodes, even when DVR is enabled. All ingress/egress traffic goes through the centralized Controller or Network nodes. If you need IPv6 DVR, use the OVN mechanism driver.

DVR and layer 3 high availability (L3 HA)
  OVN RHOSP 16.2: Yes | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  RHOSP deployments with the OVS mechanism driver do not support DVR in conjunction with L3 HA. If you use DVR with RHOSP director, L3 HA is disabled: the Networking service still schedules routers on the Network nodes and load-shares them between the L3 agents, but if one agent fails, all routers that it hosts also fail. This affects only SNAT traffic. Red Hat recommends using the allow_automatic_l3agent_failover feature in such cases, so that if one Network node fails, the routers are rescheduled to a different node.
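
As part of this review, it can help to confirm which mechanism driver your deployment currently uses. The following is a minimal sketch, assuming a containerized RHOSP control plane where the Networking service runs in a container named neutron_api and the crudini tool is available in that container; both of these assumptions can vary by release.

# On a Controller node: read the active ML2 mechanism driver
# ("ovn" or "openvswitch") from the Networking service configuration.
sudo podman exec neutron_api crudini --get \
    /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers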

1.2. ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios

Red Hat continues to test and refine in-place migration scenarios. Work with your Red Hat Technical Account Manager or Global Professional Services to determine whether your OVS deployment meets the criteria for a valid in-place migration scenario.

1.2.1. Validated ML2/OVS to ML2/OVN migration scenarios

DVR to DVR

Start: RHOSP 16.1.1 or later with OVS with DVR.

End: Same RHOSP version and release with OVN with DVR.

SR-IOV was not present in the starting environment and was not added during or after the migration.

Centralized routing + SR-IOV with virtual function (VF) ports only

Start: RHOSP 16.1.1 or later with OVS (no DVR) and SR-IOV.

End: Same RHOSP version and release with OVN (no DVR) and SR-IOV.

Workloads used only SR-IOV virtual function (VF) ports. SR-IOV physical function (PF) ports caused migration failure. To check your deployment for PF ports, see the sketch that follows.
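
Because PF ports cause migration failure, check whether any workload uses them before you commit to an in-place migration. The following is an illustrative sketch, not an official check: it loops over all ports with admin credentials and reports any bound with the direct-physical VNIC type, which corresponds to SR-IOV PF ports. It can be slow in large deployments.

# Report every port bound with vnic_type direct-physical (SR-IOV PF).
for port in $(openstack port list -f value -c ID); do
    vnic_type=$(openstack port show "$port" -f value -c binding_vnic_type)
    if [ "$vnic_type" = "direct-physical" ]; then
        echo "SR-IOV PF port found: $port"
    fi
done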

1.2.2. ML2/OVS to ML2/OVN in-place migration scenarios that have not been verified

You cannot perform an in-place ML2/OVS to ML2/OVN migration in the following scenarios until Red Hat announces that the underlying issues are resolved.

OVS deployment uses network functions virtualization (NFV)
Red Hat supports new deployments with ML2/OVN and NFV, but has not successfully tested migration of an ML2/OVS and NFV deployment to ML2/OVN. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1925290.
SR-IOV with physical function (PF) ports
Migration tests failed when any workload uses an SR-IOV PF port. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1879546.
OVS uses trunk ports
If your ML2/OVS deployment uses trunk ports, do not perform an ML2/OVS to ML2/OVN migration. The migration does not properly set up the trunked ports in the OVN environment. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1857652.
DVR with VLAN project (tenant) networks
Do not migrate to ML2/OVN if your deployment uses DVR with VLAN project networks. Instead, you can migrate to ML2/OVN with centralized routing. For a way to check your deployment for trunk ports and VLAN project networks, see the sketch after this list. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1766930.
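
To check whether the last two scenarios apply to your deployment, you can query the Networking service for trunk ports and for VLAN project networks. A minimal sketch; run it with admin credentials, because the provider network attributes are visible only to administrators.

# Any output here means the deployment uses trunk ports.
openstack network trunk list

# Print the segmentation type of each internal (non-external) network;
# a value of "vlan" indicates a VLAN project network.
for net in $(openstack network list --internal -f value -c ID); do
    printf '%s: ' "$net"
    openstack network show "$net" -f value -c provider:network_type
done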

1.2.3. ML2/OVS to ML2/OVN in-place migration and security group rules

Ensure that any custom security group rules in your originating ML2/OVS deployment are compatible with the target ML2/OVN deployment.

For example, the default security group includes rules that allow egress to the DHCP server. If you deleted those rules in your ML2/OVS deployment, ML2/OVS automatically adds implicit rules that allow egress to the DHCP server. Those implicit rules are not supported by ML2/OVN, so in your target ML2/OVN environment DHCP and metadata traffic is blocked and instances cannot boot correctly. In this case, to restore DHCP and metadata access, you could add the following rules:

# Allow VM to contact dhcp server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol udp --dst-port 67 ${SEC_GROUP_ID}

# Allow VM to contact metadata server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol tcp --remote-ip 169.254.169.254 ${SEC_GROUP_ID}

# Allow VM to contact dhcp server (ipv6, non-slaac). Be aware that the
# remote-ip may vary depending on your use case!
openstack security group rule create --egress --ethertype IPv6 --protocol udp --dst-port 547 --remote-ip ff02::1:2 ${SEC_GROUP_ID}

# Allow VM to contact metadata server (ipv6)
openstack security group rule create --egress --ethertype IPv6 --protocol tcp --remote-ip fe80::a9fe:a9fe ${SEC_GROUP_ID}
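
After you add the rules, you can confirm that they are present in the security group, where ${SEC_GROUP_ID} is the same placeholder used in the commands above:

openstack security group rule list ${SEC_GROUP_ID}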