Testing Migration of the Networking Service to the ML2/OVN Mechanism Driver
Testing migration of Red Hat OpenStack Platform Networking service (neutron) from the ML2/OVS mechanism driver to the ML2/OVN mechanism driver
OpenStack Documentation Team
rhos-docs@redhat.com
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Planning your migration of the ML2 mechanism driver from OVS to OVN
Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP 15.0 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers today. Those advantages multiply with each release while we continue to enhance and improve the ML2/OVN feature set.
The ML2/OVS mechanism driver is deprecated in RHOSP 17.0. Over several releases, Red Hat is replacing ML2/OVS with ML2/OVN.
Support is available for the deprecated ML2/OVS mechanism driver through the RHOSP 17 releases. During this time, the ML2/OVS driver remains in maintenance mode, receiving bug fixes and normal support, and most new feature development happens in the ML2/OVN mechanism driver.
In RHOSP 18.0, Red Hat plans to completely remove the ML2/OVS mechanism driver and stop supporting it.
If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism driver, start now to evaluate the benefits and feasibility of replacing the ML2/OVS mechanism driver with the ML2/OVN mechanism driver. Migration is supported in RHOSP 16.2 and will be supported in RHOSP 17.1. Migration tools are available in RHOSP 17.0 for test purposes only.
Red Hat requires that you file a proactive support case before attempting a migration from ML2/OVS to ML2/OVN. Red Hat does not support migrations without the proactive support case. See How to submit a Proactive Case.
Engage your Red Hat Technical Account Manager or Red Hat Global Professional Services early in this evaluation. In addition to helping you file the required proactive support case if you decide to migrate, Red Hat can help you plan and prepare, starting with the following basic questions.
- When should you migrate?
  Timing depends on many factors, including your business needs and the status of our continuing improvements to the ML2/OVN offering. For instance, security groups logging is planned for a future RHOSP release. If you need that feature, you might plan for a migration after the feature is available. See Limitations of the ML2/OVN mechanism driver and ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios.
- In-place migration or parallel migration?
  Depending on a variety of factors, you can choose between the following basic approaches to migration.
  - Parallel migration. Create a new, parallel deployment that uses ML2/OVN and then move your operations to that deployment.
  - In-place migration. Use the ovn_migration.sh script as described in this document. Note that Red Hat supports the ovn_migration.sh script only in deployments that are managed by RHOSP director.
Warning
An ML2/OVS to ML2/OVN migration alters the environment in ways that might not be completely reversible. A failed or interrupted migration can leave the OpenStack environment inoperable. Before migrating in a production environment, file a proactive support case. Then work with your Red Hat Technical Account Manager or Red Hat Global Professional Services to create a backup and migration plan and test the migration in a stage environment that closely resembles your production environment.
1.1. Limitations of the ML2/OVN mechanism driver
Some features available with the ML2/OVS mechanism driver are not yet supported with the ML2/OVN mechanism driver.
1.1.1. ML2/OVS features not yet supported by ML2/OVN
Feature | Notes | Track this Feature
---|---|---
Provisioning Baremetal Machines with OVN DHCP | The built-in DHCP server on OVN presently cannot provision baremetal nodes. It cannot serve DHCP for the provisioning networks. Chainbooting iPXE requires tagging (--dhcp-match in dnsmasq), which is not supported by the OVN DHCP server. |
1.1.2. Core OVN limitations
North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router’s gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852.
1.2. ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios
Red Hat continues to test and refine in-place migration scenarios. Work with your Red Hat Technical Account Manager or Global Professional Services to determine whether your OVS deployment meets the criteria for a valid in-place migration scenario.
1.2.1. ML2/OVS to ML2/OVN in-place migration scenarios that have not been verified
You cannot perform an in-place ML2/OVS to ML2/OVN migration in the following scenarios until Red Hat announces that the underlying issues are resolved.
- OVS uses trunk ports
- If your ML2/OVS deployment uses trunk ports, do not perform an ML2/OVS to ML2/OVN migration. The migration does not properly set up the trunked ports in the OVN environment. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1857652.
- DVR with VLAN project (tenant) networks
- Do not migrate to ML2/OVN with DVR and VLAN project networks. You can migrate to ML2/OVN with centralized routing. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1766930.
1.2.2. ML2/OVS to ML2/OVN in-place migration and security group rules
Ensure that any custom security group rules in your originating ML2/OVS deployment are compatible with the target ML2/OVN deployment.
For example, the default security group includes rules that allow egress to the DHCP server. If you deleted those rules in your ML2/OVS deployment, ML2/OVS automatically adds implicit rules that allow egress to the DHCP server. Those implicit rules are not supported by ML2/OVN, so in your target ML2/OVN environment, DHCP and metadata traffic would not reach the DHCP server and the instance would not boot. In this case, to restore DHCP access, you could add the following rules:
# Allow VM to contact dhcp server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol udp --dst-port 67 ${SEC_GROUP_ID}
# Allow VM to contact metadata server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol tcp --remote-ip 169.254.169.254 ${SEC_GROUP_ID}
# Allow VM to contact dhcp server (ipv6, non-slaac). Be aware that the remote-ip may vary depending on your use case!
openstack security group rule create --egress --ethertype IPv6 --protocol udp --dst-port 547 --remote-ip ff02::1:2 ${SEC_GROUP_ID}
# Allow VM to contact metadata server (ipv6)
openstack security group rule create --egress --ethertype IPv6 --protocol tcp --remote-ip fe80::a9fe:a9fe ${SEC_GROUP_ID}
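To confirm that the rules were applied, you can list the egress rules in the security group. This check is a suggestion, not part of the official procedure; it assumes SEC_GROUP_ID still holds the ID of the security group that you modified.
# Suggested check (not part of the official procedure): list the egress rules
# in the modified security group and confirm that the DHCP and metadata rules
# appear. Assumes SEC_GROUP_ID holds the security group ID used above.
openstack security group rule list --egress ${SEC_GROUP_ID}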
Chapter 2. Migrating the ML2 mechanism driver from OVS to OVN
2.1. Preparing the environment for migration of the ML2 mechanism driver from OVS to OVN
Environment assessment and preparation is critical to a successful migration. Your Red Hat Technical Account Manager or Global Professional Services will guide you through these steps.
Prerequisites
- Your pre-migration deployment is Red Hat OpenStack Platform (RHOSP) 16.2 or later.
- Your RHOSP deployment is up to date. In other words, if you need to upgrade or update your OpenStack version, perform the upgrade or update first, and then perform the ML2/OVS to ML2/OVN migration.
- At least one IP address is available for each subnet pool. The OVN mechanism driver creates a metadata port for each subnet, and each metadata port claims an IP address from the IP address pool. One way to check address availability is shown after this list.
- You have worked with your Red Hat Technical Account Manager or Global Professional Services to plan the migration and have filed a proactive support case. See How to submit a Proactive Case.
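One way to confirm address availability, suggested here rather than taken from the official procedure, is the OpenStack client's IP availability report; replace <network> with the name or ID of each project network:
# Suggested check (not part of the official procedure): show used and total
# IP addresses for each subnet on a network. Replace <network> with the
# network name or ID.
$ source ~/overcloudrc
$ openstack ip availability show <network>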
Procedure
- Create an ML2/OVN stage deployment to obtain the baseline configuration of your target ML2/OVN deployment and test the feasibility of the target deployment. Design the stage deployment with the same basic roles, routing, and topology as the planned post-migration production deployment. Save the overcloud-deploy.sh file and any files referenced by the deployment, such as environment files. You need these files later in this procedure to configure the migration target environment.
  Note
  Use these files only for creation of the stage deployment and in the migration. Do not re-use them after the migration.
- If your ML2/OVS deployment uses VXLAN or GRE project networks, schedule a waiting period of up to 24 hours after the setup-mtu-t1 step. This waiting period allows the VM instances to renew their DHCP leases and receive the new MTU value. During this time you might need to manually set MTUs on some instances and reboot some instances.
  The 24-hour figure is based on the default dhcp_lease_duration of 86400 seconds. The actual time depends on the dhcp_renewal_time parameter in /var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini and the dhcp_lease_duration parameter in /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf. One way to inspect the current values is shown after this step.
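If you want to inspect the current values before scheduling the wait, the following commands are a suggested check, not part of the official procedure; they assume that both options are set in the DEFAULT section of their respective files.
# Suggested check (not part of the official procedure): read the current DHCP
# timing values from the puppet-generated configuration files on a node that
# runs the DHCP agent. Assumes both options live in the DEFAULT section.
$ sudo crudini --get /var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini DEFAULT dhcp_renewal_time
$ sudo crudini --get /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf DEFAULT dhcp_lease_duration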
- Install python3-networking-ovn-migration-tool:
  sudo dnf install python3-networking-ovn-migration-tool @container-tools
  The @container-tools argument also installs the container tools if they are not already present.
- Create a directory on the undercloud, and copy the Ansible playbooks:
  mkdir ~/ovn_migration
  cd ~/ovn_migration
  cp -rfp /usr/share/ansible/neutron-ovn-migration/playbooks .
- Copy your ML2/OVN stage deployment files to the migration home directory, such as ~/ovn_migration. The stage migration deployment files include overcloud-deploy.sh and any files referenced by the deployment, such as environment files. Rename the copy of overcloud-deploy.sh to overcloud-deploy-ovn.sh. Use this script for migration only. Do not use it for other purposes.
- Find your migration scenario in the following list and perform the appropriate steps to customize the openstack deploy command in overcloud-deploy-ovn.sh.
- Scenario 1: DVR to DVR, compute nodes have connectivity to the external network
  Add the following environment files to the openstack deploy command in overcloud-deploy-ovn.sh. Add them in the order shown. This command example uses the default neutron-ovn-dvr-ha.yaml file. If you use a different file, replace the file name in the command.
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
  -e $HOME/ovn-extras.yaml
- Scenario 2: Centralized routing to centralized routing (no DVR)
  - If your deployment uses SR-IOV, add the service definition OS::TripleO::Services::OVNMetadataAgent to the Controller role in the file roles_data.yaml.
  - Preserve the pre-migration custom bridge mappings. Run this command on a networker or combined networker/controller node to get the current bridge mappings:
    sudo podman exec -it neutron_ovs_agent crudini --get /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings
    Example output:
    datacentre:br-ex,tenant:br-isolated
  - On the undercloud, create an environment file for the bridge mappings: /home/stack/neutron_bridge_mappings.yaml. Set the defaults in the environment file. For example:
    parameter_defaults:
      ComputeParameters:
        NeutronBridgeMappings: "datacentre:br-ex,tenant:br-isolated"
  - Add the following environment files to the openstack deploy command in overcloud-deploy-ovn.sh. Add them in the order shown. If your environment does not use SR-IOV, omit the neutron-ovn-sriov.yaml file. The file ovn-extras.yaml does not exist yet; the script ovn_migration.sh creates it before the openstack deploy command is run.
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml \
    -e /home/stack/ovn-extras.yaml \
    -e /home/stack/neutron_bridge_mappings.yaml
  - Leave any custom network modifications the same as they were before migration.
- Scenario 3: Centralized routing to DVR, with Geneve type driver, and compute nodes connected to external networks through br-ex
  Warning
  If your ML2/OVS deployment uses centralized routing and VLAN project (tenant) networks, do not migrate to ML2/OVN with DVR. You can migrate to ML2/OVN with centralized routing. To track progress on this limitation, see https://bugzilla.redhat.com/show_bug.cgi?id=1766930.
  Ensure that compute nodes are connected to the external network through the br-ex bridge. For example, in an environment file such as compute-dvr.yaml, set the following:
    type: ovs_bridge
    # Defaults to br-ex, anything else requires specific
    # bridge mapping entries for it to be used.
    name: bridge_name
    use_dhcp: false
    members:
      - type: interface
        name: nic3
        # force the MAC address of the bridge to this interface
        primary: true
- Ensure that all users have execution privileges on the file overcloud-deploy-ovn.sh. The script requires execution privileges during the migration process.
  $ chmod a+x ~/overcloud-deploy-ovn.sh
- Use export commands to set the following migration-related environment variables. For example:
  $ export PUBLIC_NETWORK_NAME=my-public-network
  - STACKRC_FILE - the stackrc file in your undercloud. Default: ~/stackrc
  - OVERCLOUDRC_FILE - the overcloudrc file in your undercloud. Default: ~/overcloudrc
  - OVERCLOUD_OVN_DEPLOY_SCRIPT - the deployment script. Default: ~/overcloud-deploy-ovn.sh
  - PUBLIC_NETWORK_NAME - the name of your public network. Default: public
  - IMAGE_NAME - the name or ID of the glance image to use to boot a test server. Default: cirros. The image is automatically downloaded during the pre-validation / post-validation process.
  - VALIDATE_MIGRATION - create migration resources to validate the migration. Before starting the migration, the migration script boots a server and validates that the server is reachable after the migration. Default: True
    Warning
    Migration validation requires at least two available floating IP addresses, two networks, two subnets, two instances, and two routers as admin. Also, the network specified by PUBLIC_NETWORK_NAME must have available floating IP addresses, and you must be able to ping them from the undercloud. If your environment does not meet these requirements, set VALIDATE_MIGRATION to False.
  - SERVER_USER_NAME - the user name to use for logging in to the migration instances. Default: cirros
  - DHCP_RENEWAL_TIME - the DHCP renewal time in seconds to configure in the DHCP agent configuration file. Default: 30
  A sketch that sets every variable explicitly to its default follows this list.
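For convenience, the following sketch sets every variable explicitly to its documented default; adjust each value to match your environment.
# Sketch: set each migration variable explicitly to its documented default.
# Adjust the values to match your environment.
$ export STACKRC_FILE=~/stackrc
$ export OVERCLOUDRC_FILE=~/overcloudrc
$ export OVERCLOUD_OVN_DEPLOY_SCRIPT=~/overcloud-deploy-ovn.sh
$ export PUBLIC_NETWORK_NAME=public
$ export IMAGE_NAME=cirros
$ export VALIDATE_MIGRATION=True
$ export SERVER_USER_NAME=cirros
$ export DHCP_RENEWAL_TIME=30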
- Ensure that you are in the ~/ovn_migration directory, then run ovn_migration.sh generate-inventory to generate the inventory file hosts_for_migration and the ansible.cfg file:
  $ ovn_migration.sh generate-inventory | sudo tee -a /var/log/ovn_migration_output.txt
- Review the hosts_for_migration file for accuracy.
  - Ensure the lists match your environment.
  - Ensure there are ovn controllers on each node.
  - Ensure there are no list headings (such as [ovn-controllers]) that do not have list items under them.
  - From the ovn migration directory, run the command ansible -i hosts_for_migration -m ping all. An illustration of this check follows this list.
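The ping check should produce a SUCCESS result for every host in the inventory; the description below reflects the standard behavior of the Ansible ping module rather than output captured from a migration run.
# Illustrative check: every host in the inventory should answer the Ansible
# ping module with SUCCESS and "ping": "pong". Investigate any UNREACHABLE
# or FAILED hosts before continuing.
$ ansible -i hosts_for_migration -m ping all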
- If your original deployment uses VXLAN or GRE, you need to adjust maximum transmission unit (MTU) values. Proceed to Adjusting MTU for migration from the OVS mechanism driver to the OVN mechanism driver.
- If your original deployment uses VLAN networks, you can skip the MTU adjustments and proceed to Preparing container images for migration from the OVS mechanism driver to the OVN mechanism driver.
2.2. Adjusting MTU for migration of the ML2 mechanism driver from OVS to OVN
If you are migrating from RHOSP 17.0 with the OVS mechanism driver and VXLAN or GRE project networks to the OVN mechanism driver with Geneve, you must ensure that the maximum transmission unit (MTU) settings are smaller than or equal to the smallest MTU in the network.
If your current deployment uses VLAN instead of VXLAN or GRE, skip this procedure and proceed to Preparing container images for migration from the OVS mechanism driver to the OVN mechanism driver.
Prerequisites
- You have completed the steps in Preparing the environment for migration from the OVS mechanism driver to the OVN mechanism driver.
- Your pre-migration deployment is Red Hat OpenStack Platform (RHOSP) 16.2 or later with VXLAN or GRE.
Procedure
- Run ovn_migration.sh setup-mtu-t1. This lowers the T1 parameter of the internal neutron DHCP servers by configuring dhcp_renewal_time in /var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini on all the nodes where the DHCP agent is running.
  $ ovn_migration.sh setup-mtu-t1 | sudo tee -a /var/log/ovn_migration_output.txt
- If your original OVS deployment uses VXLAN or GRE project networking, wait until the DHCP leases have been renewed on all VM instances. This can take up to 24 hours depending on lease renewal settings and the number of instances.
- Verify that the T1 parameter has propagated to existing VMs.
  - Connect to one of the compute nodes.
  - Run tcpdump over one of the VM taps attached to a project network. If T1 propagation is successful, expect to see requests occur approximately every 30 seconds:
    [heat-admin@overcloud-novacompute-0 ~]$ sudo tcpdump -i tap52e872c2-e6 port 67 or port 68 -n
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on tap52e872c2-e6, link-type EN10MB (Ethernet), capture size 262144 bytes
    13:17:28.954675 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
    13:17:28.961321 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
    13:17:56.241156 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
    13:17:56.249899 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
Note
This verification is not possible with cirros VMs. The cirros udhcpc implementation does not respond to DHCP option 58 (T1). Try this verification on a port that belongs to a full Linux VM. Red Hat recommends that you check all the different operating systems represented in your workloads, such as variants of Windows and Linux distributions.
- If any VM instances were not updated to reflect the change to the T1 parameter of DHCP, reboot them.
- Lower the MTU of the pre-migration VXLAN and GRE networks:
  $ ovn_migration.sh reduce-mtu | sudo tee -a /var/log/ovn_migration_output.txt
  This step reduces the MTU network by network and tags each completed network with adapted_mtu. The tool acts only on VXLAN and GRE networks. This step does not change any values if your deployment has only VLAN project networks. One way to list the adapted networks is shown after this step.
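To see which networks the tool has already adapted, the following suggested check (not part of the official procedure) filters networks by the adapted_mtu tag:
# Suggested check (not part of the official procedure): list the networks
# that the reduce-mtu step tagged with adapted_mtu.
$ source ~/overcloudrc
$ openstack network list --tags adapted_mtu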
- If you have any instances with static IP assignment on VXLAN or GRE project networks, manually modify the configuration of those instances to configure the new Geneve MTU, which is the current VXLAN MTU minus 8 bytes. For example, if the VXLAN-based MTU was 1450, change it to 1442. An in-guest illustration follows the note below.
  Note
  Perform this step only if you have manually provided static IP assignments and MTU settings on VXLAN or GRE project networks. By default, DHCP provides the IP assignment and MTU settings.
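As an illustration of the in-guest change, on a Linux instance with a statically configured interface you might apply the new MTU with the ip tool. The interface name eth0 is an assumption, and the change must also be made persistent in the guest's own network configuration:
# Illustration only: apply the new Geneve MTU (the VXLAN MTU minus 8 bytes)
# inside a Linux guest. The interface name eth0 is an assumption; also update
# the guest's persistent network configuration.
$ sudo ip link set dev eth0 mtu 1442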
- Proceed to Preparing container images for migration from the OVS mechanism driver to the OVN mechanism driver.
2.3. Preparing container images for migration of the ML2 mechanism driver from OVS to OVN
Environment assessment and preparation is critical to a successful migration. Your Red Hat Technical Account Manager or Global Professional Services will guide you through these steps.
Prerequisites
- You have completed the steps in Preparing the environment for migration of the ML2 mechanism driver from OVS to OVN
- If your original deployment uses VXLAN or GRE, you also completed the steps in Adjusting MTU for migration from the OVS mechanism driver to the OVN mechanism driver.
Procedure
Prepare the new container images for use after the migration to ML2/OVN.
- Create a containers-prepare-parameter.yaml file in the home directory if it is not present:
  $ test -f $HOME/containers-prepare-parameter.yaml || sudo openstack tripleo container image prepare default \
    --output-env-file $HOME/containers-prepare-parameter.yaml
- Verify that containers-prepare-parameter.yaml is present at the end of your $HOME/overcloud-deploy-ovn.sh and $HOME/overcloud-deploy.sh files.
- Change the neutron_driver in the containers-prepare-parameter.yaml file to ovn:
  $ sed -i -E 's/neutron_driver:([ ]\w+)/neutron_driver: ovn/' $HOME/containers-prepare-parameter.yaml
- Verify the changes to the neutron_driver:
  $ grep neutron_driver $HOME/containers-prepare-parameter.yaml
  neutron_driver: ovn
- Update the images:
  $ sudo openstack tripleo container image prepare \
    --environment-file /home/stack/containers-prepare-parameter.yaml
  Note
  Provide the full path to your containers-prepare-parameter.yaml file. Otherwise, the command completes very quickly without updating the image list or providing an error message.
- On the undercloud, validate the updated images. Log in to the undercloud as the user stack, source the stackrc file, and list the OVN images:
  $ source ~/stackrc
  $ openstack tripleo container image list | grep '\-ovn'
  Your list should resemble the following example. It includes containers for the OVN databases, OVN controller, the metadata agent, and the neutron server agent.
  docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-northd:16.2_20211110.2
  docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-sb-db-server:16.2_20211110.2
  docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-controller:16.2_20211110.2
  docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-neutron-server-ovn:16.2_20211110.2
  docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-nb-db-server:16.2_20211110.2
  docker://undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-neutron-metadata-agent-ovn:16.2_20211110.2
- Proceed to Migrating from ML2/OVS to ML2/OVN.
2.4. Migrating the ML2 mechanism driver from OVS to OVN
The ovn-migration script performs environmental setup, migration, and cleanup tasks related to the in-place migration of the ML2 mechanism driver from OVS to OVN.
Prerequisites
- You have completed the steps in Preparing the environment for migration of the ML2 mechanism driver from OVS to OVN
- If your original deployment uses VXLAN or GRE, you also completed the steps in Adjusting MTU for migration from the OVS mechanism driver to the OVN mechanism driver.
- You also completed all required migration steps through Preparing container images for migration from the OVS mechanism driver to the OVN mechanism driver.
Procedure
- Stop all operations that interact with the Networking Service (neutron) API, such as creating new networks, subnets, or routers, or migrating virtual machine instances between compute nodes. Interaction with the Networking Service API during migration can cause undefined behavior. You can restart the API operations after completing the migration.
- Run ovn_migration.sh start-migration to begin the migration process. The tee command creates a copy of the script output for troubleshooting purposes.
  $ ovn_migration.sh start-migration | sudo tee -a /var/log/ovn_migration_output.txt
Result
The script performs the following actions.
- Creates pre-migration resources (network and VM) to validate existing deployment and final migration.
- Updates the overcloud stack to deploy OVN alongside reference implementation services using the temporary bridge br-migration instead of br-int. The temporary bridge helps to limit downtime during migration.
- Generates the OVN northbound database by running neutron-ovn-db-sync-util. The utility examines the Neutron database to create equivalent resources in the OVN northbound database.
- Clones the existing resources from br-int to br-migration, to allow OVN to find the same resource UUIDs over br-migration.
- Re-assigns ovn-controller to br-int instead of br-migration.
- Removes node resources that are not used by ML2/OVN, including the following.
  - Cleans up network namespaces (fip, snat, qrouter, qdhcp).
  - Removes any unnecessary patch ports on br-int.
  - Removes br-tun and br-migration ovs bridges.
  - Deletes ports from br-int that begin with qr-, ha-, and qg- (using neutron-netns-cleanup).
- Deletes Networking Service (neutron) agents and Networking Service HA internal networks from the database through the Networking Service API.
- Validates connectivity on pre-migration resources.
- Deletes pre-migration resources.
- Creates post-migration resources.
- Validates connectivity on post-migration resources.
- Cleans up post-migration resources.
- Re-runs the deployment tool to update OVN on br-int.
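After the script completes, one suggested spot-check (not part of the official procedure) is to confirm that the overcloud reports OVN agents in place of the OVS, DHCP, and L3 agents:
# Suggested post-migration spot-check (not part of the official procedure):
# the agent list should now show OVN entries (for example, "OVN Controller
# agent" and "OVN Metadata agent") instead of Open vSwitch, DHCP, and L3
# agents.
$ source ~/overcloudrc
$ openstack network agent list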