Chapter 3. Director-Based Environments: Performing Upgrades to Major Versions

Warning

Before performing an upgrade to the latest major version, ensure the undercloud and overcloud are updated to the latest minor versions. This includes both OpenStack Platform services and the base operating system. For the process on performing a minor version update, see "Director-Based Environments: Performing Updates to Minor Versions" in the Red Hat OpenStack Platform 8 Upgrading Red Hat OpenStack Platform guide. Performing a major version upgrade without first performing a minor version update can cause failures in the upgrade process.

Warning

With High Availability for Compute instances (or Instance HA, as described in High Availability for Compute Instances) enabled, upgrade or scale-up operations are not possible. Any attempt to do so will fail.

If you have Instance HA enabled, disable it before performing an upgrade or scale-up. To do so, perform a rollback as described in Rollback.

This chapter describes how to upgrade your environment, including aspects of both the Undercloud and the Overcloud. This upgrade process provides a means for you to move to the next major version; in this case, an upgrade from Red Hat OpenStack Platform 8 to Red Hat OpenStack Platform 9.

This procedure involves the following workflow:

  1. Update the Red Hat OpenStack Platform director packages
  2. Update the Overcloud images on the Red Hat OpenStack Platform director
  3. Update the Overcloud stack and its packages using the Red Hat OpenStack Platform director
Important

Make sure to read the information in Section 3.1, “Important Pre-Upgrade Notes” before attempting a version upgrade.

3.1. Important Pre-Upgrade Notes

Make sure you have read the following notes before upgrading your environment.

  • Upgrades in Red Hat OpenStack Platform director require full testing with your specific configuration before being performed on any live production environment. Red Hat has tested most use cases and combinations offered as standard options through the director, but, due to the number of possible combinations, this can never be a fully exhaustive list. In addition, if the configuration has been modified from the standard deployment, either manually or through post-configuration hooks, testing the upgrade in a non-production environment becomes even more critical. Therefore, we advise you to:

    • Perform a backup of your undercloud node before starting any steps in the upgrade procedure. See the Back Up and Restore the Director Undercloud guide for backup procedures.
    • Run the upgrade procedure in a test environment that includes all of the changes made before running the procedure in your production environment.
    • Contact Red Hat and request guidance and assistance on the upgrade process before proceeding if you feel uncomfortable about performing this upgrade.
  • The upgrade process outlined in this section only accommodates customizations made through the director. If you customized an overcloud feature outside of the director, disable the feature, upgrade the overcloud, and re-enable the feature after the upgrade completes. This means the customized feature is unavailable until the entire upgrade completes.
  • Red Hat OpenStack Platform director 9 can manage certain overcloud versions of Red Hat OpenStack Platform 8. See the support matrix below for information.

    Table 3.1. Support Matrix for Red Hat OpenStack Platform director 9

    Version                      | Overcloud Updating | Overcloud Deploying | Overcloud Scaling
    -----------------------------|--------------------|---------------------|------------------
    Red Hat OpenStack Platform 9 | 8.0                | All versions        | All versions

  • If managing an older Overcloud version, use the following Heat template collections:

    • For Red Hat OpenStack Platform 8: /usr/share/openstack-tripleo-heat-templates/liberty/

      For example:

      $ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates/liberty/ [OTHER_OPTIONS]
  • Make sure that you have upgraded your undercloud and overcloud to the latest minor release of Red Hat OpenStack Platform 8 and Red Hat Enterprise Linux 7 before attempting a major upgrade to Red Hat OpenStack Platform 9. See "Director-Based Environments: Performing Updates to Minor Versions" for instructions on performing a package update to your undercloud and overcloud. If the kernel updates to the latest version, perform a reboot so that new kernel parameters take effect.
  • Apply a version lock to libvirt as described in Solutions (see the illustrative sketch after this list).
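
    The following is an illustrative sketch only, assuming the yum-plugin-versionlock plugin; the exact packages and versions to lock, and the nodes to apply the lock on, are described in the linked solution:

    $ sudo yum install -y yum-plugin-versionlock
    $ sudo yum versionlock add libvirt    # example package; lock the full set listed in the solution
    $ sudo yum versionlock list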

3.2. Upgrading the Director

Important

Read the information in Section 3.1, “Important Pre-Upgrade Notes” before attempting the following procedure.

To upgrade the Red Hat OpenStack Platform director, follow this procedure:

  1. Log into the director as the stack user.
  2. Update the OpenStack Platform repository:

    $ sudo subscription-manager repos --disable=rhel-7-server-openstack-8-rpms --disable=rhel-7-server-openstack-8-director-rpms
    $ sudo subscription-manager repos --enable=rhel-7-server-openstack-9-rpms --enable=rhel-7-server-openstack-9-director-rpms

    This sets yum to use the latest repositories.

  3. Stop the main OpenStack Platform services:

    $ sudo systemctl stop 'openstack-*' 'neutron-*' httpd
    Note

    This causes a short period of downtime for the undercloud. The overcloud is still functional during the undercloud upgrade.

  4. Use yum to upgrade the director:

    $ sudo yum update python-tripleoclient
  5. Use the following command to upgrade the undercloud:

    $ openstack undercloud upgrade

    This command upgrades the director’s packages, refreshes the director’s configuration, and populates any settings that are left unset since the version change. This command does not delete any stored data, such as Overcloud stack data or data for existing nodes in your environment.

Check the /var/log/yum.log file on the node to see whether the kernel or openvswitch packages have updated their major or minor versions.
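
A minimal way to scan the log for these packages (the grep pattern is only an illustration):

$ sudo grep -E "kernel|openvswitch" /var/log/yum.log

If either package has updated its major or minor version, perform a reboot of the node: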

  1. Reboot the node:

    $ sudo reboot
  2. Wait until the node boots.

When the node boots, check the status of all services:

$ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"

Verify the existence of your Overcloud and its nodes:

$ source ~/stackrc
$ openstack server list
$ ironic node-list
$ openstack stack list

Be aware of the following notes after upgrading the Undercloud to Red Hat OpenStack Platform 9:

  • The Undercloud’s admin user might require an additional role (_member_) not included with Red Hat OpenStack Platform 9. This role is important for Overcloud communication. Check for this role:

    $ openstack role list

    If the role does not exist, create it:

    $ openstack role create _member_

    Add the role to the admin user on the admin tenant:

    $ openstack role add --user admin --project admin _member_
Important

If using customized core Heat templates, make sure to check for differences between the updated core Heat templates and your current set. Red Hat provides updates to the Heat template collection over subsequent releases. Using a modified template collection can lead to a divergence between your custom copy and the original copy in /usr/share/openstack-tripleo-heat-templates. Run the following command to see differences between your custom Heat template collection and the updated original version:

# diff -Nary /usr/share/openstack-tripleo-heat-templates/ ~/templates/my-overcloud/

Make sure to either apply these updates to your custom Heat template collection, or create a new copy from the original templates in /usr/share/openstack-tripleo-heat-templates/ and reapply your customizations.

3.3. Upgrading the Overcloud Images on the Director

Important

Make sure to read the information in Section 3.1, “Important Pre-Upgrade Notes” before attempting any step in the following procedure.

This procedure ensures you have the latest images for node discovery and Overcloud deployment. The new images from the rhosp-director-images and rhosp-director-images-ipa packages were already installed as part of the Undercloud upgrade.
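
For example, to verify that these packages are installed on the director:

$ rpm -q rhosp-director-images rhosp-director-images-ipa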

Remove any existing images from the images directory on the stack user’s home (/home/stack/images):

$ rm -rf ~/images/*

Extract the archives:

$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-9.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-9.0.tar; do tar -xvf $i; done

Import the latest images into the director and configure the nodes to use the new images:

$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
$ openstack baremetal configure boot

To finalize the image update, verify the existence of the new images:

$ openstack image list
$ ls -l /httpboot

The director is now upgraded with the latest images.

Important

Make sure the Overcloud image version corresponds to the Undercloud version.

3.4. Upgrading the Overcloud

Important

Make sure to read the information in Section 3.1, “Important Pre-Upgrade Notes” before attempting any step in the following procedure.

This section details the steps required to upgrade the Overcloud. Make sure to follow each section in order and only apply the sections relevant to your environment.

This process requires you to run your original openstack overcloud deploy command multiple times to provide a staged method of upgrading. Each time you run the command, you include a different upgrade environment file along with your existing environment files. These new upgrade environment files are:

  • major-upgrade-aodh.yaml - Installs the OpenStack Telemetry Alarming (aodh) service.
  • major-upgrade-keystone-liberty-mitaka.yaml - Migrates OpenStack Identity (keystone) from a standalone service to a Web Server Gateway Interface (WSGI) service under httpd.
  • major-upgrade-pacemaker-init.yaml - Provides the initialization for the upgrade. This includes updating the Red Hat OpenStack Platform repositories on each node in your Overcloud and provides special upgrade scripts to certain nodes.
  • major-upgrade-pacemaker.yaml - Provides an upgrade for the Controller nodes.
  • major-upgrade-pacemaker-converge.yaml - The finalization for the Overcloud upgrade. This aligns the resulting upgrade to match the contents for the director’s latest Heat template collection.

In between these deployment commands, you run the upgrade-non-controller.sh script on various node types. This script updates the packages on a non-Controller node.
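
These environment files are part of the core Heat template collection on the Undercloud. For example, to confirm that they are available before you start:

$ ls /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-*.yaml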

Workflow

The Overcloud upgrade process uses the following workflow:

  1. Run your deployment command including the major-upgrade-aodh.yaml environment file.
  2. Run your deployment command including the major-upgrade-keystone-liberty-mitaka.yaml environment file.
  3. Run your deployment command including the major-upgrade-pacemaker-init.yaml environment file.
  4. Run the upgrade-non-controller.sh on each Object Storage node.
  5. Run your deployment command including the major-upgrade-pacemaker.yaml environment file.
  6. Run the upgrade-non-controller.sh on each Compute node.
  7. Run the upgrade-non-controller.sh on each Ceph Storage node.
  8. Run your deployment command including the major-upgrade-pacemaker-converge.yaml environment file.

3.4.1. Pre-Upgrade Notes for the Overcloud

  • If using an environment file for Satellite registration, make sure to update the following parameters in the environment file (a hedged example appears after this list):

    • rhel_reg_repos - Repositories to enable for your Overcloud, including the new Red Hat OpenStack Platform 9 repositories. See Section 1.2, “Repository Requirements” for repositories to enable.
    • rhel_reg_activation_key - The new activation key for your Red Hat OpenStack Platform 9 repositories.
    • rhel_reg_sat_repo - A new parameter that defines the repository containing Red Hat Satellite 6’s management tools, such as katello-agent. Make sure to add this parameter if registering to Red Hat Satellite 6.
  • After each step, run the pcs status command on the Controller node cluster to ensure no resources have failed.
  • If upgrading an Overcloud with SSL/TLS enabled, make sure to include the latest EndpointMap for the new services. For the latest list of endpoints, see the example /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml environment file on the Undercloud. For more information about SSL/TLS configuration, see "6.11. Enabling SSL/TLS on the Overcloud" in the Red Hat OpenStack Platform Director Installation and Usage guide.
  • If upgrading an Overcloud with a custom ServiceNetMap, make sure to include the latest ServiceNetMap for the new services. For the latest list of services, see "6.2.3. Assigning OpenStack Services to Isolated Networks" in the Red Hat OpenStack Platform Director Installation and Usage guide.
  • If the Overcloud includes director-managed Ceph Storage nodes, the nodes require a new key for the "client.openstack" CephX user. Include the CephClientKey parameter in the environment file containing your other Ceph storage parameters (usually storage-environment.yaml). Generate the new key on any of the Overcloud nodes:

    $ ceph-authtool --gen-print-key
    AQAfoaRXSAy/HxAAShHIViinopC2xtPW+RceQA==

    Include it as the CephClientKey in your environment file, for example:

    parameter_defaults:
      CephStorageCount: 3
      CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
      CephMonKey: 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
      CephAdminKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
      CephClientKey: 'AQAfoaRXSAy/HxAAShHIViinopC2xtPW+RceQA=='
  • If using a custom endpoint map for enabling TLS/SSL in the overcloud, make sure to update the map with endpoints for the following new services:

    • OpenStack Telemetry Metrics (gnocchi)
    • OpenStack Telemetry Alarming (aodh)
    • OpenStack Data Processing (sahara)

    Check the latest TLS/SSL mappings from the core Heat template collection (see EndpointMap in /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml) and add the missing endpoints to the EndpointMap in your custom enable-tls.yaml file. For more information, see "Enabling SSL/TLS on the Overcloud" in the Red Hat OpenStack Platform Director Installation and Usage guide.
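
The following is a hedged example of the Satellite registration parameters mentioned earlier in this list, assuming your registration environment file follows the rhel-registration example in the core Heat template collection (parameter_defaults). The activation key and repository names are placeholders; replace them with the values for your own Satellite 6 environment:

parameter_defaults:
  rhel_reg_activation_key: 'OSP9-ACTIVATION-KEY'                    # placeholder
  rhel_reg_repos: 'rhel-7-server-rpms,rhel-7-server-openstack-9-rpms'
  rhel_reg_sat_repo: 'rhel-7-server-satellite-tools-6.1-rpms'       # example repository providing katello-agent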

3.4.2. Using Modified Overcloud Templates

Important

This section is only required if using a modified version of the core Heat template collection. This is because the copy is a static snapshot of the original core Heat template collection from /usr/share/openstack-tripleo-heat-templates/. If using an unmodified core Heat template collection for your overcloud, you can skip this section.

To update your modified template collection, you need to:

  1. Back up your existing custom template collection:

    $ mv ~/templates/my-overcloud/ ~/templates/my-overcloud.bak
  2. Copy the new version of the template collection from /usr/share/openstack-tripleo-heat-templates:

    $ sudo cp -rv /usr/share/openstack-tripleo-heat-templates ~/templates/my-overcloud/
  3. Check for differences between the old and new custom template collection. To see changes between the two, use the following diff command:

    $ diff -Nary ~/templates/my-overcloud.bak/ ~/templates/my-overcloud/

This helps identify the customizations from the old template collection. Incorporate these customizations into the new custom template collection.

Important

Red Hat provides updates to the Heat template collection over subsequent releases. Using a modified template collection can lead to a divergence between your custom copy and the original copy in /usr/share/openstack-tripleo-heat-templates. To customize your overcloud, Red Hat recommends using the configuration hooks from "Configuring Advanced Customizations for the Overcloud". If creating a copy of the Heat template collection, you should track changes to the templates using a version control system such as git.
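
For example, a minimal way to start tracking a copied template collection with git (assuming git is installed on the Undercloud):

$ cd ~/templates/my-overcloud
$ git init
$ git add -A
$ git commit -m "Initial snapshot of customized overcloud templates"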

3.4.3. Installing Aodh

This step replaces the old OpenStack Telemetry Alarming (ceilometer-alarm) service with a new component (aodh). This process involves running openstack overcloud deploy with a new environment file (major-upgrade-aodh.yaml), which:

  • Automatically removes the openstack-ceilometer-alarm services from Pacemaker and removes the relevant packages from the Controller nodes.
  • Installs the aodh service to Controller nodes and configures Pacemaker to manage it.

Run the openstack overcloud deploy from your Undercloud and include the major-upgrade-aodh.yaml environment file. Make sure you also include all custom environment files relevant to your environment, such as network isolation and storage.

The following is an example of an openstack overcloud deploy command with the added file:

$ openstack overcloud deploy --force-postconfig --templates \
  --control-scale 3 \
  --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml  \
  -e /home/stack/templates/network_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-aodh.yaml
Important

Make sure to include the --force-postconfig option with this final deployment command. Without this option, new service endpoints fail to appear in the overcloud.

Wait until the Overcloud updates with the new environment file’s configuration.

Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat for guidance and assistance.
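
For reference, run the following on a Controller node:

$ sudo pcs status
$ sudo pcs resource cleanup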

3.4.4. Upgrading Keystone

This step upgrades the OpenStack Identity (keystone) service to run as a Web Server Gateway Interface (WSGI) application under httpd instead of as a standalone service on Controller nodes. This process automatically disables the standalone openstack-keystone service and installs the necessary configuration to enable the WSGI application.

Run the openstack overcloud deploy from your Undercloud and include the major-upgrade-keystone-liberty-mitaka.yaml environment file. Make sure you also include all custom environment files relevant to your environment, such as network isolation and storage.

The following is an example of an openstack overcloud deploy command with the added file:

$ openstack overcloud deploy --templates \
  --control-scale 3 \
  --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml  \
  -e /home/stack/templates/network_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-keystone-liberty-mitaka.yaml

Wait until the Overcloud updates with the new environment file’s configuration.

Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat for guidance and assistance.

3.4.5. Installing the Upgrade Scripts

This step installs scripts on each non-Controller node. These scripts perform the major version package upgrades and configuration. Each script differs depending on the node type; for example, Compute nodes receive different upgrade scripts than Ceph Storage nodes.

This initialization step also updates enabled repositories on all overcloud nodes. This means you do not need to disable old repositories and enable new repositories manually.

Run the openstack overcloud deploy from your Undercloud and include the major-upgrade-pacemaker-init.yaml environment file. Make sure you also include all custom environment files relevant to your environment, such as network isolation and storage.

The following is an example of an openstack overcloud deploy command with the added file:

$ openstack overcloud deploy --templates \
  --control-scale 3 \
  --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml  \
  -e /home/stack/templates/network_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker-init.yaml

Wait until the Overcloud updates with the new environment file’s configuration.

Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat for guidance and assistance.

3.4.6. Upgrading Object Storage Nodes

The director uses the upgrade-non-controller.sh command to run the upgrade script passed to each non-Controller node from the major-upgrade-pacemaker-init.yaml environment file. For this step, upgrade each Object Storage node using the following command:

$ for NODE in `nova list | grep swift | awk -F "|" '{ print $2 }'` ; do upgrade-non-controller.sh --upgrade $NODE ; done

Wait until each Object Storage node completes its upgrade.

Check the /var/log/yum.log file on each node to see if either the kernel or openvswitch packages have updated their major or minor versions. If so, perform a reboot of each node:

  1. Select an Object Storage node to reboot. Log in to it and reboot it:

    $ sudo reboot
  2. Wait until the node boots.
  3. Log into the node and check the status:

    $ sudo systemctl list-units "openstack-swift*"
  4. Log out of the node and repeat this process on the next Object Storage node.
Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat for guidance and assistance.

3.4.7. Upgrading Controller Nodes

Upgrading the Controller nodes involves including another environment file (major-upgrade-pacemaker.yaml) that provides a full upgrade to Controller nodes running high availability tools.

Run the openstack overcloud deploy from your Undercloud and include the major-upgrade-pacemaker.yaml environment file. Remember to include all custom environment files relevant to your environment, such as network isolation and storage.

The following is an example of an openstack overcloud deploy command with the added file:

$ openstack overcloud deploy --templates \
  --control-scale 3 \
  --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e network_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker.yaml

Wait until the Overcloud updates with the new environment file’s configuration.

Important

This step disables the Neutron server and L3 Agent during the Controller upgrade, which means floating IP addresses are unavailable during this step.
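
After the Controller upgrade completes, you can confirm that the Neutron agents are running again; for example, assuming the neutron client is available on the Undercloud:

$ source ~/overcloudrc
$ neutron agent-list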

Check the /var/log/yum.log file on each node to see if either the kernel or openvswitch packages have updated their major or minor versions. If so, perform a reboot of each node:

  1. Select a node to reboot. Log into it and reboot it:

    $ sudo reboot

    The remaining Controller Nodes in the cluster retain the high availability services during the reboot.

  2. Wait until the node boots.
  3. Log into the node and check the cluster status:

    $ sudo pcs status

    The node rejoins the cluster.

    Note

    If any services fail after the reboot, run sudo pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat and request guidance and assistance.

  4. Check all systemd services on the Controller Node are active:

    $ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"
  5. Log out of the node, select the next Controller Node to reboot, and repeat this procedure until you have rebooted all Controller Nodes.

3.4.8. Upgrading Ceph Storage Nodes

The director uses the upgrade-non-controller.sh command to run the upgrade script passed to each non-Controller node from the major-upgrade-pacemaker-init.yaml environment file. For this step, upgrade each Ceph Storage node with the following command:

$ for NODE in `nova list | grep ceph | awk -F "|" '{ print $2 }'` ; do upgrade-non-controller.sh --upgrade $NODE ; done

Check the /var/log/yum.log file on each node to see if either the kernel or openvswitch packages have updated their major or minor versions. If so, perform a reboot of each node:

  1. Select the first Ceph Storage node to reboot and log into it.
  2. Disable Ceph Storage cluster rebalancing temporarily:

    $ sudo ceph osd set noout
    $ sudo ceph osd set norebalance
  3. Reboot the node:

    $ sudo reboot
  4. Wait until the node boots.
  5. Log into the node and check the cluster status:

    $ sudo ceph -s

    Check that the pgmap reports all pgs as normal (active+clean).

  6. Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph storage nodes.
  7. When complete, enable cluster rebalancing again:

    $ sudo ceph osd unset noout
    $ sudo ceph osd unset norebalance
  8. Perform a final status check to make sure the cluster reports HEALTH_OK:

    $ sudo ceph status
Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat for guidance and assistance.

3.4.9. Upgrading Compute Nodes

Upgrade each Compute node individually, ensuring zero downtime for instances in your OpenStack Platform environment. This involves the following workflow:

  1. Select a Compute node to upgrade
  2. Migrate its instances to another Compute node
  3. Upgrade the empty Compute node

List all Compute nodes and their UUIDs:

$ nova list | grep "compute"

Select a Compute node to upgrade and first migrate its instances using the following process:

  1. From the undercloud, select a Compute node to upgrade and disable it:

    $ source ~/overcloudrc
    $ openstack compute service list
    $ openstack compute service set [hostname] nova-compute --disable
  2. List all instances on the Compute node:

    $ openstack server list --host [hostname]
  3. Select a second Compute Node to act as the target host for migrating instances. This host needs enough resources to host the migrated instances. From the undercloud, migrate each instance from the disabled host to the target host.

    $ nova live-migration [instance-name] [target-hostname]
    $ nova migration-list
    $ nova resize-confirm [instance-name]
  4. Repeat this step until you have migrated all instances from the Compute Node.
Important

For full instructions on configuring and migrating instances, see "Migrating VMs from an Overcloud Compute Node" in the Director Installation and Usage guide.

The director uses the upgrade-non-controller.sh command to run the upgrade script passed to each non-Controller node from the major-upgrade-pacemaker-init.yaml environment file. Upgrade each Compute node with the following command:

$ source ~/stackrc
$ upgrade-non-controller.sh --upgrade NODE_UUID

Replace NODE_UUID with the UUID of the chosen Compute node. Wait until the Compute node completes its upgrade.

Check the /var/log/yum.log file on the Compute node you have upgraded to see if either the kernel or openvswitch packages have updated their major or minor versions. If so, reboot the node:

  1. Log into the Compute Node and reboot it:

    $ sudo reboot
  2. Wait until the node boots.
  3. Enable the Compute Node again:

    $ source ~/overcloudrc
    $ openstack compute service set [hostname] nova-compute --enable
  4. Select the next node to reboot.

Repeat the migration and reboot process for each node individually until you have rebooted all nodes.

Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat for guidance and assistance.

3.4.10. Finalizing the Upgrade

The director needs to run through the upgrade finalization to ensure the Overcloud stack is synchronized with the current Heat template collection. This involves an environment file (major-upgrade-pacemaker-converge.yaml), which you include using the openstack overcloud deploy command.

Run the openstack overcloud deploy from your Undercloud and include the major-upgrade-pacemaker-converge.yaml environment file. Make sure you also include all custom environment files relevant to your environment, such as network isolation and storage.

The following is an example of an openstack overcloud deploy command with the added file:

$ openstack overcloud deploy --force-postconfig --templates \
  --control-scale 3 \
  --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e network_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker-converge.yaml
Important

Make sure to include the --force-postconfig option with this final deployment command. Without this option, new service endpoints fail to appear in the overcloud.

Wait until the Overcloud updates with the new environment file’s configuration.

This completes the Overcloud upgrade procedure.

Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat for guidance and assistance.

3.4.11. Post-Upgrade Notes for the Overcloud

Be aware of the following notes after upgrading the Overcloud to Red Hat OpenStack Platform 9:

  • The Compute nodes might report a failure with neutron-openvswitch-agent. If this occurs, log into each Compute node and restart the service. For example:

    $ sudo systemctl restart neutron-openvswitch-agent
  • The update process does not reboot any nodes in the Overcloud automatically. If required, perform a reboot manually after the update command completes. Make sure to reboot cluster-based nodes (such as Ceph Storage nodes and Controller nodes) individually and wait for the node to rejoin the cluster. For Ceph Storage nodes, check with the ceph health command and make sure the cluster status is HEALTH_OK. For Controller nodes, check with the pcs resource command and make sure all resources are running on each node.
  • In some circumstances, the corosync service might fail to start on IPv6 environments after rebooting Controller nodes. This is due to Corosync starting before the Controller node configures the static IPv6 addresses. In these situations, restart Corosync manually on the Controller nodes:

    $ sudo systemctl restart corosync
  • If you configured fencing for your Controller nodes, the update process might disable it. When the update process completes, reenable fencing with the following command on one of the Controller nodes:

    $ sudo pcs property set stonith-enabled=true
  • The next time you update or scale the Overcloud stack (that is, the next time you run the openstack overcloud deploy command), you need to reset the identifier that triggers package updates in the Overcloud. Add a blank UpdateIdentifier parameter to an environment file and include that file when you run the openstack overcloud deploy command. The following is an example of such an environment file:

    parameter_defaults:
      UpdateIdentifier:
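
    For example, assuming you save this snippet as ~/templates/update-identifier.yaml (a hypothetical filename), include it alongside your usual options on the next run:

    $ openstack overcloud deploy --templates [OTHER_OPTIONS] \
      -e ~/templates/update-identifier.yaml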