Chapter 3. Director-Based Environments: Performing a Major Version Upgrade

Warning

Before performing an upgrade to the latest major version, ensure the undercloud and overcloud are updated to the latest minor versions. This includes both OpenStack Platform services and the base operating system. For the process on performing a minor version update, see "Director-Based Environments: Performing Updates to Minor Versions" in the Red Hat OpenStack Platform 10 Upgrading Red Hat OpenStack Platform guide. Performing a major version upgrade without first performing a minor version update can cause failures in the upgrade process.

Warning

With High Availability for Compute instances (or Instance HA, as described in High Availability for Compute Instances), upgrade and scale-up operations are not possible. Any attempt to perform them will fail.

If you have Instance HA enabled, disable it before performing an upgrade or scale-up. To do so, perform a rollback as described in Rollback.

This chapter explores how to upgrade your undercloud and overcloud to the next major version. In this case, it is an upgrade from Red Hat OpenStack Platform 10 to Red Hat OpenStack Platform 11.

This procedure involves the following workflow:

  1. Upgrade the Red Hat OpenStack Platform director packages.
  2. Upgrade the overcloud images on the Red Hat OpenStack Platform director.
  3. Update any overcloud customizations, such as custom Heat templates and environment files.
  4. Upgrade all nodes that support composable service upgrades.
  5. Upgrade Object Storage nodes individually.
  6. Upgrade Compute nodes individually.
  7. Perform overcloud upgrade finalization.

3.1. Upgrade Support Statement

A successful upgrade process requires some preparation to accommodate changes from one major version to the next. Read the following support statement to help with Red Hat OpenStack Platform upgrade planning.

Upgrades in Red Hat OpenStack Platform director require full testing with specific configurations before being performed on any live production environment. Red Hat has tested most use cases and combinations offered as standard options through the director. However, due to the number of possible combinations, this list is never fully exhaustive. In addition, if the configuration has been modified from the standard deployment, either manually or through post-configuration hooks, testing upgrade features in a non-production environment is critical. Therefore, we advise you to:

  • Perform a backup of your Undercloud node before starting any steps in the upgrade procedure. See the Back Up and Restore the Director Undercloud guide for backup procedures.
  • Run the upgrade procedure with your customizations in a test environment before running the procedure in your production environment.
  • If you feel uncomfortable about performing this upgrade, contact Red Hat’s support team and request guidance and assistance on the upgrade process before proceeding.

The upgrade process outlined in this section only accommodates customizations made through the director. If you customized an Overcloud feature outside of the director, then:

  • Disable the feature
  • Upgrade the Overcloud
  • Re-enable the feature after the upgrade completes

This means the customized feature is unavailable until the completion of the entire upgrade.

Red Hat OpenStack Platform director 11 can manage previous Overcloud versions of Red Hat OpenStack Platform. See the support matrix below for information.

Table 3.1. Support Matrix for Red Hat OpenStack Platform director 11

Version | Overcloud Updating | Overcloud Deploying | Overcloud Scaling
Red Hat OpenStack Platform 11 | Red Hat OpenStack Platform 11 and 10 | Red Hat OpenStack Platform 11 and 10 | Red Hat OpenStack Platform 11 and 10

If managing an older Overcloud version, use the following Heat template collections:

  • Red Hat OpenStack Platform 10: /usr/share/openstack-tripleo-heat-templates/newton/

For example:

$ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates/newton/ [OTHER_OPTIONS]

The following are some general upgrade tips:

  • After each step, run the pcs status command on the Controller node cluster to ensure no resources have failed.
  • If you feel uncomfortable about performing this upgrade, contact Red Hat and request guidance and assistance on the upgrade process before proceeding.

3.2. Live Migration Updates

Upgrading the Compute nodes requires live migration to ensure instances remain available during the upgrade. This requires the OS::TripleO::Services::Sshd service, which is a new service added to the default roles in the latest version of Red Hat OpenStack Platform 10. To ensure live migration is enabled during the upgrade to Red Hat OpenStack Platform 11:

  • Update the undercloud to the latest version of Red Hat OpenStack Platform 10.
  • If using the default roles data file, check that each role includes the OS::TripleO::Services::Sshd service. If using a custom roles data file, add this new service to each role.
  • Update the overcloud to the latest version of Red Hat OpenStack Platform 10 with the OS::TripleO::Services::Sshd service included.
  • Start the upgrade to Red Hat OpenStack Platform 11.

This ensures the Compute nodes have SSH access to each other, which is required for the live migration process.
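
For example, if you use a custom roles data file, add the service to each role’s ServicesDefault list. The following is a minimal sketch, assuming a Compute role; the other service shown is a placeholder for the role’s existing service list:

- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::Sshd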

Important

A recent security fix (CVE-2017-2637) disables live migration for previous versions of Red Hat OpenStack Platform. The OS::TripleO::Services::Sshd service resolves this issue for Red Hat OpenStack Platform 10 and later.

3.3. Checking the Overcloud

Check that your overcloud is stable before performing the upgrade. Run the following steps on the director to ensure that all services in your overcloud are running:

  1. Check the status of the high availability services:

    $ ssh heat-admin@[CONTROLLER_IP] "sudo pcs resource cleanup ; sleep 60 ; sudo pcs status"

    Replace [CONTROLLER_IP] with the IP address of a Controller node. This command refreshes the overcloud’s Pacemaker cluster, waits 60 seconds, then reports the status of the cluster.

  2. Check for any failed OpenStack Platform systemd services on overcloud nodes. The following command checks for failed services on all nodes:

    $ for IP in $(openstack server list -c Networks -f csv | sed '1d' | sed 's/"//g' | cut -d '=' -f2) ; do echo "Checking systemd services on $IP" ; ssh heat-admin@$IP "sudo systemctl list-units 'openstack-*' 'neutron-*' --state=failed --no-legend" ; done
  3. Check that os-collect-config is running on each node. The following command checks this service on each node:

    $ for IP in $(openstack server list -c Networks -f csv | sed '1d' | sed 's/"//g' | cut -d '=' -f2) ; do echo "Checking os-collect-config on $IP" ; ssh heat-admin@$IP "sudo systemctl list-units 'os-collect-config.service' --no-legend" ; done
Important

If using a standalone Keystone node on OpenStack Platform 10, the openstack-gnocchi-statsd service might not have started correctly due to a race condition between keystone and gnocchi. Check the openstack-gnocchi-statsd service on your Controller or Telemetry nodes and, if it has failed, restart the service before upgrading the overcloud. This issue is addressed in BZ#1447422.
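
For example, the following sketch checks the service on one node and restarts it only if it has failed. Replace [CONTROLLER_IP] with the IP address of the node to check:

$ ssh heat-admin@[CONTROLLER_IP] "sudo systemctl is-failed openstack-gnocchi-statsd && sudo systemctl restart openstack-gnocchi-statsd"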

3.4. Undercloud Upgrade

3.4.1. Upgrading the Director

To upgrade the Red Hat OpenStack Platform director, follow this procedure:

  1. Log into the director as the stack user.
  2. Update the OpenStack Platform repository:

    $ sudo subscription-manager repos --disable=rhel-7-server-openstack-10-rpms
    $ sudo subscription-manager repos --enable=rhel-7-server-openstack-11-rpms

    This sets yum to use the latest repositories.

  3. Stop the main OpenStack Platform services:

    $ sudo systemctl stop 'openstack-*' 'neutron-*' httpd
    Note

    This causes a short period of downtime for the undercloud. The overcloud is still functional during the undercloud upgrade.

  4. Use yum to upgrade the director’s main packages:

    $ sudo yum update -y instack-undercloud openstack-puppet-modules openstack-tripleo-common python-tripleoclient
    Important

    The default Provisioning/Control Plane network has changed from 192.0.2.0/24 to 192.168.24.0/24. If you used default network values in your previous undercloud.conf file, your Provisioning/Control Plane network is set to 192.0.2.0/24. This means you need to set certain parameters in your undercloud.conf file to continue using the 192.0.2.0/24 network. These parameters are:

    • local_ip
    • network_gateway
    • undercloud_public_vip
    • undercloud_admin_vip
    • network_cidr
    • masquerade_network
    • dhcp_start
    • dhcp_end

    Set the network values in undercloud.conf to ensure continued use of the 192.0.2.0/24 CIDR during future upgrades. Ensure that your network configuration is set correctly before running the openstack undercloud upgrade command.
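
    The following is a minimal sketch of the relevant undercloud.conf settings. The values shown are the previous 192.0.2.0/24 defaults; adjust them to match your environment:

    [DEFAULT]
    local_ip = 192.0.2.1/24
    network_gateway = 192.0.2.1
    undercloud_public_vip = 192.0.2.2
    undercloud_admin_vip = 192.0.2.3
    network_cidr = 192.0.2.0/24
    masquerade_network = 192.0.2.0/24
    dhcp_start = 192.0.2.5
    dhcp_end = 192.0.2.24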

  5. Use the following command to upgrade the undercloud:

    $ openstack undercloud upgrade

    This command upgrades the director’s packages, refreshes the director’s configuration, and populates any settings left unset since the version change. This command does not delete any stored data, such as Overcloud stack data or data for existing nodes in your environment.

Perform a reboot of the node to enable new system settings and refresh all undercloud services:

  1. Reboot the node:

    $ sudo reboot
  2. Wait until the node boots.

When the node boots, check the status of all services:

$ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"
Note

It might take approximately 10 minutes for the openstack-nova-compute service to become active after a reboot.

Verify the existence of your Overcloud and its nodes:

$ source ~/stackrc
$ openstack server list
$ openstack baremetal node list
$ openstack stack list

If necessary, review the configuration files on the director. The upgraded packages might have installed .rpmnew files appropriate to the Red Hat OpenStack Platform 11 version of the service.
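
For example, the following command is one way to locate any .rpmnew files created during the upgrade, assuming the affected configuration files live under /etc:

$ sudo find /etc -name '*.rpmnew'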

3.4.2. Upgrading the Overcloud Images

This procedure ensures you have the latest images for node discovery and Overcloud deployment. The new images from the rhosp-director-images and rhosp-director-images-ipa packages are already updated from the Undercloud upgrade.

Remove any existing images from the images directory on the stack user’s home (/home/stack/images):

$ rm -rf ~/images/*

Extract the archives:

$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-11.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-11.0.tar; do tar -xvf $i; done

Import the latest images into the director and configure the nodes to use the new images:

$ cd ~
$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
$ openstack overcloud node configure $(openstack baremetal node list -c UUID -f csv --quote none | sed "1d" | paste -s -d " ")

To finalize the image update, verify the existence of the new images:

$ openstack image list
$ ls -l /httpboot

The director is now upgraded with the latest images.

Important

Make sure the Overcloud image version corresponds to the Undercloud version.

3.4.3. Using and Comparing Previous Template Versions

The upgrade process installs a new set of core Heat templates that correspond to the latest overcloud version. Red Hat OpenStack Platform’s repository retains the previous version of the core template collection in the openstack-tripleo-heat-templates-compat package. You install this package with the following command:

$ sudo yum install openstack-tripleo-heat-templates-compat

This installs the previous templates in the compat directory of your Heat template collection (/usr/share/openstack-tripleo-heat-templates/compat) and also creates a link to compat named after the previous version (newton). These templates are backwards compatible with the upgraded director, which means you can use the latest version of the director to install an overcloud of the previous version.

Comparing the previous version with the latest version helps identify changes to the overcloud during the upgrade. If you need to compare the current template collection with the previous version, use the following process:

  1. Create a temporary copy of the core Heat templates:

    $ cp -a /usr/share/openstack-tripleo-heat-templates /tmp/osp11
  2. Move the previous version into its own directory:

    $ mv /tmp/osp11/compat /tmp/osp10
  3. Perform a diff on the contents of both directories:

    $ diff -urN /tmp/osp10 /tmp/osp11

This shows the core template changes from one version to the next. These changes provide an idea of what should occur during the overcloud upgrade.

3.5. Overcloud Pre-Upgrade Configuration

3.5.1. Red Hat Subscription Details

Before upgrading the overcloud, update its subscription details to ensure your environment uses the latest repositories. If using an environment file for Satellite registration, update the following parameters in the environment file:

  • rhel_reg_repos - Repositories to enable for your Overcloud, including the new Red Hat OpenStack Platform 11 repositories. See Section 1.2, “Repository Requirements” for repositories to enable.
  • rhel_reg_activation_key - The new activation key for your Red Hat OpenStack Platform 11 repositories.
  • rhel_reg_sat_repo - A new parameter that defines the repository containing Red Hat Satellite 6’s management tools, such as katello-agent. Make sure to update this parameter if registering to Red Hat Satellite 6.

For more information and examples of the environment file format, see "Overcloud Registration" in the Advanced Overcloud Customization guide.
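
The following is a minimal sketch of these parameters in a registration environment file. The activation key and repository names are placeholders for illustration; substitute the values for your Satellite environment:

parameter_defaults:
  rhel_reg_activation_key: "my-osp11-activation-key"
  rhel_reg_sat_repo: "rhel-7-server-satellite-tools-6-rpms"
  rhel_reg_repos: "rhel-7-server-rpms,rhel-7-server-extras-rpms,rhel-7-server-openstack-11-rpms"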

3.5.2. Deprecated and New Composable Services

The following sections apply if using a custom roles_data.yaml file to define your overcloud roles.

Remove the following deprecated services from your custom roles_data.yaml file:

  • OS::TripleO::Services::Core - This service acted as a core dependency for other Pacemaker services. It has been removed to accommodate high availability composable services.
  • OS::TripleO::Services::GlanceRegistry - This service provided image metadata for OpenStack Image Storage (glance). It has been removed due to its deprecation in the glance v2 API.
  • OS::TripleO::Services::VipHosts - This service configured the /etc/hosts file with node hostnames and IP addresses. It is now integrated directly into the director’s Heat templates.

Add the following new services to your custom roles_data.yaml file:

  • OS::TripleO::Services::MySQLClient - Configures the MariaDB client on a node, which provides database configuration for other composable services. Add this service to all roles with standalone composable services.
  • OS::TripleO::Services::NovaPlacement - Configures the OpenStack Compute (nova) Placement API. If using a standalone Nova role in your current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
  • OS::TripleO::Services::PankoApi - Configures the OpenStack Telemetry Event Storage (panko) service. If using a standalone Telemetry role in your current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
  • OS::TripleO::Services::Sshd - Configures SSH access across all nodes. Used for instance migration.
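
For example, on a default deployment these services belong in the Controller role’s ServicesDefault list. A minimal sketch, where the ellipsis stands in for the role’s existing services:

- name: Controller
  ServicesDefault:
    ...
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::NovaPlacement
    - OS::TripleO::Services::PankoApi
    - OS::TripleO::Services::Sshd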

Update any additional parts of the overcloud that might require these new services such as:

  • Custom ServiceNetMap Parameter - If upgrading an Overcloud with a custom ServiceNetMap, ensure that you include the latest ServiceNetMap for the new services (see the sketch after this list). The default list of services is defined with the ServiceNetMapDefaults parameter located in the network/service_net_map.j2.yaml file. For information on using a custom ServiceNetMap, see Isolating Networks in Advanced Overcloud Customization.
  • External Load Balancer - If using an external load balancer, include the new services as a part of the external load balancer configuration. For more information, see External Load Balancing for the Overcloud.
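
The following is a minimal sketch of extending a custom ServiceNetMap in an environment file, assuming the new services should use the Internal API network. The keys follow the service-name-plus-Network convention; verify them against ServiceNetMapDefaults in network/service_net_map.j2.yaml:

parameter_defaults:
  ServiceNetMap:
    NovaPlacementNetwork: internal_api
    PankoApiNetwork: internal_api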

3.5.3. Manual Role Upgrades

The upgrade process provides staged upgrades for each composable service on all roles. However, some roles require an individual upgrade of their nodes to ensure the availability of instances and services. These roles are:

  • Compute - Requires an individual upgrade for each node to ensure instances remain available. The process for these nodes involves migrating instances from each node before upgrading it.
  • ObjectStorage - Requires an individual upgrade for each node to ensure the Object Storage Service (swift) remains available to the overcloud.

This upgrade process uses the upgrade-non-controller.sh command to upgrade nodes in these roles.

The default roles_data.yaml file for Red Hat OpenStack Platform 11 marks these roles with the disable_upgrade_deployment: True parameter, which excludes them from the main composable service upgrade process. This provides a method for upgrading the nodes in these roles individually. However, if using a custom roles_data.yaml file that contains these roles, make sure the Compute and ObjectStorage role definitions contain the disable_upgrade_deployment: True parameter. For example:

- name: Compute
  CountDefault: 1
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
...

Use the disable_upgrade_deployment parameter for any other custom roles that require a manual upgrade, such as custom Compute roles.

3.5.4. Storage Backends

Some storage backends have changed from using configuration hooks to their own composable service. If using a custom storage backend, check the associated environment file in the environments directory for new parameters and resources. Update any custom environment files for your backends. For example:

  • For the NetApp Block Storage (cinder) backend, use the new environments/cinder-netapp-config.yaml in your deployment.
  • For the Dell EMC Block Storage (cinder) backend, use the new environments/cinder-dellsc-config.yaml in your deployment.
  • For the Dell EqualLogic Block Storage (cinder) backend, use the new environments/cinder-dellps-config.yaml in your deployment.

For example, the NetApp Block Storage (cinder) backend used the following resources for these respective versions:

  • OpenStack Platform 10 and below: OS::TripleO::ControllerExtraConfigPre: ../puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml
  • OpenStack Platform 11: OS::TripleO::Services::CinderBackendNetApp: ../puppet/services/cinder-backend-netapp.yaml

As a result, you now use the new OS::TripleO::Services::CinderBackendNetApp resource and its associated service template for this backend.
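
If you maintain a custom environment file for this backend instead of using the packaged environments/cinder-netapp-config.yaml, the resource_registry change looks similar to the following sketch; the path assumes the default template collection location:

resource_registry:
  OS::TripleO::Services::CinderBackendNetApp: /usr/share/openstack-tripleo-heat-templates/puppet/services/cinder-backend-netapp.yaml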

3.5.5. NFV Configuration

Follow these guidelines to upgrade from Red Hat OpenStack Platform 10 to Red Hat OpenStack Platform 11 when you have OVS-DPDK configured.

Note

Red Hat OpenStack Platform 11 operates in OVS client mode.

Begin your upgrade preparations based on the guidance from Upgrading Red Hat OpenStack Platform. Before you deploy the overcloud based on Upgrading Composable Services, follow these steps:

  1. Add the content from the sample file in Appendix A, Sample NFV Upgrade File to any existing post-install.yaml file.
  2. If your overcloud includes OVS version 2.5, modify the following parameters in your .yaml file, for example, in a network-environment.yaml file. This change is not in the post-install.yaml file:

    1. Modify HostCpusList and NeutronDpdkCoreList to match your configuration. Ensure that you use only double quotation marks in the yaml file for these parameters.

      HostCpusList: "0,16,8,24"
      NeutronDpdkCoreList: "1,17,9,25"
    2. Modify NeutronDpdkSocketMemory to match your configuration. Ensure that you use only double quotation marks in the yaml file for this parameter.

      NeutronDpdkSocketMemory: "2048,2048"
    3. Modify NeutronVhostuserSocketDir as follows:

       NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"

3.5.6. Overcloud Parameters

Note the following information about overcloud parameters for upgrades:

  • If upgrading an Overcloud with a custom ServiceNetMap, ensure that you include the latest ServiceNetMap for the new services. The default list of services is defined with the ServiceNetMapDefaults parameter located in the network/service_net_map.j2.yaml file. For information on using a custom ServiceNetMap, see Isolating Networks in Advanced Overcloud Customization.
  • Fixed VIP addresses for overcloud networks use new parameters and syntax. See "Assigning Predictable Virtual IPs" in the Advanced Overcloud Customization guide. If using external load balancing, see also "Configuring Load Balancing Options" in the External Load Balancing for the Overcloud guide.
  • Some options for the openstack overcloud deploy command are now deprecated. Substitute these options with their Heat parameter equivalents. For these parameter mappings, see "Creating the Overcloud with the CLI Tools" in the Director Installation and Usage guide.
  • Some composable services include new parameters that configure Puppet hieradata. If you used hieradata to configure these parameters in the past, the overcloud update might report a Duplicate declaration error. If this occurs, use the composable service parameter instead. For available parameters, see the Overcloud Parameters guide.

3.5.7. Custom Core Templates

If using a modified version of the core Heat template collection from Red Hat OpenStack Platform 10, you need to re-apply your customizations to a copy of the Red Hat OpenStack Platform 11 version. To do this, use a git version control system similar to the one outlined in "Using Customized Core Heat Templates" from the Advanced Overcloud Customization guide.

Red Hat provides updates to the Heat template collection over subsequent releases. Using a modified template collection without a version control system can lead to a divergence between your custom copy and the original copy in /usr/share/openstack-tripleo-heat-templates.

As an alternative to using a custom Heat template collection, Red Hat recommends using the Configuration Hooks from the Advanced Overcloud Customization guide.

3.6. Overcloud Upgrade

3.6.1. Upgrading Overcloud Nodes

An overcloud upgrade requires adding an environment file (major-upgrade-composable-steps.yaml) to your deployment. This file provides a full upgrade for all nodes except roles marked with the disable_upgrade_deployment: True parameter.

Run the openstack overcloud deploy command from your undercloud and include the major-upgrade-composable-steps.yaml environment file. Include all options and custom environment files relevant to your environment, such as network isolation and storage.

The following is an example of an openstack overcloud deploy command with both the required and optional files:

$ openstack overcloud deploy --templates \
  --control-scale 3 \
  --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e network_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-composable-steps.yaml \
  --ntp-server pool.ntp.org

Wait until the Overcloud updates with the new environment file’s configuration.

Important

This step disables the Neutron server and L3 Agent during the upgrade. This means you cannot create new routers during this step.

Check the /var/log/yum.log file on each node to see if either the kernel or openvswitch packages have updated their major or minor versions. If so, reboot each node using the reboot instructions from "Rebooting the Overcloud" in the Director Installation and Usage guide.
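
For example, the following is a quick way to check a node for these package updates; run it on each overcloud node:

$ sudo grep -E "kernel|openvswitch" /var/log/yum.log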

During the deployment of the major-upgrade-composable-steps.yaml environment file, the director passes a special upgrade script to each node in roles marked with the disable_upgrade_deployment: True parameter. The next few sections show how to invoke this script from the undercloud and upgrade the remaining roles.

Note

This command removes deprecated composable services and installs new services for Red Hat OpenStack Platform 11. See Section 3.5.2, “Deprecated and New Composable Services” for a list of deprecated and new services.

3.6.2. Upgrading Object Storage Nodes

The director uses the upgrade-non-controller.sh command to run the upgrade script passed to the Object Storage nodes from the major-upgrade-composable-steps.yaml environment file. For this step, upgrade each Object Storage node using the following command:

$ for NODE in `openstack server list -c Name -f value --name objectstorage` ; do upgrade-non-controller.sh --upgrade $NODE ; done

Upgrading each Object Storage node individually ensures the service remains available during the upgrade.

Wait until each Object Storage node completes its upgrade.

Check the /var/log/yum.log file on each node to see if either the kernel or openvswitch packages have updated their major or minor versions. If so, perform a reboot of each node:

  1. Select an Object Storage node to reboot. Log in to it and reboot it:

    $ sudo reboot
  2. Wait until the node boots.
  3. Log into the node and check the status:

    $ sudo systemctl list-units "openstack-swift*"
  4. Log out of the node and repeat this process on the next Object Storage node.
Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat and request guidance and assistance.

3.6.3. Upgrading Compute Nodes

Upgrade each Compute node individually and ensure zero downtime of instances in your OpenStack Platform environment. This involves the following workflow:

  1. Select a Compute node to upgrade
  2. Migrate its instances to another Compute node
  3. Upgrade the empty Compute node

List all Compute nodes and their UUIDs:

$ source ~/stackrc
$ openstack server list | grep "compute"

Select a Compute node to upgrade and first migrate its instances using the following process:

  1. From the undercloud, select the chosen Compute node and disable it:

    $ source ~/overcloudrc
    $ openstack compute service list
    $ openstack compute service set [hostname] nova-compute --disable
  2. List all instances on the Compute node:

    $ openstack server list --host [hostname] --all-projects
  3. Migrate each instance from the disabled host. Use one of the following commands:

    1. Migrate the instance to a specific host of your choice:

      $ openstack server migrate [instance-id] --live [target-host] --wait
    2. Let nova-scheduler automatically select the target host:

      $ nova live-migration [instance-id]
      Note

      The nova command might cause some deprecation warnings, which are safe to ignore.

  4. Wait until migration completes.
  5. Confirm the instance has migrated from the Compute node:

    $ openstack server list --host [hostname] --all-projects
  6. Repeat this step until you have migrated all instances from the Compute Node.
Important

For full instructions on configuring and migrating instances, see "Migrating VMs from an Overcloud Compute Node" in the Director Installation and Usage guide.

The director uses the upgrade-non-controller.sh command to run the upgrade script passed to each non-Controller node from the major-upgrade-composable-steps.yaml environment file. Upgrade each Compute node with the following command:

$ source ~/stackrc
$ upgrade-non-controller.sh --upgrade [NODE]

Replace [NODE] with the UUID or name of the chosen Compute node. Wait until the Compute node completes its upgrade.

Check the /var/log/yum.log file on the Compute node you have upgraded to see if any of the following packages have updated their major or minor versions:

  • kernel
  • openvswitch
  • ceph-osd (Hyper-converged environments)

If so, perform a reboot of the node:

  1. Log into the Compute Node and reboot it:

    $ sudo reboot
  2. Wait until the node boots.
  3. Enable the Compute Node again:

    $ source ~/overcloudrc
    $ openstack compute service set [hostname] nova-compute --enable
  4. Check whether the Compute node is enabled:

    $ openstack compute service list

Repeat this process for each node individually until you have upgraded and rebooted all nodes.

After upgrading all Compute nodes, revert back to the stackrc access details:

$ source ~/stackrc
Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat and request guidance and assistance.

3.6.4. Finalizing the Upgrade

The director needs to run through the upgrade finalization to ensure the Overcloud stack is synchronized with the current Heat template collection. This involves an environment file (major-upgrade-converge.yaml), which you include using the openstack overcloud deploy command.

Important

If your Red Hat OpenStack Platform environment is integrated with an external Ceph Storage Cluster from an earlier version (for example, Red Hat Ceph Storage 1.3), you need to enable backwards compatibility. To do so, create an environment file (for example, /home/stack/templates/ceph-backwards-compatibility.yaml) containing the following:

parameter_defaults:
   RbdDefaultFeatures: 1

Then, include this file when you run openstack overcloud deploy in the next step.

Run the openstack overcloud deploy command from your Undercloud and include the major-upgrade-converge.yaml environment file. Make sure you also include all options and custom environment files relevant to your environment, such as backwards compatibility for Ceph (if applicable), network isolation, and storage.

The following is an example of an openstack overcloud deploy command with the added major-upgrade-converge.yaml file:

$ openstack overcloud deploy --templates \
  --control-scale 3 \
  --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e network_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-converge.yaml \
  --ntp-server pool.ntp.org

Wait until the Overcloud updates with the new environment file’s configuration.

Note

Log in to a Controller node and run the pcs status command to check that all resources are active in the Controller cluster. If any resources have failed, run pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat and request guidance and assistance.

3.7. Post-Upgrade Notes for the Overcloud

Be aware of the following notes after upgrading the Overcloud to Red Hat OpenStack Platform 11:

  • If necessary, review the resulting configuration files on the overcloud nodes. The upgraded packages might have installed .rpmnew files appropriate to the Red Hat OpenStack Platform 11 version of the service.
  • The OpenStack Block Storage (cinder) API uses a new sizelimit filter in Red Hat OpenStack Platform 11. However, an environment that followed the upgrade path from OpenStack Platform 9 to 10 to 11 might not have updated to this new filter. Check the filter on Controller or Cinder API nodes using the following command:

    $ grep filter:sizelimit /etc/cinder/api-paste.ini -A1

    The filter should appear as follows:

    [filter:sizelimit]
    paste.filter_factory = oslo_middleware.sizelimit:RequestBodySizeLimiter.factory

    If not, replace the value in the /etc/cinder/api-paste.ini file of each Controller or Cinder API node and restart the httpd service:

    $ sudo sed -i s/cinder.api.middleware.sizelimit/oslo_middleware.sizelimit/ /etc/cinder/api-paste.ini
    $ sudo systemctl restart httpd
  • The Compute nodes might report a failure with neutron-openvswitch-agent. If this occurs, log into each Compute node and restart the service. For example:

    $ sudo systemctl restart neutron-openvswitch-agent
  • In some circumstances, the corosync service might fail to start on IPv6 environments after rebooting Controller nodes. This is due to Corosync starting before the Controller node configures the static IPv6 addresses. In these situations, restart Corosync manually on the Controller nodes:

    $ sudo systemctl restart corosync
  • If you configured fencing for your Controller nodes, the upgrade process might disable it. When the upgrade process completes, re-enable fencing with the following command on one of the Controller nodes:

    $ sudo pcs property set stonith-enabled=true