Chapter 20. Speeding up an overcloud upgrade

To speed up the overcloud upgrade process, you can upgrade the control plane incrementally, or upgrade all of your nodes at once.

Upgrade incrementally

You can upgrade 1/3 of your control plane at a time. After you upgrade the first 1/3 of the control plane, you can move your environment into mixed mode, where the control plane APIs are running and the cloud remains operational. Normal high availability operation resumes only after the entire control plane has been upgraded.

Sections 20.1 to 20.4 contain an example upgrade process for an overcloud environment that includes the following node types with composable roles:

  • Three Controller nodes
  • Three Database nodes
  • Three Networker nodes
  • Three Ceph Storage nodes
  • Multiple Compute nodes

Upgrade the entire overcloud at once

By upgrading the whole overcloud at once, you complete the upgrade much faster than upgrading incrementally. Note that this option requires you to take your control plane and data plane offline.

To upgrade the entire overcloud, see Section 20.5, “Upgrading the entire overcloud at once”.

20.1. Running the overcloud upgrade preparation

The upgrade requires that you run the openstack overcloud upgrade prepare command, which performs the following tasks:

  • Updates the overcloud plan to OpenStack Platform 16.2
  • Prepares the nodes for the upgrade
Note

If you are not using the default stack name (overcloud), set your stack name with the --stack <stack_name> option, replacing <stack_name> with the name of your stack.

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Run the upgrade preparation command:

    $ openstack overcloud upgrade prepare \
        --stack <stack_name> \
        --templates \
        -e <environment_file> \
        …​
        -e /home/stack/templates/upgrades-environment.yaml \
        -e /home/stack/templates/rhsm.yaml \
        -e /home/stack/containers-prepare-parameter.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
        …​

    Include the following options relevant to your environment:

    • The environment file (upgrades-environment.yaml) with the upgrade-specific parameters (-e).
    • The environment file (rhsm.yaml) with the registration and subscription parameters (-e).
    • The environment file (containers-prepare-parameter.yaml) with your new container image locations (-e). In most cases, this is the same environment file that the undercloud uses. See the example after this list.
    • The environment file (neutron-ovs.yaml) to maintain OVS compatibility.
    • Any custom configuration environment files (-e) relevant to your deployment.
    • If applicable, your custom roles (roles_data) file using --roles-file.
    • If applicable, your composable network (network_data) file using --networks-file.
    • If you use a custom stack name, pass the name with the --stack option.
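
    For reference, a minimal containers-prepare-parameter.yaml might resemble the following sketch. The namespace, name, and tag values shown here are illustrative; use the values that match your registry configuration:

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          namespace: registry.redhat.io/rhosp-rhel8
          name_prefix: openstack-
          tag: '16.2'
          ceph_namespace: registry.redhat.io/rhceph
          ceph_image: rhceph-4-rhel8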
  3. Wait until the upgrade preparation completes.
  4. Download the container images:

    $ openstack overcloud external-upgrade run --stack <stack_name> --tags container_image_prepare

20.2. Upgrading the control plane nodes

To upgrade the control plane nodes in your environment to OpenStack Platform 16.2, you must upgrade 1/3 of your control plane nodes at a time, starting with the bootstrap nodes.

During the bootstrap Controller node upgrade process, a new Pacemaker cluster is created and new Red Hat OpenStack Platform 16.2 containers are started on the node, while the remaining Controller nodes continue to run on Red Hat OpenStack Platform 13.

This example includes the following node types with composable roles. The control plane nodes are named using the default overcloud-ROLE-NODEID convention:

  • overcloud-controller-0
  • overcloud-controller-1
  • overcloud-controller-2
  • overcloud-database-0
  • overcloud-database-1
  • overcloud-database-2
  • overcloud-networker-0
  • overcloud-networker-1
  • overcloud-networker-2
  • overcloud-ceph-0
  • overcloud-ceph-1
  • overcloud-ceph-2

Replace these values for your own node names where applicable.

After you upgrade the overcloud-controller-0, overcloud-database-0, overcloud-networker-0, and overcloud-ceph-0 bootstrap nodes, which comprise the first 1/3 of your control plane nodes, you must upgrade each additional 1/3 of the nodes with Pacemaker services and ensure that each node joins the new Pacemaker cluster started with the bootstrap node. Therefore, you must upgrade overcloud-controller-1, overcloud-database-1, overcloud-networker-1, and overcloud-ceph-1 before you upgrade overcloud-controller-2, overcloud-database-2, overcloud-networker-2, and overcloud-ceph-2.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc file:

    $ source ~/stackrc
  3. On the undercloud node, identify the bootstrap Controller node:

    $ tripleo-ansible-inventory --list [--stack <stack_name>] | jq .overcloud_Controller.hosts[0]
    • Optional: Replace <stack_name> with the name of the stack. If not specified, the default is overcloud.
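
    With the default overcloud-ROLE-NODEID naming convention, the command prints the bootstrap Controller node name as a JSON string, for example:

    "overcloud-controller-0"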
  4. Upgrade the overcloud-controller-0, overcloud-database-0, overcloud-networker-0, and overcloud-ceph-0 control plane nodes:

    1. Run the external upgrade command with the ceph_systemd tag:

      $ openstack overcloud external-upgrade run --stack <stack_name> --tags ceph_systemd -e ceph_ansible_limit=overcloud-controller-0,overcloud-database-0,overcloud-networker-0,overcloud-ceph-0

      This command performs the following actions:

      • Changes the systemd units that control the Ceph Storage containers to use Podman management.
      • Limits actions to the selected nodes using the ceph_ansible_limit variable.

      This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade.

    2. Perform a Leapp upgrade of the operating system on each control plane node:

      $ openstack overcloud upgrade run --stack <stack_name> --tags system_upgrade --limit overcloud-controller-0,overcloud-database-0,overcloud-networker-0,overcloud-ceph-0

      The control plane nodes are rebooted as a part of the Leapp upgrade.

      Important

      If the Ceph node upgrade fails, ensure that overcloud-controller-0 has finished upgrading before you proceed with the rest of the upgrade.

    3. Copy the latest version of the database from an existing node to the bootstrap node:

      $ openstack overcloud external-upgrade run --stack <stack_name> --tags system_upgrade_transfer_data
      Important

      This command causes an outage on the control plane. You cannot perform any standard operations on the overcloud during the next few steps.

    4. Run the upgrade command with the nova_hybrid_state tag and run only the upgrade_steps_playbook.yaml playbook:

      $ openstack overcloud upgrade run --stack <stack_name> \
       --playbook upgrade_steps_playbook.yaml \
       --tags nova_hybrid_state --limit all

      This command launches temporary 16.2 containers on the Compute nodes to facilitate workload migration when you upgrade the Compute nodes in a later step.

    5. Run the upgrade command with no tags:

      $ openstack overcloud upgrade run --stack <stack_name> --limit overcloud-controller-0,overcloud-database-0,overcloud-networker-0,overcloud-ceph-0 --playbook all

      This command performs the Red Hat OpenStack Platform upgrade.

      Important

      The control plane becomes active when this command finishes. You can perform standard operations on the overcloud again.

    6. Optional: On the bootstrap Controller node, verify that after the upgrade, the new Pacemaker cluster is started and that the control plane services such as galera, rabbit, haproxy, and redis are running:

      $ sudo pcs status
  5. Upgrade the overcloud-controller-1, overcloud-database-1, overcloud-networker-1, and overcloud-ceph-1 control plane nodes:

    1. Log in to the overcloud-controller-1 node and verify that the old cluster is no longer running:

      $ sudo pcs status

      An error similar to the following is displayed when the cluster is not running:

      Error: cluster is not currently running on this node
    2. Run the external upgrade command with the ceph_systemd tag:

      $ openstack overcloud external-upgrade run --stack <stack_name> --tags ceph_systemd -e ceph_ansible_limit=overcloud-controller-1,overcloud-database-1,overcloud-networker-1,overcloud-ceph-1

      This command performs the following functions:

      • Changes the systemd units that control the Ceph Storage containers to use Podman management.
      • Limits actions to the selected nodes using the ceph_ansible_limit variable.

      This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade.

    3. Run the upgrade command with the system_upgrade tag:

      $ openstack overcloud upgrade run --stack <stack_name> --tags system_upgrade --limit overcloud-controller-1,overcloud-database-1,overcloud-networker-1,overcloud-ceph-1

      This command performs the following actions:

      • Performs a Leapp upgrade of the operating system.
      • Performs a reboot as a part of the Leapp upgrade.
    4. Run the upgrade command with no tags:

      $ openstack overcloud upgrade run --stack <stack_name> --limit overcloud-controller-0,overcloud-controller-1,overcloud-database-0,overcloud-database-1,overcloud-networker-0,overcloud-networker-1,overcloud-ceph-0,overcloud-ceph-1

      This command performs the Red Hat OpenStack Platform upgrade. In addition to these nodes, include the previously upgraded bootstrap nodes in the --limit option.

  6. Upgrade the overcloud-controller-2, overcloud-database-2, overcloud-networker-2, and overcloud-ceph-2 control plane nodes:

    1. Log in to the overcloud-controller-2 node and verify that the old cluster is no longer running:

      $ sudo pcs status

      An error similar to the following is displayed when the cluster is not running:

      Error: cluster is not currently running on this node
    2. Run the external upgrade command with the ceph_systemd tag:

      $ openstack overcloud external-upgrade run --stack <stack_name> --tags ceph_systemd -e ceph_ansible_limit=overcloud-controller-2,overcloud-database-2,overcloud-networker-2,overcloud-ceph-2

      This command performs the following functions:

      • Changes the systemd units that control the Ceph Storage containers to use Podman management.
      • Limits actions to the selected nodes using the ceph_ansible_limit variable.

      This step is a preliminary measure to prepare the Ceph Storage services for the Leapp upgrade.

    3. Run the upgrade command with the system_upgrade tag:

      $ openstack overcloud upgrade run --stack <stack_name> --tags system_upgrade --limit overcloud-controller-2,overcloud-database-2,overcloud-networker-2,overcloud-ceph-2

      This command performs the following actions:

      • Performs a Leapp upgrade of the operating system.
      • Performs a reboot as a part of the Leapp upgrade.
    4. Run the upgrade command with no tags:

      $ openstack overcloud upgrade run --stack <stack_name> --limit overcloud-controller-0,overcloud-controller-1,overcloud-controller-2,overcloud-database-0,overcloud-database-1,overcloud-database-2,overcloud-networker-0,overcloud-networker-1,overcloud-networker-2,overcloud-ceph-0,overcloud-ceph-1,overcloud-ceph-2

      This command performs the Red Hat OpenStack Platform upgrade. Include all control plane nodes in the --limit option.

20.3. Upgrading Compute nodes in parallel

To upgrade a large number of Compute nodes to OpenStack Platform 16.2, you can run the openstack overcloud upgrade run command with the --limit option in parallel on groups of 20 nodes.

You can run multiple upgrade tasks in the background, where each task upgrades a separate group of 20 nodes. When you use this method to upgrade Compute nodes in parallel, you cannot select which nodes you upgrade. The selection of nodes is based on the inventory file that you generate when you run the tripleo-ansible-inventory command. For example, if you have 80 Compute nodes in your deployment, you can run the following commands to upgrade the Compute nodes in parallel:

$ openstack overcloud upgrade run -y --limit 'Compute[0:19]' > upgrade-compute-00-19.log 2>&1 &
$ openstack overcloud upgrade run -y --limit 'Compute[20:39]' > upgrade-compute-20-39.log 2>&1 &
$ openstack overcloud upgrade run -y --limit 'Compute[40:59]' > upgrade-compute-40-59.log 2>&1 &
$ openstack overcloud upgrade run -y --limit 'Compute[60:79]' > upgrade-compute-60-79.log 2>&1 &
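
Because each command runs in the background, you can track the tasks with standard shell job control and by following the log files, for example:

$ jobs                                 # list the background upgrade tasks
$ tail -f upgrade-compute-00-19.log    # follow the progress of one group
$ wait                                 # block until all background tasks complete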

To upgrade specific Compute nodes, use a comma-separated list of nodes:

$ openstack overcloud upgrade run --limit <Compute0>,<Compute1>,<Compute2>,<Compute3>
Note

If you are not using the default stack name (overcloud), use the --stack <stack_name> option and replace <stack_name> with the name of your stack.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc file:

    $ source ~/stackrc
  3. Migrate your instances. For more information on migration strategies, see Migrating virtual machines between Compute nodes.
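
    For example, to live migrate a single instance off a Compute node, you might run a command similar to the following. Verify the exact flags against your client version:

    $ openstack server migrate --live-migration --wait <instance_id>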
  4. Run the upgrade command with the system_upgrade tag:

    $ openstack overcloud upgrade run -y --stack <stack_name> --tags system_upgrade --limit 'Compute[0:19]' > upgrade-compute-00-19.log 2>&1 &
    $ openstack overcloud upgrade run -y --stack <stack_name> --tags system_upgrade --limit 'Compute[20:39]' > upgrade-compute-20-39.log 2>&1 &
    $ openstack overcloud upgrade run -y --stack <stack_name> --tags system_upgrade --limit 'Compute[40:59]' > upgrade-compute-40-59.log 2>&1 &
    $ openstack overcloud upgrade run -y --stack <stack_name> --tags system_upgrade --limit 'Compute[60:79]' > upgrade-compute-60-79.log 2>&1 &

    This command performs the following actions:

    • Performs a Leapp upgrade of the operating system.
    • Performs a reboot as a part of the Leapp upgrade.
  5. Run the upgrade command with no tags:

    $ openstack overcloud upgrade run -y --stack <stack_name> --limit 'Compute[0:19]' > upgrade-compute-00-19.log 2>&1 &
    $ openstack overcloud upgrade run -y --stack <stack_name> --limit 'Compute[20:39]' > upgrade-compute-20-39.log 2>&1 &
    $ openstack overcloud upgrade run -y --stack <stack_name> --limit 'Compute[40:59]' > upgrade-compute-40-59.log 2>&1 &
    $ openstack overcloud upgrade run -y --stack <stack_name> --limit 'Compute[60:79]' > upgrade-compute-60-79.log 2>&1 &

    This command performs the Red Hat OpenStack Platform upgrade.

  6. Optional: To upgrade selected Compute nodes, use the --limit option with a comma-separated list of nodes that you want to upgrade. The following example upgrades the overcloud-compute-0, overcloud-compute-1, and overcloud-compute-2 nodes in parallel.

    1. Run the upgrade command with the system_upgrade tag:

      $ openstack overcloud upgrade run --stack <stack_name> --tags system_upgrade --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2
    2. Run the upgrade command with no tags:

      $ openstack overcloud upgrade run --stack <stack_name> --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2

20.4. Synchronizing the overcloud stack

The upgrade requires you to update the overcloud stack to ensure that the stack resource structure and parameters align with a fresh deployment of OpenStack Platform 16.2.

Note

If you are not using the default stack name (overcloud), set your stack name with the --stack <stack_name> option, replacing <stack_name> with the name of your stack.

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Edit the containers-prepare-parameter.yaml file and remove the following parameters and their values:

    • ceph3_namespace
    • ceph3_tag
    • ceph3_image
    • name_prefix_stein
    • name_suffix_stein
    • namespace_stein
    • tag_stein
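
    These parameters typically appear in the set section of the ContainerImagePrepare parameter. As an illustration, with placeholder values, delete entries such as the following:

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          ceph3_namespace: <namespace>     # remove
          ceph3_tag: <tag>                 # remove
          ceph3_image: <image>             # remove
          name_prefix_stein: <prefix>      # remove
          name_suffix_stein: <suffix>      # remove
          namespace_stein: <namespace>     # remove
          tag_stein: <tag>                 # remove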
  3. To re-enable fencing in your overcloud, set the EnableFencing parameter to true in the fencing.yaml environment file.
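
    A minimal fencing.yaml sketch with fencing re-enabled looks like this; keep any existing fencing configuration that your file already contains:

    parameter_defaults:
      EnableFencing: true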
  4. Run the upgrade finalization command:

    $ openstack overcloud upgrade converge \
        --stack <stack_name> \
        --templates \
        -e <environment_file> \
        …​
        -e /home/stack/templates/upgrades-environment.yaml \
        -e /home/stack/templates/rhsm.yaml \
        -e /home/stack/containers-prepare-parameter.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
        …​

    Include the following options relevant to your environment:

    • The environment file (upgrades-environment.yaml) with the upgrade-specific parameters (-e).
    • The environment file (fencing.yaml) with the EnableFencing parameter set to true.
    • The environment file (rhsm.yaml) with the registration and subscription parameters (-e).
    • The environment file (containers-prepare-parameter.yaml) with your new container image locations (-e). In most cases, this is the same environment file that the undercloud uses.
    • The environment file (neutron-ovs.yaml) to maintain OVS compatibility.
    • Any custom configuration environment files (-e) relevant to your deployment.
    • If applicable, your custom roles (roles_data) file using --roles-file.
    • If applicable, your composable network (network_data) file using --networks-file.
    • If you use a custom stack name, pass the name with the --stack option.
  5. Wait until the stack synchronization completes.
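
    To monitor progress from a separate terminal, you can check the stack status. The stack reports UPDATE_COMPLETE when the synchronization finishes:

    $ openstack stack list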
Important

You do not need the upgrades-environment.yaml file for any further deployment operations.

20.5. Upgrading the entire overcloud at once

This upgrade process requires you to first shut down all workloads that are running in the overcloud. You then upgrade all of your overcloud systems, and restart the workloads afterward. This process is driven from the undercloud.

You can also upgrade Compute nodes that include Red Hat Ceph Storage as part of this procedure, or upgrade them separately after you upgrade all the other nodes.

Prerequisites

  • In the upgrades-environment.yaml file, include the following parameter in parameter_defaults:

    AllInOneUpgrade: true
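
    For example, the relevant portion of the upgrades-environment.yaml file reads:

    parameter_defaults:
      AllInOneUpgrade: true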

Procedure

  1. Shut down your workloads.
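
    For example, one way to stop all running instances across projects, assuming admin credentials and a standard overcloudrc file, is a loop similar to the following. Adapt it to the shutdown requirements of your workloads:

    $ source ~/overcloudrc
    $ for server in $(openstack server list --all-projects -f value -c ID); do
        openstack server stop "$server"
      done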
  2. If you deployed director-integrated Ceph, switch the Ceph systemd files to podman:

    $ openstack overcloud external-upgrade run --stack overcloud --tags ceph_systemd -e ceph_ansible_limit=controller-0,controller-1,controller-2,ceph-0,ceph-1,ceph-2,networker-0,networker-1
    • Replace controller-0,controller-1,controller-2,ceph-0,ceph-1,ceph-2,networker-0,networker-1 with the server names for the roles in your environment.
    • To upgrade Compute nodes that contain Ceph, include the hostname of the Compute node in the openstack overcloud external-upgrade run command. For example:

      $ openstack overcloud external-upgrade run --stack overcloud --tags ceph_systemd -e ceph_ansible_limit=overcloud-compute-0,overcloud-compute-1

      Additionally, include the hostname of the Compute node in the commands in steps 4 and 5.

  3. Stop all RHOSP services on the nodes:

    $ openstack overcloud external-upgrade run --stack overcloud --tags system_upgrade_stop_services
  4. Run the system upgrade on all nodes. If you deployed director-integrated Ceph, include all Ceph nodes in the same --limit command:

    $ openstack overcloud upgrade run --stack overcloud --tags system_upgrade --limit controller-0,controller-1,controller-2,ceph-0,ceph-1,ceph-2,networker-0,networker-1
  5. Run the upgrade without tags:

    $ openstack overcloud upgrade run --stack overcloud --limit controller-0,controller-1,controller-2,ceph-0,ceph-1,ceph-2,networker-0,networker-1

Next steps