Chapter 2. Director-Based Environments: Performing Updates to Minor Versions

This section describes how to update packages for your Red Hat OpenStack Platform environment within the same version, in this case minor updates within Red Hat OpenStack Platform 9. This includes updating aspects of both the Undercloud and Overcloud.

Warning

With High Availability for Compute instances (or Instance HA, as described in High Availability for Compute Instances), upgrades or scale-up operations are not possible. Any attempt to do so fails.

If you have Instance HA enabled, disable it before performing an upgrade or scale-up. To do so, perform a rollback as described in Rollback.

The update procedure for both the Undercloud and Overcloud involves the following workflow:

  1. Update the Red Hat OpenStack Platform director packages
  2. Update the Overcloud images on the Red Hat OpenStack Platform director
  3. Update the Overcloud packages using the Red Hat OpenStack Platform director

2.1. Updating the Director Packages

The director relies on standard RPM methods to update your environment. This involves ensuring your director’s host uses the latest packages through yum.

  1. Log into the director as the stack user.
  2. Stop the main OpenStack Platform services:

    $ sudo systemctl stop 'openstack-*' 'neutron-*' httpd
    Note

    This causes a short period of downtime for the undercloud. The overcloud is still functional during the undercloud update.

  3. Update the python-tripleoclient package and its dependencies to ensure you have the latest scripts for the minor version update:

    $ sudo yum update python-tripleoclient
  4. The director uses the openstack undercloud upgrade command to update the Undercloud environment. Run the command:

    $ openstack undercloud upgrade

Major and minor version updates to the kernel or Open vSwitch require a reboot, such as when your undercloud operating system updates from Red Hat Enterprise Linux 7.2 to 7.3, or Open vSwitch updates from version 2.4 to 2.5. Check the /var/log/yum.log file on the director node to see whether either the kernel or openvswitch packages have updated their major or minor versions. If they have, reboot the node:

  1. Reboot the node:

    $ sudo reboot
  2. Wait until the node boots.

When the node boots, check the status of all services:

$ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"
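
If any of these services are not active, you can also list failed units to help identify the problem (a generic systemd check, not specific to this procedure):

$ sudo systemctl list-units --state=failed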

Verify the existence of your Overcloud and its nodes:

$ source ~/stackrc
$ openstack server list
$ ironic node-list
$ openstack stack list

2.2. Updating the Overcloud Images

The Undercloud update process might download new image archives from the rhosp-director-images and rhosp-director-images-ipa packages. Check the yum log to determine if new image archives are available:

$ sudo grep "rhosp-director-images" /var/log/yum.log
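
You can also confirm which image package versions are currently installed on the director (an optional rpm query added for convenience):

$ rpm -qa | grep rhosp-director-images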

If new archives are available, replace your current images with the new images. To install the new images, first remove any existing images from the images directory in the stack user's home directory (/home/stack/images):

$ rm -rf ~/images/*

Extract the archives:

$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-9.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-9.0.tar; do tar -xvf $i; done
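
Optionally, list the contents of the images directory to confirm that the archives extracted correctly (a generic check, not part of the documented steps):

$ ls -l ~/images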

Import the latest images into the director and configure nodes to use the new images:

$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
$ openstack baremetal configure boot

To finalize the image update, verify the existence of the new images:

$ openstack image list
$ ls -l /httpboot

The director is now updated and using the latest images. You do not need to restart any services after the update.

2.3. Updating the Overcloud Packages

The Overcloud relies on standard RPM methods to update the environment. This involves performing an update on all nodes using the openstack overcloud update command from the director.

Use the -e option to include environment files relevant to your Overcloud and its upgrade path. The order of the environment files is important, as the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:

  • The overcloud-resource-registry-puppet.yaml file from the heat template collection. Although this file is included automatically when you run the openstack overcloud deploy command, you must include this file when you run the openstack overcloud update command.
  • Any network isolation files, including the initialization file (environments/network-isolation.yaml) from the heat template collection and then your custom NIC configuration file.
  • Any external load balancing environment files.
  • Any storage environment files.
  • Any environment files for Red Hat CDN or Satellite registration.
  • Any other custom environment files.

Running an update on all nodes in parallel might cause problems. For example, an update of a package might involve restarting a service, which can disrupt other nodes. This is why the update process updates each node using a set of breakpoints. This means nodes are updated one by one. When one node completes the package update, the update process moves on to the next node. The update process also requires the -i option, which puts the command in an interactive mode that requires confirmation at each breakpoint. Without the -i option, the update remains paused at the first breakpoint.

The following is an example update command for updating the Overcloud:

$ openstack overcloud update stack overcloud -i \
  --templates  \
  -e /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/storage-environment.yaml \
  -e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml \
  [-e <environment_file>|...]

Running this command starts the update process. During this process, the director reports an IN_PROGRESS status and periodically prompts you to clear breakpoints. For example:

not_started: [u'overcloud-controller-0', u'overcloud-controller-1', u'overcloud-controller-2']
on_breakpoint: [u'overcloud-compute-0']
Breakpoint reached, continue? Regexp or Enter=proceed, no=cancel update, C-c=quit interactive mode:

Press Enter to clear the breakpoint from the last node on the on_breakpoint list. This begins the update for that node. You can also type a node name to clear a breakpoint on a specific node, or a Python-based regular expression to clear breakpoints on multiple nodes at once. However, it is not recommended to clear breakpoints on multiple Controller nodes at once. Continue this process until all nodes have completed their update.
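
For example, to clear the breakpoints on all Compute nodes in one step, you could enter a node-name pattern at the prompt; the pattern below is an illustration that assumes the default overcloud-compute-* naming shown in the example output above:

Breakpoint reached, continue? Regexp or Enter=proceed, no=cancel update, C-c=quit interactive mode: overcloud-compute-.*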

The update command reports a COMPLETE status when the update completes:

...
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
COMPLETE
update finished with status COMPLETE
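
After the update finishes, you can optionally confirm the Overcloud stack status from the undercloud, reusing the same commands as in the earlier verification step:

$ source ~/stackrc
$ openstack stack list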

If you configured fencing for your Controller nodes, the update process might disable it. When the update process completes, reenable fencing with the following command on one of the Controller nodes:

$ sudo pcs property set stonith-enabled=true
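
To confirm that fencing is active again, you can check the property value (an optional verification using a standard pcs command):

$ sudo pcs property show stonith-enabled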

The update process does not reboot any nodes in the Overcloud automatically. Major and minor version updates to the kernel or Open vSwitch require a reboot, such as when your overcloud operating system updates from Red Hat Enterprise Linux 7.2 to 7.3, or Open vSwitch updates from version 2.4 to 2.5. Check the /var/log/yum.log file on each node to see whether either the kernel or openvswitch packages have updated their major or minor versions. If they have, reboot each node using the "Rebooting the Overcloud" procedures in the Director Installation and Usage guide.
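
One way to perform this check on each node is to search the yum log for both package names (the grep pattern below is an illustration and assumes the default log location):

$ sudo grep -E "kernel|openvswitch" /var/log/yum.log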