Keeping Red Hat OpenStack Platform Updated
Performing minor updates of Red Hat OpenStack Platform
OpenStack Documentation Team
rhos-docs@redhat.com
Abstract
Chapter 1. Introduction
This document provides a workflow to help keep your Red Hat OpenStack Platform 14 environment updated with the latest packages and containers.
This guide provides an upgrade path through the following versions:
| Old Overcloud Version | New Overcloud Version |
|---|---|
| Red Hat OpenStack Platform 14 | Red Hat OpenStack Platform 14.z |
1.1. High level workflow
The following table provides an outline of the steps required for the upgrade process:
| Step | Description |
|---|---|
| Updating the undercloud | Update the undercloud to the latest OpenStack Platform 14.z version. |
| Updating the overcloud | Update the overcloud to the latest OpenStack Platform 14.z version. |
| Updating the Ceph Storage nodes | Update all Ceph Storage services. |
| Finalizing the update | Run the convergence command to refresh your overcloud stack. |
Chapter 2. Updating the Undercloud
This process updates the undercloud and its overcloud images to the latest Red Hat OpenStack Platform 14 version.
2.1. Performing a minor update of a containerized undercloud
The director provides commands to update the packages on the undercloud node. This allows you to perform a minor update within the current version of your OpenStack Platform environment.
Procedure
- Log into the director as the stack user.
- Run yum to upgrade the director's main packages:
$ sudo yum update -y python-tripleoclient* openstack-tripleo-common openstack-tripleo-heat-templates
- The director uses the openstack undercloud upgrade command to update the undercloud environment. Run the command:
$ openstack undercloud upgrade
- Wait until the undercloud upgrade process completes.
Reboot the undercloud to update the operating system’s kernel and other system packages:
$ sudo reboot
- Wait until the node boots.
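Waiting for the node to boot usually means polling until it answers again. The generic retry helper below is a sketch only; the wait_for name, the attempt counts, and the ssh probe in the example are assumptions, not part of the documented procedure.

```shell
# Hypothetical helper: retry a command until it succeeds or the
# attempts run out. Not part of the documented procedure.
wait_for() {
  attempts=$1; shift   # maximum number of tries
  delay=$1; shift      # seconds to sleep between tries
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0   # command succeeded: the node is back
    i=$((i + 1))
    sleep "$delay"
  done
  return 1             # gave up: the node never answered
}
```

For example, `wait_for 30 10 ssh stack@undercloud true` would poll for roughly five minutes after `sudo reboot` before giving up.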
2.2. Updating the overcloud images
You need to replace your current overcloud images with new versions. The new images ensure the director can introspect and provision your nodes using the latest version of OpenStack Platform software.
Prerequisites
- You have updated the undercloud to the latest version.
Procedure
Remove any existing images from the images directory in the stack user's home (/home/stack/images):
$ rm -rf ~/images/*
Extract the archives:
$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-14.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-14.0.tar; do tar -xvf $i; done
$ cd ~
Import the latest images into the director:
$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
Configure your nodes to use the new images:
$ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
Verify the existence of the new images:
$ openstack image list
$ ls -l /httpboot
When deploying overcloud nodes, ensure the Overcloud image version corresponds to the respective Heat template version. For example, only use the OpenStack Platform 14 images with the OpenStack Platform 14 Heat templates.
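The verification step above can be scripted by checking that each required image name appears in the `openstack image list` output. The helper below is a sketch; the images_present name and the image names used in the example are assumptions, and real output can differ between releases.

```shell
# Hypothetical check: succeed only if every required image name appears
# as a line in the supplied `openstack image list -f value -c Name` output.
images_present() {
  list=$1; shift           # captured image list output
  for name in "$@"; do     # each required image name
    printf '%s\n' "$list" | grep -qx "$name" || return 1
  done
  return 0
}
```

For example, `images_present "$(openstack image list -f value -c Name)" overcloud-full` would fail if the upload step was skipped.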
2.3. Undercloud Post-Upgrade Notes
- If you use a local set of core templates in the stack user's home directory, ensure that you update the templates using the recommended workflow in "Using Customized Core Heat Templates". You must update the local copy before upgrading the overcloud.
2.4. Next Steps
The undercloud upgrade is complete. You can now update the overcloud.
Chapter 3. Updating the Overcloud
This process updates the overcloud.
Prerequisites
- You have updated the undercloud to the latest version.
3.1. Running the overcloud update preparation
The update requires you to run the openstack overcloud update prepare command, which performs the following tasks:
- Updates the overcloud plan to OpenStack Platform 14
- Prepares the nodes for the update
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the update preparation command:
$ openstack overcloud update prepare \
    --templates \
    -r <ROLES DATA FILE> \
    -n <NETWORK DATA FILE> \
    -e <ENVIRONMENT FILE> \
    -e <ENVIRONMENT FILE> \
    ...
Include the following options relevant to your environment:
- Custom configuration environment files (-e)
- If using your own custom roles, include your custom roles (roles_data) file (-r)
- If using custom networks, include your composable network (network_data) file (-n)
- Wait until the update preparation completes.
3.2. Running the container image preparation
The overcloud requires the latest OpenStack Platform 14 container images before performing the update. This involves executing the container_image_prepare external update process. To execute this process, run the openstack overcloud external-update run command against tasks tagged with the container_image_prepare tag. This procedure includes the following tasks:
- Automatically prepare all container image configuration relevant to your environment.
- Pull the relevant container images to your undercloud, unless you have previously disabled this option.
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the openstack overcloud external-update run command against tasks tagged with the container_image_prepare tag:
$ openstack overcloud external-update run --tags container_image_prepare
3.3. Updating all Controller nodes
This process updates all the Controller nodes to the latest OpenStack Platform 14 version. The process involves running the openstack overcloud update run command and including the --nodes Controller option to restrict operations to the Controller nodes only.
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --nodes Controller
- Wait until the Controller node update completes.
3.4. Updating all Compute nodes
This process updates all Compute nodes to the latest OpenStack Platform 14 version. The process involves running the openstack overcloud update run command and including the --nodes Compute option to restrict operations to the Compute nodes only.
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --nodes Compute
- Wait until the Compute node update completes.
3.5. Updating all HCI Compute nodes
This process updates the Hyperconverged Infrastructure (HCI) Compute nodes. The process involves:
- Running the openstack overcloud update run command with the --nodes ComputeHCI option to restrict operations to the HCI nodes only.
- Running the openstack overcloud ceph-upgrade run command to perform an update to a containerized Red Hat Ceph Storage 3 cluster.
Currently, the following combinations of Ansible with ceph-ansible are supported:
- ansible-2.6 with ceph-ansible-3.2
- ansible-2.4 with ceph-ansible-3.1
If your environment has ansible-2.6 with ceph-ansible-3.1, run the following commands to update ceph-ansible to the newest version:
# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
# subscription-manager repos --enable=rhel-7-server-ansible-2.6-rpms
# yum update ceph-ansible
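The supported pairings can also be expressed as a small guard to run before starting the update. The helper below and its supported_combo name are illustrative assumptions, not a Red Hat tool.

```shell
# Hypothetical guard: succeed only for the documented
# Ansible / ceph-ansible combinations.
supported_combo() {
  case "$1:$2" in
    ansible-2.6:ceph-ansible-3.2) return 0 ;;
    ansible-2.4:ceph-ansible-3.1) return 0 ;;
    *) return 1 ;;    # e.g. ansible-2.6 with ceph-ansible-3.1 needs the yum update above
  esac
}
```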
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --nodes ComputeHCI
- Wait until the node update completes.
Run the Ceph Storage update command. For example:
$ openstack overcloud ceph-upgrade run \
    --templates \
    -e <ENVIRONMENT FILE> \
    -e /home/stack/templates/overcloud_images.yaml
Include the following options relevant to your environment:
- Custom configuration environment files (-e)
- The environment file with your container image locations (-e). Note that the update command might display a warning about using the --container-registry-file option. You can ignore this warning because the option is deprecated in favor of using -e for the container image environment file.
- If applicable, your custom roles (roles_data) file (--roles-file)
- If applicable, your composable network (network_data) file (--networks-file)
- Wait until the Compute HCI node update completes.
3.6. Updating all Ceph Storage nodes
This process updates the Ceph Storage nodes. The process involves:
- Running the openstack overcloud update run command with the --nodes CephStorage option to restrict operations to the Ceph Storage nodes only.
- Running the openstack overcloud external-update run command to run ceph-ansible as an external process and update the Red Hat Ceph Storage 3 containers.
Currently, the following combinations of Ansible with ceph-ansible are supported:
- ansible-2.6 with ceph-ansible-3.2
- ansible-2.4 with ceph-ansible-3.1
If your environment has ansible-2.6 with ceph-ansible-3.1, run the following commands to update ceph-ansible to the newest version:
# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
# subscription-manager repos --enable=rhel-7-server-ansible-2.6-rpms
# yum update ceph-ansible
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --nodes CephStorage
- Wait until the node update completes.
Run the Ceph Storage container update command:
$ openstack overcloud external-update run --tags ceph
- Wait until the Ceph Storage container update completes.
3.7. Performing online database updates
Some overcloud components require an online upgrade (or migration) of their database tables. This involves executing the online_upgrade external update process. To execute this process, run the openstack overcloud external-update run command against tasks tagged with the online_upgrade tag. This performs online database updates to the following components:
- OpenStack Block Storage (cinder)
- OpenStack Compute (nova)
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the openstack overcloud external-update run command against tasks tagged with the online_upgrade tag:
$ openstack overcloud external-update run --tags online_upgrade
3.8. Finalizing the update
The update requires a final step to update the overcloud stack. This ensures the stack’s resource structure aligns with a regular deployment of OpenStack Platform 14 and allows you to perform standard openstack overcloud deploy functions in the future.
Procedure
Source the stackrc file:
$ source ~/stackrc
Run the update finalization command:
$ openstack overcloud update converge \
    --templates \
    -e <ENVIRONMENT FILE> \
    -e <ENVIRONMENT FILE> \
    ...
Include the following options relevant to your environment:
- Custom configuration environment files (-e)
- If applicable, your custom roles (roles_data) file (--roles-file)
- If applicable, your composable network (network_data) file (--networks-file)
- Wait until the update finalization completes.
Chapter 4. Rebooting the overcloud
After performing a minor version update, perform a reboot of your overcloud in case the nodes use a new kernel or new system-level components.
4.1. Rebooting controller and composable nodes
Complete the following steps to reboot controller nodes and standalone nodes based on composable roles, excluding Compute nodes and Ceph Storage nodes.
Procedure
Select a node to reboot. Log into the node and stop the cluster before rebooting:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
Reboot the node:
[heat-admin@overcloud-controller-0 ~]$ sudo reboot
- Wait until the node boots.
Re-enable the cluster for the node:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster start
Log into the node and check the services:
If the node uses Pacemaker services, check the node has rejoined the cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
If the node uses Systemd services, check all services are enabled:
[heat-admin@overcloud-controller-0 ~]$ sudo systemctl status
If the node uses containerized services, check all containers on the node are active:
[heat-admin@overcloud-controller-0 ~]$ sudo docker ps
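The container check above can be scripted by counting containers whose status indicates trouble. This sketch assumes the docker ps output format shown in the comment, and the unhealthy_containers name is a hypothetical choice.

```shell
# Hypothetical helper: count containers whose status indicates trouble,
# given the captured output of:
#   sudo docker ps -a --format '{{.Names}} {{.Status}}'
unhealthy_containers() {
  # grep -c prints the match count; || true keeps a zero count
  # from being treated as a failure under `set -e`.
  printf '%s\n' "$1" | grep -cE 'Exited|Restarting' || true
}
```

A non-zero count means at least one container on the node needs attention before you move on.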
4.2. Rebooting a Ceph Storage (OSD) cluster
Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
Procedure
Log into a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily:
$ sudo ceph osd set noout
$ sudo ceph osd set norebalance
- Select the first Ceph Storage node to reboot and log into the node.
Reboot the node:
$ sudo reboot
- Wait until the node boots.
Log into the node and check the cluster status:
$ sudo ceph -s
Check that the pgmap reports all pgs as normal (active+clean).
- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:
$ sudo ceph osd unset noout
$ sudo ceph osd unset norebalance
Perform a final status check to verify the cluster reports HEALTH_OK:
$ sudo ceph status
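The active+clean check lends itself to scripting when you reboot many OSD nodes in sequence. The awk sketch below assumes the single-line pgs: summary format shown in the comment; real ceph -s output varies between releases, and the all_pgs_clean name is an assumption.

```shell
# Hypothetical helper: succeed only when the `pgs:` summary line of a
# captured `ceph -s` report mentions no state other than active+clean.
# Assumed line shape:  "    pgs:     192 active+clean"
all_pgs_clean() {
  printf '%s\n' "$1" | awk '
    /pgs:/ {
      for (i = 2; i <= NF; i++)                        # skip the "pgs:" label
        if ($i !~ /^[0-9]+$/ && $i != "active+clean")
          exit 1                                       # some pgs are in another state
    }'
}
```

Between reboots you could then wait with something like `until all_pgs_clean "$(sudo ceph -s)"; do sleep 30; done` before moving to the next node.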
4.3. Rebooting compute nodes
Complete the following steps to reboot Compute nodes. To ensure minimal downtime of instances in your OpenStack Platform environment, this procedure also includes instructions about migrating instances from the Compute node you want to reboot. This involves the following workflow:
- Select and disable the Compute node you want to reboot so that it does not provision new instances.
- Migrate the instances to another Compute node.
- Reboot the empty Compute node.
- Enable the empty Compute node.
Procedure
- Log into the undercloud as the stack user.
- List all Compute nodes and their UUIDs:
$ source ~/stackrc
(undercloud) $ openstack server list --name compute
Identify the UUID of the Compute node you want to reboot.
From the undercloud, select a Compute Node. Disable the node:
$ source ~/overcloudrc
(overcloud) $ openstack compute service list
(overcloud) $ openstack compute service set [hostname] nova-compute --disable
List all instances on the Compute node:
(overcloud) $ openstack server list --host [hostname] --all-projects
Use one of the following commands to migrate your instances:
Migrate the instance to a different host:
(overcloud) $ openstack server migrate [instance-id] --live [target-host] --wait
Let nova-scheduler automatically select the target host:
(overcloud) $ nova live-migration [instance-id]
Live migrate all instances at once:
$ nova host-evacuate-live [hostname]
Note: The nova commands might display some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm the migration was successful:
(overcloud) $ openstack server list --host [hostname] --all-projects
- Continue migrating instances until none remain on the chosen Compute Node.
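Confirming that the node is fully drained can be automated by testing whether the server list for the host comes back empty. The helper below is a sketch; the host_is_empty name and the `-f value -c ID` output format it expects are assumptions.

```shell
# Hypothetical helper: succeed when the captured output of
#   openstack server list --host <hostname> --all-projects -f value -c ID
# contains no instance IDs.
host_is_empty() {
  # Strip all whitespace; an empty result means no instances remain.
  [ -z "$(printf '%s' "$1" | tr -d '[:space:]')" ]
}
```

A drain loop could then repeat the migration step `until host_is_empty "$(openstack server list --host [hostname] --all-projects -f value -c ID)"` before rebooting.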
Log into the Compute Node. Reboot the node:
[heat-admin@overcloud-compute-0 ~]$ sudo reboot
- Wait until the node boots.
Enable the Compute Node again:
$ source ~/overcloudrc
(overcloud) $ openstack compute service set [hostname] nova-compute --enable
Check whether the Compute node is enabled:
(overcloud) $ openstack compute service list