Upgrading Red Hat OpenStack Platform
Upgrading a Red Hat OpenStack Platform environment
Chapter 1. Introduction
This document provides a workflow to help upgrade your Red Hat OpenStack Platform environment to the latest major version and keep it updated with minor releases of that version.
This guide provides an upgrade path through the following versions:
| Old Overcloud Version | New Overcloud Version |
|---|---|
| Red Hat OpenStack Platform 12 | Red Hat OpenStack Platform 13 |
1.1. High level workflow
The following table provides an outline of the steps required for the upgrade process:
| Step | Description |
|---|---|
| Preparing your environment | Perform a backup of the database and configuration of the undercloud and overcloud Controller nodes. Update to the latest minor release. Validate the environment. |
| Upgrading the undercloud | Upgrade the undercloud from OpenStack Platform 12 to OpenStack Platform 13. |
| Obtaining container images | Create an environment file containing the locations of container images for OpenStack Platform 13 services. |
| Preparing the overcloud | Perform relevant steps to transition your overcloud configuration files to OpenStack Platform 13. |
| Upgrading your Controller nodes | Upgrade all Controller nodes simultaneously to OpenStack Platform 13. |
| Upgrading your Compute nodes | Test the upgrade on selected Compute nodes. If the test succeeds, upgrade all Compute nodes. |
| Upgrading your Ceph Storage nodes | Upgrade all Ceph Storage nodes. This includes an upgrade to a containerized version of Red Hat Ceph Storage 3. |
| Finalize the upgrade | Run the convergence command to refresh your overcloud stack. |
1.2. Repositories
Both the undercloud and overcloud require access to Red Hat repositories either through the Red Hat Content Delivery Network or through Red Hat Satellite 6. If using a Red Hat Satellite Server, synchronize the required repositories to your OpenStack Platform environment. Use the following list of CDN channel names as a guide:
Table 1.1. OpenStack Platform Repositories
| Name | Repository | Description of Requirement |
|---|---|---|
| Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rpms | Base operating system repository for x86_64 systems. |
| Red Hat Enterprise Linux 7 Server - Extras (RPMs) | rhel-7-server-extras-rpms | Contains Red Hat OpenStack Platform dependencies. |
| Red Hat Enterprise Linux 7 Server - RH Common (RPMs) | rhel-7-server-rh-common-rpms | Contains tools for deploying and configuring Red Hat OpenStack Platform. |
| Red Hat Satellite Tools for RHEL 7 Server RPMs x86_64 | rhel-7-server-satellite-tools-6.3-rpms | Tools for managing hosts with Red Hat Satellite 6. |
| Red Hat Enterprise Linux High Availability (for RHEL 7 Server) (RPMs) | rhel-ha-for-rhel-7-server-rpms | High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. |
| Red Hat OpenStack Platform 13 for RHEL 7 (RPMs) | rhel-7-server-openstack-13-rpms | Core Red Hat OpenStack Platform repository. Also contains packages for Red Hat OpenStack Platform director. |
| Red Hat Ceph Storage OSD 3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-3-osd-rpms | (For Ceph Storage Nodes) Repository for the Ceph Storage Object Storage daemon. Installed on Ceph Storage nodes. |
| Red Hat Ceph Storage MON 3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-3-mon-rpms | (For Ceph Storage Nodes) Repository for the Ceph Storage Monitor daemon. Installed on Controller nodes in OpenStack environments using Ceph Storage nodes. |
| Red Hat Ceph Storage Tools 3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-3-tools-rpms | Provides tools for nodes to communicate with the Ceph Storage cluster. Enable this repository for all nodes when deploying an overcloud with a Ceph Storage cluster. |
| Enterprise Linux for Real Time for NFV (RHEL 7 Server) (RPMs) | rhel-7-server-nfv-rpms | Repository for Real Time KVM (RT-KVM) for NFV. Contains packages to enable the real time kernel. Enable this repository for all Compute nodes targeted for RT-KVM. NOTE: Access to this repository requires a separate Red Hat OpenStack Platform for Real Time subscription. |
To configure repositories for your Red Hat OpenStack Platform environment in an offline network, see "Configuring Red Hat OpenStack Platform Director in an Offline Environment" on the Red Hat Customer Portal.
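As an illustration, on a node registered to the CDN you might enable the core repositories with subscription-manager. This is a sketch; the exact repository IDs, such as the Satellite Tools version, depend on your subscription:

$ sudo subscription-manager repos --disable='*'
$ sudo subscription-manager repos \
  --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-rh-common-rpms \
  --enable=rhel-ha-for-rhel-7-server-rpms \
  --enable=rhel-7-server-openstack-13-rpms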
Chapter 2. Preparing for an OpenStack Platform Upgrade
This process prepares your OpenStack Platform environment for a full update. It involves the following steps:
- Backup both the undercloud and overcloud
- Update the undercloud packages and run the upgrade command
- Reboot the undercloud in case a newer kernel or newer system packages are installed
- Update the overcloud using the overcloud upgrade command
- Reboot the overcloud nodes in case a newer kernel or newer system packages are installed
- Perform a validation check on both the undercloud and overcloud
These procedures ensure your OpenStack Platform environment is in the best possible state before proceeding with the upgrade.
2.1. Backing up the undercloud
A full undercloud backup includes the following databases and files:
- All MariaDB databases on the undercloud node
- The MariaDB configuration file on the undercloud (so that you can accurately restore databases)
- The configuration data: /etc
- Log data: /var/log
- Image data: /var/lib/glance
- Certificate generation data if using SSL: /var/lib/certmonger
- Any container image data: /var/lib/docker and /var/lib/registry
- All swift data: /srv/node
- All data in the stack user home directory: /home/stack
Confirm that you have sufficient disk space available on the undercloud before performing the backup process. Expect the archive file to be at least 3.5 GB, if not larger.
Procedure
- Log into the undercloud as the root user.
- Create a backup directory, and change the user ownership of the directory to the stack user:

[root@director ~]# mkdir /backup
[root@director ~]# chown stack: /backup

- From the backup directory, back up the database:

[root@director ~]# cd /backup
[root@director ~]# mysqldump --opt --all-databases > /root/undercloud-all-databases.sql

- Archive the database backup and the configuration files:

[root@director ~]# tar --xattrs --ignore-failed-read -cf \
  undercloud-backup-`date +%F`.tar \
  /root/undercloud-all-databases.sql \
  /etc \
  /var/log \
  /var/lib/glance \
  /var/lib/certmonger \
  /var/lib/docker \
  /var/lib/registry \
  /srv/node \
  /root \
  /home/stack

- The --ignore-failed-read option skips any directory that does not apply to your undercloud.
- The --xattrs option includes extended attributes, which are required to store metadata for Object Storage (swift).

This creates a file named undercloud-backup-<date>.tar, where <date> is the system date. Copy this tar file to a secure location.
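To confirm the archive is readable before copying it off the undercloud, you can list its contents. This is a quick sanity check, not part of the official procedure:

[root@director ~]# tar -tf undercloud-backup-`date +%F`.tar | head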
2.2. Backing up containerized overcloud control plane services
The following procedure creates a backup of the containerized overcloud databases and configuration. A backup of the overcloud database and services ensures you have a snapshot of a working environment, which helps if you need to restore the overcloud to its original state after an operational failure.
This procedure only includes crucial control plane services. It does not include backups of Compute node workloads, data on Ceph Storage nodes, or any additional services.
Procedure
Perform the database backup:
Log into a Controller node. You can access the overcloud from the undercloud:
$ ssh heat-admin@192.0.2.100
Change to the root user:

$ sudo -i
Create a temporary directory to store the backups:
# mkdir -p /var/tmp/mysql_backup/
Obtain the database password and store it in the MYSQLDBPASS environment variable. The password is stored in the mysql::server::root_password variable within the /etc/puppet/hieradata/service_configs.json file. Use the following command to store the password:

# MYSQLDBPASS=$(sudo hiera mysql::server::root_password)
Back up the database:
# mysql -uroot -p$MYSQLDBPASS -s -N -e "select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';" | xargs mysqldump -uroot -p$MYSQLDBPASS --single-transaction --databases > /var/tmp/mysql_backup/openstack_databases-`date +%F`-`date +%T`.sql
This dumps a database backup called /var/tmp/mysql_backup/openstack_databases-<date>.sql, where <date> is the system date and time. Copy this database dump to a secure location.

Back up all the users and permissions information:
# mysql -uroot -p$MYSQLDBPASS -s -N -e "SELECT CONCAT('\"SHOW GRANTS FOR ''',user,'''@''',host,''';\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')" | xargs -n1 mysql -uroot -p$MYSQLDBPASS -s -N -e | sed 's/$/;/' > /var/tmp/mysql_backup/openstack_databases_grants-`date +%F`-`date +%T`.sqlThis will dump a database backup called
/var/tmp/mysql_backup/openstack_databases_grants-<date>.sqlwhere<date>is the system date and time. Copy this database dump to a secure location.
Back up the OpenStack Telemetry database:
Connect to any Controller node and get the IP address of the MongoDB primary instance:
# MONGOIP=$(sudo hiera mongodb::server::bind_ip)
Create the backup:
# mkdir -p /var/tmp/mongo_backup/
# mongodump --oplog --host $MONGOIP --out /var/tmp/mongo_backup/
- Copy the database dump in /var/tmp/mongo_backup/ to a secure location.
Back up the Redis cluster:
Obtain the Redis endpoint from HAProxy:
# REDISIP=$(sudo hiera redis_vip)
Obtain the master password for the Redis cluster:
# REDISPASS=$(sudo hiera redis::masterauth)
Check connectivity to the Redis cluster:
# redis-cli -a $REDISPASS -h $REDISIP ping
Dump the Redis database:
# redis-cli -a $REDISPASS -h $REDISIP bgsave
This stores the database backup in the default /var/lib/redis/ directory. Copy this database dump to a secure location.
Back up the file system on each Controller node:
Create a directory for the backup:
# mkdir -p /var/tmp/filesystem_backup/
Run the following tar command:

# tar --ignore-failed-read --xattrs \
  -zcvf /var/tmp/filesystem_backup/fs_backup-`date '+%Y-%m-%d-%H-%M-%S'`.tar.gz \
  /var/lib/config-data \
  /var/log/containers \
  /etc/corosync \
  /etc/logrotate.d \
  /etc/openvswitch \
  /var/log/openvswitch \
  /srv/node \
  /home/heat-admin

The --ignore-failed-read option ignores any missing directories, which is useful if certain services are unused or run on their own custom roles.
- Copy the resulting tar file to a secure location.
2.3. Performing a minor update of an undercloud
The director provides commands to update the packages on the undercloud node. This allows you to perform a minor update within the current version of your OpenStack Platform environment.
Procedure
- Log into the director as the stack user.
- Update the python-tripleoclient package and its dependencies to ensure you have the latest scripts for the minor version update:

$ sudo yum update -y python-tripleoclient

- The director uses the openstack undercloud upgrade command to update the undercloud environment. Run the command:

$ openstack undercloud upgrade
- Wait until the undercloud upgrade process completes.
Reboot the undercloud to update the operating system’s kernel and other system packages:
$ sudo reboot
- Wait until the node boots.
2.4. Performing a minor update of a containerized overcloud
The director provides commands to update the packages on all overcloud nodes. This allows you to perform a minor update within the current version of your OpenStack Platform environment.
Procedure
Find the latest tag for the containerized service images:
$ openstack overcloud container image tag discover \
  --image registry.access.redhat.com/rhosp12/openstack-base:latest \
  --tag-from-label version-release
Make a note of the most recent tag.
Create an updated environment file for your container image source using the openstack overcloud container image prepare command. For example, to use images from registry.access.redhat.com:

$ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp12 \
  --prefix=openstack- \
  --tag [TAG] \
  --set ceph_namespace=registry.access.redhat.com/rhceph \
  --set ceph_image=rhceph-2-rhel7 \
  --set ceph_tag=latest \
  --env-file=/home/stack/templates/overcloud_images.yaml \
  -e /home/stack/templates/custom_environment_file.yaml

For [TAG], use the most recent tag you noted earlier. Include any custom environment files with -e.
For more information about generating this environment file for different source types, see "Configuring a container image source" in the Director Installation and Usage guide.
Run the openstack overcloud update stack command to update the container image locations in your overcloud:

$ openstack overcloud update stack --init-minor-update \
  --container-registry-file /home/stack/templates/overcloud_images.yaml

The --init-minor-update option only updates the parameters in the overcloud stack. It does not perform the actual package or container update. Wait until this command completes.

Perform a package and container update using the openstack overcloud update command. Use the --nodes option to update the nodes for each role. For example, the following command updates nodes in the Controller role:

$ openstack overcloud update stack --nodes Controller
Run this command for each role group in the following order:
- Controller
- CephStorage
- Compute
- ObjectStorage
- Any custom roles, such as Database, MessageBus, Networker, and so forth.

- The update process for the chosen role starts. The director uses an Ansible playbook to perform the update and displays the output of each task.
- Update the next role group. Repeat until you have updated all nodes.
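If your overcloud uses only the default roles, you can script the role-by-role updates in the required order. This is a minimal sketch; each run must complete successfully before the next role starts, hence the || break:

$ for ROLE in Controller CephStorage Compute ObjectStorage; do
    openstack overcloud update stack --nodes $ROLE || break
  done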
2.5. Rebooting controller and composable nodes
The following procedure reboots controller nodes and standalone nodes based on composable roles. This excludes Compute nodes and Ceph Storage nodes.
Procedure
Select a node to reboot. Log into it and stop the cluster before rebooting:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
Reboot the node:
[heat-admin@overcloud-controller-0 ~]$ sudo reboot
- Wait until the node boots.
Re-enable the cluster for the node:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster start
Log into the node and check the services. For example:
If the node uses Pacemaker services, check that the node has rejoined the cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
If the node uses Systemd services, check all services are enabled:
[heat-admin@overcloud-controller-0 ~]$ sudo systemctl status
2.6. Rebooting a Ceph Storage (OSD) cluster
The following procedure reboots a cluster of Ceph Storage (OSD) nodes.
Procedure
Log into a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily:
$ sudo ceph osd set noout
$ sudo ceph osd set norebalance
- Select the first Ceph Storage node to reboot and log into it.
Reboot the node:
$ sudo reboot
- Wait until the node boots.
Log into the node and check the cluster status:
$ sudo ceph -s
Check that the pgmap reports all pgs as normal (active+clean).

- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:
$ sudo ceph osd unset noout
$ sudo ceph osd unset norebalance
Perform a final status check to verify that the cluster reports HEALTH_OK:

$ sudo ceph status
2.7. Rebooting compute nodes
The following procedure reboots Compute nodes. To ensure minimal downtime of instances in your OpenStack Platform environment, this procedure also includes instructions on migrating instances from the chosen Compute node. This involves the following workflow:
- Select a Compute node to reboot and disable it so that it does not provision new instances
- Migrate the instances to another Compute node
- Reboot the empty Compute node and enable it
Procedure
- Log into the undercloud as the stack user.
- List all Compute nodes and their UUIDs:

$ source ~/stackrc
(undercloud) $ openstack server list --name compute

Identify the UUID of the Compute node you intend to reboot.

From the undercloud, select a Compute node and disable it:

$ source ~/overcloudrc
(overcloud) $ openstack compute service list
(overcloud) $ openstack compute service set [hostname] nova-compute --disable
List all instances on the Compute node:
(overcloud) $ openstack server list --host [hostname] --all-projects
Use one of the following commands to migrate your instances:
Migrate the instance to a specific host of your choice:
(overcloud) $ openstack server migrate [instance-id] --live [target-host] --wait
Let nova-scheduler automatically select the target host:

(overcloud) $ nova live-migration [instance-id]
Live migrate all instances at once:
$ nova host-evacuate-live [hostname]
Note: The nova command might cause some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm the migration was successful:
(overcloud) $ openstack server list --host [hostname] --all-projects
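To wait programmatically instead of re-running the list command, you can poll until no instances remain on the node. This is a minimal sketch; replace [hostname] as in the previous steps:

(overcloud) $ while openstack server list --host [hostname] --all-projects -f value -c ID | grep -q .; do sleep 30; done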
- Continue migrating instances until none remain on the chosen Compute Node.
Log into the Compute Node and reboot it:
[heat-admin@overcloud-compute-0 ~]$ sudo reboot
- Wait until the node boots.
Enable the Compute Node again:
$ source ~/overcloudrc
(overcloud) $ openstack compute service set [hostname] nova-compute --enable
Check whether the Compute node is enabled:
(overcloud) $ openstack compute service list
2.8. Validating the undercloud
The following is a set of steps to check the functionality of your undercloud.
Procedure
Source the undercloud access details:
$ source ~/stackrc
Check for failed Systemd services:
(undercloud) $ sudo systemctl list-units --state=failed 'openstack*' 'neutron*' 'httpd' 'docker'
Check the undercloud free space:
(undercloud) $ df -h
Use the "Undercloud Reqirements" as a basis to determine if you have adequate free space.
If you have NTP installed on the undercloud, check that clocks are synchronized:
(undercloud) $ sudo ntpstat
Check the undercloud network services:
(undercloud) $ openstack network agent list
All agents should be Alive and their state should be UP.

Check the undercloud compute services:
(undercloud) $ openstack compute service list
All agents' status should be enabled and their state should be up.
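You can combine these checks into a single script for repeated validation runs. This is a sketch that assumes stackrc is in the default location:

#!/bin/bash
# Basic undercloud health checks (run as the stack user)
source ~/stackrc
sudo systemctl list-units --state=failed 'openstack*' 'neutron*' 'httpd' 'docker'
df -h
sudo ntpstat
openstack network agent list
openstack compute service list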
Related Information
- The following solution article shows how to remove deleted stack entries in your OpenStack Orchestration (heat) database: https://access.redhat.com/solutions/2215131
2.9. Validating a containerized overcloud
The following is a set of steps to check the functionality of your containerized overcloud.
Procedure
Source the undercloud access details:
$ source ~/stackrc
Check the status of your bare metal nodes:
(undercloud) $ openstack baremetal node list
All nodes should have a valid power state (on) and maintenance mode should be false.

Check for failed Systemd services:
(undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo systemctl list-units --state=failed 'openstack*' 'neutron*' 'httpd' 'docker' 'ceph*'" ; done
Check for failed containerized services:
(undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker ps -f 'exited=1' --all" ; done (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker ps -f 'status=dead' -f 'status=restarting'" ; done
Check the HAProxy connection to all services. Obtain the Control Plane VIP address and authentication details for the haproxy.stats service:

(undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE sudo 'grep "listen haproxy.stats" -A 6 /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg'
Use these details in the following cURL request:
(undercloud) $ curl -s -u admin:<PASSWORD> "http://<IP ADDRESS>:1993/;csv" | egrep -vi "(frontend|backend)" | awk -F',' '{ print $1" "$2" "$18 }'

Replace <PASSWORD> and <IP ADDRESS> with the respective details from the haproxy.stats service. The resulting list shows the OpenStack Platform services on each node and their connection status.

Check overcloud database replication health:
(undercloud) $ for NODE in $(openstack server list --name controller -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker exec clustercheck clustercheck" ; done
Check RabbitMQ cluster health:
(undercloud) $ for NODE in $(openstack server list --name controller -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo docker exec $(ssh heat-admin@$NODE "sudo docker ps -f 'name=.*rabbitmq.*' -q") rabbitmqctl node_health_check" ; done
Check Pacemaker resource health:
(undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo pcs status"
Look for:
- All cluster nodes online.
- No resources stopped on any cluster nodes.
- No failed pacemaker actions.
Check the disk space on each overcloud node:
(undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo df -h --output=source,fstype,avail -x overlay -x tmpfs -x devtmpfs" ; done
Check overcloud Ceph Storage cluster health. The following command runs the ceph tool on a Controller node to check the cluster:

(undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph -s"

Check Ceph Storage OSD free space. The following command runs the ceph tool on a Controller node to check the free space:

(undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph df"

Check that clocks are synchronized on overcloud nodes:
(undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo ntpstat" ; done
Source the overcloud access details:
(undercloud) $ source ~/overcloudrc
Check the overcloud network services:
(overcloud) $ openstack network agent list
All agents should be Alive and their state should be UP.

Check the overcloud compute services:

(overcloud) $ openstack compute service list

All agents' status should be enabled and their state should be up.

Check the overcloud volume services:

(overcloud) $ openstack volume service list

All agents' status should be enabled and their state should be up.
Related Information
- Review the article "How can I verify my OpenStack environment is deployed with Red Hat recommended configurations?". This article provides some information on how to check your Red Hat OpenStack Platform environment and tune the configuration to Red Hat’s recommendations.
Chapter 3. Upgrading the Undercloud
This process upgrades the undercloud and its overcloud images to Red Hat OpenStack Platform 13.
3.1. Upgrading the undercloud to OpenStack Platform 13
This procedure upgrades the undercloud toolset and the core Heat template collection to the OpenStack Platform 13 release.
Procedure
- Log into the director as the stack user.
- Disable the current OpenStack Platform repository:

$ sudo subscription-manager repos --disable=rhel-7-server-openstack-12-rpms

- Enable the new OpenStack Platform repository:

$ sudo subscription-manager repos --enable=rhel-7-server-openstack-13-rpms

- Run yum to upgrade the director's main packages:

$ sudo yum update -y python-tripleoclient
Run the following command to upgrade the undercloud:
$ openstack undercloud upgrade
- Wait until the undercloud upgrade process completes.
Reboot the undercloud to update the operating system’s kernel and other system packages:
$ sudo reboot
- Wait until the node boots.
You have upgraded the undercloud to the OpenStack Platform 13 release.
3.2. Upgrading the overcloud images
You need to replace your current overcloud images with new versions. The new images ensure the director can introspect and provision your nodes using the latest version of OpenStack Platform software.
Prerequisites
- You have upgraded the undercloud to the latest version.
Procedure
Remove any existing images from the images directory on the stack user's home (/home/stack/images):

$ rm -rf ~/images/*

Install the packages containing the latest overcloud and introspection images:

$ sudo yum install rhosp-director-images rhosp-director-images-ipa

Extract the archives:

$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar; do tar -xvf $i; done
$ cd ~
Import the latest images into the director:
$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
Configure your nodes to use the new images:
$ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
Verify the existence of the new images:
$ openstack image list $ ls -l /httpboot
When deploying overcloud nodes, ensure the Overcloud image version corresponds to the respective Heat template version. For example, only use the OpenStack Platform 13 images with the OpenStack Platform 13 Heat templates.
3.3. Comparing Previous Template Versions
The upgrade process installs a new set of core Heat templates that correspond to the latest overcloud version. Red Hat OpenStack Platform’s repository retains the previous version of the core template collection in the openstack-tripleo-heat-templates-compat package. This procedure shows how to compare these versions so you can identify changes that might affect your overcloud upgrade.
Procedure
Install the openstack-tripleo-heat-templates-compat package:

$ sudo yum install openstack-tripleo-heat-templates-compat

This installs the previous templates in the compat directory of your Heat template collection (/usr/share/openstack-tripleo-heat-templates/compat) and also creates a link to compat named after the previous version (pike). These templates are backwards compatible with the upgraded director, which means you can use the latest version of the director to install an overcloud of the previous version.

Create a temporary copy of the core Heat templates:
$ cp -a /usr/share/openstack-tripleo-heat-templates /tmp/osp13
Move the previous version into its own directory:
$ mv /tmp/osp13/compat /tmp/osp12
Perform a diff on the contents of both directories:

$ diff -urN /tmp/osp12 /tmp/osp13
This shows the core template changes from one version to the next. These changes provide an idea of what should occur during the overcloud upgrade.
3.4. Next Steps
The undercloud upgrade is complete. You can now prepare the overcloud for the upgrade.
Chapter 4. Configuring a container image source
A containerized overcloud requires access to a registry with the required container images. This chapter provides information on how to prepare the registry and your overcloud configuration to use container images for Red Hat OpenStack Platform.
This guide provides several use cases to configure your overcloud to use a registry. Before attempting one of these use cases, it is recommended to familiarize yourself with how to use the image preparation command. See Section 4.2, “Container image preparation command usage” for more information.
4.1. Registry Methods
Red Hat OpenStack Platform supports the following registry types:
- Remote Registry
The overcloud pulls container images directly from registry.access.redhat.com. This method is the easiest for generating the initial configuration. However, each overcloud node pulls each image directly from the Red Hat Container Catalog, which can cause network congestion and slower deployment. In addition, all overcloud nodes require internet access to the Red Hat Container Catalog.

- Local Registry

The undercloud uses the docker-distribution service to act as a registry. This allows the director to synchronize the images from registry.access.redhat.com and push them to the docker-distribution registry. When creating the overcloud, the overcloud pulls the container images from the undercloud's docker-distribution registry. This method allows you to store a registry internally, which can speed up the deployment and decrease network congestion. However, the undercloud only acts as a basic registry and provides limited life cycle management for container images.
The docker-distribution service acts separately from docker. docker is used to pull and push images to the docker-distribution registry and does not serve the images to the overcloud. The overcloud pulls the images from the docker-distribution registry.
- Satellite Server
- Manage the complete application life cycle of your container images and publish them through a Red Hat Satellite 6 server. The overcloud pulls the images from the Satellite server. This method provides an enterprise grade solution to store, manage, and deploy Red Hat OpenStack Platform containers.
Select a method from the list and continue configuring your registry details.
When building for a multi-architecture cloud, the local registry option is not supported.
4.2. Container image preparation command usage
This section provides an overview on how to use the openstack overcloud container image prepare command, including conceptual information on the command’s various options.
Generating a Container Image Environment File for the Overcloud
One of the main uses of the openstack overcloud container image prepare command is to create an environment file that contains a list of images the overcloud uses. You include this file with your overcloud deployment commands, such as openstack overcloud deploy. The openstack overcloud container image prepare command uses the following options for this function:
--output-env-file- Defines the resulting environment file name.
The following snippet is an example of this file’s contents:
parameter_defaults: DockerAodhApiImage: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest DockerAodhConfigImage: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest ...
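You then pass the resulting file to deployment and upgrade commands with the -e option. For example, assuming the file path used elsewhere in this guide:

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/overcloud_images.yaml \
  -e <OTHER ENVIRONMENT FILES>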
Generating a Container Image List for Import Methods
If you aim to import the OpenStack Platform container images to a different registry source, you can generate a list of images. The syntax of this list is primarily used to import container images to the container registry on the undercloud, but you can modify the format of this list to suit other import methods, such as Red Hat Satellite 6.
The openstack overcloud container image prepare command uses the following options for this function:
--output-images-file- Defines the resulting file name for the import list.
The following is an example of this file’s contents:
container_images: - imagename: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest - imagename: registry.access.redhat.com/rhosp13/openstack-aodh-evaluator:latest ...
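On the undercloud, this list is consumed by the image upload command, as shown in Section 4.5. For example:

(undercloud) $ sudo openstack overcloud container image upload \
  --config-file /home/stack/local_registry_images.yaml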
Setting the Namespace for Container Images
Both the --output-env-file and --output-images-file options require a namespace to generate the resulting image locations. The openstack overcloud container image prepare command uses the following options to set the source location of the container images to pull:
--namespace- Defines the namespace for the container images. This is usually a hostname or IP address with a directory.
--prefix- Defines the prefix to add before the image names.
As a result, the director generates the image names using the following format:
-
[NAMESPACE]/[PREFIX][IMAGE NAME]
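For example, --namespace=registry.access.redhat.com/rhosp13 and --prefix=openstack- produce image names such as:

registry.access.redhat.com/rhosp13/openstack-keystone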
Setting Container Image Tags
The openstack overcloud container image prepare command uses the latest tag for each container image by default. However, you can select a specific tag for an image version using one of the following options:
--tag-from-label- Use the value of the specified container image labels to discover the versioned tag for every image.
--tag- Sets the specific tag for all images. All OpenStack Platform container images use the same tag to provide version synchronicity. When used in combination with --tag-from-label, the versioned tag is discovered starting from this tag.
4.3. Container images for additional services
The director only prepares container images for core OpenStack Platform Services. Some additional features use services that require additional container images. You enable these services with environment files. The openstack overcloud container image prepare command uses the following option to include environment files and their respective container images:
-e- Include environment files to enable additional container images.
The following table provides a sample list of additional services that use container images and their respective environment file locations within the /usr/share/openstack-tripleo-heat-templates directory.
| Service | Environment File |
|---|---|
| Ceph Storage | environments/ceph-ansible/ceph-ansible.yaml |
| Collectd | environments/services-docker/collectd.yaml |
| Congress | environments/services-docker/congress.yaml |
| Fluentd | environments/services-docker/fluentd.yaml |
| OpenStack Bare Metal (ironic) | environments/services-docker/ironic.yaml |
| OpenStack Data Processing (sahara) | environments/services-docker/sahara.yaml |
| OpenStack EC2-API | environments/services-docker/ec2-api.yaml |
| OpenStack Key Manager (barbican) | environments/services-docker/barbican.yaml |
| OpenStack Load Balancing-as-a-Service (octavia) | environments/services-docker/octavia.yaml |
| OpenStack Shared File System Storage (manila) | environments/manila-{backend-name}-config.yaml |
| Open Virtual Network (OVN) | environments/services-docker/neutron-ovn-ha.yaml |
| Sensu | environments/services-docker/sensu-client.yaml |
The next few sections provide examples of including additional services.
Ceph Storage
If deploying a Red Hat Ceph Storage cluster with your overcloud, you need to include the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file. This file enables the composable containerized services in your overcloud and the director needs to know these services are enabled to prepare their images.
In addition to this environment file, you also need to define the Ceph Storage container location, which is different from the OpenStack Platform services. Use the --set option to set the following parameters specific to Ceph Storage:
--set ceph_namespace- Defines the namespace for the Ceph Storage container image. This functions similarly to the --namespace option.
--set ceph_image- Defines the name of the Ceph Storage container image. Usually, this is rhceph-3-rhel7.
--set ceph_tag- Defines the tag to use for the Ceph Storage container image. This functions similarly to the --tag option. When --tag-from-label is specified, the versioned tag is discovered starting from this tag.
The following snippet is an example that includes Ceph Storage in your container image files:
$ openstack overcloud container image prepare \
...
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
--set ceph_namespace=registry.access.redhat.com/rhceph \
--set ceph_image=rhceph-3-rhel7 \
--tag-from-label {version}-{release} \
...

OpenStack Bare Metal (ironic)
If deploying OpenStack Bare Metal (ironic) in your overcloud, you need to include the /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml environment file so the director can prepare the images. The following snippet is an example on how to include this environment file:
$ openstack overcloud container image prepare \ ... -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml \ ...
OpenStack Data Processing (sahara)
If deploying OpenStack Data Processing (sahara) in your overcloud, you need to include the /usr/share/openstack-tripleo-heat-templates/environments/services-docker/sahara.yaml environment file so the director can prepare the images. The following snippet is an example on how to include this environment file:
$ openstack overcloud container image prepare \ ... -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/sahara.yaml \ ...
4.4. Using the Red Hat registry as a remote registry source
Red Hat hosts the overcloud container images on registry.access.redhat.com. Pulling the images from a remote registry is the simplest method because the registry is already set up and all you require is the URL and namespace of the image you want to pull. However, during overcloud creation, the overcloud nodes all pull images from the remote repository, which can congest your external connection. If that is a problem, you can either:
- Set up a local registry
- Host the images on Red Hat Satellite 6
Procedure
To pull the images directly from registry.access.redhat.com in your overcloud deployment, an environment file is required to specify the image parameters. The following command automatically creates this environment file:

(undercloud) $ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/templates/overcloud_images.yaml

- Use the -e option to include any environment files for optional services.
- If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.

This creates an overcloud_images.yaml environment file, which contains image locations, on the undercloud. You include this file with your deployment.
4.5. Using the undercloud as a local registry
You can configure a local registry on the undercloud to store overcloud container images. This method involves the following:
- The director pulls each image from registry.access.redhat.com.
- The director pushes each image to the docker-distribution registry running on the undercloud.
- The director creates the overcloud.
- During the overcloud creation, the nodes pull the relevant images from the undercloud's docker-distribution registry.
This keeps network traffic for container images within your internal network, which does not congest your external network connection and can speed the deployment process.
Procedure
Find the address of the local undercloud registry. The address uses the following pattern:

<REGISTRY IP ADDRESS>:8787

Use the IP address of your undercloud, which you previously set with the local_ip parameter in your undercloud.conf file. For the commands below, the address is assumed to be 192.168.24.1:8787.

Create a template to upload the images to the local registry, and the environment file to refer to those images:
(undercloud) $ openstack overcloud container image prepare \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=192.168.24.1:8787 \ --prefix=openstack- \ --tag-from-label {version}-{release} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml-
Use the
-eoption to include any environment files for optional services. -
If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location:
--set ceph_namespace,--set ceph_image,--set ceph_tag.
-
Use the
This creates two files:
-
local_registry_images.yaml, which contains container image information from the remote source. Use this file to pull the images from the Red Hat Container Registry (registry.access.redhat.com) to the undercloud. overcloud_images.yaml, which contains the eventual image locations on the undercloud. You include this file with your deployment.Check that both files exist.
-
Pull the container images from
registry.access.redhat.comto the undercloud.(undercloud) $ sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose
Pulling the required images might take some time depending on the speed of your network and your undercloud disk.
Note: The container images consume approximately 10 GB of disk space.
The images are now stored on the undercloud's docker-distribution registry. To view the list of images on the undercloud's docker-distribution registry, use the following command:

(undercloud) $ curl http://192.168.24.1:8787/v2/_catalog | jq .repositories[]

To view a list of tags for a specific image, use the skopeo command:

(undercloud) $ skopeo inspect --tls-verify=false docker://192.168.24.1:8787/rhosp13/openstack-keystone | jq .RepoTags[]

To verify a tagged image, use the skopeo command:

(undercloud) $ skopeo inspect --tls-verify=false docker://192.168.24.1:8787/rhosp13/openstack-keystone:13.0-44
The registry configuration is ready.
4.6. Using a Satellite server as a registry
Red Hat Satellite 6 offers registry synchronization capabilities. This provides a method to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite server also acts as a registry for other container-enabled systems to use. For more detailed information on managing container images, see "Managing Container Images" in the Red Hat Satellite 6 Content Management Guide.
The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME. Substitute this organization for your own Satellite 6 organization.
Procedure
Create a template to pull images to the local registry:
$ source ~/stackrc
(undercloud) $ openstack overcloud container image prepare \
  --namespace=rhosp13 \
  --prefix=openstack- \
  --output-images-file /home/stack/satellite_images
- Use the -e option to include any environment files for optional services.
- If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.

Note: This version of the openstack overcloud container image prepare command targets the registry on registry.access.redhat.com to generate an image list. It uses different values than the openstack overcloud container image prepare command used in a later step.

This creates a file called satellite_images with your container image information. You will use this file to synchronize container images to your Satellite 6 server.

Remove the YAML-specific information from the satellite_images file and convert it into a flat file containing only the list of images. The following command accomplishes this:

(undercloud) $ awk -F ':' '{if (NR!=1) {gsub("[[:space:]]", ""); print $2}}' ~/satellite_images > ~/satellite_images_names

This provides a list of images that you pull into the Satellite server.
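The flat file contains one image path per line. For example, with the rhosp13 namespace used above (exact entries depend on your enabled services):

rhosp13/openstack-aodh-api
rhosp13/openstack-aodh-evaluator
...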
- Copy the satellite_images_names file to a system that contains the Satellite 6 hammer tool. Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the undercloud.
- Run the following hammer command to create a new product (OSP13 Containers) in your Satellite organization:

$ hammer product create \
  --organization "ACME" \
  --name "OSP13 Containers"
This custom product will contain the images.
Add the base container image to the product:
$ hammer repository create \
  --organization "ACME" \
  --product "OSP13 Containers" \
  --content-type docker \
  --url https://registry.access.redhat.com \
  --docker-upstream-name rhosp13/openstack-base \
  --name base
Add the overcloud container images from the satellite_images_names file:

$ while read IMAGE; do \
  IMAGENAME=$(echo $IMAGE | cut -d"/" -f2 | sed "s/openstack-//g" | sed "s/:.*//g") ; \
  hammer repository create \
    --organization "ACME" \
    --product "OSP13 Containers" \
    --content-type docker \
    --url https://registry.access.redhat.com \
    --docker-upstream-name $IMAGE \
    --name $IMAGENAME ; done < satellite_images_names
Synchronize the container images:
$ hammer product synchronize \
  --organization "ACME" \
  --name "OSP13 Containers"
Wait for the Satellite server to complete synchronization.
Note: Depending on your configuration, hammer might ask for your Satellite server username and password. You can configure hammer to log in automatically using a configuration file. See the "Authentication" section in the Hammer CLI Guide.

- If your Satellite 6 server uses content views, create a new content view version to incorporate the images.
Check the tags available for the base image:

$ hammer docker tag list --repository "base" \
  --organization "ACME" \
  --product "OSP13 Containers"
This displays tags for the OpenStack Platform container images.
Return to the undercloud and generate an environment file for the images on your Satellite server. The following is an example command for generating the environment file:
(undercloud) $ openstack overcloud container image prepare \
  --namespace=satellite6.example.com:5000 \
  --prefix=acme-osp13_containers- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/templates/overcloud_images.yaml

Note: This version of the openstack overcloud container image prepare command targets the Satellite server. It uses different values than the openstack overcloud container image prepare command used in a previous step.
-
- --namespace - The URL and port of the registry on the Satellite server. The default registry port on Red Hat Satellite is 5000. For example, --namespace=satellite6.example.com:5000.
- --prefix= - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views:
  - If you use content views, the structure is [org]-[environment]-[content view]-[product]-. For example: acme-production-myosp13-osp13_containers-.
  - If you do not use content views, the structure is [org]-[product]-. For example: acme-osp13_containers-.
- --tag-from-label {version}-{release} - Identifies the latest tag for each image.
- -e - Include any environment files for optional services.
- --set ceph_namespace, --set ceph_image, --set ceph_tag - If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the --prefix option. For example: --set ceph_image=acme-osp13_containers-rhceph-3-rhel7. This ensures the overcloud uses the Ceph container image with the Satellite naming convention.

This creates an overcloud_images.yaml environment file, which contains the image locations on the Satellite server. You include this file with your deployment.
The registry configuration is ready.
4.7. Next Steps
You now have an overcloud_images.yaml environment file that contains a list of your container image sources. Include this file with all future upgrade and deployment operations.
You can now prepare the overcloud for the upgrade.
Chapter 5. Preparing for the Overcloud Upgrade
This process prepares the overcloud for the upgrade process.
Prerequisites
- You have upgraded the undercloud to the latest version.
5.1. Preparing Overcloud Registration Details
You need to provide the overcloud with the latest subscription details to ensure the overcloud consumes the latest packages during the upgrade process.
Prerequisites
- A subscription containing the latest OpenStack Platform repositories.
- If using activation keys for registration, create a new activation key including the new OpenStack Platform repositories.
Procedure
Edit the environment file containing your registration details. For example:
$ vi ~/templates/rhel-registration/environment-rhel-registration.yaml
Edit the following parameter values:
rhel_reg_repos- Update to include the new repositories for Red Hat OpenStack Platform 13.
rhel_reg_activation_key- Update the activation key to access the Red Hat OpenStack Platform 13 repositories.
rhel_reg_sat_repo- If using a newer version of Red Hat Satellite 6, update the repository containing Satellite 6’s management tools.
- Save the environment file.
Related Information
- For more information about registration parameters, see "Registering the Overcloud with an Environment File" in the Advanced Overcloud Customizations guide.
5.2. Deprecated parameters
Note that the following parameters are deprecated and have been replaced with role-specific parameters:
| Old Parameter | New Parameter |
|---|---|
| controllerExtraConfig | ControllerExtraConfig |
| OvercloudControlFlavor | OvercloudControllerFlavor |
| controllerImage | ControllerImage |
| NovaImage | ComputeImage |
| NovaComputeExtraConfig | ComputeExtraConfig |
| NovaComputeServerMetadata | ComputeServerMetadata |
| NovaComputeSchedulerHints | ComputeSchedulerHints |
| NovaComputeIPs | ComputeIPs |
| SwiftStorageServerMetadata | ObjectStorageServerMetadata |
| SwiftStorageIPs | ObjectStorageIPs |
| SwiftStorageImage | ObjectStorageImage |
| OvercloudSwiftStorageFlavor | OvercloudObjectStorageFlavor |
Update these parameters in your custom environment files.
If your OpenStack Platform environment still requires these deprecated parameters, the default roles_data file allows their use. However, if you are using a custom roles_data file and your overcloud still requires these deprecated parameters, you can allow access to them by editing the roles_data file and adding the following to each role:
Controller Role
- name: Controller
  uses_deprecated_params: True
  deprecated_param_extraconfig: 'controllerExtraConfig'
  deprecated_param_flavor: 'OvercloudControlFlavor'
  deprecated_param_image: 'controllerImage'
  ...
Compute Role
- name: Compute
  uses_deprecated_params: True
  deprecated_param_image: 'NovaImage'
  deprecated_param_extraconfig: 'NovaComputeExtraConfig'
  deprecated_param_metadata: 'NovaComputeServerMetadata'
  deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints'
  deprecated_param_ips: 'NovaComputeIPs'
  deprecated_server_resource_name: 'NovaCompute'
  disable_upgrade_deployment: True
  ...
Object Storage Role
- name: ObjectStorage
  uses_deprecated_params: True
  deprecated_param_metadata: 'SwiftStorageServerMetadata'
  deprecated_param_ips: 'SwiftStorageIPs'
  deprecated_param_image: 'SwiftStorageImage'
  deprecated_param_flavor: 'OvercloudSwiftStorageFlavor'
  disable_upgrade_deployment: True
  ...
5.3. Deprecated CLI options
Some command line options are outdated or deprecated in favor of using Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated options to their Heat template equivalents.
Table 5.1. Mapping deprecated CLI options to Heat template parameters
| Option | Description | Heat Template Parameter |
|---|---|---|
| --control-scale | The number of Controller nodes to scale out | ControllerCount |
| --compute-scale | The number of Compute nodes to scale out | ComputeCount |
| --ceph-storage-scale | The number of Ceph Storage nodes to scale out | CephStorageCount |
| --block-storage-scale | The number of Cinder nodes to scale out | BlockStorageCount |
| --swift-storage-scale | The number of Swift nodes to scale out | ObjectStorageCount |
| --control-flavor | The flavor to use for Controller nodes | OvercloudControllerFlavor |
| --compute-flavor | The flavor to use for Compute nodes | OvercloudComputeFlavor |
| --ceph-storage-flavor | The flavor to use for Ceph Storage nodes | OvercloudCephStorageFlavor |
| --block-storage-flavor | The flavor to use for Cinder nodes | OvercloudBlockStorageFlavor |
| --swift-storage-flavor | The flavor to use for Swift storage nodes | OvercloudObjectStorageFlavor |
| --neutron-flat-networks | Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation | NeutronFlatNetworks |
| --neutron-physical-bridge | An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed | HypervisorNeutronPhysicalBridge |
| --neutron-bridge-mappings | The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network | NeutronBridgeMappings |
| --neutron-public-interface | Defines the interface to bridge onto br-ex for network nodes | NeutronPublicInterface |
| --neutron-network-type | The tenant network type for Neutron | NeutronNetworkType |
| --neutron-tunnel-types | The tunnel types for the Neutron tenant network. To specify multiple values, use a comma separated string | NeutronTunnelTypes |
| --neutron-tunnel-id-ranges | Ranges of GRE tunnel IDs to make available for tenant network allocation | NeutronTunnelIdRanges |
| --neutron-vni-ranges | Ranges of VXLAN VNI IDs to make available for tenant network allocation | NeutronVniRanges |
| --neutron-network-vlan-ranges | The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the 'datacentre' physical network | NeutronNetworkVLANRanges |
| --neutron-mechanism-drivers | The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string | NeutronMechanismDrivers |
| --neutron-disable-tunneling | Disables tunneling in case you aim to use a VLAN segmented network or flat network with Neutron | No parameter mapping. |
| --validation-errors-fatal | The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. | No parameter mapping |
| --ntp-server | Sets the NTP server to use to synchronize time | NtpServer |
These options have been removed from Red Hat OpenStack Platform. It is recommended to convert your CLI options to Heat parameters and add them to an environment file.
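For example, a deployment that previously passed --control-scale and --ntp-server on the command line would instead include an environment file such as the following (parameter names per the table above):

parameter_defaults:
  ControllerCount: 3
  NtpServer: pool.ntp.org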
5.4. Composable networks
This version of Red Hat OpenStack Platform introduces a new feature for composable networks. If using a custom roles_data file, edit the file to add the composable networks to each role. For example, for Controller nodes:
- name: Controller
networks:
- External
- InternalApi
- Storage
- StorageMgmt
- Tenant
Check the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file for further examples of syntax. Also check the example role snippets in /usr/share/openstack-tripleo-heat-templates/roles.
The following table provides a mapping of composable networks to custom standalone roles:
| Role | Networks Required |
|---|---|
| Ceph Storage Monitor | Storage, StorageMgmt |
| Ceph Storage OSD | Storage, StorageMgmt |
| Ceph Storage RadosGW | Storage, StorageMgmt |
| Cinder API | InternalApi |
| Compute | InternalApi, Tenant, Storage |
| Controller | External, InternalApi, Storage, StorageMgmt, Tenant |
| Database | InternalApi |
| Glance | InternalApi, Storage |
| Heat | InternalApi |
| Horizon | InternalApi |
| Ironic | None required. Uses the Provisioning/Control Plane network for API. |
| Keystone | InternalApi |
| Load Balancer | External, InternalApi, Storage, StorageMgmt, Tenant |
| Manila | InternalApi, Storage, StorageMgmt |
| Message Bus | InternalApi |
| Networker | InternalApi, Tenant |
| Neutron API | InternalApi |
| Nova | InternalApi |
| OpenDaylight | External, InternalApi, Tenant |
| Redis | InternalApi |
| Sahara | InternalApi, Storage |
| Swift API | InternalApi, Storage |
| Swift Storage | StorageMgmt |
| Telemetry | InternalApi, Storage |
In previous versions, the *NetName parameters (e.g. InternalApiNetName) changed the names of the default networks. This is no longer supported. Use a custom composable network file. For more information, see "Using Composable Networks" in the Advanced Overcloud Customization guide.
5.5. Checking custom Puppet parameters
If you use the ExtraConfig interfaces for customizations of Puppet parameters, Puppet might report duplicate declaration errors during the upgrade. This is due to changes in the interfaces provided by the puppet modules themselves.
This procedure shows how to check for any custom ExtraConfig hieradata parameters in your environment files.
Procedure
Select an environment file and check whether it has an ExtraConfig parameter:

$ grep ExtraConfig ~/templates/custom-config.yaml
- If the results show an ExtraConfig parameter for any role (e.g. ControllerExtraConfig) in the chosen file, check the full parameter structure in that file.
- If the parameter contains any Puppet hieradata with a SECTION/parameter syntax followed by a value, it might have been replaced with a parameter with an actual Puppet class. For example:

parameter_defaults:
  ExtraConfig:
    neutron::config::dhcp_agent_config:
      'DEFAULT/dnsmasq_local_resolv':
        value: 'true'

Check the director's Puppet modules to see if the parameter now exists within a Puppet class. For example:
$ grep dnsmasq_local_resolv
If so, change to the new interface.
The following are examples to demonstrate the change in syntax:
Example 1:

parameter_defaults:
  ExtraConfig:
    neutron::config::dhcp_agent_config:
      'DEFAULT/dnsmasq_local_resolv':
        value: 'true'

Changes to:

parameter_defaults:
  ExtraConfig:
    neutron::agents::dhcp::dnsmasq_local_resolv: true

Example 2:

parameter_defaults:
  ExtraConfig:
    ceilometer::config::ceilometer_config:
      'oslo_messaging_rabbit/rabbit_qos_prefetch_count':
        value: '32'

Changes to:

parameter_defaults:
  ExtraConfig:
    oslo::messaging::rabbit::rabbit_qos_prefetch_count: '32'
5.6. Converting network interface templates to the new structure
Previously, the network interface structure used an OS::Heat::StructuredConfig resource to configure interfaces:
resources:
OsNetConfigImpl:
type: OS::Heat::StructuredConfig
properties:
group: os-apply-config
config:
os_net_config:
network_config:
[NETWORK INTERFACE CONFIGURATION HERE]
The templates now use an OS::Heat::SoftwareConfig resource for configuration:
resources:
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
params:
$network_config:
network_config:
[NETWORK INTERFACE CONFIGURATION HERE]
This configuration takes the interface configuration stored in the $network_config variable and injects it as a part of the run-os-net-config.sh script.
You must update your network interface templates to use this new structure and check that your network interface templates still conform to the syntax. Not doing so can cause failure during the upgrade process.
The director’s Heat template collection contains a script to help convert your templates to this new format. This script is located in /usr/share/openstack-tripleo-heat-templates/tools/yaml-nic-config-2-script.py. For an example of usage:
$ /usr/share/openstack-tripleo-heat-templates/tools/yaml-nic-config-2-script.py \
--script-dir /usr/share/openstack-tripleo-heat-templates/network/scripts \
  [NIC TEMPLATE] [NIC TEMPLATE] ...

Ensure your templates do not contain any commented lines when using this script; commented lines can cause errors when parsing the old template structure.
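To find commented lines in a template before converting it, you can use a simple grep. Review the matches manually, since a # may also appear inside legitimate strings:

$ grep -n '^[[:space:]]*#' [NIC TEMPLATE]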
For more information, see "Isolating Networks".
If you enabled High Availability for Compute Instances (Instance HA) in Red Hat OpenStack Platform 12 or earlier and you want to perform an upgrade to version 13 or later, you must manually disable Instance HA first. For instructions, see Disabling Instance HA from previous versions.
5.7. Next Steps
The overcloud preparation stage is complete. You can now perform an upgrade of the overcloud to 13 using the steps in Chapter 6, Upgrading the Overcloud.
Chapter 6. Upgrading the Overcloud
This process upgrades the overcloud.
Prerequisites
- You have upgraded the undercloud to the latest version.
- You have prepared your custom environment files to accommodate the changes in the upgrade.
6.1. Running the overcloud upgrade preparation
The upgrade requires running the openstack overcloud upgrade prepare command, which performs the following tasks:
- Updates the overcloud plan to OpenStack Platform 13
- Prepares the nodes for the upgrade
Procedure
Source the stackrc file:

$ source ~/stackrc
Run the upgrade preparation command:

$ openstack overcloud upgrade prepare \
  --templates \
  -e /home/stack/templates/overcloud_images.yaml \
  -e <ENVIRONMENT FILE>

Include the following options relevant to your environment:
- Custom configuration environment files (-e)
- The environment file with your new container image locations (-e). Note that the upgrade command might display a warning about using the --container-registry-file. You can ignore this warning as this option is deprecated in favor of using -e for the container image environment file.
- If applicable, your custom roles (roles_data) file using --roles-file.
- If applicable, your composable network (network_data) file using --networks-file.
- Wait until the upgrade preparation completes.
6.2. Upgrading all Controller nodes
This process upgrades all the Controller nodes to OpenStack Platform 13. The process involves running the openstack overcloud upgrade run command and including the --nodes Controller option to restrict operations to the Controller nodes only.
Procedure
Source the stackrc file:

$ source ~/stackrc
Run the upgrade command:
$ openstack overcloud upgrade run --nodes Controller
- If using a custom stack name, pass the name with the --stack option.
- Wait until the Controller node upgrade completes.
6.3. Upgrading all Compute nodes
This process upgrades all remaining Compute nodes to OpenStack Platform 13. The process involves running the openstack overcloud upgrade run command and including the --nodes Compute option to restrict operations to the Compute nodes only.
Procedure
Source the stackrc file:

$ source ~/stackrc
Run the upgrade command:
$ openstack overcloud upgrade run --nodes Compute
- If using a custom stack name, pass the name with the --stack option.
- Wait until the Compute node upgrade completes.
6.4. Upgrading all Ceph Storage nodes
This process upgrades the Ceph Storage nodes. The process involves:
- Running the openstack overcloud upgrade run command and including the --nodes CephStorage option to restrict operations to the Ceph Storage nodes only.
- Running the openstack overcloud ceph-upgrade run command to perform an upgrade to a containerized Red Hat Ceph Storage 3 cluster.
Procedure
Source the stackrc file:

$ source ~/stackrc
Run the upgrade command:
$ openstack overcloud upgrade run --nodes CephStorage
- If using a custom stack name, pass the name with the --stack option.
- Wait until the node upgrade completes.
Run the Ceph Storage upgrade command. For example:

$ openstack overcloud ceph-upgrade run \
  --templates \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/ceph-customization.yaml \
  -e <ENVIRONMENT FILE>

Include the following options relevant to your environment:
- Custom configuration environment files (-e). For example:
  - The environment file with your container image locations (overcloud_images.yaml). Note that the upgrade command might display a warning about using the --container-registry-file. You can ignore this warning as this option is deprecated in favor of using -e for the container image environment file.
  - The relevant environment files for your Ceph Storage nodes.
  - Any additional environment files relevant to your environment.
- If using a custom stack name, pass the name with the --stack option.
- If applicable, your custom roles (roles_data) file using --roles-file.
- If applicable, your composable network (network_data) file using --networks-file.
- Wait until the Ceph Storage node upgrade completes.
6.5. Finalizing the upgrade
The upgrade requires a final step to update the overcloud stack. This ensures the stack’s resource structure aligns with a regular deployment of OpenStack Platform 13 and allows you to perform standard openstack overcloud deploy functions in the future.
Procedure
Source the stackrc file:

$ source ~/stackrc

Run the upgrade finalization command:

$ openstack overcloud upgrade converge \
  --templates \
  -e <ENVIRONMENT FILE>

Include the following options relevant to your environment:
- Custom configuration environment files (-e).
- If using a custom stack name, pass the name with the --stack option.
- If applicable, your custom roles (roles_data) file using --roles-file.
- If applicable, your composable network (network_data) file using --networks-file.
- Wait until the upgrade finalization completes.
Chapter 7. Executing Post Upgrade Steps
This process implements final steps after completing the main upgrade process.
Prerequisites
- You have completed the overcloud upgrade to the latest major release.
7.1. General Considerations after an Overcloud Upgrade
The following items are general considerations after an overcloud upgrade:
- If necessary, review the resulting configuration files on the overcloud nodes. The upgraded packages might have installed .rpmnew files appropriate to the upgraded version of each service.
- The Compute nodes might report a failure with neutron-openvswitch-agent. If this occurs, log into each Compute node and restart the service. For example:

$ sudo systemctl restart neutron-openvswitch-agent

- In some circumstances, the corosync service might fail to start on IPv6 environments after rebooting Controller nodes. This is due to Corosync starting before the Controller node configures the static IPv6 addresses. In these situations, restart Corosync manually on the Controller nodes:

$ sudo systemctl restart corosync