Chapter 6. Upgrading the Red Hat Hyperconverged Infrastructure for Cloud Solution
As a technician, you can upgrade the Red Hat Hyperconverged Infrastructure for Cloud solution by taking advantage of Red Hat OpenStack Platform’s Fast-Forward Upgrade (FFU) feature. This document covers upgrades from Red Hat OpenStack Platform 10 (Newton) to 13 (Queens), and from Red Hat Ceph Storage 2 to 3.
Basic Upgrade Workflow
- Prepare the environment
- Upgrade the undercloud
- Obtain updated container images
- Prepare the overcloud
- Perform the fast-forward upgrades
- Upgrade the hyperconverged Monitor/Controller nodes
- Upgrade the hyperconverged OSD/Compute nodes
- Upgrade Red Hat Ceph Storage
- Finalize the upgrade
6.1. Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
6.2. Introducing the Fast-Forward Upgrade Process
The Fast-Forward Upgrade (FFU) feature provides an upgrade path spanning multiple Red Hat OpenStack Platform versions. This feature allows users to upgrade from the current long-life release to the next long-life release of the Red Hat OpenStack Platform.
Currently, the supported FFU path is from Red Hat OpenStack Platform 10 (Newton) to 13 (Queens).
Additional Resources
- See the Fast-Forward Upgrades Guide for more details.
6.3. Preparing to Do a Red Hat OpenStack Platform Upgrade
As a technician, you need to perform a series of tasks before proceeding with the upgrade process. This process involves the following basic steps:
- Backing up the undercloud and overcloud.
- Updating the undercloud to the latest minor version of Red Hat OpenStack Platform 10.
- Rebooting the undercloud, if newer packages are installed.
- Updating the overcloud images.
- Updating the overcloud to the latest minor version of Red Hat OpenStack Platform 10.
- Rebooting the overcloud nodes, if newer packages are installed.
- Performing validation checks on both the undercloud and overcloud.
Completing these steps ensures that the existing Red Hat Hyperconverged Infrastructure for Cloud environment is in the best possible state before proceeding with an upgrade.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
6.3.1. Backing Up the Undercloud
A full undercloud backup includes the following databases and files:
- All MariaDB databases on the undercloud node
- MariaDB configuration file on the undercloud, for accurately restoring databases
- All swift data: /srv/node
- All data in the stack user's home directory: /home/stack
- The undercloud SSL certificates:
  - /etc/pki/ca-trust/source/anchors/ca.crt.pem
  - /etc/pki/instack-certs/undercloud.pem
Confirm that you have sufficient disk space available before performing the backup process. Expect the tarball to be at least 3.5 GB, and likely larger.
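For example, a quick check of the free space on the file system that will hold the archive (a minimal sketch; adjust the path if you write the backup elsewhere):
[root@director ~]# df -h /root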
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
- Log into the undercloud as the root user.
- Back up the database:
[root@director ~]# mysqldump --opt --all-databases > /root/undercloud-all-databases.sql
Archive the database backup and the configuration files:
[root@director ~]# tar --xattrs -czf undercloud-backup-`date +%F`.tar.gz /root/undercloud-all-databases.sql /etc/my.cnf.d/server.cnf /srv/node /home/stack /etc/pki/instack-certs/undercloud.pem /etc/pki/ca-trust/source/anchors/ca.crt.pem
This creates a file named undercloud-backup-[timestamp].tar.gz.
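Before relying on the backup, you can list the archive contents to confirm that everything was captured; a minimal sketch, assuming you run it on the same day the backup was created:
[root@director ~]# tar -tzf undercloud-backup-`date +%F`.tar.gz | head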
Additional Resources
- If you need to restore the undercloud backup, see the Restore chapter in the Back Up and Restore the Director Undercloud guide.
6.3.2. Backing Up the Overcloud Control Plane Services
The following procedure creates a backup of the overcloud databases and configuration. While most of the overcloud configuration can be recreated using the openstack overcloud deploy command, a backup of the overcloud database and services ensures you have a snapshot of a working environment. Having this snapshot helps if you need to restore the overcloud to its original state after an operational failure.
This procedure only includes crucial control plane services. It does not include backups of Compute node workloads or of data on Red Hat Ceph Storage nodes.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
Perform the database backup:
Log into a Controller node. You can access the overcloud from the undercloud:
$ ssh heat-admin@192.0.2.100
Change to the root user:
$ sudo -i
Create a temporary directory to store the backups:
# mkdir -p /var/tmp/mysql_backup/
Obtain the database password and store it in the MYSQLDBPASS environment variable. The password is stored in the mysql::server::root_password variable within the /etc/puppet/hieradata/service_configs.json file. Use the following command to store the password:
# MYSQLDBPASS=$(sudo hiera mysql::server::root_password)
Back up the database:
# mysql -uroot -p$MYSQLDBPASS -e "select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';" \
  -s -N | xargs mysqldump -uroot -p$MYSQLDBPASS --single-transaction --databases > /var/tmp/mysql_backup/openstack_databases-`date +%F`-`date +%T`.sql
This dumps a database backup called /var/tmp/mysql_backup/openstack_databases-<date>.sql, where <date> is the system date and time.
Back up all the user and permission information:
# mysql -uroot -p$MYSQLDBPASS -e "SELECT CONCAT('\"SHOW GRANTS FOR ''',user,'''@''',host,''';\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')" \
  -s -N | xargs -n1 mysql -uroot -p$MYSQLDBPASS -s -N -e | sed 's/$/;/' > /var/tmp/mysql_backup/openstack_databases_grants-`date +%F`-`date +%T`.sql
This dumps a database backup called /var/tmp/mysql_backup/openstack_databases_grants-<date>.sql, where <date> is the system date and time.
Back up the OpenStack Telemetry database:
Connect to any controller and get the IP of the MongoDB primary instance:
# MONGOIP=$(sudo hiera mongodb::server::bind_ip)
Create the backup:
# mkdir -p /var/tmp/mongo_backup/
# mongodump --oplog --host $MONGOIP --out /var/tmp/mongo_backup/
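When the dump completes, each database appears as a subdirectory of the output directory; a quick listing confirms this (an optional check, not part of the original procedure):
# ls -l /var/tmp/mongo_backup/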
Back up the Redis cluster:
Obtain the Redis endpoint from HAProxy:
# REDISIP=$(sudo hiera redis_vip)
Obtain the master password for the Redis cluster:
# REDISPASS=$(sudo hiera redis::masterauth)
Check connectivity to the Redis cluster:
# redis-cli -a $REDISPASS -h $REDISIP ping
Dump the Redis database:
# redis-cli -a $REDISPASS -h $REDISIP bgsave
This stores the database backup in the default /var/lib/redis/ directory.
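Because bgsave runs in the background, you can confirm the dump finished before moving on by polling the timestamp of the last successful save (an optional check using the standard Redis lastsave command):
# redis-cli -a $REDISPASS -h $REDISIP lastsave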
Back up the file system:
Create a directory for the backup:
# mkdir -p /var/tmp/filesystem_backup/
Run the following tar command:
# tar --ignore-failed-read \
    -zcvf /var/tmp/filesystem_backup/fs_backup-`date '+%Y-%m-%d-%H-%M-%S'`.tar.gz \
    /etc/nova \
    /var/log/nova \
    /var/lib/nova \
    --exclude /var/lib/nova/instances \
    /etc/glance \
    /var/log/glance \
    /var/lib/glance \
    /etc/keystone \
    /var/log/keystone \
    /var/lib/keystone \
    /etc/httpd \
    /etc/cinder \
    /var/log/cinder \
    /var/lib/cinder \
    /etc/heat \
    /var/log/heat \
    /var/lib/heat \
    /var/lib/heat-config \
    /var/lib/heat-cfntools \
    /etc/rabbitmq \
    /var/log/rabbitmq \
    /var/lib/rabbitmq \
    /etc/neutron \
    /var/log/neutron \
    /var/lib/neutron \
    /etc/corosync \
    /etc/haproxy \
    /etc/logrotate.d/haproxy \
    /var/lib/haproxy \
    /etc/openvswitch \
    /var/log/openvswitch \
    /var/lib/openvswitch \
    /etc/ceilometer \
    /var/log/ceilometer \
    /var/lib/redis \
    /etc/sysconfig/memcached \
    /etc/gnocchi \
    /var/log/gnocchi \
    /etc/aodh \
    /var/log/aodh \
    /etc/panko \
    /var/log/panko
The --ignore-failed-read option ignores any missing directories, which is useful if certain services are not used or are separated onto their own custom roles.
6.3.3. Updating the Current Undercloud Packages for OpenStack Platform 10.z
The director provides commands to update the packages on the undercloud node. This allows you to perform a minor update within the current version of the OpenStack Platform environment. This is a minor update within OpenStack Platform 10.
This procedure also updates the operating system packages to the latest version of Red Hat Enterprise Linux.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
- Log into the undercloud as the stack user.
- Stop the main OpenStack Platform services:
(undercloud) [stack@director ~]$ sudo systemctl stop 'openstack-*' 'neutron-*' httpd
Note: This causes a short period of downtime for the undercloud. The overcloud is still functional during the undercloud upgrade.
Update the python-tripleoclient package and its dependencies to ensure you have the latest scripts for the minor version update:
(undercloud) [stack@director ~]$ sudo yum update python-tripleoclient
Run the openstack undercloud upgrade command:
(undercloud) [stack@director ~]$ openstack undercloud upgrade
Wait until this command completes its execution.
Reboot the undercloud to update the operating system’s kernel and other system packages:
(undercloud) [stack@director ~]$ sudo reboot
- Wait until the node boots.
- Log into the undercloud as the stack user.
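As an optional sanity check after the reboot, confirm that no undercloud services failed to start; this mirrors the validation command used later in this chapter:
[stack@director ~]$ sudo systemctl list-units --state=failed 'openstack-*' 'neutron-*' httpd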
6.3.4. Updating the Current Overcloud Images for Red Hat OpenStack Platform 10.z
In addition to undercloud package updates, Red Hat recommends keeping the overcloud images up to date and keeping the image configuration in sync with the latest openstack-tripleo-heat-templates package. This ensures successful deployment and scaling operations between the current preparation stage and the actual fast-forward upgrade. The undercloud update process might download new image archives from the rhosp-director-images and rhosp-director-images-ipa packages. This process updates these images on the undercloud within Red Hat OpenStack Platform 10.
Prerequisites
- Update to the latest minor release of the current undercloud version.
Procedure
Check the yum log to determine if new image archives are available:
(undercloud) [stack@director ~]$ sudo grep "rhosp-director-images" /var/log/yum.log
If new archives are available, replace the current images with the new images. To install the new images, first remove any existing images from the images directory in the stack user's home directory (/home/stack/images):
(undercloud) [stack@director ~]$ rm -rf ~/images/*
Extract the archives:
(undercloud) [stack@director ~]$ cd ~/images
(undercloud) [stack@director ~]$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar; do tar -xvf $i; done
Import the latest images into the director and configure nodes to use the new images:
(undercloud) [stack@director ~]$ cd ~
(undercloud) [stack@director ~]$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
(undercloud) [stack@director ~]$ openstack overcloud node configure $(openstack baremetal node list -c UUID -f csv --quote none | sed "1d" | paste -s -d " ")
To finalize the image update, verify the existence of the new images:
(undercloud) [stack@director ~]$ openstack image list
(undercloud) [stack@director ~]$ ls -l /httpboot
The director also retains the old images and renames them using the timestamp of when they were updated. If you no longer need these images, delete them. The director is now updated and using the latest images.
Note: You do not need to restart any services after the update.
6.3.5. Updating the Current Overcloud Packages for Red Hat OpenStack Platform 10.z
The director provides commands to update the packages on all overcloud nodes. This allows you to perform a minor update within the current version of the Red Hat OpenStack Platform environment. This is a minor update within Red Hat OpenStack Platform 10.
The update process does not reboot any nodes in the Overcloud automatically.
This procedure also updates the operating system packages to the latest version of Red Hat Enterprise Linux. Updates to the kernel and other system packages require a reboot.
Prerequisites
- Updated to the latest minor release of the current undercloud version.
- Performed a backup of the overcloud.
Procedure
Update the current plan using the original openstack overcloud deploy command, including the --update-plan-only option. For example:
(undercloud) [stack@director ~]$ openstack overcloud deploy --update-plan-only \
  --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/custom-templates/network-environment.yaml \
  -e /home/stack/custom-templates/storage-environment.yaml \
  -e /home/stack/custom-templates/rhel-registration/environment-rhel-registration.yaml \
  [-e <environment_file>|...]
The --update-plan-only option only updates the overcloud plan stored in the director. Use the -e option to include environment files relevant to the overcloud and its update path. The order of the environment files is important, as the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:
- Any network isolation files, including the initialization file (environments/network-isolation.yaml) from the heat template collection, and then the custom NIC configuration file.
- Any external load balancing environment files.
- Any storage environment files.
- Any environment files for Red Hat CDN or Satellite registration.
- Any other custom environment files.
Perform a package update on all nodes using the openstack overcloud update command. For example:
(undercloud) [stack@director ~]$ openstack overcloud update stack -i overcloud
The -i option runs an interactive mode to update each node. When the update process completes a node update, the script provides a breakpoint for you to confirm. Without the -i option, the update remains paused at the first breakpoint. Therefore, it is mandatory to include the -i option.
Note: Running an update on all nodes in parallel can cause problems. For example, an update of a package might involve restarting a service, which can disrupt other nodes. This is why the process updates each node using a set of breakpoints. This means nodes are updated one by one. When one node completes the package update, the update process moves to the next node.
The update process starts. During this process, the director reports an IN_PROGRESS status and periodically prompts you to clear breakpoints. For example:
not_started: [u'overcloud-controller-0', u'overcloud-controller-1', u'overcloud-controller-2']
on_breakpoint: [u'overcloud-compute-0']
Breakpoint reached, continue? Regexp or Enter=proceed, no=cancel update, C-c=quit interactive mode:
Press Enter to clear the breakpoint from the last node on the on_breakpoint list. This begins the update for that node. You can also type a node name to clear a breakpoint on a specific node, or a Python-based regular expression to clear breakpoints on multiple nodes at once. However, it is not recommended to clear breakpoints on multiple Controller nodes at once. Continue this process until all nodes have completed their update.
The update command reports a COMPLETE status when the update completes:
...
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
COMPLETE
update finished with status COMPLETE
If you configured fencing for the Controller nodes, the update process might disable it. When the update process completes, re-enable fencing with the following command on one of the Controller nodes:
(undercloud) [stack@director ~]$ sudo pcs property set stonith-enabled=true
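To confirm that fencing is active again, you can query the cluster property (an optional check, not part of the original procedure):
[heat-admin@overcloud-controller-0 ~]$ sudo pcs property show stonith-enabled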
6.3.6. Rebooting the Controller and Composable Nodes
The following procedure reboots controller nodes and standalone nodes based on composable roles. This excludes Compute nodes and Red Hat Ceph Storage nodes.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
- Select a node and log in to it.
Reboot the node:
[heat-admin@overcloud-controller-0 ~]$ sudo reboot
- Wait until the node boots.
Log into the node and check the services. For example:
If the node uses Pacemaker services, check the node has rejoined the cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
If the node uses Systemd services, check all services are enabled:
[heat-admin@overcloud-controller-0 ~]$ sudo systemctl status
6.3.7. Rebooting a Red Hat Ceph Storage Cluster
The following procedure reboots the Red Hat Ceph Storage nodes.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
Log into a hyperconverged Controller/Ceph Monitor node and disable the storage cluster rebalancing feature temporarily:
[stack@cntr-mon ~]$ sudo ceph osd set noout
[stack@cntr-mon ~]$ sudo ceph osd set norebalance
- Select the first Ceph Storage node to reboot and log into it.
Reboot the node:
[stack@cntr-mon ~]$ sudo reboot
- Wait until the node boots.
Log into the node and check the cluster status:
[stack@cntr-mon ~]$ sudo ceph -s
Check that the pgmap reports all pgs as normal (active+clean).
- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph storage nodes.
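If you prefer to watch the recovery rather than re-running the status command manually, a simple loop such as the following works (a sketch; adjust the interval as needed):
[stack@cntr-mon ~]$ watch -n 10 "sudo ceph -s"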
When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:
[stack@cntr-mon ~]$ sudo ceph osd unset noout
[stack@cntr-mon ~]$ sudo ceph osd unset norebalance
Perform a final status check to verify that the cluster reports HEALTH_OK:
[stack@cntr-mon ~]$ sudo ceph status
6.3.8. Rebooting the Compute Nodes
The following procedure reboots the Compute nodes. To ensure minimal downtime of instances in the Red Hat OpenStack Platform environment, this procedure also includes instructions on migrating instances from the chosen Compute node. This involves the following workflow:
- Select a Compute node to reboot and disable it so that it does not provision new instances
- Migrate the instances to another Compute node
- Reboot the empty Compute node and enable it
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
- Log into the undercloud as the stack user.
- List all the Compute nodes and their UUIDs:
[stack@director ~]$ source ~/stackrc
(undercloud) [stack@director ~]$ openstack server list --name compute
Identify the UUID of the Compute node you intend to reboot.
From the undercloud, select a compute node and disable it:
[stack@director ~]$ source ~/overcloudrc
(overcloud) [stack@director ~]$ openstack compute service list
(overcloud) [stack@director ~]$ openstack compute service set [hostname] nova-compute --disable
List all instances on the compute node:
(overcloud) [stack@director ~]$ openstack server list --host [hostname] --all-projects
Use one of the following commands to migrate the instances:
Migrate the instance to a specific host of your choice:
(overcloud) [stack@director ~]$ openstack server migrate [instance-id] --live [target-host] --wait
Let nova-scheduler automatically select the target host:
(overcloud) [stack@director ~]$ nova live-migration [instance-id]
Live migrate all instances at once:
[stack@director ~]$ nova host-evacuate-live [hostname]
Note: The nova command might display some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm the migration was successful:
(overcloud) [stack@director ~]$ openstack server list --host [hostname] --all-projects
- Continue migrating instances until none remain on the chosen compute node.
Log into the compute node and reboot it:
[heat-admin@overcloud-compute-0 ~]$ sudo reboot
- Wait until the node boots.
Enable the compute node again:
[stack@director ~]$ source ~/overcloudrc
(overcloud) [stack@director ~]$ openstack compute service set [hostname] nova-compute --enable
Check whether the compute node is enabled:
(overcloud) [stack@director ~]$ openstack compute service list
6.3.9. Verifying the System Packages Before Upgrading
Before the upgrade, all nodes should be using the latest versions of the following packages:
| Package | Version |
| openvswitch | At least 2.9 |
| qemu-img-rhev | At least 2.10 |
| qemu-kvm-common-rhev | At least 2.10 |
| qemu-kvm-rhev | At least 2.10 |
| qemu-kvm-tools-rhev | At least 2.10 |
Prerequisites
- Access to all the nodes in the Red Hat Hyperconverged Infrastructure for Cloud environment.
Procedure
- Log into a node.
Run yum to check the system packages:
[stack@director ~]$ sudo yum list qemu-img-rhev qemu-kvm-common-rhev qemu-kvm-rhev qemu-kvm-tools-rhev openvswitch
Run ovs-vsctl to check the version currently running:
[stack@director ~]$ sudo ovs-vsctl --version
- Repeat steps 1-3 for each node in the Red Hat Hyperconverged Infrastructure for Cloud environment.
6.3.10. Validating the Undercloud Before Upgrading
Follow this procedure to check the functionality of the Red Hat OpenStack Platform 10 undercloud before doing an upgrade.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
Source the undercloud access details:
[stack@director ~]$ source ~/stackrc
Check for failed Systemd services:
[stack@director ~]$ sudo systemctl list-units --state=failed 'openstack*' 'neutron*' 'httpd' 'docker'
Check the undercloud free space:
[stack@director ~]$ df -h
Check that clocks are synchronized on the undercloud:
[stack@director ~]$ sudo ntpstat
Check the undercloud network services:
[stack@director ~]$ openstack network agent list
All agents should be Alive and their state should be UP.
Check the undercloud compute services:
[stack@director ~]$ openstack compute service list
All agents' status should be enabled and their state should be up.
Check the undercloud volume services:
[stack@director ~]$ openstack volume service list
All agents' status should be enabled and their state should be up.
Additional Resources
- See the Red Hat Knowledgebase article on how to remove deleted stack entries in the OpenStack Orchestration (heat) database: https://access.redhat.com/solutions/2215131
6.3.11. Validating the Overcloud Before Upgrading
Follow this procedure to check the functionality of the Red Hat OpenStack Platform 10 overcloud before an upgrade.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
Source the undercloud access details:
[stack@director ~]$ source ~/stackrc
Check the status of the bare metal nodes:
[stack@director ~]$ openstack baremetal node list
All nodes should have a valid power state (on) and maintenance mode should be false.
Check for failed Systemd services:
[stack@director ~]$ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo systemctl list-units --state=failed 'openstack*' 'neutron*' 'httpd' 'docker' 'ceph*'" ; done
Check the HAProxy connection to all services. Obtain the Control Plane VIP address and authentication details for the haproxy.stats service:
[stack@director ~]$ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE sudo 'grep "listen haproxy.stats" -A 6 /etc/haproxy/haproxy.cfg'
Use these details in the following cURL request:
[stack@director ~]$ curl -s -u admin:<PASSWORD> "http://<IP ADDRESS>:1993/;csv" | egrep -vi "(frontend|backend)" | awk -F',' '{ print $1" "$2" "$18 }'
Replace <PASSWORD> and <IP ADDRESS> with the respective details from the haproxy.stats service. The resulting list shows the OpenStack Platform services on each node and their connection status.
Check overcloud database replication health:
[stack@director ~]$ for NODE in $(openstack server list --name controller -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo clustercheck" ; done
Check RabbitMQ cluster health:
[stack@director ~]$ for NODE in $(openstack server list --name controller -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo rabbitmqctl node_health_check" ; done
Check Pacemaker resource health:
[stack@director ~]$ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo pcs status"
Look for:
- All cluster nodes online.
- No resources stopped on any cluster nodes.
- No failed Pacemaker actions.
Check the disk space on each overcloud node:
[stack@director ~]$ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo df -h --output=source,fstype,avail -x overlay -x tmpfs -x devtmpfs" ; done
Check the health of the Red Hat Ceph Storage cluster. The following command runs the ceph tool on a Controller node to check the cluster:
[stack@director ~]$ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph -s"
Check the free space on the OSDs. The following command runs the ceph tool on a Controller node to check the free space:
[stack@director ~]$ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph df"
Check that clocks are synchronized on the overcloud nodes:
[stack@director ~]$ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo ntpstat" ; done
Source the overcloud access details:
[stack@director ~]$ source ~/overcloudrc
Check the overcloud network services:
[stack@director ~]$ openstack network agent list
All agents should be Alive and their state should be UP.
Check the overcloud compute services:
[stack@director ~]$ openstack compute service list
All agents' status should be enabled and their state should be up.
Check the overcloud volume services:
[stack@director ~]$ openstack volume service list
All agents' status should be enabled and their state should be up.
Additional Resources
- Review the article "How can I verify my OpenStack environment is deployed with Red Hat recommended configurations?". This article provides some information on how to check the Red Hat OpenStack Platform environment and tune the configuration to Red Hat’s recommendations.
- Review the article "Database Size Management for Red Hat Enterprise Linux OpenStack Platform" to check and clean unused database records for OpenStack Platform services on the overcloud.
Next Step
- Upgrading the undercloud from Red Hat OpenStack Platform 10 to 13.
6.4. Upgrading the Undercloud
The following procedures upgrade the undercloud and its overcloud images to Red Hat OpenStack Platform 13. You accomplish this by upgrading through each sequential version of the undercloud, from Red Hat OpenStack Platform 10 to Red Hat OpenStack Platform 13.
Prerequisites
- A prepared Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
6.4.1. Upgrading the Undercloud to Red Hat OpenStack Platform 11
This procedure upgrades the undercloud toolset and the core Heat template collection to the Red Hat OpenStack Platform 11 release.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
- Log into the undercloud as the stack user.
- Disable the current OpenStack Platform repository:
[stack@director ~]$ sudo subscription-manager repos --disable=rhel-7-server-openstack-10-rpms
Enable the new OpenStack Platform repository:
[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-openstack-11-rpms
Stop the main OpenStack Platform services:
[stack@director ~]$ sudo systemctl stop 'openstack-*' 'neutron-*' httpd
Note: This causes a short period of downtime for the undercloud. The overcloud is still functional during the undercloud upgrade.
The default Provisioning/Control Plane network has changed from 192.0.2.0/24 to 192.168.24.0/24. If you used default network values in the previous undercloud.conf file, the Provisioning/Control Plane network is set to 192.0.2.0/24. This means you need to set certain parameters in the undercloud.conf file to continue using the 192.0.2.0/24 network. These parameters are:
- local_ip
- network_gateway
- undercloud_public_vip
- undercloud_admin_vip
- network_cidr
- masquerade_network
- dhcp_start
- dhcp_end
Set the network values in undercloud.conf to ensure continued use of the 192.0.2.0/24 CIDR during future upgrades. Ensure the network configuration is set correctly before running the openstack undercloud upgrade command.
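For example, the relevant undercloud.conf entries might look like the following. This is a sketch using the historical 192.0.2.0/24 defaults; match the values to the original deployment:
[DEFAULT]
local_ip = 192.0.2.1/24
network_gateway = 192.0.2.1
undercloud_public_vip = 192.0.2.2
undercloud_admin_vip = 192.0.2.3
network_cidr = 192.0.2.0/24
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24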
- Run yum to upgrade the director's main packages:
[stack@director ~]$ sudo yum update instack-undercloud openstack-puppet-modules openstack-tripleo-common python-tripleoclient
Run the following command to upgrade the undercloud:
[stack@director ~]$ openstack undercloud upgrade
- Wait until the undercloud upgrade process completes.
You have upgraded the undercloud to the Red Hat OpenStack Platform 11 release.
Additional Resources
- For more information about the new driver and migration instructions, see the Appendix "Virtual Baseboard Management Controller (VBMC)" in the Director Installation and Usage Guide.
6.4.2. Upgrading the Undercloud to Red Hat OpenStack Platform 12
This procedure upgrades the undercloud toolset and the core Heat template collection to the Red Hat OpenStack Platform 12 release.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
- Log into the undercloud as the stack user.
- Disable the current OpenStack Platform repository:
[stack@director ~]$ sudo subscription-manager repos --disable=rhel-7-server-openstack-11-rpms
Enable the new OpenStack Platform repository:
[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-openstack-12-rpms
Install the ceph-ansible package:
[stack@director ~]$ sudo yum install ceph-ansible
Run yum to upgrade the director's main packages:
[stack@director ~]$ sudo yum update python-tripleoclient
- Edit the /home/stack/undercloud.conf file and check that the enabled_drivers parameter does not contain the pxe_ssh driver. This driver is deprecated in favor of the Virtual Baseboard Management Controller (VBMC) and has been removed from Red Hat OpenStack Platform.
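A quick way to check is to search the file for the parameter (the parameter only appears if it was explicitly set):
[stack@director ~]$ grep enabled_drivers /home/stack/undercloud.conf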
- Run the following command to upgrade the undercloud:
[stack@director ~]$ openstack undercloud upgrade
- Wait until the undercloud upgrade process completes.
You have upgraded the undercloud to the Red Hat OpenStack Platform 12 release.
Additional Resources
- For more information about the new driver and migration instructions, see the Appendix "Virtual Baseboard Management Controller (VBMC)" in the Director Installation and Usage Guide.
6.4.3. Upgrading the Undercloud to Red Hat OpenStack Platform 13
This procedure upgrades the undercloud toolset and the core Heat template collection to the Red Hat OpenStack Platform 13 release.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
- Log into the undercloud as the stack user.
- Disable the current OpenStack Platform repository:
[stack@director ~]$ sudo subscription-manager repos --disable=rhel-7-server-openstack-12-rpms
Enable the new OpenStack Platform repository:
[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
Run yum to upgrade the director's main packages:
[stack@director ~]$ sudo yum update python-tripleoclient
Run the following command to upgrade the undercloud:
[stack@director ~]$ openstack undercloud upgrade
- Wait until the undercloud upgrade process completes.
Reboot the undercloud to update the operating system’s kernel and other system packages:
[stack@director ~]$ sudo reboot
- Wait until the node boots.
You have upgraded the undercloud to the Red Hat OpenStack Platform 13 release.
Additional Resources
- For more information about the new driver and migration instructions, see the Appendix "Virtual Baseboard Management Controller (VBMC)" in the Director Installation and Usage Guide.
Next Step
Once the undercloud upgrade is complete, you can configure a source for the container images.
6.5. Configuring a Container Image Source
As a technician, you can containerize the overcloud, but this first requires access to a registry with the required container images. Here you can find information on how to prepare the registry and the overcloud configuration to use container images for Red Hat OpenStack Platform.
There are several methods for configuring the overcloud to use a registry, based on the use case.
6.5.1. Registry Methods
Red Hat Hyperconverged Infrastructure for Cloud supports the following registry types. Choose one of the following methods:
- Remote Registry
The overcloud pulls container images directly from registry.access.redhat.com. This method is the easiest for generating the initial configuration. However, each overcloud node pulls each image directly from the Red Hat Container Catalog, which can cause network congestion and slower deployment. In addition, all overcloud nodes require internet access to the Red Hat Container Catalog.
- Local Registry
Create a local registry on the undercloud, synchronize the images from registry.access.redhat.com, and have the overcloud pull the container images from the undercloud. This method allows you to store a registry internally, which can speed up the deployment and decrease network congestion. However, the undercloud only acts as a basic registry and provides limited life cycle management for container images.
6.5.2. Including Additional Container Images for Red Hat OpenStack Platform Services
The Red Hat Hyperconverged Infrastructure for Cloud uses additional services besides the core Red Hat OpenStack Platform services. These additional services require additional container images, and you enable these services with their corresponding environment files. These environment files enable the composable containerized services in the overcloud, and the director needs to know that these services are enabled in order to prepare their images.
Prerequisites
- A running undercloud.
Procedure
As the stack user, on the undercloud node, use the openstack overcloud container image prepare command to include the additional services.
Include the following environment file using the -e option:
- Ceph Storage Cluster: /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
Include the following --set options for Red Hat Ceph Storage:
- --set ceph_namespace - Defines the namespace for the Red Hat Ceph Storage container image.
- --set ceph_image - Defines the name of the Red Hat Ceph Storage container image. Use the image name rhceph-3-rhel7.
- --set ceph_tag - Defines the tag to use for the Red Hat Ceph Storage container image. When --tag-from-label is specified, the versioned tag is discovered starting from this tag.
Run the image prepare command:
Example
[stack@director ~]$ openstack overcloud container image prepare \
  ... \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  --set ceph_namespace=registry.access.redhat.com/rhceph \
  --set ceph_image=rhceph-3-rhel7 \
  --tag-from-label {version}-{release} \
  ...
Note: These options are passed in addition to any other options that need to be passed to the openstack overcloud container image prepare command.
6.5.3. Using the Red Hat Registry as a Remote Registry Source
Red Hat hosts the overcloud container images on registry.access.redhat.com. Pulling the images from a remote registry is the simplest method because the registry is already set up, and all you require is the URL and namespace of the image you intend to pull.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Access to the Internet.
Procedure
To pull the images directly from registry.access.redhat.com in the overcloud deployment, an environment file is required to specify the image parameters. The following command automatically creates this environment file:
(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/custom-templates/overcloud_images.yaml
Note: Use the -e option to include any environment files for optional services.
This creates an
overcloud_images.yamlenvironment file, which contains image locations, on the undercloud. Include this file with all future upgrade and deployment operations.
Additional Resources
6.5.4. Using the Undercloud as a Local Registry
You can configure a local registry on the undercloud to store overcloud container images. This method involves the following:
- The director pulls each image from registry.access.redhat.com.
- The director creates the overcloud.
- During the overcloud creation, the nodes pull the relevant images from the undercloud.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Access to the Internet.
Procedure
Create a template to pull the images to the local registry:
(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-images-file /home/stack/local_registry_images.yaml
Use the -e option to include any environment files for optional services.
Note: This version of the openstack overcloud container image prepare command targets the registry on registry.access.redhat.com to generate an image list. It uses different values than the openstack overcloud container image prepare command used in a later step.
This creates a file called local_registry_images.yaml with the container image information. Pull the images using the local_registry_images.yaml file:
(undercloud) [stack@director ~]$ sudo openstack overcloud container image upload \
  --config-file /home/stack/local_registry_images.yaml \
  --verbose
Note: The container images consume approximately 10 GB of disk space.
Find the namespace of the local images. The namespace uses the following pattern:
<REGISTRY IP ADDRESS>:8787/rhosp13
Use the IP address of the undercloud, which you previously set with the local_ip parameter in the undercloud.conf file. Alternatively, you can also obtain the full namespace with the following command:
(undercloud) [stack@director ~]$ docker images | grep -v redhat.com | grep -o '^.*rhosp13' | sort -u
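To confirm that the local registry is answering, you can also query the Docker registry v2 API directly (a sketch, assuming the default undercloud IP):
(undercloud) [stack@director ~]$ curl -s http://192.168.24.1:8787/v2/_catalog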
Create a template for using the images in the local registry on the undercloud. For example:
(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=192.168.24.1:8787/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/custom-templates/overcloud_images.yaml
- Use the -e option to include any environment files for optional services.
- If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: --set ceph_namespace, --set ceph_image, and --set ceph_tag.
Note: This version of the openstack overcloud container image prepare command targets the local registry on the undercloud. It uses different values than the openstack overcloud container image prepare command used in a previous step.
Use the
-
This creates an
overcloud_images.yamlenvironment file, which contains image locations on the undercloud. Include this file with all future upgrade and deployment operations.
Additional Resources
Next Steps
- Prepare the overcloud for an upgrade.
Additional Resources
- See Section 4.2 in the Red Hat OpenStack Platform Fast Forward Upgrades Guide for more information.
6.6. Preparing for the Overcloud Upgrade
As a technician, you need to prepare the overcloud environment for upgrading the overcloud services.
6.6.1. Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Upgraded the undercloud.
- Configured a container image source.
6.6.2. Preparing for Overcloud Upgrade Service Downtime
The overcloud upgrade process disables the main services at key points. This means you cannot use any overcloud services to create new resources for the duration of the upgrade. Workloads running in the overcloud remain active during this period, which means instances continue to run throughout the upgrade.
Plan a maintenance window to ensure no users can access the overcloud services for the duration of the upgrade.
Affected by overcloud upgrade
- OpenStack Platform services
Unaffected by overcloud upgrade
- Instances running during the upgrade
- Red Hat Ceph Storage OSDs
- Linux networking
- Open vSwitch networking
- Undercloud
6.6.3. Selecting a Hyperconverged OSD/Compute Node for Upgrade Testing
The overcloud upgrade process allows you to either:
- Upgrade all nodes in a role.
- Upgrade individual nodes separately.
To ensure a smooth overcloud upgrade process, it is useful to test the upgrade on one hyperconverged OSD/Compute node in the environment before upgrading all the hyperconverged OSD/Compute nodes. This ensures no major issues occur during the upgrade, while maintaining minimal downtime for your workloads.
Use the following recommendations to help choose test nodes for the upgrade:
- Select a hyperconverged OSD/compute node for upgrade testing.
- Select a node without any critical instances running.
- If necessary, migrate critical instances from the selected test hyperconverged OSD/compute nodes to other hyperconverged OSD/compute nodes.
6.6.4. Using a New or an Existing Custom Roles Data
When using a custom roles file during the upgrade, there are two approaches to consider. You can create a new roles data file (roles_data_custom.yaml); see Section 6.6.5, "Generating a New Custom Roles Data File" for this procedure.
Alternatively, you can use an existing roles data file from a previous deployment, for example, the custom-roles.yaml file. See Section 6.6.6, "New Composable Services", Section 6.6.7, "Deprecated Composable Services", Section 6.6.8, "Preparing for Composable Networks", and Section 6.6.9, "Preparing for Deprecated Parameters" for more information on updating the custom roles data file accordingly.
Updating the roles data file ensures any new composable services will be added to the relevant roles in the environment.
6.6.5. Generating a New Custom Roles Data File
This procedure generates a new custom roles data file (roles_data_custom.yaml).
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
As the stack user, source the stackrc file:
[stack@director ~]$ source ~/stackrc
List the default role templates with the openstack overcloud role list command:
[stack@director ~]$ openstack overcloud role list
BlockStorage
CephStorage
Compute
ComputeHCI
ComputeOvsDpdk
Controller
...
View a role's YAML definition with the openstack overcloud role show command. For example:
[stack@director ~]$ openstack overcloud role show Compute
Generate a custom roles_data_custom.yaml file with the openstack overcloud roles generate command to join multiple predefined roles into a single file. For example, the following command joins the Controller, Compute, and ComputeHCI roles into a single file:
[stack@director ~]$ openstack overcloud roles generate -o ~/custom-templates/roles_data_custom.yaml Controller Compute ComputeHCI
The -o option defines the name of the file to create.
- This creates a new custom roles_data_custom.yaml file for the upgrade. If you need to add or remove composable services from any roles, then edit the file and make changes to suit the overcloud. If the OsdCompute role was used in a Red Hat Hyperconverged Infrastructure for Cloud v10 deployment, then you must replace ComputeHCI with OsdCompute.
Additional Resources
- For more information on custom roles_data_custom.yaml generation, see "Composable Services and Custom Roles" in the Advanced Overcloud Customization guide.
6.6.6. New Composable Services
Red Hat OpenStack Platform 13 contains new composable services. If you want to manually edit an existing roles data file (roles_data.yaml), use the following lists of new composable services for Red Hat OpenStack Platform roles. If you generate a new custom roles data file (roles_data_custom.yaml) with your own roles, include these new mandatory services in their applicable roles.
In a Red Hat Hyperconverged Infrastructure for Cloud v10 deployment, the custom role data used was the ~/custom-templates/custom-roles.yaml file.
All Roles
The following new services apply to all roles.
OS::TripleO::Services::MySQLClient - Configures the MariaDB client on a node, which provides database configuration for other composable services. Add this service to all roles with standalone composable services.
OS::TripleO::Services::Sshd - Configures SSH access across all nodes. Used for instance migration.
OS::TripleO::Services::CertmongerUser - Allows the overcloud to request certificates from Certmonger. Only used if enabling TLS/SSL communication.
OS::TripleO::Services::Docker - Installs docker to manage containerized services.
OS::TripleO::Services::ContainersLogrotateCrond - Installs the logrotate service for container logs.
OS::TripleO::Services::Securetty - Allows configuration of securetty on nodes. Enabled with the environments/securetty.yaml environment file.
OS::TripleO::Services::Tuned - Enables and configures the Linux tuning daemon (tuned).
Specific Roles
The following new services apply to specific roles:
OS::TripleO::Services::NovaPlacement - Configures the OpenStack Compute (nova) Placement API. If using a standalone Nova API role in the current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
OS::TripleO::Services::PankoApi - Configures the OpenStack Telemetry Event Storage (panko) service. If using a standalone Telemetry role in the current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
OS::TripleO::Services::Clustercheck - Required on any role that also uses the OS::TripleO::Services::MySQL service, such as the Controller or standalone Database role.
OS::TripleO::Services::Iscsid - Configures the iscsid service on the Controller, Compute, and BlockStorage roles.
OS::TripleO::Services::NovaMigrationTarget - Configures the migration target service on Compute nodes.
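As an illustration of where these entries go, a role in the roles data file lists its services under the ServicesDefault key. The following is a trimmed sketch; the role shown and the selection of services are examples only, and the existing services for the role must remain in the list:
- name: ComputeHCI
  ServicesDefault:
    # existing services for this role remain listed here
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::ContainersLogrotateCrond
    - OS::TripleO::Services::Iscsid
    - OS::TripleO::Services::NovaMigrationTarget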
Additional Resources
- For updated lists of services for specific custom roles, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide.
6.6.7. Deprecated Composable Services
Always check for any deprecated services. When using a custom roles_data file, remove these services from their applicable roles.
OS::TripleO::Services::Core - This service acted as a core dependency for other Pacemaker services. This service has been removed to accommodate high availability composable services.
OS::TripleO::Services::VipHosts - This service configured the /etc/hosts file with node hostnames and IP addresses. This service is now integrated directly into the director's Heat templates.
Additional Resources
- For updated lists of services for specific custom roles, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide.
6.6.8. Preparing for Composable Networks
This version of Red Hat OpenStack Platform introduces a new feature for composable networks. When using a custom roles_data file, edit the file to add the composable networks to each role. For example, for the Controller nodes:
- name: Controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant
Check the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file for more examples of the syntax. Also, check the example role snippets in /usr/share/openstack-tripleo-heat-templates/roles.
Table 6.1. Mapping of composable networks to custom standalone roles
| Role | Networks Required |
| Ceph Storage Monitor | Storage, StorageMgmt |
| Ceph Storage OSD | Storage, StorageMgmt |
| Ceph Storage RadosGW | Storage, StorageMgmt |
| Cinder API | InternalApi |
| Compute | InternalApi, Tenant, Storage |
| Controller | External, InternalApi, Storage, StorageMgmt, Tenant |
| Database | InternalApi |
| Glance | InternalApi, Storage |
| Heat | InternalApi |
| Horizon | InternalApi |
| Ironic | None required. Uses the Provisioning/Control Plane network for API. |
| Keystone | InternalApi |
| Load Balancer | External, InternalApi, Storage, StorageMgmt, Tenant |
| Manila | InternalApi, Storage |
| Message Bus | InternalApi |
| Networker | InternalApi, Tenant |
| Neutron API | InternalApi |
| Nova | InternalApi |
| OpenDaylight | External, InternalApi, Tenant |
| Redis | InternalApi |
| Sahara | InternalApi, Storage |
| Swift API | Storage |
| Swift Storage | StorageMgmt |
| Telemetry | InternalApi, Storage |
6.6.9. Preparing for Deprecated Parameters
The following parameters are deprecated and have been replaced with role-specific parameters. If any of these deprecated parameters are in use, update them in the custom environment files accordingly.
Table 6.2. Deprecated Parameters
| Old Parameter | New Parameter |
| controllerExtraConfig | ControllerExtraConfig |
| OvercloudControlFlavor | OvercloudControllerFlavor |
| controllerImage | ControllerImage |
| NovaImage | ComputeImage |
| NovaComputeExtraConfig | ComputeExtraConfig |
| NovaComputeServerMetadata | ComputeServerMetadata |
| NovaComputeSchedulerHints | ComputeSchedulerHints |
| NovaComputeIPs | ComputeIPs |
| SwiftStorageServerMetadata | ObjectStorageServerMetadata |
| SwiftStorageIPs | ObjectStorageIPs |
| SwiftStorageImage | ObjectStorageImage |
| OvercloudSwiftStorageFlavor | OvercloudObjectStorageFlavor |
6.6.10. Software Repositories for Fast-Forward Upgrades
The fast-forward upgrade process uses a new method to switch software repositories. The director passes the following upstream codenames of each OpenStack Platform version to the script:
| Codename | Version |
| ocata | OpenStack Platform 11 |
| pike | OpenStack Platform 12 |
| queens | OpenStack Platform 13 |
By default, the fast-forward upgrade process uses a script to change software repositories contained on Red Hat’s Content Delivery Network (CDN) during each stage of the upgrade process. This script is included as part of the OS::TripleO::Services::TripleoPackages composable service (puppet/services/tripleo-packages.yaml) using the FastForwardCustomRepoScriptContent parameter.
You can also use a custom script by placing the commands underneath the FastForwardCustomRepoScriptContent parameter.
parameter_defaults:
  FastForwardCustomRepoScriptContent: |
    [INSERT UPGRADE SCRIPT HERE]
Example
parameter_defaults:
  FastForwardCustomRepoScriptContent: |
    set -e
    URL="satellite.example.com"
    case $1 in
      ocata)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp11 --org=Default_Organization
        ;;
      pike)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp12 --org=Default_Organization
        ;;
      queens)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp13 --org=Default_Organization
        ;;
      *)
        echo "unknown release $1" >&2
        exit 1
    esac
Additional Resources
- See the Red Hat OpenStack Platform 13 Fast-Forward Upgrade guide for more information.
6.6.11. Preparing for the Red Hat Ceph Storage Upgrade
Due to the upgrade to containerized services, the method for installing and updating Ceph Storage nodes has changed. Ceph Storage configuration now uses a set of playbooks in the ceph-ansible package, which you install on the undercloud.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
Procedure
Check that you are using the latest resources and configuration in the storage environment file. This requires the following changes:
The resource_registry uses containerized services from the docker/services/ subdirectory of the core Heat template collection. For example:
resource_registry:
  OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../docker/services/ceph-ansible/ceph-client.yaml
Use the new CephAnsibleDisksConfig parameter to define how the disks are mapped. Previous versions of the Red Hat OpenStack Platform used the ceph::profile::params::osds hieradata to define the OSD layout. Convert this hieradata to the structure of the new CephAnsibleDisksConfig parameter. For example, if the hieradata contained the following:
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_journal_size: 512
    ceph::profile::params::osds:
      '/dev/sdb': {}
      '/dev/sdc': {}
      '/dev/sdd': {}
Then the CephAnsibleDisksConfig would look like this:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    journal_size: 512
    osd_scenario: collocated
Note: For a full list of OSD disk layout options used with ceph-ansible, view the sample file in /usr/share/ceph-ansible/group_vars/osds.yml.sample.
6.6.12. Preparing Access to the Undercloud’s Public API over SSL/TLS
The overcloud requires access to the undercloud’s OpenStack Object Storage (swift) Public API during the upgrade. If the undercloud uses a self-signed certificate, then you need to add the undercloud’s certificate authority to each overcloud node.
Prerequisites
- Using SSL/TLS for the undercloud public API.
Procedure
The undercloud's dynamic Ansible script has been updated to the OpenStack Platform 12 version, which uses the RoleNetHostnameMap Heat parameter in the overcloud plan to define the inventory. However, the overcloud currently uses the OpenStack Platform 11 template versions, which do not have the RoleNetHostnameMap parameter. This means you need to create a temporary static inventory file, which you can generate with the following command:
[stack@director ~]$ openstack server list -c Networks -f value | cut -d"=" -f2 > overcloud_hosts
Create an Ansible playbook (undercloud-ca.yml) that contains the following:
- name: Add undercloud CA to overcloud nodes
  hosts: all
  user: heat-admin
  become: true
  tasks:
    - name: Copy undercloud CA
      copy:
        src: /etc/pki/ca-trust/source/anchors/cm-local-ca.pem
        dest: /etc/pki/ca-trust/source/anchors/
    - name: Update trust
      command: "update-ca-trust extract"
    - name: Get the swift endpoint
      shell: |
        source ~stack/stackrc
        openstack endpoint list -c 'Service Name' -c 'URL' -c Interface -f value | grep swift | grep public | awk -F/ '{print $3}'
      register: swift_endpoint
      delegate_to: 127.0.0.1
      become: yes
      become_user: stack
    - debug:
        var: swift_endpoint
    - name: Verify URL
      uri:
        url: https://{{ swift_endpoint.stdout }}/healthcheck
        return_content: yes
      register: verify
    - name: Report output
      debug:
        msg: "{{ ansible_hostname }} can access the undercloud's Public API"
      when: verify.content == "OK"
This playbook contains multiple tasks that perform the following on each node:
- Copy the undercloud's certificate authority file (ca.crt.pem) to the overcloud node. The name of this file and its location might vary depending on the configuration. This example uses the name and location defined during the self-signed certificate procedure.
- Execute the command to update the certificate authority trust database on the overcloud node.
- Check the undercloud's Object Storage Public API from the overcloud node and report if successful.
Run the playbook with the following command:
[stack@director ~]$ ansible-playbook -i overcloud_hosts undercloud-ca.yml
This uses the temporary inventory to provide Ansible with the overcloud nodes.
The resulting Ansible output should show a debug message for each node. For example:
ok: [192.168.24.100] => { "msg": "overcloud-controller-0 can access the undercloud's Public API" }
Additional Resources
- For more information on running Ansible automation on the overcloud, see "Running Ansible Automation" in the Director Installation and Usage guide.
- For more information on configuring SSL/TLS, see "SSL/TLS Certificate Configuration" in the Director Installation and Usage guide.
6.6.13. Next Steps
- Performing an upgrade of the overcloud.
6.7. Upgrading the Overcloud
As a technician, after you have prepared for the overcloud upgrade, it is time to perform the actual overcloud upgrade.
6.7.1. Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Upgraded the undercloud.
- Configured a container image source.
- Prepared for the overcloud upgrade.
6.7.2. Performing the Fast Forward Upgrade of the Overcloud
The fast forward upgrade requires running two commands that perform the following tasks:
- Updates the overcloud plan to OpenStack Platform 13.
- Prepares the nodes for the fast forward upgrade.
- Runs through the upgrade steps of each subsequent version within the fast forward upgrade, including:
- Performs version-specific tasks for each OpenStack Platform service.
- Changes the repository to each OpenStack Platform version within the fast forward upgrade.
- Performs package and database upgrades for each subsequent version.
- Prepares the overcloud for the final upgrade to OpenStack Platform 13.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Upgraded the undercloud.
- Configured a container image source.
- Prepared for the Overcloud Upgrade.
Procedure
As the stack user, source the stackrc file:
[stack@director ~]$ source ~/stackrc
Run the fast forward upgrade preparation command:
Example
[stack@director ~]$ openstack overcloud ffwd-upgrade prepare --templates \
  -e $ALL_ENVIRONMENTS_USED_TO_DEPLOY \
  -e /home/stack/custom-templates/overcloud_images.yaml \
  -r /home/stack/custom-templates/roles_data_custom.yaml
Include the following options relevant to the environment:
- The $ALL_ENVIRONMENTS_USED_TO_DEPLOY variable represents all the environment files used during the initial deployment.
- The path to custom configuration environment files (-e).
- The path to the custom roles data file (-r), using either a new roles_data_custom.yaml file or an existing custom-roles.yaml file.
- If used, a custom repository file.
- Wait until the fast forward upgrade preparation completes.
Run the fast forward upgrade command:
[stack@director ~]$ openstack overcloud ffwd-upgrade run
- Wait until the fast forward upgrade completes.
6.7.3. Upgrading the Overcloud First Checkpoint
The following list describes the state of the overcloud upgrade process at this first checkpoint:
- The overcloud packages and database have been upgraded to Red Hat OpenStack Platform 12 versions.
- All overcloud services are disabled.
- Ceph Storage nodes are at version 2.
The overcloud is now in a state where you can perform the standard upgrade steps to reach OpenStack Platform 13.
6.7.4. Upgrading the Controller Nodes
This process upgrades all the Controller nodes to OpenStack Platform 13. The process involves running the openstack overcloud upgrade run command and including the --roles Controller option to restrict operations to the Controller nodes only.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Upgraded the undercloud.
- Configured a container image source.
- Prepared for the Overcloud Upgrade.
Procedure
As the stack user, source the stackrc file:
[stack@director ~]$ source ~/stackrc
Run the upgrade command:
[stack@director ~]$ openstack overcloud upgrade run --roles Controller --skip-tags validation
Note: The command uses the --skip-tags validation option because the OpenStack Platform services are inactive on the overcloud and cannot be validated.
- Wait until the Controller node upgrade completes.
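As an optional spot check, not part of the formal procedure, you can confirm that the Pacemaker-managed services on a Controller node have started again. This sketch assumes heat-admin SSH access (the account used for the earlier CA playbook) and a hypothetical node address:

# Hypothetical controller address; the cluster status should show services started.
[stack@director ~]$ ssh heat-admin@192.168.24.100 'sudo pcs status'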
6.7.5. Upgrading the Overcloud Second Checkpoint
The following list describes the state of the overcloud upgrade process at this second checkpoint:
- The controller nodes and other nodes based on composable services have been upgraded to Red Hat OpenStack Platform 13 and all services are enabled.
- Upgrading the compute nodes has not started yet.
- The Red Hat Ceph Storage nodes are still at version 2 and have not been upgraded yet.
Although the controller services are enabled, do not perform any workload operations while the compute node and Red Hat Ceph Storage services are disabled; doing so can cause orphaned virtual machines. Wait until the entire environment is upgraded.
6.7.6. Upgrading the Test Hyperconverged OSD/Compute Node
This process upgrades the hyperconverged OSD/compute nodes selected for testing. The process involves running the openstack overcloud upgrade run command and including the --nodes option to restrict operations to the test nodes only. This procedure uses --nodes osdcompute-0 as an example in commands.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Upgraded the undercloud.
- Configured a container image source.
- Prepared for the Overcloud Upgrade.
Procedure
Source the stackrc file:
[stack@director ~]$ source ~/stackrc
Run the upgrade command:
Example
[stack@director ~]$ openstack overcloud upgrade run --nodes osdcompute-0 --skip-tags validation
Note: This command uses the --skip-tags validation option, because the Red Hat OpenStack Platform services are inactive on the overcloud and cannot be validated.
- Wait until the test node upgrade completes.
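Before upgrading the remaining nodes, it is worth confirming that the test node rejoined the Ceph cluster. A minimal sketch, assuming heat-admin SSH access to a hyperconverged Monitor/Controller node at a hypothetical address:

# All OSDs, including those on the upgraded test node, should report as "up".
[stack@director ~]$ ssh heat-admin@192.168.24.100 'sudo ceph osd tree'
# The cluster health summary should not show down OSDs.
[stack@director ~]$ ssh heat-admin@192.168.24.100 'sudo ceph -s'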
6.7.7. Upgrading the Hyperconverged OSD/Compute Nodes
This process upgrades all the remaining hyperconverged OSD/Compute nodes to OpenStack Platform 13. The process involves running the openstack overcloud upgrade run command and including the --roles OsdCompute option to restrict operations to the hyperconverged OSD/Compute nodes only.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Upgraded the undercloud.
- Configured a container image source.
- Prepared for the Overcloud Upgrade.
Procedure
As the stack user, source the stackrc file:
[stack@director ~]$ source ~/stackrc
Run the upgrade command:
[stack@director ~]$ openstack overcloud upgrade run --roles OsdCompute --skip-tags validation
Note: The command uses the --skip-tags validation option, because the Red Hat OpenStack Platform services are inactive on the overcloud and cannot be validated.
- Wait until the hyperconverged OSD/compute node upgrade completes.
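As an optional check, assuming an overcloudrc credentials file for the overcloud (as created during deployment), confirm that every hyperconverged node reports its nova-compute service as enabled and up:

# Each OsdCompute node should appear with nova-compute in state "up".
[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ openstack compute service list
# Switch back to the undercloud credentials before continuing.
[stack@director ~]$ source ~/stackrc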
6.7.8. Upgrading the Overcloud Third Checkpoint
The following list describes the state of the overcloud upgrade process at this third checkpoint:
- The controller nodes and other nodes based on composable services have been upgraded to Red Hat OpenStack Platform 13 and all services are enabled.
- Compute nodes have been upgraded to Red Hat OpenStack Platform 13.
- The Red Hat Ceph Storage nodes are still at version 2 and have not been upgraded yet.
6.7.9. Upgrading Red Hat Ceph Storage
This process upgrades the Ceph Storage nodes. The process involves:
- Running the openstack overcloud ceph-upgrade run command to perform a rolling upgrade to a containerized Red Hat Ceph Storage 3 cluster.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Upgraded the undercloud.
- Configured a container image source.
- The controller and compute nodes have been upgraded.
Procedure
As the stack user, source the stackrc file:
[stack@director ~]$ source ~/stackrc
Run the Ceph Storage upgrade command:
Example
[stack@director ~]$ openstack overcloud ceph-upgrade run --templates \
    -e $ALL_ENVIRONMENT_FILES_TO_DEPLOY \
    -e /home/stack/custom-templates/overcloud_images.yaml \
    -r /home/stack/custom-templates/roles_data_custom.yaml \
    --ceph-ansible-playbook '/usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml,/usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml'

Include the following options relevant to the environment:
- The $ALL_ENVIRONMENT_FILES_TO_DEPLOY variable represents all the environment files used during the initial deployment.
- Path to custom configuration environment files (-e).
- Path to the custom roles data file (-r), either a new roles_data_custom.yaml file or an existing custom-roles.yaml file.
- If used, a custom repository file.
- The following Ansible playbooks (--ceph-ansible-playbook):
  - /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
  - /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml
- Wait until the Ceph Storage node upgrade completes.
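One hedged way to confirm the rolling upgrade converted the daemons to containers is to list the Ceph containers on a storage node; the address here is hypothetical:

# After the upgrade, the Ceph daemons run as containers (ceph-mon-*, ceph-osd-*).
[stack@director ~]$ ssh heat-admin@192.168.24.100 'sudo docker ps --filter name=ceph'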
6.7.10. Upgrading the Overcloud Fourth Checkpoint
The following list describes the state of the overcloud upgrade process at this fourth checkpoint:
- The controller nodes and other nodes based on composable services have been upgraded to Red Hat OpenStack Platform 13 and all services are enabled.
- Compute nodes have been upgraded to Red Hat OpenStack Platform 13.
- The Red Hat Ceph Storage nodes have been upgraded to version 3.
Although the environment is now upgraded, you must perform one last step to finalize the upgrade.
6.7.11. Finalizing the Fast Forward Upgrade
The fast forward upgrade requires a final step to update the overcloud stack. This ensures the stack’s resource structure aligns with a regular deployment of OpenStack Platform 13 and allows you to perform standard openstack overcloud deploy functions in the future.
Prerequisites
- An upgrade from Red Hat Hyperconverged Infrastructure for Cloud 10 to 13.
Procedure
As the stack user, source the stackrc file:
[stack@director ~]$ source ~/stackrc
Run the fast forward upgrade finalization command:
Example
[stack@director ~]$ openstack overcloud ffwd-upgrade converge \
    -e $ALL_ENVIRONMENT_FILES_TO_DEPLOY \
    -e /home/stack/custom-templates/overcloud_images.yaml \
    -r /home/stack/custom-templates/roles_data_custom.yaml

Include the following options relevant to your environment:
- The $ALL_ENVIRONMENT_FILES_TO_DEPLOY variable represents all the environment files used during the initial deployment.
- Path to custom configuration environment files (-e).
- Path to the custom roles data file (-r), either a new roles_data_custom.yaml file or an existing custom-roles.yaml file.
- Wait until the fast forward upgrade finalization completes.
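After the convergence finishes, the overcloud stack should report a successful update. A simple check, assuming the default stack name overcloud used throughout this guide:

# The overcloud stack status should read UPDATE_COMPLETE.
[stack@director ~]$ source ~/stackrc
[stack@director ~]$ openstack stack list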
6.7.12. Next Steps
- Post-upgrade steps for the overcloud configuration.
6.8. Doing the Post Upgrade Steps
This procedure implements the final steps after the fast forward upgrade process completes, including rebooting the overcloud and any additional configuration steps or considerations.
Also, you need to replace the current overcloud images with the new versions. The new images ensure that the undercloud can introspect and provision the nodes using the latest version of Red Hat OpenStack Platform software.
Prerequisites
- You have upgraded the undercloud to the latest version.
Procedure
Remove any existing images from the images directory in the stack user's home directory (/home/stack/images):
[stack@director ~]$ rm -rf ~/images/*
Extract the archives:
[stack@director ~]$ cd ~/images
[stack@director ~]$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar; do tar -xvf $i; done
[stack@director ~]$ cd ~
Import the latest images into the undercloud:
[stack@director ~]$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
Configure the nodes to use the new images:
[stack@director ~]$ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
Verify the existence of the new images:
[stack@director ~]$ openstack image list [stack@director ~]$ ls -l /httpboot
- Reboot the overcloud nodes.
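The reboot strategy depends on your environment; a common pattern, sketched here with a hypothetical node address, is to reboot one node at a time and wait for it to return before moving to the next:

# Identify the node addresses from the undercloud.
[stack@director ~]$ source ~/stackrc
[stack@director ~]$ openstack server list
# Reboot one node, then wait until SSH is reachable again before the next node.
[stack@director ~]$ ssh heat-admin@192.168.24.100 'sudo reboot'
[stack@director ~]$ until ssh -o ConnectTimeout=5 heat-admin@192.168.24.100 true; do sleep 10; done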
When deploying overcloud nodes, ensure the overcloud image version corresponds to the respective Heat template version. For example, only use the Red Hat OpenStack Platform 13 images with the Red Hat OpenStack Platform 13 Heat templates.
6.9. Additional Resources
- See the Fast-Forward Upgrades Guide for more details.