Performing a minor update of Red Hat OpenStack Platform
Apply the latest bug fixes and security improvements to Red Hat OpenStack Platform
OpenStack Documentation Team
rhos-docs@redhat.com
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback.
- Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback.
- Click the following link to open the Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Preparing for a minor update
Keep your Red Hat OpenStack Platform (RHOSP) 17.1 environment updated with the latest packages and containers.
Use the upgrade path for the following versions:
Old RHOSP Version | New RHOSP Version |
---|---|
Red Hat OpenStack Platform 17.0.z | Red Hat OpenStack Platform 17.1 latest |
Red Hat OpenStack Platform 17.1.z | Red Hat OpenStack Platform 17.1 latest |
Minor update workflow
A minor update of your RHOSP environment involves updating the RPM packages and containers on the undercloud and overcloud host, and the service configuration, if needed. The data plane and control plane are fully available during the minor update. You must complete each of the following steps to update your RHOSP environment:
Update step | Description |
---|---|
Undercloud update | Director packages are updated, containers are replaced, and the undercloud is rebooted. |
Optional: ovn-controller container update | Updates the ovn-controller container on all overcloud servers. |
ha-image-update external | Updates container image names of Pacemaker-controlled services. There is no service disruption. This step applies only to customers that are updating their system from version 17.0.z to the latest 17.1 release. |
Overcloud update of Controller nodes and composable nodes that contain Pacemaker services | During an overcloud update, the Pacemaker services are stopped for each host. While the Pacemaker services are stopped, the RPMs on the host, the container configuration data, and the containers are updated. When the Pacemaker services restart, the host is added again. |
Overcloud update of Compute nodes | Multiple nodes are updated in parallel. The default value for running nodes in parallel is 25. |
Overcloud update of Ceph nodes | Ceph nodes are updated one node at a time. |
Ceph cluster update | Ceph services are updated by using cephadm. |
If you have a multistack infrastructure, update each overcloud stack completely, one at a time. If you have a distributed compute node (DCN) infrastructure, update the overcloud at the central location completely, and then update the overcloud at each edge site, one at a time.
Additionally, an administrator can perform the following operations during a minor update:
- Migrate your virtual machine
- Create a virtual machine network
- Run additional cloud operations
The following operations are not supported during a minor update:
- Replacing a Controller node
- Scaling in or scaling out any role
Considerations before you update your RHOSP environment
To help guide you during the update process, consider the following information:
- Red Hat recommends backing up the undercloud and overcloud control planes. For more information about backing up nodes, see Backing up and restoring the undercloud and control plane nodes.
- Familiarize yourself with the known issues that might block an update.
- Familiarize yourself with the possible update and upgrade paths before you begin your update. For more information, see Section 1.1, “Upgrade paths for long life releases”.
- To identify your current maintenance release, run the following command. You can also run this command after updating your environment to validate the update.
$ cat /etc/rhosp-release
Known issues that might block an update
There are currently no known issues.
Procedure
To prepare your RHOSP environment for the minor update, complete the procedures in the following sections:
1.1. Upgrade paths for long life releases
Familiarize yourself with the possible update and upgrade paths before you begin an update.
You can view your current RHOSP and RHEL versions in the /etc/rhosp-release and /etc/redhat-release files.
Table 1.1. Updates version path
Current version | Target version |
---|---|
RHOSP 17.0.x on RHEL 9.0 | RHOSP 17.0 latest on RHEL 9.0 latest |
RHOSP 17.1.x on RHEL 9.2 | RHOSP 17.1 latest on RHEL 9.2 latest |
Table 1.2. Upgrades version path
Current version | Target version |
---|---|
RHOSP 10 on RHEL 7.7 | RHOSP 13 latest on RHEL 7.9 latest |
RHOSP 13 on RHEL 7.9 | RHOSP 16.1 latest on RHEL 8.2 latest |
RHOSP 13 on RHEL 7.9 | RHOSP 16.2 latest on RHEL 8.4 latest |
RHOSP 16 on RHEL 8.4 | RHOSP 17.1 latest on RHEL 9.0 latest |
For more information, see Framework for upgrades (16.2 to 17.1).
1.2. Locking the environment to a Red Hat Enterprise Linux release
Red Hat OpenStack Platform (RHOSP) 17.1 is supported on Red Hat Enterprise Linux (RHEL) 9.2. Before you perform the update, lock the undercloud and overcloud repositories to the RHEL 9.2 release to avoid upgrading the operating system to a newer minor release.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
- Edit your overcloud subscription management environment file, which is the file that contains the RhsmVars parameter. The default name for this file is usually rhsm.yml.
- Check if your subscription management configuration includes the rhsm_release parameter. If the rhsm_release parameter is not present, add it and set it to 9.2:
parameter_defaults:
  RhsmVars:
    …
    rhsm_username: "myusername"
    rhsm_password: "p@55w0rd!"
    rhsm_org_id: "1234567"
    rhsm_pool_ids: "1a85f9223e3d5e43013e3d6e8ff506fd"
    rhsm_method: "portal"
    rhsm_release: "9.2"
- Save the overcloud subscription management environment file.
- Create a playbook that contains a task to lock the operating system version to RHEL 9.2 on all nodes:
$ cat > ~/set_release.yaml <<'EOF'
- hosts: all
  gather_facts: false
  tasks:
    - name: set release to 9.2
      command: subscription-manager release --set=9.2
      become: true
EOF
- Run the set_release.yaml playbook:
$ ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/set_release.yaml --limit <undercloud>,<Controller>,<Compute>
- Replace <stack> with the name of your stack.
- Use the --limit option to apply the content to all RHOSP nodes. Replace <undercloud>, <Controller>, and <Compute> with the Ansible groups in your environment that contain those nodes. Do not run this playbook against Ceph Storage nodes because you might have a different subscription for these nodes.
To manually lock a node to a version, log in to the node and run the subscription-manager release command:
$ sudo subscription-manager release --set=9.2
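To confirm that a node is locked to the intended release, you can display the currently configured release. This is a quick spot check; it assumes the node is registered with subscription management:
$ sudo subscription-manager release --show
If the lock is in place, the output reports Release: 9.2.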
1.3. Updating Red Hat OpenStack Platform repositories
Update your repositories to use Red Hat OpenStack Platform (RHOSP) 17.1.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
- Edit your overcloud subscription management environment file, which is the file that contains the RhsmVars parameter. The default name for this file is usually rhsm.yml.
- Check the rhsm_repos parameter in your subscription management configuration. If the rhsm_repos parameter is not using the RHOSP 17.1 repositories, change the repositories to the correct versions:
parameter_defaults:
  RhsmVars:
    rhsm_repos:
      - rhel-9-for-x86_64-baseos-eus-rpms
      - rhel-9-for-x86_64-appstream-eus-rpms
      - rhel-9-for-x86_64-highavailability-eus-rpms
      - openstack-17.1-for-rhel-9-x86_64-rpms
      - fast-datapath-for-rhel-9-x86_64-rpms
- Save the overcloud subscription management environment file.
- Create a playbook that contains a task to set the repositories to RHOSP 17.1 on all nodes:
$ cat > ~/update_rhosp_repos.yaml <<'EOF'
- hosts: all
  gather_facts: false
  tasks:
    - name: change osp repos
      command: subscription-manager repos --enable=openstack-17.1-for-rhel-9-x86_64-rpms
      become: true
EOF
- Run the update_rhosp_repos.yaml playbook:
$ ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/update_rhosp_repos.yaml --limit <undercloud>,<Controller>,<Compute>
- Replace <stack> with the name of your stack.
- Use the --limit option to apply the content to all RHOSP nodes. Replace <undercloud>, <Controller>, and <Compute> with the Ansible groups in your environment that contain those nodes. Do not run this playbook against Ceph Storage nodes because they usually use a different subscription.
- Create a playbook that contains a task to set the repositories to RHOSP 17.1 on all Ceph Storage nodes:
$ cat > ~/update_ceph_repos.yaml <<'EOF'
- hosts: all
  gather_facts: false
  tasks:
    - name: change ceph repos
      command: subscription-manager repos --enable=openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms
      become: true
EOF
- Run the update_ceph_repos.yaml playbook:
$ ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/update_ceph_repos.yaml --limit CephStorage
- Use the --limit option to apply the content to Ceph Storage nodes.
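To verify that a node uses the intended repositories after the playbooks run, you can list the enabled repositories on that node; a spot check with a standard subscription-manager subcommand:
$ sudo subscription-manager repos --list-enabled
The output should include openstack-17.1-for-rhel-9-x86_64-rpms on RHOSP nodes, or openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms on Ceph Storage nodes.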
1.4. Updating the container image preparation file
The container preparation file is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the undercloud and overcloud.
Before you update your environment, check the file to ensure that you obtain the correct image versions.
Procedure
- Edit the container preparation file. The default name for this file is usually containers-prepare-parameter.yaml.
- Ensure that the tag parameter is set to 17.1 for each rule set:
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      ...
      tag: '17.1'
    tag_from_label: '{version}-{release}'
Note: If you do not want to use a specific tag for the update, such as 17.1 or 17.1.1, remove the tag key-value pair and specify tag_from_label only. This uses the installed Red Hat OpenStack Platform version to determine the value for the tag to use as part of the update process.
- Save this file.
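To spot-check the tag values across all rule sets, you can search the file for the relevant keys; an illustrative one-liner:
$ grep -E "tag:|tag_from_label:" containers-prepare-parameter.yaml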
1.5. Disabling fencing in the overcloud
Before you update the overcloud, ensure that fencing is disabled.
If fencing is deployed in your environment, the overcloud might detect certain nodes as disabled during the Controller node update process and attempt fencing operations, which can cause unintended results.
If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Log in to a Controller node and run the Pacemaker command to disable fencing:
$ ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=false"
- Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the metalsmith list command.
- In the fencing.yaml environment file, set the EnableFencing parameter to false to ensure that fencing stays disabled during the update process.
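To confirm that fencing is disabled before you start the update, you can query the Pacemaker cluster property from a Controller node. This is a minimal check; on older pcs releases, the equivalent subcommand is pcs property show:
$ ssh tripleo-admin@<controller_ip> "sudo pcs property config stonith-enabled"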
Chapter 2. Updating the undercloud
You can use director to update the main packages on the undercloud node. To update the undercloud and its overcloud images to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version, complete the following procedures:
Prerequisites
- Before you can update the undercloud to the latest RHOSP 17.1 version, ensure that you complete all the update preparation procedures. For more information, see Chapter 1, Preparing for a minor update.
2.1. Validating RHOSP before the undercloud update
Before you update your Red Hat OpenStack Platform (RHOSP) environment, validate your undercloud with the tripleo-validations playbooks.
For more information about validations, see Using the validation framework in Installing and managing Red Hat OpenStack Platform with director.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Install the packages for validation:
$ sudo dnf -y update openstack-tripleo-validations python3-validations-libs validations-common
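Optional: Before you run the validation, you can list the validations that belong to the pre-update group; a quick check with the same validation CLI:
$ validation list --group pre-update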
Run the validation:
$ validation run -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml --group pre-update
- Replace <stack> with the name of the stack.
Verification
- To view the results of the validation report, see Viewing validation history in Installing and managing Red Hat OpenStack Platform with director.
Ignore FAILED validations that report No host matched as the only error returned. This error means that you have no hosts that match the validation host group, which might be expected. A FAILED validation does not prevent you from using your updated RHOSP environment. However, a FAILED validation can indicate an issue with your environment.
2.2. Performing a minor update of a containerized undercloud
Director provides commands to update the main packages on the undercloud node. Use director to perform a minor update within the current version of your RHOSP environment.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Update the director main packages with the dnf update command:
$ sudo dnf update -y python3-tripleoclient ansible-*
Update the undercloud environment:
$ openstack undercloud upgrade
- Wait until the undercloud update process completes.
Reboot the undercloud to update the operating system’s kernel and other system packages:
$ sudo reboot
- Wait until the node boots.
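After the node boots, you can confirm that the undercloud reports the latest maintenance release, as noted in Chapter 1:
$ cat /etc/rhosp-release
The reported version should match the target RHOSP 17.1 maintenance release.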
2.3. Updating the overcloud images
You must replace your current overcloud images with new versions to ensure that director can introspect and provision your nodes with the latest version of the RHOSP software.
Prerequisites
- You have updated the undercloud node to the latest version. For more information, see Section 2.2, “Performing a minor update of a containerized undercloud”.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Remove any existing images from the images directory in the stack user's home directory (/home/stack/images):
$ rm -rf ~/images/*
Extract the archives:
cd ~/images
for i in /usr/share/rhosp-director-images/ironic-python-agent-latest-17.1.tar /usr/share/rhosp-director-images/overcloud-hardened-uefi-full-latest-17.1.tar; do tar -xvf $i; done
cd ~
Import the latest images into the director:
$ openstack overcloud image upload --update-existing --image-path /home/stack/images/
Configure your nodes to use the new images:
$ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
Verify the existence of the new images:
$ ls -l /var/lib/ironic/httpboot /var/lib/ironic/images
- When you deploy overcloud nodes, ensure that the overcloud image version corresponds to the respective heat template version. For example, use only the RHOSP 17.1 images with the RHOSP 17.1 heat templates.
- If you deployed a connected environment that uses the Red Hat Customer Portal or Red Hat Satellite Server, the overcloud image and package repository versions might be out of sync. To ensure that the overcloud image and package repository versions match, you can use the virt-customize tool. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize.
- The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future.
Chapter 3. Updating the overcloud
After you update the undercloud, you can update the overcloud by running the overcloud and container image preparation commands, and updating your nodes. The control plane API is fully available during a minor update.
Prerequisites
- You have updated the undercloud node to the latest version. For more information, see Chapter 2, Updating the undercloud.
- If you use a local set of core templates in your stack user home directory, ensure that you update the templates and use the recommended workflow in Understanding heat templates in the Installing and managing Red Hat OpenStack Platform with director guide. You must update the local copy before you update the overcloud.
- Add the GlanceApiInternal service to your Controller role, as shown in the sketch after this list:
OS::TripleO::Services::GlanceApiInternal
This is the service for the internal instance of the Image service (glance) API, which provides location data to administrators and other services that require it, such as the Block Storage service (cinder) and the Compute service (nova).
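The following sketch shows where the entry might land in a custom roles_data.yaml file; the surrounding services are abbreviated and depend on your role definition:
- name: Controller
  ...
  ServicesDefault:
    ...
    - OS::TripleO::Services::GlanceApi
    - OS::TripleO::Services::GlanceApiInternal
    ...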
Procedure
To update the overcloud, you must complete the following procedures:
- Section 3.1, “Running the overcloud update preparation”
- Section 3.2, “Validating your SSH key size”
- Section 3.3, “Running the container image preparation”
- Section 3.4, “Optional: Updating the ovn-controller container on all overcloud servers”
- Section 3.5, “Updating the container image names of Pacemaker-controlled services”
- Section 3.6, “Updating all Controller nodes”
- Section 3.7, “Updating all Compute nodes”
- Section 3.8, “Updating all HCI Compute nodes”
- Section 3.9, “Updating all DistributedComputeHCI nodes”
- Section 3.10, “Updating all Ceph Storage nodes”
- Section 3.11, “Updating the Red Hat Ceph Storage cluster”
- Section 3.12, “Performing online database updates”
- Section 3.13, “Re-enabling fencing in the overcloud”
3.1. Running the overcloud update preparation
To prepare the overcloud for the update process, you must run the openstack overcloud update prepare command, which updates the overcloud plan to Red Hat OpenStack Platform (RHOSP) 17.1 and prepares the nodes for the update.
Prerequisites
- If you use a Ceph subscription and have configured director to use the overcloud-minimal image for Ceph Storage nodes, you must ensure that in the roles_data.yaml role definition file, the rhsm_enforce parameter is set to False.
- If you rendered custom NIC templates, you must regenerate the templates with the updated version of the openstack-tripleo-heat-templates collection to avoid incompatibility with the overcloud version. For more information about custom NIC templates, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide.
For distributed compute node (edge) architectures with OVN deployments, you must complete this procedure for each stack with Compute, DistributedCompute, or DistributedComputeHCI nodes before proceeding with section Updating the ovn-controller container on all overcloud servers.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the update preparation command:
$ openstack overcloud update prepare \
    --templates \
    --stack <stack_name> \
    -r <roles_data_file> \
    -n <network_data_file> \
    -e <environment_file> \
    -e <environment_file> \
    ...
Include the following options relevant to your environment:
- If the name of your overcloud stack is different from the default name overcloud, include the --stack option in the update preparation command and replace <stack_name> with the name of your stack.
- If you use your own custom roles, use the -r option to include the custom roles (<roles_data_file>) file.
- If you use custom networks, use the -n option to include your composable network (<network_data_file>) file.
- If you deploy a high availability cluster, include the --ntp-server option in the update preparation command, or include the NtpServer parameter and value in your environment file.
- Include any custom configuration environment files with the -e option.
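For example, a representative invocation for a stack named overcloud with custom roles, networks, and environment files might look like the following; the file paths are illustrative:
$ openstack overcloud update prepare \
    --templates \
    --stack overcloud \
    -r /home/stack/templates/roles_data.yaml \
    -n /home/stack/templates/network_data.yaml \
    -e /home/stack/containers-prepare-parameter.yaml \
    -e /home/stack/templates/rhsm.yml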
- Wait until the update preparation process completes.
3.2. Validating your SSH key size
Starting with Red Hat Enterprise Linux (RHEL) 9.1, a minimum SSH key size of 2048 bits is required. If your current SSH key on Red Hat OpenStack Platform (RHOSP) director is less than 2048 bits, you can lose access to the overcloud. You must verify that your SSH key meets the required bit size.
Procedure
Validate your SSH key size:
ssh-keygen -l -f ~/.ssh/id_rsa.pub
Example output:
1024 SHA256:Xqz0Xz0/aJua6B3qRD7VsLr6n/V3zhmnGSkcFR6FlJw stack@director.example.local (RSA)
If your SSH key is less than 2048 bits, use the ssh_key_rotation.yaml Ansible playbook to replace the SSH key before continuing:
ANSIBLE_SSH_COMMON_ARGS="-o RequiredRSASize=1024" ansible-playbook \
  -i ~/config-download/<stack>/tripleo-ansible-inventory.yaml \
  /usr/share/ansible/tripleo-playbooks/ssh_key_rotation.yaml \
  --extra-vars "keep_old_key_authorized_keys=<true|false> \
  backup_folder_path=</path/to/backup_folder>"
- Replace <stack> with the name of your overcloud stack.
- Replace <true|false> based on your backup needs. If you do not include this variable, the default value is true.
- Replace </path/to/backup_folder> with a backup path. If you do not include this variable, the default value is /home/{{ ansible_user_id }}/backup_keys/{{ ansible_date_time.epoch }}.
3.3. Running the container image preparation
Before you can update the overcloud, you must prepare all container image configurations that are required for your environment and pull the latest RHOSP 17.1 container images to your undercloud.
To complete the container image preparation, you must run the openstack overcloud external-update run command against tasks that have the container_image_prepare tag.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the openstack overcloud external-update run command against tasks that have the container_image_prepare tag:
$ openstack overcloud external-update run --stack <stack_name> --tags container_image_prepare
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
3.4. Optional: Updating the ovn-controller container on all overcloud servers
If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller container to the latest RHOSP 17.1 version. The update occurs on every overcloud server that runs the ovn-controller container.
- The following procedure updates the ovn-controller containers on servers that are assigned the Compute role before it updates the ovn-northd service on servers that are assigned the Controller role.
- For distributed compute node (edge) architectures, you must complete this procedure for each stack with Compute, DistributedCompute, or DistributedComputeHCI nodes before proceeding with section Updating all Controller nodes.
If you accidentally updated the ovn-northd service before following this procedure, you might not be able to connect to your virtual machines or create new virtual machines or virtual networks. The following procedure restores connectivity.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the openstack overcloud external-update run command against the tasks that have the ovn tag:
$ openstack overcloud external-update run --stack <stack_name> --tags ovn
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
- Wait until the ovn-controller container update completes.
3.5. Updating the container image names of Pacemaker-controlled services
If you update your system from Red Hat OpenStack Platform (RHOSP) 17.0 to RHOSP 17.1, you must update the container image names of the Pacemaker-controlled services. You must perform this update to migrate to the new image naming schema of the Pacemaker-controlled services.
If you update your system from a version of RHOSP 17.1 to the latest version of RHOSP 17.1, you do not need to update the container image names of the Pacemaker-controlled services.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the openstack overcloud external-update run command with the ha_image_update tag:
$ openstack overcloud external-update run --stack <stack_name> --tags ha_image_update
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
3.6. Updating all Controller nodes
Update all the Controller nodes to the latest RHOSP 17.1 version. Run the openstack overcloud update run command and include the --limit Controller option to restrict operations to the Controller nodes only. The control plane API is fully available during the minor update.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --stack <stack_name> --limit Controller
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
- Wait until the Controller node update completes.
3.7. Updating all Compute nodes
Update all Compute nodes to the latest RHOSP 17.1 version. To update Compute nodes, run the openstack overcloud update run command and include the --limit Compute option to restrict operations to the Compute nodes only.
- Parallelization considerations
When you update a large number of Compute nodes, to improve performance, you can run multiple update tasks in the background and configure each task to update a separate group of 20 nodes. For example, if you have 80 Compute nodes in your deployment, you can run the following commands to update the Compute nodes in parallel:
$ openstack overcloud update run -y --limit 'Compute[0:19]' > update-compute-0-19.log 2>&1 &
$ openstack overcloud update run -y --limit 'Compute[20:39]' > update-compute-20-39.log 2>&1 &
$ openstack overcloud update run -y --limit 'Compute[40:59]' > update-compute-40-59.log 2>&1 &
$ openstack overcloud update run -y --limit 'Compute[60:79]' > update-compute-60-79.log 2>&1 &
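To keep track of the background update tasks, you can use standard shell job control; a small sketch:
$ jobs                              # list the background update batches
$ tail -f update-compute-0-19.log   # follow the progress of one batch
$ wait                              # block until all batches complete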
This method of partitioning the node space is random, and you do not have control over which nodes are updated. The selection of nodes is based on the inventory file that you generate when you run the tripleo-ansible-inventory command.
To update specific Compute nodes, list the nodes that you want to update in a batch separated by a comma:
$ openstack overcloud update run --limit <Compute0>,<Compute1>,<Compute2>,<Compute3>
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --stack <stack_name> --limit Compute
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
- Wait until the Compute node update completes.
3.8. Updating all HCI Compute nodes
Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest RHOSP 17.1 version.
Prerequisites
- On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:
$ sudo cephadm shell -- ceph status
If the Ceph cluster is healthy, it returns a status of HEALTH_OK.
If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --stack <stack_name> --limit ComputeHCI
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
- Wait until the node update completes.
3.9. Updating all DistributedComputeHCI nodes
Update roles specific to distributed compute node architecture. When you update distributed compute nodes, update DistributedComputeHCI nodes first, and then update DistributedComputeHCIScaleOut nodes.
Prerequisites
- On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:
$ sudo cephadm shell -- ceph status
If the Ceph cluster is healthy, it returns a status of HEALTH_OK.
If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --stack <stack_name> --limit DistributedComputeHCI
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
- Wait until the DistributedComputeHCI node update completes.
- Use the same process to update DistributedComputeHCIScaleOut nodes.
3.10. Updating all Ceph Storage nodes
Update the Red Hat Ceph Storage nodes to the latest RHOSP 17.1 version.
RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations.
Prerequisites
- On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:
$ sudo cephadm shell -- ceph status
If the Ceph cluster is healthy, it returns a status of HEALTH_OK.
If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the update command:
$ openstack overcloud update run --stack <stack_name> --limit CephStorage
- If the name of your overcloud stack is different from the default stack name overcloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack.
- Wait until the node update completes.
3.11. Updating the Red Hat Ceph Storage cluster
Update the director-deployed Red Hat Ceph Storage cluster to the latest version that is compatible with Red Hat OpenStack Platform (RHOSP) 17.1 by using the cephadm command.
Update your Red Hat Ceph Storage cluster if one of the following scenarios applies to your environment:
- If you upgraded from RHOSP 16.2 to RHOSP 17.1, you run Red Hat Ceph Storage 5, and you are updating to a newer version of Red Hat Ceph Storage 5.
- If you newly deployed RHOSP 17.1, you run Red Hat Ceph Storage 6, and you are updating to a newer version of Red Hat Ceph Storage 6.
Prerequisites
- Complete the container image preparation in Section 3.3, “Running the container image preparation”.
Procedure
- Log in to a Controller node.
Check the health of the cluster:
$ sudo cephadm shell -- ceph health
Note: If the Ceph Storage cluster is healthy, the command returns a result of HEALTH_OK. If the command returns a different result, review the status of the cluster and contact Red Hat support before continuing the update. For more information, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 5 Upgrade Guide or Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 6 Upgrade Guide.
Optional: Check which images should be included in the Ceph Storage cluster update:
$ openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print $2}'
Update the cluster to the latest Red Hat Ceph Storage version:
$ sudo cephadm shell -- ceph orch upgrade start --image <image_name>:<version>
- Replace <image_name> with the name of the Ceph Storage cluster image.
- Replace <version> with the target version to which you are updating the Ceph Storage cluster.
Wait until the Ceph Storage container update completes. To monitor the status of the update, run the following command:
$ sudo cephadm shell -- ceph orch upgrade status
3.12. Performing online database updates
Some overcloud components require an online update or migration of their database tables. To perform online database updates, run the openstack overcloud external-update run command against tasks that have the online_upgrade tag.
Online database updates apply to the following components:
- OpenStack Block Storage (cinder)
- OpenStack Compute (nova)
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the openstack overcloud external-update run command against tasks that use the online_upgrade tag:
$ openstack overcloud external-update run --stack <stack_name> --tags online_upgrade
3.13. Re-enabling fencing in the overcloud
Before you updated the overcloud, you disabled fencing in Disabling fencing in the overcloud. After you update the overcloud, re-enable fencing to protect your data if a node fails.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Log in to a Controller node and run the Pacemaker command to re-enable fencing:
$ ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=true"
- Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the openstack server list command.
- In the fencing.yaml environment file, set the EnableFencing parameter to true.
Chapter 4. Rebooting the overcloud
After you perform a minor Red Hat OpenStack Platform (RHOSP) update to the latest 17.1 version, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates provide performance and security benefits. Plan downtime to perform the reboot procedures.
Use the following guidance to understand how to reboot different node types:
- If you reboot all nodes in one role, reboot each node individually. If you reboot all nodes in a role simultaneously, service downtime can occur during the reboot operation.
Complete the reboot procedures on the nodes in the order shown in this chapter.
4.1. Rebooting Controller and composable nodes
Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes.
Procedure
- Log in to the node that you want to reboot.
Optional: If the node uses Pacemaker resources, stop the cluster:
[tripleo-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
Reboot the node:
[tripleo-admin@overcloud-controller-0 ~]$ sudo reboot
- Wait until the node boots.
Verification
Verify that the services are enabled.
If the node uses Pacemaker services, check that the node has rejoined the cluster:
[tripleo-admin@overcloud-controller-0 ~]$ sudo pcs status
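If a node that uses Pacemaker resources does not rejoin the cluster automatically after the reboot, you can start the cluster services on that node manually; a recovery step that assumes the cluster was stopped before the reboot:
[tripleo-admin@overcloud-controller-0 ~]$ sudo pcs cluster start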
If the node uses Systemd services, check that all services are enabled:
[tripleo-admin@overcloud-controller-0 ~]$ sudo systemctl status
If the node uses containerized services, check that all containers on the node are active:
[tripleo-admin@overcloud-controller-0 ~]$ sudo podman ps
4.2. Rebooting a Ceph Storage (OSD) cluster
Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
Prerequisites
- On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:
$ sudo cephadm shell -- ceph status
If the Ceph cluster is healthy, it returns a status of HEALTH_OK.
If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
- Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily:
$ sudo cephadm shell -- ceph osd set noout
$ sudo cephadm shell -- ceph osd set norebalance
Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring.
- Select the first Ceph Storage node that you want to reboot and log in to the node.
Reboot the node:
$ sudo reboot
- Wait until the node boots.
Log in to the node and check the Ceph cluster status:
$ sudo cephadm shell -- ceph status
- Check that the pgmap reports all pgs as normal (active+clean).
- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
- When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service and enable Ceph cluster rebalancing:
$ sudo cephadm shell -- ceph osd unset noout
$ sudo cephadm shell -- ceph osd unset norebalance
Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring
- Perform a final status check to verify that the cluster reports HEALTH_OK:
$ sudo cephadm shell -- ceph status
4.3. Rebooting Compute nodes
To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, the Migrating instances workflow outlines the steps you must complete to migrate instances from the Compute node that you want to reboot.
Migrating instances workflow
- Decide whether to migrate instances to another Compute node before rebooting the node.
- Select and disable the Compute node that you want to reboot so that it does not provision new instances.
- Migrate the instances to another Compute node.
- Reboot the empty Compute node.
- Enable the empty Compute node.
Prerequisites
Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting.
Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute service for instance creation.
Note: If you have a Multi-RHEL environment, and you want to migrate virtual machines from a Compute node that is running RHEL 9.2 to a Compute node that is running RHEL 8.4, only cold migration is supported. For more information about cold migration, see Cold migrating an instance in Configuring the Compute service for instance creation.
If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots:
NovaResumeGuestsStateOnHostBoot
- Determines whether to return instances to the same state on the Compute node after reboot. When set to False, the instances remain down and you must start them manually. The default value is False.
NovaResumeGuestsShutdownTimeout
- Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0. The default value is 300.
For more information about overcloud parameters and their usage, see Overcloud parameters.
Procedure
- Log in to the undercloud as the stack user.
- Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot:
(undercloud)$ source ~/overcloudrc
(overcloud)$ openstack compute service list
Identify the host name of the Compute node that you want to reboot.
Disable the Compute service on the Compute node that you want to reboot:
(overcloud)$ openstack compute service list
(overcloud)$ openstack compute service set <hostname> nova-compute --disable
- Replace <hostname> with the host name of your Compute node.
List all instances on the Compute node:
(overcloud)$ openstack server list --host <hostname> --all-projects
Optional: To migrate the instances to another Compute node, complete the following steps:
If you decide to migrate the instances to another Compute node, use one of the following commands:
To migrate the instance to a different host, run the following command:
(overcloud) $ openstack server migrate <instance_id> --live <target_host> --wait
- Replace <instance_id> with your instance ID.
- Replace <target_host> with the host that you are migrating the instance to.
Let nova-scheduler automatically select the target host:
(overcloud) $ nova live-migration <instance_id>
Live migrate all instances at once:
$ nova host-evacuate-live <hostname>
Note: The nova command might cause some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm that the migration was successful:
(overcloud) $ openstack server list --host <hostname> --all-projects
- Continue to migrate instances until none remain on the Compute node.
Log in to the Compute node and reboot the node:
[tripleo-admin@overcloud-compute-0 ~]$ sudo reboot
- Wait until the node boots.
Re-enable the Compute node:
$ source ~/overcloudrc
(overcloud) $ openstack compute service set <hostname> nova-compute --enable
Check that the Compute node is enabled:
(overcloud) $ openstack compute service list
4.4. Validating RHOSP after the overcloud update
After you update your Red Hat OpenStack Platform (RHOSP) environment, validate your overcloud with the tripleo-validations playbooks.
For more information about validations, see Using the validation framework in Installing and managing Red Hat OpenStack Platform with director.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the validation:
$ validation run -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml --group post-update
- Replace <stack> with the name of the stack.
Verification
- To view the results of the validation report, see Viewing validation history in Installing and managing Red Hat OpenStack Platform with director.
Ignore FAILED validations that report No host matched as the only error returned. This error means that you have no hosts that match the validation host group, which might be expected. A FAILED validation does not prevent you from using your updated RHOSP environment. However, a FAILED validation can indicate an issue with your environment.