
Keeping Red Hat OpenStack Platform Updated

Red Hat OpenStack Platform 13

Performing minor updates of Red Hat OpenStack Platform

OpenStack Documentation Team

Abstract

This document provides the procedure to update your Red Hat OpenStack Platform 13 (Queens) environment. This document assumes you will update a containerized OpenStack Platform deployment installed on Red Hat Enterprise Linux 7.

Chapter 1. Introduction

This document provides a workflow to help keep your Red Hat OpenStack Platform 13 environment updated with the latest packages and containers.

This guide provides an upgrade path through the following versions:

Old Overcloud Version            New Overcloud Version

Red Hat OpenStack Platform 13    Red Hat OpenStack Platform 13.z

1.1. High level workflow

The following table provides an outline of the steps required for the update process:

Step                              Description

Obtaining new container images    Create a new environment file containing the latest container images for OpenStack Platform 13 services.

Updating the undercloud           Update the undercloud to the latest OpenStack Platform 13.z version.

Updating the overcloud            Update the overcloud to the latest OpenStack Platform 13.z version.

Updating the Ceph Storage nodes   Upgrade all Ceph Storage 3 services.

Finalizing the update             Run the convergence command to refresh your overcloud stack.

1.2. Troubleshooting

  • If the update process takes longer than expected, then it might time out with the error: socket is already closed. This can arise because the undercloud’s authentication token is set to expire after a set duration. For more information, see Recommendations for Large Deployments.
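    As a hedged illustration only (these commands are not part of this guide's procedure; verify them against your environment), the undercloud token lifetime is governed by keystone's [token] expiration setting, which you can inspect and raise before a long update:

     $ sudo crudini --get /etc/keystone/keystone.conf token expiration
     $ sudo crudini --set /etc/keystone/keystone.conf token expiration 14400
     $ sudo systemctl restart httpd

    On a Red Hat OpenStack Platform 13 undercloud, keystone runs under httpd, so restarting httpd applies the change.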

Chapter 2. Updating your container image source

This chapter provides information on how to update your registry source with new overcloud container images for Red Hat OpenStack Platform.

2.1. Registry Methods

Red Hat OpenStack Platform supports the following registry types:

Remote Registry
The overcloud pulls container images directly from registry.access.redhat.com. This method is the easiest for generating the initial configuration. However, each overcloud node pulls each image directly from the Red Hat Container Catalog, which can cause network congestion and slower deployment. In addition, all overcloud nodes require internet access to the Red Hat Container Catalog.
Local Registry
The undercloud uses the docker-distribution service to act as a registry. This allows the director to synchronize the images from registry.access.redhat.com and push them to the docker-distribution registry. When creating the overcloud, the overcloud pulls the container images from the undercloud’s docker-distribution registry. This method allows you to store a registry internally, which can speed up the deployment and decrease network congestion. However, the undercloud only acts as a basic registry and provides limited life cycle management for container images.
Note

The docker-distribution service acts separately from docker. docker is used to pull and push images to the docker-distribution registry and does not serve the images to the overcloud. The overcloud pulls the images from the docker-distribution registry.

Satellite Server
Manage the complete application life cycle of your container images and publish them through a Red Hat Satellite 6 server. The overcloud pulls the images from the Satellite server. This method provides an enterprise grade solution to store, manage, and deploy Red Hat OpenStack Platform containers.

Select a method from the list and continue configuring your registry details.

Note

When building for a multi-architecture cloud, the local registry option is not supported.

2.2. Container image preparation command usage

This section provides an overview of how to use the openstack overcloud container image prepare command, including conceptual information on the command’s various options.

Generating a Container Image Environment File for the Overcloud

One of the main uses of the openstack overcloud container image prepare command is to create an environment file that contains a list of images the overcloud uses. You include this file with your overcloud deployment commands, such as openstack overcloud deploy. The openstack overcloud container image prepare command uses the following options for this function:

--output-env-file
Defines the resulting environment file name.

The following snippet is an example of this file’s contents:

parameter_defaults:
  DockerAodhApiImage: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest
  DockerAodhConfigImage: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest
...

Generating a Container Image List for Import Methods

If you aim to import the OpenStack Platform container images to a different registry source, you can generate a list of images. The syntax of this list is primarily used to import container images to the container registry on the undercloud, but you can modify the format of the list to suit other import methods, such as Red Hat Satellite 6.

The openstack overcloud container image prepare command uses the following options for this function:

--output-images-file
Defines the resulting file name for the import list.

The following is an example of this file’s contents:

container_images:
- imagename: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest
- imagename: registry.access.redhat.com/rhosp13/openstack-aodh-evaluator:latest
...

Setting the Namespace for Container Images

Both the --output-env-file and --output-images-file options require a namespace to generate the resulting image locations. The openstack overcloud container image prepare command uses the following options to set the source location of the container images to pull:

--namespace
Defines the namespace for the container images. This is usually a hostname or IP address with a directory.
--prefix
Defines the prefix to add before the image names.

As a result, the director generates the image names using the following format:

  • [NAMESPACE]/[PREFIX][IMAGE NAME]
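For example, with --namespace=registry.access.redhat.com/rhosp13 and --prefix=openstack- (the values used elsewhere in this guide), the name of the keystone image resolves to:

  • registry.access.redhat.com/rhosp13/openstack-keystone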

Setting Container Image Tags

The openstack overcloud container image prepare command uses the latest tag for each container image by default. However, you can select a specific tag for an image version using one of the following options:

--tag-from-label
Use the value of the specified container image labels to discover the versioned tag for every image.
--tag
Sets the specific tag for all images. All OpenStack Platform container images use the same tag to provide version synchronicity. When used in combination with --tag-from-label, the versioned tag is discovered starting from this tag.
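As a hedged illustration (the image and label values are examples, not taken from this guide), you can view the labels that --tag-from-label reads by inspecting an image with skopeo:

  $ skopeo inspect docker://registry.access.redhat.com/rhosp13/openstack-keystone:latest | jq '.Labels.version, .Labels.release'
  "13.0"
  "98"

With --tag-from-label {version}-{release}, these labels resolve to the 13.0-98 tag.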

2.3. Container images for additional services

The director only prepares container images for core OpenStack Platform services. Some additional features use services that require additional container images. You enable these services with environment files. The openstack overcloud container image prepare command uses the following option to include environment files and their respective container images:

-e
Include environment files to enable additional container images.

The following table provides a sample list of additional services that use container images and their respective environment file locations within the /usr/share/openstack-tripleo-heat-templates directory.

Service                                          Environment File

Ceph Storage                                     environments/ceph-ansible/ceph-ansible.yaml
Collectd                                         environments/services-docker/collectd.yaml
Congress                                         environments/services-docker/congress.yaml
Fluentd                                          environments/services-docker/fluentd.yaml
OpenStack Bare Metal (ironic)                    environments/services-docker/ironic.yaml
OpenStack Data Processing (sahara)               environments/services-docker/sahara.yaml
OpenStack EC2-API                                environments/services-docker/ec2-api.yaml
OpenStack Key Manager (barbican)                 environments/services-docker/barbican.yaml
OpenStack Load Balancing-as-a-Service (octavia)  environments/services-docker/octavia.yaml
OpenStack Shared File System Storage (manila)    environments/manila-{backend-name}-config.yaml
                                                 NOTE: See OpenStack Shared File System (manila) for more information.
Open Virtual Network (OVN)                       environments/services-docker/neutron-ovn-dvr-ha.yaml
Sensu                                            environments/services-docker/sensu-client.yaml

The next few sections provide examples of including additional services.

Ceph Storage

If deploying a Red Hat Ceph Storage cluster with your overcloud, you need to include the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file. This file enables the composable containerized Ceph services in your overcloud; the director needs to know these services are enabled in order to prepare their images.

In addition to this environment file, you also need to define the Ceph Storage container location, which is different from the OpenStack Platform services. Use the --set option to set the following parameters specific to Ceph Storage:

--set ceph_namespace
Defines the namespace for the Ceph Storage container image. This functions similarly to the --namespace option.
--set ceph_image
Defines the name of the Ceph Storage container image. Usually, this is rhceph-3-rhel7.
--set ceph_tag
Defines the tag to use for the Ceph Storage container image. This functions similarly to the --tag option. When --tag-from-label is specified, the versioned tag is discovered starting from this tag.

The following snippet is an example that includes Ceph Storage in your container image files:

$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  --set ceph_namespace=registry.access.redhat.com/rhceph \
  --set ceph_image=rhceph-3-rhel7 \
  --tag-from-label {version}-{release} \
  ...

OpenStack Bare Metal (ironic)

If deploying OpenStack Bare Metal (ironic) in your overcloud, you need to include the /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml environment file so the director can prepare the images. The following snippet is an example of how to include this environment file:

$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml \
  ...

OpenStack Data Processing (sahara)

If deploying OpenStack Data Processing (sahara) in your overcloud, you need to include the /usr/share/openstack-tripleo-heat-templates/environments/services-docker/sahara.yaml environment file so the director can prepare the images. The following snippet is an example of how to include this environment file:

$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/sahara.yaml \
  ...

OpenStack Neutron SR-IOV

If deploying OpenStack Neutron SR-IOV in your overcloud, include the /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-sriov.yaml environment file so the director can prepare the images. The default Controller and Compute roles do not support the SR-IOV service, so you must also use the -r option to include a custom roles file that contains SR-IOV services. The following snippet is an example of how to include this environment file:

$ openstack overcloud container image prepare \
  ...
  -r ~/custom_roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-sriov.yaml \
  ...

OpenStack Load Balancing-as-a-Service (octavia)

If deploying OpenStack Load Balancing-as-a-Service in your overcloud, include the /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml environment file so the director can prepare the images. The following snippet is an example of how to include this environment file:

$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  ...

OpenStack Shared File System (manila)

The Shared File System service (manila) supports multiple back ends, each enabled with an environment file named in the format manila-{backend-name}-config.yaml. You can prepare the Shared File System service containers by including any of the following environment files:

  environments/manila-isilon-config.yaml
  environments/manila-netapp-config.yaml
  environments/manila-vmax-config.yaml
  environments/manila-cephfsnative-config.yaml
  environments/manila-cephfsganesha-config.yaml
  environments/manila-unity-config.yaml
  environments/manila-vnx-config.yaml


2.4. Using the Red Hat registry as a remote registry source

Red Hat hosts the overcloud container images on registry.access.redhat.com. Pulling the images from a remote registry is the simplest method because the registry is already configured and all you require is the URL and namespace of the image that you want to pull. However, during overcloud creation, the overcloud nodes all pull images from the remote repository, which can congest your external connection. As a result, this method is not recommended for production environments. For production environments, use one of the following methods instead:

  • Set up a local registry
  • Host the images on Red Hat Satellite 6

Procedure

  1. To pull the images directly from registry.access.redhat.com in your overcloud deployment, an environment file is required to specify the image parameters. Run the following command to generate the container image environment file:

    (undercloud) $ sudo openstack overcloud container image prepare \
      --namespace=registry.access.redhat.com/rhosp13 \
      --prefix=openstack- \
      --tag-from-label {version}-{release} \
      --output-env-file=/home/stack/templates/overcloud_images.yaml
    • Use the -e option to include any environment files for optional services.
    • Use the -r option to include a custom roles file.
    • If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.
  2. Modify the overcloud_images.yaml file and include the following parameters to authenticate with registry.access.redhat.com during deployment:

    ContainerImageRegistryLogin: true
    ContainerImageRegistryCredentials:
      registry.access.redhat.com:
        <USERNAME>: <PASSWORD>
    • Replace <USERNAME> and <PASSWORD> with your credentials for registry.access.redhat.com.

      The overcloud_images.yaml file contains the image locations on the remote registry. Include this file with your deployment.

      Note

      Before you run the openstack overcloud deploy command, you must log in to the remote registry:

      (undercloud) $ sudo docker login registry.access.redhat.com

The registry configuration is ready.

2.5. Using the undercloud as a local registry

You can configure a local registry on the undercloud to store overcloud container images. This method involves the following:

  • The director pulls each image from registry.access.redhat.com.
  • The director pushes each image to the docker-distribution registry running on the undercloud.
  • The director creates the overcloud.
  • During the overcloud creation, the nodes pull the relevant images from the undercloud’s docker-distribution registry.

This keeps network traffic for container images within your internal network, which avoids congesting your external network connection and can speed up the deployment process.

Procedure

  1. Find the address of the local undercloud registry. The address will use the following pattern:

    <REGISTRY IP ADDRESS>:8787

    Use the IP address of your undercloud, which you previously set with the local_ip parameter in your undercloud.conf file. For the commands below, the address is assumed to be 192.168.24.1:8787.
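    A quick hedged check of both the address and the registry service (the undercloud.conf path assumes the default location):

     (undercloud) $ grep -E '^local_ip' ~/undercloud.conf
     local_ip = 192.168.24.1/24
     (undercloud) $ sudo systemctl status docker-distribution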

  2. Create a template to upload the images to the local registry, and the environment file to refer to those images:

    (undercloud) $ openstack overcloud container image prepare \
      --namespace=registry.access.redhat.com/rhosp13 \
      --push-destination=192.168.24.1:8787 \
      --prefix=openstack- \
      --tag-from-label {version}-{release} \
      --output-env-file=/home/stack/templates/overcloud_images.yaml \
      --output-images-file /home/stack/local_registry_images.yaml
    • Use the -e option to include any environment files for optional services.
    • Use the -r option to include a custom roles file.
    • If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.
  3. This creates two files:

    • local_registry_images.yaml, which contains container image information from the remote source. Use this file to pull the images from the Red Hat Container Registry (registry.access.redhat.com) to the undercloud.
    • overcloud_images.yaml, which contains the eventual image locations on the undercloud. You include this file with your deployment.

      Check that both files exist.

  4. Modify the local_registry_images.yaml file and include the following parameters to authenticate with registry.access.redhat.com:

    ContainerImageRegistryLogin: true
    ContainerImageRegistryCredentials:
      registry.access.redhat.com:
        <USERNAME>: <PASSWORD>
    • Replace <USERNAME> and <PASSWORD> with your credentials for registry.access.redhat.com.
  5. Log in to registry.access.redhat.com and pull the container images from the remote registry to the undercloud.

    (undercloud) $ sudo docker login registry.access.redhat.com
    (undercloud) $ sudo openstack overcloud container image upload \
      --config-file  /home/stack/local_registry_images.yaml \
      --verbose

    Pulling the required images might take some time depending on the speed of your network and your undercloud disk.

    Note

    The container images consume approximately 10 GB of disk space.
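    To confirm that the undercloud has enough free space before pulling, a hedged check (the path assumes the default docker-distribution storage root):

     (undercloud) $ df -h /var/lib/registry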

  6. The images are now stored on the undercloud’s docker-distribution registry. To view the list of images on the undercloud’s docker-distribution registry, run the following command:

    (undercloud) $ curl http://192.168.24.1:8787/v2/_catalog | jq .repositories[]

    To view a list of tags for a specific image, use curl:

    (undercloud) $ curl -s http://192.168.24.1:8787/v2/rhosp13/openstack-keystone/tags/list | jq .tags

    To verify a tagged image, use the skopeo command:

    (undercloud) $ skopeo inspect --tls-verify=false docker://192.168.24.1:8787/rhosp13/openstack-keystone:13.0-44

The registry configuration is ready.

2.6. Using a Satellite server as a registry

Red Hat Satellite 6 offers registry synchronization capabilities. This provides a method to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite server also acts as a registry for other container-enabled systems to use. For more detailed information on managing container images, see "Managing Container Images" in the Red Hat Satellite 6 Content Management Guide.

The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME. Substitute this organization for your own Satellite 6 organization.

Procedure

  1. Create a template to pull images to the local registry:

    $ source ~/stackrc
    (undercloud) $ openstack overcloud container image prepare \
      --namespace=rhosp13 \
      --prefix=openstack- \
      --output-images-file /home/stack/satellite_images
    • Use the -e option to include any environment files for optional services.
    • Use the -r option to include a custom roles file.
    • If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.
    Note

    This version of the openstack overcloud container image prepare command targets the registry at registry.access.redhat.com to generate an image list. It uses different values than the openstack overcloud container image prepare command used in a later step.

  2. This creates a file called satellite_images with your container image information. You will use this file to synchronize container images to your Satellite 6 server.
  3. Remove the YAML-specific information from the satellite_images file and convert it into a flat file containing only the list of images. The following awk command accomplishes this:

    (undercloud) $ awk -F ':' '{if (NR!=1) {gsub("[[:space:]]", ""); print $2}}' ~/satellite_images > ~/satellite_images_names

    This provides a list of images that you pull into the Satellite server.
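    For illustration (the image name is an example), an entry in satellite_images such as - imagename: rhosp13/openstack-keystone:latest becomes the following line in satellite_images_names:

     rhosp13/openstack-keystone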

  4. Copy the satellite_images_names file to a system that contains the Satellite 6 hammer tool. Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the undercloud.
  5. Run the following hammer command to create a new product (OSP13 Containers) in your Satellite organization:

    $ hammer product create \
      --organization "ACME" \
      --name "OSP13 Containers"

    This custom product will contain the images.

  6. Add the base container image to the product:

    $ hammer repository create \
      --organization "ACME" \
      --product "OSP13 Containers" \
      --content-type docker \
      --url https://registry.access.redhat.com \
      --docker-upstream-name rhosp13/openstack-base \
      --name base
  7. Add the overcloud container images from the satellite_images file.

    $ while read IMAGE; do \
      IMAGENAME=$(echo $IMAGE | cut -d"/" -f2 | sed "s/openstack-//g" | sed "s/:.*//g") ; \
      hammer repository create \
      --organization "ACME" \
      --product "OSP13 Containers" \
      --content-type docker \
      --url https://registry.access.redhat.com \
      --docker-upstream-name $IMAGE \
      --name $IMAGENAME ; done < satellite_images_names
  8. Synchronize the container images:

    $ hammer product synchronize \
      --organization "ACME" \
      --name "OSP13 Containers"

    Wait for the Satellite server to complete synchronization.

    Note

    Depending on your configuration, hammer might ask for your Satellite server username and password. You can configure hammer to log in automatically using a configuration file. See the "Authentication" section in the Hammer CLI Guide.
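    A minimal hedged sketch of such a configuration (the file path, keys, and values vary by hammer version; treat this as an example only):

     # ~/.hammer/cli.modules.d/foreman.yml
     :foreman:
       :host: 'https://satellite6.example.com'
       :username: 'admin'
       :password: 'example-password'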

  9. If your Satellite 6 server uses content views, create a new content view version to incorporate the images.
  10. Check the tags available for the base image:

    $ hammer docker tag list --repository "base" \
      --organization "ACME" \
      --product "OSP13 Containers"

    This displays tags for the OpenStack Platform container images.

  11. Return to the undercloud and generate an environment file for the images on your Satellite server. The following is an example command for generating the environment file:

    (undercloud) $ openstack overcloud container image prepare \
      --namespace=satellite6.example.com:5000 \
      --prefix=acme-osp13_containers- \
      --tag-from-label {version}-{release} \
      --output-env-file=/home/stack/templates/overcloud_images.yaml
    Note

    This version of the openstack overcloud container image prepare command targets the Satellite server. It uses different values than the openstack overcloud container image prepare command used in a previous step.

    When running this command, include the following data:

    • --namespace - The URL and port of the registry on the Satellite server. The default registry port on Red Hat Satellite is 5000. For example, --namespace=satellite6.example.com:5000.
    • --prefix= - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views:

      • If you use content views, the structure is [org]-[environment]-[content view]-[product]-. For example: acme-production-myosp13-osp13_containers-.
      • If you do not use content views, the structure is [org]-[product]-. For example: acme-osp13_containers-.
    • --tag-from-label {version}-{release} - Identifies the latest tag for each image.
    • -e - Include any environment files for optional services.
    • -r - Include a custom roles file.
    • --set ceph_namespace, --set ceph_image, --set ceph_tag - If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the --prefix option. For example:

      --set ceph_image=acme-osp13_containers-rhceph-3-rhel7

      This ensures that the overcloud uses the Ceph container image that follows the Satellite naming convention.

  12. Modify the overcloud_images.yaml file and include the following parameters to authenticate with the Satellite server during deployment:

    ContainerImageRegistryLogin: true
    ContainerImageRegistryCredentials:
      <SATELLITE_SERVER>:
        <USERNAME>: <PASSWORD>
    • Replace <SATELLITE_SERVER> with the address of your Satellite server.
    • Replace <USERNAME> and <PASSWORD> with the credentials for your Satellite server.

      The overcloud_images.yaml file contains the image locations on the Satellite server. Include this file with your deployment.

The registry configuration is ready.

2.7. Next Steps

You now have a new overcloud_images.yaml environment file that contains a list of your container image sources. Include this file with all future upgrade and deployment operations.

You can now prepare the overcloud for the update.

Chapter 3. Upgrading the Undercloud

This process updates the undercloud and its overcloud images to the latest Red Hat OpenStack Platform 13 version.

3.1. Performing a minor update of an undercloud

The director provides commands to update the packages on the undercloud node. This allows you to perform a minor update within the current version of your OpenStack Platform environment.

Procedure

  1. Log into the director as the stack user.
  2. Update the python-tripleoclient package and its dependencies to ensure you have the latest scripts for the minor version update:

    $ sudo yum update -y python-tripleoclient
  3. The director uses the openstack undercloud upgrade command to update the undercloud environment. Run the command:

    $ openstack undercloud upgrade
  4. Wait until the undercloud upgrade process completes.
  5. Reboot the undercloud to update the operating system’s kernel and other system packages:

    $ sudo reboot
  6. Wait until the node boots.
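    After the node boots, a hedged verification that no undercloud services failed to start:

     $ sudo systemctl list-units --state=failed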

3.2. Updating the overcloud images

You need to replace your current overcloud images with new versions. The new images ensure the director can introspect and provision your nodes using the latest version of OpenStack Platform software.

Prerequisites

  • You have updated the undercloud to the latest version.

Procedure

  1. Remove any existing images from the images directory on the stack user’s home (/home/stack/images):

    $ rm -rf ~/images/*
  2. Extract the archives:

    $ cd ~/images
    $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar; do tar -xvf $i; done
    $ cd ~
  3. Import the latest images into the director:

    $ openstack overcloud image upload --update-existing --image-path /home/stack/images/
  4. Configure your nodes to use the new images:

    $ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
  5. Verify the existence of the new images:

    $ openstack image list
    $ ls -l /httpboot
Important

When deploying overcloud nodes, ensure the overcloud image version corresponds to the respective heat template version. For example, only use the OpenStack Platform 13 images with the OpenStack Platform 13 heat templates.

Important

The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future.

3.3. Undercloud Post-Upgrade Notes

  • If you use a local set of core templates in the stack user’s home directory, ensure that you update the templates using the recommended workflow in "Using Customized Core Heat Templates". You must update the local copy before upgrading the overcloud.

3.4. Next Steps

The undercloud upgrade is complete. You can now prepare the overcloud for the update.

Chapter 4. Updating the Overcloud

This process updates the overcloud.

Prerequisites

  • You have updated the undercloud to the latest version.

4.1. Considerations for custom roles

If your deployment includes custom roles, check the following in your roles file:

  • Compare your custom roles file with the latest files in the /usr/share/openstack-tripleo-heat-templates/roles directory, as shown in the sketch after this list. For each role relevant to your environment, add any new parameters from the RoleParametersDefault sections to the equivalent roles in your custom roles file.
  • If you use Data Plane Development Kit (DPDK) and are upgrading from 13.4 or lower, ensure that the roles that contain OVS-DPDK services also contain the following mandatory parameters:

      RoleParametersDefault:
        VhostuserSocketGroup: "hugetlbfs"
        TunedProfileName: "cpu-partitioning"
        NovaLibvirtRxQueueSize: 1024
        NovaLibvirtTxQueueSize: 1024
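The following is a hedged comparison sketch for the first check in this list (it assumes you keep per-role files; the custom roles path is an example, substitute your own):

  $ diff -u /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml \
      ~/templates/custom_roles/Compute.yaml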

4.2. Running the overcloud update preparation

The update requires you to run the openstack overcloud update prepare command, which performs the following tasks:

  • Updates the overcloud plan to OpenStack Platform 13
  • Prepares the nodes for the update

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Run the update preparation command:

    $ openstack overcloud update prepare \
        --templates \
        -r <ROLES DATA FILE> \
        -n <NETWORK DATA FILE> \
        -e /home/stack/templates/overcloud_images.yaml \
        -e <ENVIRONMENT FILE> \
        -e <ENVIRONMENT FILE> \
        --stack <STACK_NAME>
        ...

    Include the following options relevant to your environment:

    • Custom configuration environment files (-e)
    • The environment file with your new container image locations (-e). Note that the update command might display a warning about using the --container-registry-file. You can ignore this warning as this option is deprecated in favor of using -e for the container image environment file.
    • If using your own custom roles, include your custom roles (roles_data) file (-r)
    • If using custom networks, include your composable network (network_data) file (-n)
    • If the name of your overcloud stack is different from the default name overcloud, include the --stack option in the update preparation command and replace <STACK_NAME> with the name of your stack.
  3. Wait until the update preparation completes.

4.3. Updating all Controller nodes

This process updates all the Controller nodes to the latest OpenStack Platform 13 version. The process involves running the openstack overcloud update run command and including the --nodes Controller option to restrict operations to the Controller nodes only.

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Run the update command:

    $ openstack overcloud update run --nodes Controller
  3. Wait until the Controller node update completes.

4.4. Updating all Compute nodes

This process updates all Compute nodes to the latest OpenStack Platform 13 version. The process involves running the openstack overcloud update run command and including the --nodes Compute option to restrict operations to the Compute nodes only.

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Run the update command:

    $ openstack overcloud update run --nodes Compute
  3. Wait until the Compute node update completes.

4.5. Updating all HCI Compute nodes

This process updates the Hyperconverged Infrastructure (HCI) Compute nodes. The process involves:

  • Running the openstack overcloud update run command and including the --nodes ComputeHCI option to restrict operations to the HCI nodes only.
  • Running the openstack overcloud ceph-upgrade run command to perform an update to a containerized Red Hat Ceph Storage 3 cluster.
Note

Currently, the following combinations of Ansible with ceph-ansible are supported:

  • ansible-2.6 with ceph-ansible-3.2
  • ansible-2.4 with ceph-ansible-3.1

If your environment has ansible-2.6 with ceph-ansible-3.1, run the following commands to update ceph-ansible to the newest version:

  # subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
  # subscription-manager repos --enable=rhel-7-server-ansible-2.6-rpms
  # yum update ceph-ansible

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Run the update command:

    $ openstack overcloud update run --nodes ComputeHCI
  3. Wait until the node update completes.
  4. Run the Ceph Storage update command. For example:

    $ openstack overcloud ceph-upgrade run \
        --templates \
        -e <ENVIRONMENT FILE> \
        -e /home/stack/templates/overcloud_images.yaml

    Include the following options relevant to your environment:

    • Custom configuration environment files (-e)
    • The environment file with your container image locations (-e). Note that the update command might display a warning about using the --container-registry-file. You can ignore this warning as this option is deprecated in favor of using -e for the container image environment file.
    • If applicable, your custom roles (roles_data) file (--roles-file)
    • If applicable, your composable network (network_data) file (--networks-file)
  5. Wait until the Compute HCI node update completes.

4.6. Updating all Ceph Storage nodes

This process updates the Ceph Storage nodes. The process involves:

  • Running the openstack overcloud update run command and including the --nodes CephStorage option to restrict operations to the Ceph Storage nodes only.
  • Running the openstack overcloud ceph-upgrade run command to perform an update to a containerized Red Hat Ceph Storage 3 cluster.
Note

Currently, the following combinations of Ansible with ceph-ansible are supported:

  • ansible-2.6 with ceph-ansible-3.2
  • ansible-2.4 with ceph-ansible-3.1

If your environment has ansible-2.6 with ceph-ansible-3.1, run the following commands to update ceph-ansible to the newest version:

  # subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
  # subscription-manager repos --enable=rhel-7-server-ansible-2.6-rpms
  # yum update ceph-ansible

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Run the update command:

    $ openstack overcloud update run --nodes CephStorage
  3. Wait until the node update completes.
  4. Run the Ceph Storage update command. For example:

    $ openstack overcloud ceph-upgrade run \
        --templates \
        -e <ENVIRONMENT FILE> \
        -e /home/stack/templates/overcloud_images.yaml

    Include the following options relevant to your environment:

    • Custom configuration environment files (-e)
    • The environment file with your container image locations (-e). Note that the update command might display a warning about using the --container-registry-file. You can ignore this warning as this option is deprecated in favor of using -e for the container image environment file.
    • If applicable, your custom roles (roles_data) file (--roles-file)
    • If applicable, your composable network (network_data) file (--networks-file)
  5. Wait until the Ceph Storage node update completes.

4.7. Finalizing the update

The update requires a final step to update the overcloud stack. This ensures that the stack resource structure aligns with a regular deployment of Red Hat OpenStack Platform 13 and you can perform standard openstack overcloud deploy functions in the future.

Procedure

  1. Source the stackrc file:

    $ source ~/stackrc
  2. Run the update finalization command:

    $ openstack overcloud update converge \
        --templates \
        -e /home/stack/templates/overcloud_images.yaml \
        -e <ENVIRONMENT FILE>

    Include the following options relevant to your environment:

    • Custom configuration environment files (-e)
    • The environment file with your new container image locations (-e). Note that the update command might display a warning about using the --container-registry-file. You can ignore this warning as this option is deprecated in favor of using -e for the container image environment file.
    • If applicable, your custom roles (roles_data) file (--roles-file)
    • If applicable, your composable network (network_data) file (--networks-file)
  3. Wait until the update finalization completes.

Chapter 5. Rebooting the overcloud

After a minor Red Hat OpenStack version update, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates may provide performance and security benefits.

Plan downtime to perform the following reboot procedures.

5.1. Rebooting controller and composable nodes

The following procedure reboots controller nodes and standalone nodes based on composable roles. This excludes Compute nodes and Ceph Storage nodes.

Procedure

  1. Log in to the node that you want to reboot.
  2. Optional: If the node uses Pacemaker resources, stop the cluster:

    [heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
  3. Reboot the node:

    [heat-admin@overcloud-controller-0 ~]$ sudo reboot
  4. Wait until the node boots.
  5. Check the services. For example:

    1. If the node uses Pacemaker services, check that the node has rejoined the cluster:

      [heat-admin@overcloud-controller-0 ~]$ sudo pcs status
    2. If the node uses Systemd services, check that all services are enabled:

      [heat-admin@overcloud-controller-0 ~]$ sudo systemctl status
  6. Repeat these steps for all Controller and composable nodes.

5.2. Rebooting a Ceph Storage (OSD) cluster

The following procedure reboots a cluster of Ceph Storage (OSD) nodes.

Procedure

  1. Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily:

    $ sudo ceph osd set noout
    $ sudo ceph osd set norebalance
  2. Select the first Ceph Storage node to reboot and log into it.
  3. Reboot the node:

    $ sudo reboot
  4. Wait until the node boots.
  5. Log in to the node and check the cluster status:

    $ sudo ceph -s

    Check that the pgmap reports all pgs as normal (active+clean).

  6. Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph storage nodes.
  7. When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:

    $ sudo ceph osd unset noout
    $ sudo ceph osd unset norebalance
  8. Perform a final status check to verify the cluster reports HEALTH_OK:

    $ sudo ceph status

5.3. Rebooting Compute nodes

Rebooting a Compute node involves the following workflow:

  • Select a Compute node to reboot and disable it so that it does not provision new instances.
  • Migrate the instances to another Compute node to minimize instance downtime.
  • Reboot the empty Compute node and enable it.

Procedure

  1. Log in to the undercloud as the stack user.
  2. To identify the UUID of the Compute node you aim to reboot, list all Compute nodes:

    $ source ~/stackrc
    (undercloud) $ openstack server list --name compute
  3. From the overcloud, select a Compute node and disable it:

    $ source ~/overcloudrc
    (overcloud) $ openstack compute service list
    (overcloud) $ openstack compute service set <hostname> nova-compute --disable
  4. List all instances on the Compute node:

    (overcloud) $ openstack server list --host <hostname> --all-projects
  5. Migrate your instances. For more information on migration strategies, see Migrating virtual machines between Compute nodes.
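    For example, to live migrate a single instance to a specific host (the identifiers are placeholders):

      (overcloud) $ openstack server migrate [instance-id] --live [target-host] --wait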
  6. Log in to the Compute node and reboot it:

    [heat-admin@overcloud-compute-0 ~]$ sudo reboot
  7. Wait until the node boots.
  8. Enable the Compute node:

    $ source ~/overcloudrc
    (overcloud) $ openstack compute service set <hostname> nova-compute --enable
  9. Verify that the Compute node is enabled:

    (overcloud) $ openstack compute service list

5.4. Rebooting Compute HCI nodes

The following procedure reboots Compute hyperconverged infrastructure (HCI) nodes.

Procedure

  1. Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily:

    $ sudo ceph osd set noout
    $ sudo ceph osd set norebalance
  2. Log in to the undercloud as the stack user.
  3. List all Compute nodes and their UUIDs:

    $ source ~/stackrc
    (undercloud) $ openstack server list --name compute

    Identify the UUID of the Compute node you aim to reboot.

  4. From the overcloud, select a Compute node and disable it:

    $ source ~/overcloudrc
    (overcloud) $ openstack compute service list
    (overcloud) $ openstack compute service set [hostname] nova-compute --disable
  5. List all instances on the Compute node:

    (overcloud) $ openstack server list --host [hostname] --all-projects
  6. Use one of the following commands to migrate your instances:

    1. Migrate the instance to a specific host of your choice:

      (overcloud) $ openstack server migrate [instance-id] --live [target-host] --wait
    2. Let nova-scheduler automatically select the target host:

      (overcloud) $ nova live-migration [instance-id]
    3. Live migrate all instances at once:

      $ nova host-evacuate-live [hostname]
      Note

      The nova command might cause some deprecation warnings, which are safe to ignore.

  7. Wait until the migration completes.
  8. Confirm that the migration was successful:

    (overcloud) $ openstack server list --host [hostname] --all-projects
  9. Continue migrating instances until none remain on the chosen Compute node.
  10. Log in to a Ceph MON or a Controller node and check the cluster status:

    $ sudo ceph -s

    Check that the pgmap reports all pgs as normal (active+clean).

  11. Reboot the Compute HCI node:

    $ sudo reboot
  12. Wait until the node boots.
  13. Enable the Compute node again:

    $ source ~/overcloudrc
    (overcloud) $ openstack compute service set [hostname] nova-compute --enable
  14. Verify that the Compute node is enabled:

    (overcloud) $ openstack compute service list
  15. Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Compute HCI nodes.
  16. When complete, log in to a Ceph MON or Controller node and enable cluster rebalancing again:

    $ sudo ceph osd unset noout
    $ sudo ceph osd unset norebalance
  17. Perform a final status check to verify the cluster reports HEALTH_OK:

    $ sudo ceph status