Chapter 4. Preparing for the Overcloud Upgrade

This chapter describes how to prepare the overcloud for the upgrade.


Prerequisites

  • You have upgraded the undercloud to the latest version.

4.1. Preparing Overcloud Registration Details

You need to provide the overcloud with the latest subscription details to ensure the overcloud consumes the latest packages during the upgrade process.


Prerequisites

  • A subscription containing the latest Red Hat OpenStack Platform repositories.
  • If using activation keys for registration, create a new activation key that includes the new Red Hat OpenStack Platform repositories.


Procedure

  1. Edit the environment file containing your registration details. For example:

    $ vi ~/templates/rhel-registration/environment-rhel-registration.yaml
  2. Edit the following parameter values:

    rhel_reg_repos
    Update to include the new repositories for Red Hat OpenStack Platform 12.
    rhel_reg_activation_key
    Update the activation key to access the Red Hat OpenStack Platform 12 repositories.
    rhel_reg_sat_repo
    If using a newer version of Red Hat Satellite 6, update the repository containing Satellite 6’s management tools.
  3. Save the environment file.
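For reference, a registration environment file after these edits might look like the following sketch. The repository list, activation key, and organization ID shown here are assumptions; substitute the values from your own subscription:

```yaml
parameter_defaults:
  rhel_reg_method: "portal"
  # Hypothetical repository list including the OpenStack Platform 12 repository:
  rhel_reg_repos: "rhel-7-server-rpms,rhel-7-server-extras-rpms,rhel-7-server-openstack-12-rpms"
  rhel_reg_activation_key: "osp12-key"   # hypothetical activation key name
  rhel_reg_org: "1234567"                # hypothetical organization ID
```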

4.2. Preparing for Containerized Services

Red Hat OpenStack Platform now uses containers to host and run OpenStack services. This requires you to:

  • Configure a container image source, such as a registry
  • Generate an environment file with image locations on your image source
  • Add the environment file to your overcloud deployment

For full instructions on generating this environment file for different use cases, see "Configuring Container Registry Details" in the Director Installation and Usage guide.

The resulting environment file (/home/stack/templates/overcloud_images.yaml) contains parameters that point to the container image locations for each service. Include this file in all future deployment operations.
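As a rough sketch, assuming a local registry at the hypothetical address 192.168.24.1:8787, the generated file contains per-service image location parameters similar to the following (the exact parameter set depends on your enabled services):

```yaml
parameter_defaults:
  DockerNovaComputeImage: 192.168.24.1:8787/rhosp12/openstack-nova-compute:latest
  DockerKeystoneImage: 192.168.24.1:8787/rhosp12/openstack-keystone-docker:latest
  # ... one image parameter per containerized service ...
```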

4.3. Preparing for New Composable Services

This version of Red Hat OpenStack Platform contains new composable services. If using a custom roles_data file, include these new services in their applicable roles.

All Roles

The following new services apply to all roles.

OS::TripleO::Services::CertmongerUser
Allows the overcloud to request certificates from certmonger. Only used if enabling TLS/SSL communication.
OS::TripleO::Services::Docker
Installs docker to manage containerized services.
OS::TripleO::Services::MySQLClient
Installs the overcloud database client tool.
OS::TripleO::Services::ContainersLogrotateCrond
Installs the logrotate service for container logs.
OS::TripleO::Services::Securetty
Allows configuration of securetty on nodes. Enabled with the environments/securetty.yaml environment file.
OS::TripleO::Services::Tuned
Enables and configures the Linux tuning daemon (tuned).

Specific Roles

The following new services apply to specific roles:

OS::TripleO::Services::Clustercheck
Required on any role that also uses the OS::TripleO::Services::MySQL service, such as the Controller or standalone Database role.
OS::TripleO::Services::Iscsid
Configures the iscsid service on the Controller, Compute, and BlockStorage roles.
OS::TripleO::Services::NovaMigrationTarget
Configures the migration target service on Compute nodes.

If using a custom roles_data file, add these services to required roles.

In addition, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide for updated lists of services for specific custom roles.
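For example, a custom Compute role in your roles_data file might gain entries like the following under its ServicesDefault list. The exact service names to add are those listed above for the role; keep your existing service entries intact:

```yaml
- name: Compute
  ServicesDefault:
    # ... existing Compute services ...
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::Iscsid
    - OS::TripleO::Services::NovaMigrationTarget
```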

4.4. Preparing for Composable Networks

This version of Red Hat OpenStack Platform introduces a new feature for composable networks. If using a custom roles_data file, edit the file to add the composable networks to each role. For example, for Controller nodes:

- name: Controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant

Check the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file for further examples of syntax. Also check the example role snippets in /usr/share/openstack-tripleo-heat-templates/roles.

The following table provides a mapping of composable networks to custom standalone roles:

Role                      Networks Required

Ceph Storage Monitor      Storage, StorageMgmt

Ceph Storage OSD          Storage, StorageMgmt

Ceph Storage RadosGW      Storage, StorageMgmt

Cinder API                InternalApi

Compute                   InternalApi, Tenant, Storage

Controller                External, InternalApi, Storage, StorageMgmt, Tenant

Ironic Conductor          None required. Uses the Provisioning/Control Plane network for API.

Load Balancer             External, InternalApi, Storage, StorageMgmt, Tenant

Message Bus               InternalApi

Networker                 InternalApi, Tenant

Neutron API               InternalApi

Swift API                 Storage

Swift Storage             StorageMgmt

4.5. Preparing for Deprecated Parameters

Note that the following parameters are deprecated and have been replaced with role-specific parameters:

Old Parameter                  New Parameter

controllerExtraConfig          ControllerExtraConfig

OvercloudControlFlavor         OvercloudControllerFlavor

controllerImage                ControllerImage

NovaImage                      ComputeImage

NovaComputeExtraConfig         ComputeExtraConfig

NovaComputeServerMetadata      ComputeServerMetadata

NovaComputeSchedulerHints      ComputeSchedulerHints

NovaComputeIPs                 ComputeIPs

SwiftStorageImage              ObjectStorageImage

OvercloudSwiftStorageFlavor    OvercloudObjectStorageFlavor

SwiftStorageServerMetadata     ObjectStorageServerMetadata

SwiftStorageIPs                ObjectStorageIPs
Update these parameters in your custom environment files.

If your OpenStack Platform environment still requires these deprecated parameters, the default roles_data file allows their use. However, if you are using a custom roles_data file and your overcloud still requires these deprecated parameters, you can allow access to them by editing the roles_data file and adding the following to each role:

Controller Role

- name: Controller
  uses_deprecated_params: True
  deprecated_param_extraconfig: 'controllerExtraConfig'
  deprecated_param_flavor: 'OvercloudControlFlavor'
  deprecated_param_image: 'controllerImage'

Compute Role

- name: Compute
  uses_deprecated_params: True
  deprecated_param_image: 'NovaImage'
  deprecated_param_extraconfig: 'NovaComputeExtraConfig'
  deprecated_param_metadata: 'NovaComputeServerMetadata'
  deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints'
  deprecated_param_ips: 'NovaComputeIPs'
  deprecated_server_resource_name: 'NovaCompute'
  disable_upgrade_deployment: True

Object Storage Role

- name: ObjectStorage
  uses_deprecated_params: True
  deprecated_param_metadata: 'SwiftStorageServerMetadata'
  deprecated_param_ips: 'SwiftStorageIPs'
  deprecated_param_image: 'SwiftStorageImage'
  deprecated_param_flavor: 'OvercloudSwiftStorageFlavor'
  disable_upgrade_deployment: True

4.6. Preparing for Ceph Storage Node Upgrades

Due to the upgrade to containerized services, the method for installing and updating Ceph Storage nodes has changed. Ceph Storage configuration now uses a set of playbooks in the ceph-ansible packages, which you install on the undercloud.


Prerequisites

  • Your overcloud has a director-managed Ceph Storage cluster.


Procedure

  1. Install the ceph-ansible package on the undercloud:

    [stack@director ~]$ sudo yum install -y ceph-ansible
  2. Check that you are using the latest resources and configuration in your storage environment file. This requires the following changes:

    1. The resource_registry uses containerized services from the docker/services subdirectory of your core Heat template collection. For example:
  OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../docker/services/ceph-ansible/ceph-client.yaml
    2. Use the new CephAnsibleDisksConfig parameter to define how your disks are mapped. Previous versions of Red Hat OpenStack Platform used the ceph::profile::params::osds hieradata to define the OSD layout. Convert this hieradata to the structure of the CephAnsibleDisksConfig parameter. For example, if your hieradata contained the following:

        ceph::profile::params::osd_journal_size: 512
        ceph::profile::params::osds:
          '/dev/sdb': {}
          '/dev/sdc': {}
          '/dev/sdd': {}

    Then the CephAnsibleDisksConfig would look like this:

        CephAnsibleDisksConfig:
          devices:
            - /dev/sdb
            - /dev/sdc
            - /dev/sdd
          journal_size: 512
          osd_scenario: collocated

    For a full list of OSD disk layout options used in ceph-ansible, view the sample file in /usr/share/ceph-ansible/group_vars/osds.yml.sample.
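Putting both changes together, a minimal storage environment file could look like the following sketch (template paths are relative to the location of your environment file; the devices list is illustrative):

```yaml
resource_registry:
  OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../docker/services/ceph-ansible/ceph-client.yaml

parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    journal_size: 512
    osd_scenario: collocated
```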


4.7. Preparing for Hyper-Converged Infrastructure (HCI) Upgrades

On a Hyper-Converged Infrastructure (HCI), the Ceph Storage and Compute services are colocated within a single role. However, you must complete the upgrade for HCI nodes in the same way as for Compute nodes. This means that you must delay the migration of the Ceph Storage services to containerized services until you have installed the core packages and enabled the container services.


Prerequisites

  • Your overcloud uses a colocated role containing Compute and Ceph Storage services.


Procedure

  1. Edit the environment file containing your Ceph Storage configuration.
  2. Ensure that the resource_registry uses the Puppet resources. For example:

      OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
      OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
      OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

    Use the contents of the /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph.yaml as an example.

  3. Upgrade your Controller-based nodes to containerized services using the instructions in Section 5.1, “Upgrading the Overcloud Nodes”.
  4. Upgrade your HCI nodes using the instructions in Section 5.3, “Upgrading the Compute Nodes”.
  5. Edit the resource_registry in your Ceph Storage configuration to use the containerized services:

      OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
      OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
      OS::TripleO::Services::CephClient: ../docker/services/ceph-ansible/ceph-client.yaml

    Use the contents of the /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml as an example.

  6. Add the CephAnsiblePlaybook parameter to the parameter_defaults section of your storage environment file:

      CephAnsiblePlaybook: /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
  7. Add the CephAnsibleDisksConfig parameter to the parameter_defaults section of your storage environment file and define the disk layout. For example:

        CephAnsibleDisksConfig:
          devices:
            - /dev/vdb
            - /dev/vdc
            - /dev/vdd
          journal_size: 512
          osd_scenario: collocated
  8. Finalize the upgrade of your overcloud using the instructions in Section 5.4, “Finalizing the Upgrade”.

Related Information

  • For more information about configuring ceph-ansible management with OpenStack Platform director, see the Deploying an Overcloud with Containerized Red Hat Ceph guide.
  • HCI NUMA pinning for OSD nodes

    Red Hat OpenStack Platform (RHOSP) 11 included a post-deploy script to start OSDs with numactl to implement resource isolation. For more information, see Configure Ceph NUMA Pinning in the Hyper-Converged Infrastructure guide. There is no option to implement a NUMA preference in RHOSP12.

    During a sequential upgrade from RHOSP11 to RHOSP12 to RHOSP13, when your cluster is temporarily running RHOSP12 with Red Hat Ceph Storage 2, the OSDs do not have a NUMA preference. However, during the cluster upgrade from RHOSP12 to RHOSP13, Ceph upgrades from Red Hat Ceph Storage 2 to Red Hat Ceph Storage 3. When this happens, the init script that ceph-ansible provides can pass resource isolation options to the OSDs when the container manager starts the OSD containers.


    When you upgrade from RHOSP12 to RHOSP13, update your heat template to pass Ceph options. The following example shows the CPUs and memory in NUMA node 0. For your environment, use the values that are appropriate to your hardware. The example also includes the is_hci parameter set to true to optimize memory management.

        parameter_defaults:
          CephAnsibleExtraConfig:
            ceph_osd_docker_cpuset_cpus: "0,2,4,6,8,10,12,14"
            ceph_osd_docker_cpuset_mems: "0"
            is_hci: true

4.8. Preparing Access to the Undercloud’s Public API over SSL/TLS

The overcloud requires access to the undercloud’s OpenStack Object Storage (swift) Public API during the upgrade. If your undercloud uses a self-signed certificate, you need to add the undercloud’s certificate authority to each overcloud node.


Prerequisites

  • The undercloud uses SSL/TLS for its Public API.


Procedure

  1. The director’s dynamic Ansible script has been updated to the OpenStack Platform 12 version, which uses the RoleNetHostnameMap Heat parameter in the overcloud plan to define the inventory. However, the overcloud currently uses the OpenStack Platform 11 template versions, which do not have the RoleNetHostnameMap parameter. This means you need to create a temporary static inventory file, which you can generate with the following command:

    $ openstack server list -c Networks -f value | cut -d"=" -f2 > overcloud_hosts
  2. Create an Ansible playbook (undercloud-ca.yml) that contains the following:

    - name: Add undercloud CA to overcloud nodes
      hosts: all
      user: heat-admin
      become: true
      tasks:
        - name: Copy undercloud CA
          copy:
            src: ca.crt.pem
            dest: /etc/pki/ca-trust/source/anchors/
        - name: Update trust
          command: "update-ca-trust extract"
        - name: Get the hostname of the undercloud
          command: hostname
          register: undercloud_hostname
          # Run on the Ansible control node (the undercloud) to get its hostname.
          delegate_to: localhost
        - name: Verify URL
          uri:
            url: https://{{ undercloud_hostname.stdout }}:13808/healthcheck
            return_content: yes
          register: verify
        - name: Report output
          debug:
            msg: "{{ ansible_hostname }} can access the undercloud's Public API"
          when: verify.content == "OK"

    This playbook contains multiple tasks that perform the following on each node:

    • Copies the undercloud’s certificate authority file (ca.crt.pem) to the overcloud node. The name of this file and its location might vary depending on your configuration. This example uses the name and location defined during the self-signed certificate procedure (see "SSL/TLS Certificate Configuration" in the Director Installation and Usage guide).
    • Updates the certificate authority trust database on the overcloud node.
    • Checks the undercloud’s Object Storage Public API from the overcloud node and reports whether the check succeeded.
  3. Run the playbook with the following command:

    $ ansible-playbook -i overcloud_hosts undercloud-ca.yml

    This uses the temporary inventory to provide Ansible with your overcloud nodes.

  4. The resulting Ansible output should show a debug message for each node. For example:

    ok: [overcloud-controller-0] => {
        "msg": "overcloud-controller-0 can access the undercloud's Public API"
    }
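The inventory pipeline in step 1 can be exercised offline to see what the temporary inventory contains. The sample data below is hypothetical output of `openstack server list -c Networks -f value`; the real command requires a live undercloud:

```shell
# Hypothetical sample of `openstack server list -c Networks -f value` output.
printf 'ctlplane=192.168.24.10\nctlplane=192.168.24.11\n' > /tmp/networks_sample

# Same cut as step 1: keep only the IP address after the `=` sign.
cut -d"=" -f2 /tmp/networks_sample > /tmp/overcloud_hosts
cat /tmp/overcloud_hosts
```

The resulting file lists one node IP per line, which is the flat inventory format Ansible accepts with `-i`.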

Related Information

  • For more information on running Ansible automation on your overcloud, see "Running Ansible Automation" in the Director Installation and Usage guide.

4.9. Preparing for Pre-Provisioned Nodes Upgrade

Pre-provisioned nodes are nodes created outside of the director’s management. An overcloud using pre-provisioned nodes requires some additional steps prior to upgrading.


Prerequisites

  • The overcloud uses pre-provisioned nodes.


Procedure

  1. Run the following commands to save a list of node IP addresses in the OVERCLOUD_HOSTS environment variable:

    $ source ~/stackrc
    $ export OVERCLOUD_HOSTS=$(openstack server list -f value -c Networks | cut -d "=" -f 2 | tr '\n' ' ')
  2. Run the following script:

    $ /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/
  3. Proceed with the upgrade.
  4. When upgrading Compute or Object Storage nodes, use the following:

    1. Use the -U option with the script and specify the stack user. This is because the default user for pre-provisioned nodes is stack and not heat-admin.
    2. Use the node’s IP address with the --upgrade option. This is because the nodes are not managed with the director’s Compute (nova) and Bare Metal (ironic) services and do not have a node name.

      For example:

      $ -U stack --upgrade
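To illustrate step 1, the pipeline below runs on hypothetical `openstack server list -f value -c Networks` output and shows how the Networks column becomes the space-separated IP list stored in OVERCLOUD_HOSTS:

```shell
# Hypothetical `openstack server list -f value -c Networks` output.
sample='ctlplane=192.168.24.20
ctlplane=192.168.24.21'

# Same cut/tr pipeline as step 1: strip the network name, join lines with spaces.
OVERCLOUD_HOSTS=$(printf '%s\n' "$sample" | cut -d "=" -f 2 | tr '\n' ' ')
printf '%s\n' "$OVERCLOUD_HOSTS"
```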


4.10. Preparing an NFV-Configured Overcloud

When you upgrade from Red Hat OpenStack Platform 11 to Red Hat OpenStack Platform 12, the OVS package also upgrades from version 2.6 to version 2.7. To support this transition when you have OVS-DPDK configured, follow these guidelines.


Note: Red Hat OpenStack Platform 12 operates in OVS client mode.


Prerequisites

  • Your overcloud uses Network Functions Virtualization (NFV).


When you upgrade the Overcloud from Red Hat OpenStack Platform 11 to Red Hat OpenStack Platform 12 with OVS-DPDK configured, you must set the following additional parameters in an environment file.

Procedure

  1. In the parameter_defaults section, add a network deployment parameter to run os-net-config during the upgrade process to associate the OVS 2.7 PCI address with DPDK ports:

      ComputeNetworkDeploymentActions: ['CREATE', 'UPDATE']

    The parameter name must match the name of the role you use to deploy DPDK. In this example, the role name is Compute so the parameter name is ComputeNetworkDeploymentActions.


    Note: This parameter is not needed after the initial upgrade and should be removed from the environment file.

  2. In the resource_registry section, override the ComputeNeutronOvsAgent service to the neutron-ovs-dpdk-agent puppet service:

      OS::TripleO::Services::ComputeNeutronOvsAgent: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-ovs-dpdk-agent.yaml

    Red Hat OpenStack Platform 12 added a new service (OS::TripleO::Services::ComputeNeutronOvsDpdk) to support the addition of the new ComputeOvsDpdk role. The example above maps this externally for upgrades.

Include the resulting environment file as part of the openstack overcloud deploy command in Section 5.1, “Upgrading the Overcloud Nodes”.
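Combining both steps, the resulting environment file might look like the following sketch. The Compute role name is an assumption; adjust the parameter name to match the role you use to deploy DPDK:

```yaml
parameter_defaults:
  ComputeNetworkDeploymentActions: ['CREATE', 'UPDATE']

resource_registry:
  OS::TripleO::Services::ComputeNeutronOvsAgent: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-ovs-dpdk-agent.yaml
```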

4.11. General Considerations for Overcloud Upgrades

Consider the following general reminders before you upgrade the overcloud:

Custom ServiceNetMap
If upgrading an Overcloud with a custom ServiceNetMap, ensure you include the latest ServiceNetMap for the new services. The default list of services is defined with the ServiceNetMapDefaults parameter located in the network/service_net_map.j2.yaml file. For information on using a custom ServiceNetMap, see Isolating Networks in Advanced Overcloud Customization.
External Load Balancing
If using external load balancing, check for any new services to add to your load balancer. See also "Configuring Load Balancing Options" in the External Load Balancing for the Overcloud guide for service configuration.
Deprecated Deployment Options
Some options for the openstack overcloud deploy command are now deprecated. Replace these options with their Heat parameter equivalents. For these parameter mappings, see "Creating the Overcloud with the CLI Tools" in the Director Installation and Usage guide.