Chapter 4. Preparing for the Overcloud Upgrade

This chapter describes how to prepare the overcloud for the upgrade process.

Prerequisites

  • You have upgraded the undercloud to the latest version.

4.1. Preparing Overcloud Registration Details

Provide the overcloud with the latest subscription details to ensure that it consumes the latest packages during the upgrade process.

Prerequisites

  • A subscription containing the latest OpenStack Platform repositories.
  • If using activation keys for registration, create a new activation key including the new OpenStack Platform repositories.

Procedure

  1. Edit the environment file containing your registration details. For example:

    $ vi ~/templates/rhel-registration/environment-rhel-registration.yaml
  2. Edit the following parameter values:

    rhel_reg_repos
    Update to include the new repositories for Red Hat OpenStack Platform 12.
    rhel_reg_activation_key
    Update the activation key to access the Red Hat OpenStack Platform 12 repositories.
    rhel_reg_sat_repo
    If using a newer version of Red Hat Satellite 6, update the repository containing Satellite 6’s management tools.
  3. Save the environment file.
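For reference, the following is a minimal sketch of how these values might appear in the environment file after editing. The activation key name is a placeholder, and the exact repository list depends on your subscription and registration method:

    parameter_defaults:
      rhel_reg_activation_key: "my-osp12-activation-key"
      rhel_reg_repos: "rhel-7-server-rpms,rhel-7-server-extras-rpms,rhel-7-server-openstack-12-rpms"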

4.2. Preparing for Containerized Services

Red Hat OpenStack Platform now uses containers to host and run OpenStack services. This requires you to:

  • Configure a container image source, such as a registry
  • Generate an environment file with image locations on your image source
  • Add the environment file to your overcloud deployment

For full instructions about generating this environment file for different use cases, see "Configuring Container Registry Details" in the Director Installation and Usage guide.

The resulting environment file (/home/stack/templates/overcloud_images.yaml) contains parameters that point to the container image locations for each service. Include this file in all future deployment operations.
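As a rough illustration only, the generated file follows this general pattern. The parameter names, registry host, and image paths shown here are illustrative; use the values written by the generation step described above, which depend on your chosen image source:

    parameter_defaults:
      DockerNamespace: registry.example.com/rhosp12
      DockerKeystoneImage: registry.example.com/rhosp12/openstack-keystone:latest
      DockerNovaComputeImage: registry.example.com/rhosp12/openstack-nova-compute:latest

Include the file with the -e option in every subsequent deployment and upgrade command, for example:

    $ openstack overcloud deploy --templates \
        -e /home/stack/templates/overcloud_images.yaml \
        [your other environment files]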

4.3. Preparing for New Composable Services

This version of Red Hat OpenStack Platform contains new composable services. If using a custom roles_data file, include these new services in their applicable roles.

All Roles

The following new services apply to all roles.

OS::TripleO::Services::CertmongerUser
Allows the overcloud to request certificates from Certmonger. Only used if enabling TLS/SSL communication.
OS::TripleO::Services::Docker
Installs docker to manage containerized services.
OS::TripleO::Services::MySQLClient
Installs the overcloud database client tool.
OS::TripleO::Services::ContainersLogrotateCrond
Installs the logrotate service for container logs.
OS::TripleO::Services::Securetty
Allows configuration of securetty on nodes. Enabled with the environments/securetty.yaml environment file.
OS::TripleO::Services::Tuned
Enables and configures the Linux tuning daemon (tuned).

Specific Roles

The following new services apply to specific roles:

OS::TripleO::Services::Clustercheck
Required on any role that also uses the OS::TripleO::Services::MySQL service, such as the Controller or standalone Database role.
OS::TripleO::Services::Iscsid
Configures the iscsid service on the Controller, Compute, and BlockStorage roles.
OS::TripleO::Services::NovaMigrationTarget
Configures the migration target service on Compute nodes.

If using a custom roles_data file, add these services to the applicable roles, as shown in the sketch below.
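The following is a minimal sketch of a Controller entry in a custom roles_data file with the new services appended to its ServicesDefault list. The ellipsis represents your existing services, and only the new entries relevant to the Controller role are shown:

    - name: Controller
      ServicesDefault:
        ...
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::MySQLClient
        - OS::TripleO::Services::ContainersLogrotateCrond
        - OS::TripleO::Services::Securetty
        - OS::TripleO::Services::Tuned
        - OS::TripleO::Services::Clustercheck
        - OS::TripleO::Services::Iscsid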

In addition, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide for updated lists of services for specific custom roles.

4.4. Preparing for Composable Networks

This version of Red Hat OpenStack Platform introduces a new feature for composable networks. If using a custom roles_data file, edit the file to add the composable networks to each role. For example, for Controller nodes:

- name: Controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant

Check the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file for further examples of syntax. Also check the example role snippets in /usr/share/openstack-tripleo-heat-templates/roles.

The following table provides a mapping of composable networks to custom standalone roles:

Role                     Networks Required

Ceph Storage Monitor     Storage, StorageMgmt
Ceph Storage OSD         Storage, StorageMgmt
Ceph Storage RadosGW     Storage, StorageMgmt
Cinder API               InternalApi
Compute                  InternalApi, Tenant, Storage
Controller               External, InternalApi, Storage, StorageMgmt, Tenant
Database                 InternalApi
Glance                   InternalApi
Heat                     InternalApi
Horizon                  InternalApi
Ironic                   None required. Uses the Provisioning/Control Plane network for the API.
Keystone                 InternalApi
Load Balancer            External, InternalApi, Storage, StorageMgmt, Tenant
Manila                   InternalApi
Message Bus              InternalApi
Networker                InternalApi, Tenant
Neutron API              InternalApi
Nova                     InternalApi
OpenDaylight             External, InternalApi, Tenant
Redis                    InternalApi
Sahara                   InternalApi
Swift API                Storage
Swift Storage            StorageMgmt
Telemetry                InternalApi
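For example, based on the table above, a Compute entry in a custom roles_data file would list the following networks:

    - name: Compute
      networks:
        - InternalApi
        - Tenant
        - Storage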

4.5. Preparing for Deprecated Parameters

Note that the following parameters are deprecated and have been replaced with role-specific parameters:

Old Parameter                    New Parameter

controllerExtraConfig            ControllerExtraConfig
OvercloudControlFlavor           OvercloudControllerFlavor
controllerImage                  ControllerImage
NovaImage                        ComputeImage
NovaComputeExtraConfig           ComputeExtraConfig
NovaComputeServerMetadata        ComputeServerMetadata
NovaComputeSchedulerHints        ComputeSchedulerHints
NovaComputeIPs                   ComputeIPs
SwiftStorageServerMetadata       ObjectStorageServerMetadata
SwiftStorageIPs                  ObjectStorageIPs
SwiftStorageImage                ObjectStorageImage
OvercloudSwiftStorageFlavor      OvercloudObjectStorageFlavor

Update these parameters in your custom environment files.
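For example, if a custom environment file passes extra hieradata to the Controller role using the old parameter name, rename the parameter as follows. The hieradata key and value shown here are illustrative only:

    parameter_defaults:
      # Previously: controllerExtraConfig
      ControllerExtraConfig:
        nova::api::default_floating_pool: 'public'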

The default roles_data file still allows the use of these deprecated parameters. However, if you use a custom roles_data file and your overcloud still requires the deprecated parameters, you can allow access to them by adding the following to each role in the file:

Controller Role

- name: Controller
  uses_deprecated_params: True
  deprecated_param_extraconfig: 'controllerExtraConfig'
  deprecated_param_flavor: 'OvercloudControlFlavor'
  deprecated_param_image: 'controllerImage'
  ...

Compute Role

- name: Compute
  uses_deprecated_params: True
  deprecated_param_image: 'NovaImage'
  deprecated_param_extraconfig: 'NovaComputeExtraConfig'
  deprecated_param_metadata: 'NovaComputeServerMetadata'
  deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints'
  deprecated_param_ips: 'NovaComputeIPs'
  deprecated_server_resource_name: 'NovaCompute'
  disable_upgrade_deployment: True
  ...

Object Storage Role

- name: ObjectStorage
  uses_deprecated_params: True
  deprecated_param_metadata: 'SwiftStorageServerMetadata'
  deprecated_param_ips: 'SwiftStorageIPs'
  deprecated_param_image: 'SwiftStorageImage'
  deprecated_param_flavor: 'OvercloudSwiftStorageFlavor'
  disable_upgrade_deployment: True
  ...

4.6. Preparing for Ceph Storage Node Upgrades

Due to the upgrade to containerized services, the method for installing and updating Ceph Storage nodes has changed. Ceph Storage configuration now uses a set of playbooks provided by the ceph-ansible package, which you install on the undercloud.

Prerequisites

  • Your overcloud has a director-managed Ceph Storage cluster.

Procedure

  1. Install the ceph-ansible package on the undercloud:

    [stack@director ~]$ sudo yum install -y ceph-ansible
  2. Check that you are using the latest resources and configuration in your storage environment file. This requires the following changes:

    1. Ensure that the resource_registry uses containerized services from the docker/services subdirectory of your core Heat template collection. For example:

       resource_registry:
         OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
         OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
         OS::TripleO::Services::CephClient: ../docker/services/ceph-ansible/ceph-client.yaml

    2. Use the new CephAnsibleDisksConfig parameter to define how your disks are mapped. Previous versions of Red Hat OpenStack Platform used the ceph::profile::params::osds hieradata to define the OSD layout. Convert this hieradata to the structure of the CephAnsibleDisksConfig parameter. For example, if your hieradata contained the following:

    parameter_defaults:
      ExtraConfig:
        ceph::profile::params::osd_journal_size: 512
        ceph::profile::params::osds:
          '/dev/sdb': {}
          '/dev/sdc': {}
          '/dev/sdd': {}

    Then the CephAnsibleDisksConfig would look like this:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
        - /dev/sdb
        - /dev/sdc
        - /dev/sdd
        journal_size: 512
        osd_scenario: collocated

    For a full list of OSD disk layout options used in ceph-ansible, view the sample file in /usr/share/ceph-ansible/group_vars/osds.yml.sample.
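    For example, if your previous hieradata placed OSD journals on a separate disk, the equivalent layout uses the non-collocated scenario with a dedicated_devices list. The following is a sketch only; confirm the option names against the osds.yml.sample file shipped with your ceph-ansible version:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
        - /dev/sdb
        - /dev/sdc
        dedicated_devices:
        - /dev/sdd
        - /dev/sdd
        journal_size: 512
        osd_scenario: non-collocated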

4.7. Preparing for Hyper-Converged Infrastructure (HCI) Upgrades

On a Hyper-Converged Infrastructure (HCI), the Ceph Storage and Compute services are collocated within a single role. However, you upgrade the HCI nodes in the same way as regular Compute nodes. In this situation, you delay migrating the Ceph Storage services to containerized services until the core packages have been installed and the container services enabled.

Prerequisites

  • Your overcloud uses a collocated role containing Compute and Ceph Storage.

Procedure

  1. Edit the environment file containing your Ceph Storage configuration.
  2. Ensure the resource_registry uses the Puppet resources. For example:

    resource_registry:
      OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
      OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
      OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml
    Note

    Use the contents of the /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph.yaml file as an example.

  3. Upgrade your Controller-based nodes to containerized services using the instructions in Section 5.1, “Upgrading the Overcloud Nodes”.
  4. Upgrade your HCI nodes using the instructions in Section 5.3, “Upgrading the Compute Nodes”.
  5. Edit the resource_registry in your Ceph Storage configuration to use the containerized services:

    resource_registry:
      OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
      OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
      OS::TripleO::Services::CephClient: ../docker/services/ceph-ansible/ceph-client.yaml
    Note

    Use the contents of the /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml file as an example.

  6. Add the CephAnsiblePlaybook parameter to the parameter_defaults section of your storage environment file:

      CephAnsiblePlaybook: /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
  7. Add the CephAnsibleDisksConfig parameter to the parameter_defaults section of your storage environment file and define the disk layout (see the combined sketch after this procedure). For example:

      CephAnsibleDisksConfig:
        devices:
        - /dev/vdb
        - /dev/vdc
        - /dev/vdd
        journal_size: 512
        osd_scenario: collocated
  8. Finalize the upgrade of your overcloud using the instructions in Section 5.4, “Finalizing the Upgrade”.
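Taken together, steps 5 to 7 result in a storage environment file similar to the following sketch. The disk device names are examples only:

    resource_registry:
      OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
      OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
      OS::TripleO::Services::CephClient: ../docker/services/ceph-ansible/ceph-client.yaml

    parameter_defaults:
      CephAnsiblePlaybook: /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
      CephAnsibleDisksConfig:
        devices:
        - /dev/vdb
        - /dev/vdc
        - /dev/vdd
        journal_size: 512
        osd_scenario: collocated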

Related Information

  • For more information about configuring ceph-ansible management with OpenStack Platform director, see the Deploying an Overcloud with Containerized Red Hat Ceph guide.

4.8. Preparing Access to the Undercloud’s Public API over SSL/TLS

The overcloud requires access to the undercloud’s OpenStack Object Storage (swift) Public API during the upgrade. If your undercloud uses a self-signed certificate, you need to add the undercloud’s certificate authority to each overcloud node.

Prerequisites

  • The undercloud uses SSL/TLS for its Public API.

Procedure

  1. The director’s dynamic Ansible script has been updated to the OpenStack Platform 12 version, which uses the RoleNetHostnameMap Heat parameter in the overcloud plan to define the inventory. However, the overcloud currently uses the OpenStack Platform 11 template versions, which do not have the RoleNetHostnameMap parameter. This means that you need to create a temporary static inventory file, which you can generate with the following command:

    $ openstack server list -c Networks -f value | cut -d"=" -f2 > overcloud_hosts
  2. Create an Ansible playbook (undercloud-ca.yml) that contains the following:

    ---
    - name: Add undercloud CA to overcloud nodes
      hosts: all
      user: heat-admin
      become: true
      tasks:
        - name: Copy undercloud CA
          copy:
            src: ca.crt.pem
            dest: /etc/pki/ca-trust/source/anchors/
        - name: Update trust
          command: "update-ca-trust extract"
        - name: Get the hostname of the undercloud
          delegate_to: 127.0.0.1
          command: hostname
          register: undercloud_hostname
        - name: Verify URL
          uri:
            url: https://{{ undercloud_hostname.stdout }}:13808/healthcheck
            return_content: yes
          register: verify
        - name: Report output
          debug:
            msg: "{{ ansible_hostname }} can access the undercloud's Public API"
          when: verify.content == "OK"

    This playbook contains multiple tasks that perform the following on each node:

    • Copies the undercloud’s certificate authority file (ca.crt.pem) to the overcloud node. The name of this file and its location might vary depending on your configuration. This example uses the name and location defined during the self-signed certificate procedure (see "SSL/TLS Certificate Configuration" in the Director Installation and Usage guide).
    • Executes the command to update the certificate authority trust database on the overcloud node.
    • Checks the undercloud’s Object Storage Public API from the overcloud node and reports if the check is successful.
  3. Run the playbook with the following command:

    $ ansible-playbook -i overcloud_hosts undercloud-ca.yml

    This uses the temporary inventory to provide Ansible with your overcloud nodes.

  4. The resulting Ansible output should show a debug message for each node. For example:

    ok: [192.168.24.100] => {
        "msg": "overcloud-controller-0 can access the undercloud's Public API"
    }

Related Information

  • For more information on running Ansible automation on your overcloud, see "Running Ansible Automation" in the Director Installation and Usage guide.

4.9. Preparing for Pre-Provisioned Nodes Upgrade

Pre-provisioned nodes are nodes created outside of the director’s management. An overcloud using pre-provisioned nodes requires some additional steps prior to upgrading.

Prerequisites

  • The overcloud uses pre-provisioned nodes.

Procedure

  1. Run the following commands to save a list of node IP addresses in the OVERCLOUD_HOSTS environment variable:

    $ source ~/stackrc
    $ export OVERCLOUD_HOSTS=$(openstack server list -f value -c Networks | cut -d "=" -f 2 | tr '\n' ' ')
  2. Run the following script:

    $ /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh
  3. Proceed with the upgrade.
  4. When upgrading Compute or Object Storage nodes, use the following:

    1. Use the -U option with the upgrade-non-controller.sh script and specify the stack user. This is because the default user for pre-provisioned nodes is stack and not heat-admin.
    2. Use the node’s IP address with the --upgrade option. This is because the nodes are not managed by the director’s Compute (nova) and Bare Metal (ironic) services and do not have a node name.

      For example:

      $ upgrade-non-controller.sh -U stack --upgrade 192.168.24.100

4.10. Preparing an NFV-Configured Overcloud

When you upgrade from Red Hat OpenStack Platform 11 to Red Hat OpenStack Platform 12, the OVS package also upgrades from version 2.6 to version 2.7. To support this transition when you have OVS-DPDK configured, follow these guidelines.

Note

Red Hat OpenStack Platform 12 operates in OVS client mode.

Prerequisites

  • Your overcloud uses Network Functions Virtualization (NFV).

Procedure

When you upgrade the Overcloud from Red Hat OpenStack Platform 11 to Red Hat OpenStack Platform 12 with OVS-DPDK configured, you must set the following additional parameters in an environment file.

  1. In the parameter_defaults section, add a network deployment parameter to run os-net-config during the upgrade process to associate the OVS 2.7 PCI address with DPDK ports:

    parameter_defaults:
      ComputeNetworkDeploymentActions: ['CREATE', 'UPDATE']

    The parameter name must match the name of the role you use to deploy DPDK. In this example, the role name is Compute so the parameter name is ComputeNetworkDeploymentActions.

    Note

    This parameter is not needed after the initial upgrade and should be removed from the environment file.

  2. In the resource_registry section, override the ComputeNeutronOvsAgent service to the neutron-ovs-dpdk-agent puppet service:

    resource_registry:
      OS::TripleO::Services::ComputeNeutronOvsAgent: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-ovs-dpdk-agent.yaml

    Red Hat OpenStack Platform 12 adds a new service (OS::TripleO::Services::ComputeNeutronOvsDpdk) to support the new ComputeOvsDpdk role. Because the upgraded environment continues to use the existing Compute role, the example above maps the ComputeNeutronOvsAgent service to the DPDK agent template instead.

Include the resulting environment file as part of the openstack overcloud deploy command in Section 5.1, “Upgrading the Overcloud Nodes”.
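For example, assuming you save these settings to a file named /home/stack/templates/ovs-dpdk-upgrade.yaml (the file name is arbitrary), include it with the -e option alongside your existing environment files:

    $ openstack overcloud deploy --templates \
        -e /home/stack/templates/overcloud_images.yaml \
        -e /home/stack/templates/ovs-dpdk-upgrade.yaml \
        [your other environment files]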

4.11. General Considerations for Overcloud Upgrades

Consider the following general reminders before you upgrade the overcloud:

Custom ServiceNetMap
If you upgrade an overcloud with a custom ServiceNetMap, ensure that your ServiceNetMap includes entries for the new services. The default list of services is defined with the ServiceNetMapDefaults parameter located in the network/service_net_map.j2.yaml file. For information on using a custom ServiceNetMap, see Isolating Networks in Advanced Overcloud Customization. A minimal example appears at the end of this section.
External Load Balancing
If using external load balancing, check for any new services to add to your load balancer. See also "Configuring Load Balancing Options" in the External Load Balancing for the Overcloud guide for service configuration.
Deprecated Deployment Options
Some options for the openstack overcloud deploy command are now deprecated. Replace these options with their Heat parameter equivalents. For these parameter mappings, see "Creating the Overcloud with the CLI Tools" in the Director Installation and Usage guide.
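For the custom ServiceNetMap item above, the following is a minimal sketch of an override in an environment file. The service keys and network names shown here are illustrative; take the authoritative list from the ServiceNetMapDefaults parameter in network/service_net_map.j2.yaml:

    parameter_defaults:
      ServiceNetMap:
        NeutronTenantNetwork: tenant
        KeystoneAdminApiNetwork: internal_api
        SwiftStorageNetwork: storage_mgmt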