Chapter 5. Preparing for the overcloud upgrade
This section prepares the overcloud for the upgrade process. Not all steps in this section apply to your overcloud, but it is recommended that you step through each one and determine whether your overcloud requires any additional configuration before the upgrade process begins.
5.1. Preparing for overcloud service downtime
The overcloud upgrade process disables the main services at key points, which means you cannot use any overcloud services to create new resources for the duration of the upgrade. Workloads running in the overcloud remain active during this period; instances continue to run throughout the upgrade.
It is important to plan a maintenance window to ensure that users cannot access the overcloud services for the duration of the upgrade.
Affected by overcloud upgrade
- OpenStack Platform services
Unaffected by overcloud upgrade
- Instances running during the upgrade
- Ceph Storage OSDs (backend storage for instances)
- Linux networking
- Open vSwitch networking
- Undercloud
5.2. Selecting Compute nodes for upgrade testing
The overcloud upgrade process allows you to either:
- Upgrade all nodes in a role together
- Upgrade individual nodes separately
To ensure a smooth overcloud upgrade process, it is useful to test the upgrade on a few individual Compute nodes in your environment before upgrading all Compute nodes. This ensures no major issues occur during the upgrade while maintaining minimal downtime to your workloads.
Use the following recommendations to help choose test nodes for the upgrade:
- Select two or three Compute nodes for upgrade testing
- Select nodes without any critical instances running
- If necessary, migrate critical instances from the selected test Compute nodes to other Compute nodes
The instructions in Chapter 6, Upgrading the overcloud use compute-0 as an example of a Compute node to test the upgrade process before running the upgrade on all Compute nodes.
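For example, you can check which instances are running on a candidate test node and live migrate any critical instances to another Compute node before the upgrade. The following commands are a minimal sketch; the host names are illustrative and the exact migration options can vary between OpenStack client versions:

$ source ~/overcloudrc
$ openstack server list --host overcloud-compute-0.localdomain --all-projects
$ openstack server migrate --live overcloud-compute-1.localdomain <INSTANCE_UUID>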
The next step updates your roles_data file to ensure any new composable services have been added to the relevant roles in your environment. To manually edit your existing roles_data file, use the following lists of new composable services for OpenStack Platform 13 roles.
If you enabled High Availability for Compute Instances (Instance HA) in Red Hat OpenStack Platform 12 or earlier and you want to perform a fast-forward upgrade to version 13 or later, you must manually disable Instance HA first. For instructions, see Disabling Instance HA from previous versions.
5.3. New composable services
This version of Red Hat OpenStack Platform contains new composable services. If using a custom roles_data file with your own roles, include these new compulsory services in their applicable roles.
All Roles
The following new services apply to all roles.
OS::TripleO::Services::MySQLClient - Configures the MariaDB client on a node, which provides database configuration for other composable services. Add this service to all roles with standalone composable services.
OS::TripleO::Services::CertmongerUser - Allows the overcloud to request certificates from Certmonger. Only used if enabling TLS/SSL communication.
OS::TripleO::Services::Docker - Installs docker to manage containerized services.
OS::TripleO::Services::ContainersLogrotateCrond - Installs the logrotate service for container logs.
OS::TripleO::Services::Securetty - Allows configuration of securetty on nodes. Enabled with the environments/securetty.yaml environment file.
OS::TripleO::Services::Tuned - Enables and configures the Linux tuning daemon (tuned).
OS::TripleO::Services::AuditD - Adds the auditd daemon and configures rules. Disabled by default.
OS::TripleO::Services::Collectd - Adds the collectd daemon. Disabled by default.
OS::TripleO::Services::Rhsm - Configures subscriptions using an Ansible-based method. Disabled by default.
OS::TripleO::Services::RsyslogSidecar - Configures a sidecar container for logging. Disabled by default.
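For example, in a custom roles_data file you append these services to each role's ServicesDefault list. The following snippet is a minimal sketch using a hypothetical custom Networker role; the existing services shown are illustrative and your list will differ:

- name: Networker
  ServicesDefault:
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::NeutronL3Agent
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::ContainersLogrotateCrond
    - OS::TripleO::Services::Securetty
    - OS::TripleO::Services::Tuned
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::Collectd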
Specific Roles
The following new services apply to specific roles:
OS::TripleO::Services::NovaPlacement - Configures the OpenStack Compute (nova) Placement API. If using a standalone Nova API role in your current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
OS::TripleO::Services::PankoApi - Configures the OpenStack Telemetry Event Storage (panko) service. If using a standalone Telemetry role in your current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
OS::TripleO::Services::Clustercheck - Required on any role that also uses the OS::TripleO::Services::MySQL service, such as the Controller or standalone Database role.
OS::TripleO::Services::Iscsid - Configures the iscsid service on the Controller, Compute, and BlockStorage roles.
OS::TripleO::Services::NovaMigrationTarget - Configures the migration target service on Compute nodes.
OS::TripleO::Services::Ec2Api - Enables the OpenStack Compute (nova) EC2-API service on Controller nodes. Disabled by default.
OS::TripleO::Services::CephMgr - Enables the Ceph Manager service on Controller nodes. Enabled as a part of the ceph-ansible configuration.
OS::TripleO::Services::CephMds - Enables the Ceph Metadata Service (MDS) on Controller nodes. Disabled by default.
OS::TripleO::Services::CephRbdMirror - Enables the RADOS Block Device (RBD) mirroring service. Disabled by default.
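For example, if your roles_data file defines a standalone Database role, that role now needs the cluster check service alongside the MySQL services. This is an abbreviated sketch, not a complete service list:

- name: Database
  ServicesDefault:
    - OS::TripleO::Services::MySQL
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Clustercheck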
In addition, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide for updated lists of services for specific custom roles.
In addition to the new composable services, take note of any composable services deprecated in OpenStack Platform 13.
5.4. Deprecated composable services
If using a custom roles_data file, remove these services from their applicable roles.
OS::TripleO::Services::Core - This service acted as a core dependency for other Pacemaker services. This service has been removed to accommodate high availability composable services.
OS::TripleO::Services::VipHosts - This service configured the /etc/hosts file with node hostnames and IP addresses. This service is now integrated directly into the director’s Heat templates.
OS::TripleO::Services::FluentdClient - This service has been replaced with the OS::TripleO::Services::Fluentd service.
OS::TripleO::Services::ManilaBackendGeneric - The Manila generic backend is no longer supported.
In addition, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide for updated lists of services for specific custom roles.
5.5. Deprecated parameters
Note that a number of node-specific parameters from previous versions are deprecated and have been replaced with role-specific parameters.
Update these parameters in your custom environment files.
If your OpenStack Platform environment still requires these deprecated parameters, the default roles_data file allows their use. However, if you are using a custom roles_data file and your overcloud still requires these deprecated parameters, you can allow access to them by editing the roles_data file and adding the following to each role:
Controller Role
- name: Controller
  uses_deprecated_params: True
  deprecated_param_extraconfig: 'controllerExtraConfig'
  deprecated_param_flavor: 'OvercloudControlFlavor'
  deprecated_param_image: 'controllerImage'
  ...
Compute Role
- name: Compute
  uses_deprecated_params: True
  deprecated_param_image: 'NovaImage'
  deprecated_param_extraconfig: 'NovaComputeExtraConfig'
  deprecated_param_metadata: 'NovaComputeServerMetadata'
  deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints'
  deprecated_param_ips: 'NovaComputeIPs'
  deprecated_server_resource_name: 'NovaCompute'
  disable_upgrade_deployment: True
  ...
Object Storage Role
- name: ObjectStorage
  uses_deprecated_params: True
  deprecated_param_metadata: 'SwiftStorageServerMetadata'
  deprecated_param_ips: 'SwiftStorageIPs'
  deprecated_param_image: 'SwiftStorageImage'
  deprecated_param_flavor: 'OvercloudSwiftStorageFlavor'
  disable_upgrade_deployment: True
  ...
5.6. Deprecated CLI options
Some command line options are outdated or deprecated in favor of Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated options to their Heat template equivalents.
Table 5.1. Mapping deprecated CLI options to Heat template parameters
| Option | Description | Heat Template Parameter |
|---|---|---|
| | The number of Controller nodes to scale out | |
| | The number of Compute nodes to scale out | |
| | The number of Ceph Storage nodes to scale out | |
| | The number of Cinder nodes to scale out | |
| | The number of Swift nodes to scale out | |
| | The flavor to use for Controller nodes | |
| | The flavor to use for Compute nodes | |
| | The flavor to use for Ceph Storage nodes | |
| | The flavor to use for Cinder nodes | |
| | The flavor to use for Swift storage nodes | |
| | Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation | |
| | An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed | |
| | The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network | |
| | Defines the interface to bridge onto br-ex for network nodes | |
| | The tenant network type for Neutron | |
| | The tunnel types for the Neutron tenant network. To specify multiple values, use a comma separated string | |
| | Ranges of GRE tunnel IDs to make available for tenant network allocation | |
| | Ranges of VXLAN VNI IDs to make available for tenant network allocation | |
| | The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the 'datacentre' physical network | |
| | The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string | |
| | Disables tunneling in case you aim to use a VLAN segmented network or flat network with Neutron | No parameter mapping |
| | The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. | No parameter mapping |
| | Sets the NTP server to use to synchronize time | |
These options have been removed from Red Hat OpenStack Platform. It is recommended that you convert your CLI options to Heat parameters and add them to an environment file.
Later examples in this guide include a deprecated_cli_options.yaml environment file that contains these new parameters.
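For example, a minimal deprecated_cli_options.yaml might contain entries such as the following. ControllerCount, ComputeCount, and NtpServer are standard Heat parameters shown here for illustration; substitute the parameters that correspond to the CLI options your deployment previously used:

parameter_defaults:
  ControllerCount: 3
  ComputeCount: 3
  NtpServer: pool.ntp.org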
5.7. Composable networks
This version of Red Hat OpenStack Platform introduces a new feature for composable networks. If using a custom roles_data file, edit the file to add the composable networks to each role. For example, for Controller nodes:
- name: Controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant
Check the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file for further examples of syntax. Also check the example role snippets in /usr/share/openstack-tripleo-heat-templates/roles.
The following table provides a mapping of composable networks to custom standalone roles:
| Role | Networks Required |
|---|---|
| Ceph Storage Monitor | |
| Ceph Storage OSD | |
| Ceph Storage RadosGW | |
| Cinder API | |
| Compute | |
| Controller | |
| Database | |
| Glance | |
| Heat | |
| Horizon | |
| Ironic | None required. Uses the Provisioning/Control Plane network for API. |
| Keystone | |
| Load Balancer | |
| Manila | |
| Message Bus | |
| Networker | |
| Neutron API | |
| Nova | |
| OpenDaylight | |
| Redis | |
| Sahara | |
| Swift API | |
| Swift Storage | |
| Telemetry | |
In previous versions, the *NetName parameters (e.g. InternalApiNetName) changed the names of the default networks. This is no longer supported. Use a custom composable network file. For more information, see "Using Composable Networks" in the Advanced Overcloud Customization guide.
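For example, a custom network data file defines each network as an entry similar to the following sketch; the subnet, VLAN, and allocation pool values are illustrative and your addressing will differ:

- name: InternalApi
  name_lower: internal_api
  vip: true
  vlan: 20
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]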
5.8. Preparing for Ceph Storage node upgrades
Due to the upgrade to containerized services, the method for installing and updating Ceph Storage nodes has changed. Ceph Storage configuration now uses a set of playbooks in the ceph-ansible package, which you install on the undercloud.
Prerequisites
- Your overcloud has a director-managed Ceph Storage cluster.
Procedure
Install the ceph-ansible package on the undercloud:

[stack@director ~]$ sudo yum install -y ceph-ansible
Check that you are using the latest resources and configuration in your storage environment file. This requires the following changes:
The resource_registry uses containerized services from the docker/services subdirectory of your core Heat template collection instead of the puppet/services subdirectory. For example, replace:

resource_registry:
  OS::TripleO::Services::CephMon: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-client.yaml

With:

resource_registry:
  OS::TripleO::Services::CephMon: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-osd.yaml
  OS::TripleO::Services::CephClient: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-client.yaml
Important: This configuration is included in the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file, which you can include with all future deployment commands with -e.

Use the new CephAnsibleDisksConfig parameter to define how your disks are mapped. Previous versions of Red Hat OpenStack Platform used the ceph::profile::params::osds hieradata to define the OSD layout. Convert this hieradata to the structure of the CephAnsibleDisksConfig parameter. For example, if your hieradata contained the following:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_journal_size: 512
    ceph::profile::params::osds:
      '/dev/sdb': {}
      '/dev/sdc': {}
      '/dev/sdd': {}

Then the CephAnsibleDisksConfig would look like this:

parameter_defaults:
  ExtraConfig: {}
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    journal_size: 512
    osd_scenario: collocated

For a full list of OSD disk layout options used in ceph-ansible, view the sample file in /usr/share/ceph-ansible/group_vars/osds.yml.sample.
- Make sure to include the new Ceph configuration environment files with future deployment commands using the -e option, as shown in the sketch below.
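For example, a subsequent deployment or upgrade command might include the containerized Ceph environment file alongside your own storage configuration. The ceph-config.yaml file name is a hypothetical example of a custom storage environment file:

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  [OTHER OPTIONS]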
5.9. Preparing Storage Backends
Some storage backends have changed from using configuration hooks to their own composable service. If using a custom storage backend, check the associated environment file in the environments directory for new parameters and resources. Update any custom environment files for your backends. For example:
- For the NetApp Block Storage (cinder) backend, use the new environments/cinder-netapp-config.yaml in your deployment.
- For the Dell EMC Block Storage (cinder) backend, use the new environments/cinder-dellsc-config.yaml in your deployment.
- For the Dell EqualLogic Block Storage (cinder) backend, use the new environments/cinder-dellps-config.yaml in your deployment.
For example, the NetApp Block Storage (cinder) backend used the following resources for these respective versions:
- OpenStack Platform 10 and below:

  OS::TripleO::ControllerExtraConfigPre: ../puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml

- OpenStack Platform 11 and above:

  OS::TripleO::Services::CinderBackendNetApp: ../puppet/services/cinder-backend-netapp.yaml
As a result, you now use the new OS::TripleO::Services::CinderBackendNetApp resource and its associated service template for this backend.
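If you maintain your own environment file for this backend, a minimal sketch of the updated resource_registry entry would resemble the following; verify the exact template path against your version of the core Heat template collection:

resource_registry:
  OS::TripleO::Services::CinderBackendNetApp: /usr/share/openstack-tripleo-heat-templates/puppet/services/cinder-backend-netapp.yaml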
5.10. Preparing Access to the Undercloud’s Public API over SSL/TLS
The overcloud requires access to the undercloud’s OpenStack Object Storage (swift) Public API during the upgrade. If your undercloud uses a self-signed certificate, you need to add the undercloud’s certificate authority to each overcloud node.
Prerequisites
- The undercloud uses SSL/TLS for its Public API
Procedure
The director’s dynamic Ansible script has been updated to the OpenStack Platform 12 version, which uses the RoleNetHostnameMap Heat parameter in the overcloud plan to define the inventory. However, the overcloud currently uses the OpenStack Platform 11 template versions, which do not have the RoleNetHostnameMap parameter. This means you need to create a temporary static inventory file, which you can generate with the following command:

$ openstack server list -c Networks -f value | cut -d"=" -f2 > overcloud_hosts
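The resulting overcloud_hosts file contains one provisioning network IP address per line, which Ansible uses as a static inventory. The addresses below are illustrative:

192.168.24.100
192.168.24.101
192.168.24.102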
Create an Ansible playbook (undercloud-ca.yml) that contains the following:

---
- name: Add undercloud CA to overcloud nodes
  hosts: all
  user: heat-admin
  become: true
  vars:
    ca_certificate: /etc/pki/ca-trust/source/anchors/cm-local-ca.pem
  tasks:
    - name: Copy undercloud CA
      copy:
        src: "{{ ca_certificate }}"
        dest: /etc/pki/ca-trust/source/anchors/
    - name: Update trust
      command: "update-ca-trust extract"
    - name: Get the swift endpoint
      shell: |
        sudo hiera swift::keystone::auth::public_url | awk -F/ '{print $3}'
      register: swift_endpoint
      delegate_to: 127.0.0.1
      become: yes
      become_user: stack
    - name: Verify URL
      uri:
        url: https://{{ swift_endpoint.stdout }}/healthcheck
        return_content: yes
      register: verify
    - name: Report output
      debug:
        msg: "{{ ansible_hostname }} can access the undercloud's Public API"
      when: verify.content == "OK"

This playbook contains multiple tasks that perform the following on each node:
- Copy the undercloud’s certificate authority file to the overcloud node. If generated by the undercloud, the default location is /etc/pki/ca-trust/source/anchors/cm-local-ca.pem.
- Execute the command to update the certificate authority trust database on the overcloud node.
- Check the undercloud’s Object Storage Public API from the overcloud node and report if successful.
Run the playbook with the following command:
$ ansible-playbook -i overcloud_hosts undercloud-ca.yml
This uses the temporary inventory to provide Ansible with your overcloud nodes.
If using a custom certificate authority file, you can set the ca_certificate variable to its location. For example:

$ ansible-playbook -i overcloud_hosts undercloud-ca.yml -e ca_certificate=/home/stack/ssl/ca.crt.pem
The resulting Ansible output should show a debug message for each node. For example:
ok: [192.168.24.100] => { "msg": "overcloud-controller-0 can access the undercloud's Public API" }
Related Information
- For more information on running Ansible automation on your overcloud, see "Running the dynamic inventory script" in the Director Installation and Usage guide.
5.11. Configuring registration for fast forward upgrades
The fast forward upgrade process uses a new method to switch repositories. This means you need to remove the old rhel-registration environment files from your deployment command. For example:
- environment-rhel-registration.yaml
- rhel-registration-resource-registry.yaml
The fast forward upgrade process uses a script to change repositories during each stage of the upgrade. This script is included as part of the OS::TripleO::Services::TripleoPackages composable service (puppet/services/tripleo-packages.yaml) using the FastForwardCustomRepoScriptContent parameter. This is the script:
#!/bin/bash
set -e
case $1 in
ocata)
subscription-manager repos --disable=rhel-7-server-openstack-10-rpms
subscription-manager repos --enable=rhel-7-server-openstack-11-rpms
;;
pike)
subscription-manager repos --disable=rhel-7-server-openstack-11-rpms
subscription-manager repos --enable=rhel-7-server-openstack-12-rpms
;;
queens)
subscription-manager repos --disable=rhel-7-server-openstack-12-rpms
subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
subscription-manager repos --disable=rhel-7-server-rhceph-2-osd-rpms
subscription-manager repos --disable=rhel-7-server-rhceph-2-mon-rpms
subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms
subscription-manager repos --disable=rhel-7-server-rhceph-2-tools-rpms
subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
;;
*)
echo "unknown release $1" >&2
exit 1
esac

The director passes the upstream codename of each OpenStack Platform version to the script:
| Codename | Version |
|---|---|
| ocata | OpenStack Platform 11 |
| pike | OpenStack Platform 12 |
| queens | OpenStack Platform 13 |
The change to queens also disables Ceph Storage 2 repositories and enables the Ceph Storage 3 MON and Tools repositories. The change does not enable the Ceph Storage 3 OSD repositories because these are now containerized.
In some situations, you might need to use a custom script. For example:
- Using Red Hat Satellite with custom repository names.
- Using a disconnected repository with custom names.
- Executing additional commands at each stage.
In these situations, include your custom script by setting the FastForwardCustomRepoScriptContent parameter:
parameter_defaults:
  FastForwardCustomRepoScriptContent: |
    [INSERT UPGRADE SCRIPT HERE]

For example, use the following script to change repositories with a set of Satellite 6 activation keys:
parameter_defaults:
  FastForwardCustomRepoScriptContent: |
    set -e
    URL="satellite.example.com"
    case $1 in
      ocata)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp11 --org=Default_Organization
        ;;
      pike)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp12 --org=Default_Organization
        ;;
      queens)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp13 --org=Default_Organization
        ;;
      *)
        echo "unknown release $1" >&2
        exit 1
    esac
Later examples in this guide include a custom_repositories_script.yaml environment file that contains your custom script.
5.12. Checking custom Puppet parameters
If you use the ExtraConfig interfaces to customize Puppet parameters, Puppet might report duplicate declaration errors during the upgrade. This is due to changes in the interfaces provided by the Puppet modules themselves.
This procedure shows how to check for any custom ExtraConfig hieradata parameters in your environment files.
Procedure
Select an environment file and check whether it has an ExtraConfig parameter:

$ grep ExtraConfig ~/templates/custom-config.yaml
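To check all of your custom environment files at once, you can run a similar search across the directory that contains them; the ~/templates path is an example location:

$ grep -l ExtraConfig ~/templates/*.yaml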
- If the results show an ExtraConfig parameter for any role (for example, ControllerExtraConfig) in the chosen file, check the full parameter structure in that file.
- If the parameter contains any Puppet hieradata with a SECTION/parameter syntax followed by a value, it might have been replaced with a parameter from an actual Puppet class. For example:

parameter_defaults:
  ExtraConfig:
    neutron::config::dhcp_agent_config:
      'DEFAULT/dnsmasq_local_resolv':
        value: 'true'

Check the director’s Puppet modules to see if the parameter now exists within a Puppet class. For example:
$ grep dnsmasq_local_resolv
If so, change to the new interface.
The following are examples to demonstrate the change in syntax:
Example 1:
parameter_defaults:
  ExtraConfig:
    neutron::config::dhcp_agent_config:
      'DEFAULT/dnsmasq_local_resolv':
        value: 'true'

Changes to:

parameter_defaults:
  ExtraConfig:
    neutron::agents::dhcp::dnsmasq_local_resolv: true

Example 2:

parameter_defaults:
  ExtraConfig:
    ceilometer::config::ceilometer_config:
      'oslo_messaging_rabbit/rabbit_qos_prefetch_count':
        value: '32'

Changes to:

parameter_defaults:
  ExtraConfig:
    oslo::messaging::rabbit::rabbit_qos_prefetch_count: '32'
5.13. Converting network interface templates to the new structure
Previously, the network interface structure used an OS::Heat::StructuredConfig resource to configure interfaces:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            [NETWORK INTERFACE CONFIGURATION HERE]
The templates now use an OS::Heat::SoftwareConfig resource for configuration:
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
                [NETWORK INTERFACE CONFIGURATION HERE]
This configuration takes the interface configuration stored in the $network_config variable and injects it as a part of the run-os-net-config.sh script.
You must update your network interface templates to use this new structure and check that the templates still conform to the syntax. Failure to do so can cause the fast forward upgrade process to fail.
The director’s Heat template collection contains a script to help convert your templates to this new format. This script is located in /usr/share/openstack-tripleo-heat-templates/tools/yaml-nic-config-2-script.py. For an example of usage:
$ /usr/share/openstack-tripleo-heat-templates/tools/yaml-nic-config-2-script.py \
--script-dir /usr/share/openstack-tripleo-heat-templates/network/scripts \
  [NIC TEMPLATE] [NIC TEMPLATE] ...

Ensure your templates do not contain any commented lines when using this script; commented lines can cause errors when parsing the old template structure.
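For example, to convert hypothetical Controller and Compute NIC templates (the template paths are illustrative):

$ /usr/share/openstack-tripleo-heat-templates/tools/yaml-nic-config-2-script.py \
  --script-dir /usr/share/openstack-tripleo-heat-templates/network/scripts \
  ~/templates/nic-configs/controller.yaml ~/templates/nic-configs/compute.yaml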
For more information, see "Isolating Networks".
5.14. Next Steps
The overcloud preparation stage is complete. You can now perform an upgrade of the overcloud from 10 to 13 using the steps in Chapter 6, Upgrading the overcloud.