Chapter 5. Preparing for the overcloud upgrade
This section prepares the overcloud for the upgrade process. Not all steps in this section will apply to your overcloud. However, it is recommended to step through each one and determine if your overcloud requires any additional configuration before the upgrade process begins.
5.1. Preparing for overcloud service downtime
The overcloud upgrade process disables the main services at key points. This means you cannot use any overcloud services to create new resources during the upgrade. Workloads running in the overcloud remain active during this period, which means instances continue to run throughout the upgrade.
It is important to plan a maintenance window to ensure that users cannot access the overcloud services during the upgrade.
Affected by overcloud upgrade
- OpenStack Platform services
Unaffected by overcloud upgrade
- Instances running during the upgrade
- Ceph Storage OSDs (backend storage for instances)
- Linux networking
- Open vSwitch networking
- Undercloud
5.2. Selecting Compute nodes for upgrade testing
The overcloud upgrade process allows you to either:
- Upgrade all nodes in a role
- Upgrade individual nodes separately
To ensure a smooth overcloud upgrade process, it is useful to test the upgrade on a few individual Compute nodes in your environment before upgrading all Compute nodes. This ensures no major issues occur during the upgrade while maintaining minimal downtime to your workloads.
Use the following recommendations to help choose test nodes for the upgrade:
- Select two or three Compute nodes for upgrade testing
- Select nodes without any critical instances running
- If necessary, migrate critical instances from the selected test Compute nodes to other Compute nodes
The instructions in Chapter 6, Upgrading the overcloud use compute-0 as an example of a Compute node to test the upgrade process before running the upgrade on all Compute nodes.
The next step updates your roles_data file to ensure any new composable services have been added to the relevant roles in your environment. To manually edit your existing roles_data file, use the following lists of new composable services for OpenStack Platform 13 roles.
If you enabled High Availability for Compute Instances (Instance HA) in Red Hat OpenStack Platform 12 or earlier and you want to perform a fast-forward upgrade to version 13 or later, you must manually disable Instance HA first. For instructions, see Disabling Instance HA from previous versions.
5.3. New composable services
This version of Red Hat OpenStack Platform contains new composable services. If using a custom roles_data file with your own roles, include these new mandatory services in their applicable roles.
All Roles
The following new services apply to all roles.
OS::TripleO::Services::MySQLClient
- Configures the MariaDB client on a node, which provides database configuration for other composable services. Add this service to all roles with standalone composable services.
OS::TripleO::Services::CertmongerUser
- Allows the overcloud to require certificates from Certmonger. Only used if enabling TLS/SSL communication.
OS::TripleO::Services::Docker
- Installs docker to manage containerized services.
OS::TripleO::Services::ContainersLogrotateCrond
- Installs the logrotate service for container logs.
OS::TripleO::Services::Securetty
- Allows configuration of securetty on nodes. Enabled with the environments/securetty.yaml environment file.
OS::TripleO::Services::Tuned
- Enables and configures the Linux tuning daemon (tuned).
OS::TripleO::Services::AuditD
- Adds the auditd daemon and configures rules. Disabled by default.
OS::TripleO::Services::Collectd
- Adds the collectd daemon. Disabled by default.
OS::TripleO::Services::Rhsm
- Configures subscriptions using an Ansible-based method. Disabled by default.
OS::TripleO::Services::RsyslogSidecar
- Configures a sidecar container for logging. Disabled by default.
Specific Roles
The following new services apply to specific roles:
OS::TripleO::Services::NovaPlacement
- Configures the OpenStack Compute (nova) Placement API. If using a standalone Nova API role in your current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
OS::TripleO::Services::PankoApi
- Configures the OpenStack Telemetry Event Storage (panko) service. If using a standalone Telemetry role in your current overcloud, add this service to the role. Otherwise, add the service to the Controller role.
OS::TripleO::Services::Clustercheck
- Required on any role that also uses the OS::TripleO::Services::MySQL service, such as the Controller or standalone Database role.
OS::TripleO::Services::Iscsid
- Configures the iscsid service on the Controller, Compute, and BlockStorage roles.
OS::TripleO::Services::NovaMigrationTarget
- Configures the migration target service on Compute nodes.
OS::TripleO::Services::Ec2Api
- Enables the OpenStack Compute (nova) EC2-API service on Controller nodes. Disabled by default.
OS::TripleO::Services::CephMgr
- Enables the Ceph Manager service on Controller nodes. Enabled as a part of the ceph-ansible configuration.
OS::TripleO::Services::CephMds
- Enables the Ceph Metadata Service (MDS) on Controller nodes. Disabled by default.
OS::TripleO::Services::CephRbdMirror
- Enables the RADOS Block Device (RBD) mirroring service. Disabled by default.
In addition, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide for updated lists of services for specific custom roles.
In addition to new composable services, take note of any deprecated services since OpenStack Platform 13.
5.4. Deprecated composable services
If using a custom roles_data
file, remove these services from their applicable roles.
OS::TripleO::Services::Core
- This service acted as a core dependency for other Pacemaker services. This service has been removed to accommodate high availability composable services.
OS::TripleO::Services::VipHosts
- This service configured the /etc/hosts file with node hostnames and IP addresses. This service is now integrated directly into the director’s Heat templates.
OS::TripleO::Services::FluentdClient
- This service has been replaced with the OS::TripleO::Services::Fluentd service.
OS::TripleO::Services::ManilaBackendGeneric
- The Manila generic backend is no longer supported.
In addition, see the "Service Architecture: Standalone Roles" section in the Advanced Overcloud Customization guide for updated lists of services for specific custom roles.
5.5. Switching to containerized services
The fast forward upgrade process converts specific Systemd services to containerized services. This process occurs automatically if you use the default environment files from /usr/share/openstack-tripleo-heat-templates/environments/.
If you use custom environment files to enable services on your overcloud, check those environment files for a resource_registry section and confirm that any registered composable services map to containerized services.
Procedure
- View your custom environment file:
$ cat ~/templates/custom_environment.yaml
- Check for a resource_registry section in the file contents.
- Check for any composable services in the resource_registry section. Composable services begin with the following namespace: OS::TripleO::Services
For example, the following composable service is for the OpenStack Bare Metal Service (ironic) API:
OS::TripleO::Services::IronicApi
- Check if the composable service maps to a Puppet-specific Heat template. For example:
resource_registry:
  OS::TripleO::Services::IronicApi: /usr/share/openstack-tripleo-heat-templates/puppet/services/ironic-api.yaml
- Check if a containerized version of the Heat template exists in /usr/share/openstack-tripleo-heat-templates/docker/services/ and remap the service to the containerized version:
resource_registry:
  OS::TripleO::Services::IronicApi: /usr/share/openstack-tripleo-heat-templates/docker/services/ironic-api.yaml
Alternatively, use the updated environment files for the service, which are located in /usr/share/openstack-tripleo-heat-templates/environments/. For example, the latest environment file for enabling the OpenStack Bare Metal Service (ironic) is /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml, which contains the containerized service mappings.
If the custom service does not use a containerized service, keep the mapping to the Puppet-specific Heat template.
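The remapping decision in this procedure can also be checked mechanically. The following is an illustrative Python sketch, not a director tool: it remaps a service template to docker/services/ when a containerized version exists and otherwise keeps the Puppet-specific mapping. The available_docker_templates set is an assumption standing in for a real listing of the docker/services/ directory.

```python
# Illustrative sketch: remap a composable service template from the
# puppet/services/ tree to docker/services/ when a containerized
# version exists. The set of available docker templates is an
# assumption standing in for a real directory listing.

def remap_service(template_path, available_docker_templates):
    """Return the containerized template path if one exists,
    otherwise keep the Puppet-specific mapping."""
    candidate = template_path.replace("/puppet/services/", "/docker/services/")
    if candidate in available_docker_templates:
        return candidate
    return template_path

base = "/usr/share/openstack-tripleo-heat-templates"
docker_templates = {base + "/docker/services/ironic-api.yaml"}

print(remap_service(base + "/puppet/services/ironic-api.yaml", docker_templates))
# A service with no containerized template keeps its Puppet mapping:
print(remap_service(base + "/puppet/services/my-custom-service.yaml", docker_templates))
```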
5.6. Deprecated parameters
Note that the following parameters are deprecated and have been replaced.
Old Parameter | New Parameter |
---|---|
KeystoneNotificationDriver | NotificationDriver |
controllerExtraConfig | ControllerExtraConfig |
OvercloudControlFlavor | OvercloudControllerFlavor |
controllerImage | ControllerImage |
NovaImage | ComputeImage |
NovaComputeExtraConfig | ComputeExtraConfig |
NovaComputeServerMetadata | ComputeServerMetadata |
NovaComputeSchedulerHints | ComputeSchedulerHints |
NovaComputeIPs | ComputeIPs |
SwiftStorageServerMetadata | ObjectStorageServerMetadata |
SwiftStorageIPs | ObjectStorageIPs |
SwiftStorageImage | ObjectStorageImage |
OvercloudSwiftStorageFlavor | OvercloudObjectStorageFlavor |
NeutronDpdkCoreList | OvsPmdCoreList |
NeutronDpdkMemoryChannels | OvsDpdkMemoryChannels |
NeutronDpdkSocketMemory | OvsDpdkSocketMemory |
NeutronDpdkDriverType | OvsDpdkDriverType |
HostCpusList | OvsDpdkCoreList |
Note
If you are using a custom Compute role, add the following configuration to your deployment:
parameter_defaults:
  NovaComputeSchedulerHints: {}
You must add this configuration to use any role-specific *SchedulerHints parameters.
For the values of the new parameters, use double quotation marks without nested single quotation marks, as shown in the following examples:
Old Parameter With Value | New Parameter With Value |
---|---|
NeutronDpdkCoreList: "'2,3'" | OvsPmdCoreList: "2,3" |
HostCpusList: "'0,1'" | OvsDpdkCoreList: "0,1" |
Update these parameters in your custom environment files. The following parameters have been deprecated with no current equivalent.
- NeutronL3HA
L3 high availability is enabled in all cases except for configurations with distributed virtual routing (NeutronEnableDVR).
- CeilometerWorkers
Ceilometer is deprecated in favor of newer components (Gnocchi, Aodh, Panko).
- CinderNetappEseriesHostType
All E-series support has been deprecated.
- ControllerEnableSwiftStorage
Manipulate the ControllerServices parameter instead.
- OpenDaylightPort
Use the EndpointMap to define a default port for OpenDaylight.
- OpenDaylightConnectionProtocol
The value of this parameter is now determined based on whether you are deploying the overcloud with TLS.
Run the following egrep
command in your /home/stack
directory to identify any environment files that contain deprecated parameters:
$ egrep -r -w 'KeystoneNotificationDriver|controllerExtraConfig|OvercloudControlFlavor|controllerImage|NovaImage|NovaComputeExtraConfig|NovaComputeServerMetadata|NovaComputeSchedulerHints|NovaComputeIPs|SwiftStorageServerMetadata|SwiftStorageIPs|SwiftStorageImage|OvercloudSwiftStorageFlavor|NeutronDpdkCoreList|NeutronDpdkMemoryChannels|NeutronDpdkSocketMemory|NeutronDpdkDriverType|HostCpusList|NeutronDpdkCoreList|HostCpusList|NeutronL3HA|CeilometerWorkers|CinderNetappEseriesHostType|ControllerEnableSwiftStorage|OpenDaylightPort|OpenDaylightConnectionProtocol' *
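If you prefer to script the same check, the following illustrative Python sketch searches file content for deprecated parameter names as whole words. The shortened parameter list and the sample content are assumptions for demonstration; extend the list with the full set of names from the egrep command above.

```python
import re

# Subset of the deprecated parameters from the egrep command above
# (shortened here for brevity; extend with the full list as needed).
DEPRECATED = ["NovaImage", "controllerExtraConfig", "HostCpusList", "NeutronL3HA"]

def find_deprecated(text):
    """Return the deprecated parameter names that occur as whole words."""
    return [p for p in DEPRECATED if re.search(r"\b%s\b" % re.escape(p), text)]

# Hypothetical environment file content for illustration
sample = """\
parameter_defaults:
  NovaImage: overcloud-full
  ComputeCount: 3
"""
print(find_deprecated(sample))  # ['NovaImage']
```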
If your OpenStack Platform environment still requires these deprecated parameters, the default roles_data
file allows their use. However, if you are using a custom roles_data
file and your overcloud still requires these deprecated parameters, you can allow access to them by editing the roles_data
file and adding the following to each role:
Controller Role
- name: Controller
  uses_deprecated_params: True
  deprecated_param_extraconfig: 'controllerExtraConfig'
  deprecated_param_flavor: 'OvercloudControlFlavor'
  deprecated_param_image: 'controllerImage'
  ...
Compute Role
- name: Compute
  uses_deprecated_params: True
  deprecated_param_image: 'NovaImage'
  deprecated_param_extraconfig: 'NovaComputeExtraConfig'
  deprecated_param_metadata: 'NovaComputeServerMetadata'
  deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints'
  deprecated_param_ips: 'NovaComputeIPs'
  deprecated_server_resource_name: 'NovaCompute'
  disable_upgrade_deployment: True
  ...
Object Storage Role
- name: ObjectStorage
  uses_deprecated_params: True
  deprecated_param_metadata: 'SwiftStorageServerMetadata'
  deprecated_param_ips: 'SwiftStorageIPs'
  deprecated_param_image: 'SwiftStorageImage'
  deprecated_param_flavor: 'OvercloudSwiftStorageFlavor'
  disable_upgrade_deployment: True
  ...
5.7. Deprecated CLI options
Some command line options are outdated or deprecated in favor of Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated options to their Heat template equivalents.
Table 5.1. Mapping deprecated CLI options to Heat template parameters
Option | Description | Heat Template Parameter |
---|---|---|
--control-scale | The number of Controller nodes to scale out | ControllerCount |
--compute-scale | The number of Compute nodes to scale out | ComputeCount |
--ceph-storage-scale | The number of Ceph Storage nodes to scale out | CephStorageCount |
--block-storage-scale | The number of Cinder nodes to scale out | BlockStorageCount |
--swift-storage-scale | The number of Swift nodes to scale out | ObjectStorageCount |
--control-flavor | The flavor to use for Controller nodes | OvercloudControllerFlavor |
--compute-flavor | The flavor to use for Compute nodes | OvercloudComputeFlavor |
--ceph-storage-flavor | The flavor to use for Ceph Storage nodes | OvercloudCephStorageFlavor |
--block-storage-flavor | The flavor to use for Cinder nodes | OvercloudBlockStorageFlavor |
--swift-storage-flavor | The flavor to use for Swift storage nodes | OvercloudSwiftStorageFlavor |
--neutron-flat-networks | Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation | NeutronFlatNetworks |
--neutron-physical-bridge | An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed | HypervisorNeutronPhysicalBridge |
--neutron-bridge-mappings | The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network | NeutronBridgeMappings |
--neutron-public-interface | Defines the interface to bridge onto br-ex for network nodes | NeutronPublicInterface |
--neutron-network-type | The tenant network type for Neutron | NeutronNetworkType |
--neutron-tunnel-types | The tunnel types for the Neutron tenant network. To specify multiple values, use a comma separated string | NeutronTunnelTypes |
--neutron-tunnel-id-ranges | Ranges of GRE tunnel IDs to make available for tenant network allocation | NeutronTunnelIdRanges |
--neutron-vni-ranges | Ranges of VXLAN VNI IDs to make available for tenant network allocation | NeutronVniRanges |
--neutron-network-vlan-ranges | The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the 'datacentre' physical network | NeutronNetworkVLANRanges |
--neutron-mechanism-drivers | The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string | NeutronMechanismDrivers |
--neutron-disable-tunneling | Disables tunneling in case you aim to use a VLAN segmented network or flat network with Neutron | No parameter mapping. |
--validation-errors-fatal | The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. | No parameter mapping |
--ntp-server | Sets the NTP server to use to synchronize time | NtpServer |
These parameters have been removed from Red Hat OpenStack Platform. It is recommended to convert your CLI options to Heat parameters and add them to an environment file.
The following is an example of a file called deprecated_cli_options.yaml, which contains some of these new parameters:
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 3
  CephStorageCount: 3
  ...
Later examples in this guide include a deprecated_cli_options.yaml environment file that includes these new parameters.
5.8. Composable networks
This version of Red Hat OpenStack Platform introduces a new feature for composable networks. If using a custom roles_data
file, edit the file to add the composable networks to each role. For example, for Controller nodes:
- name: Controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant
Check the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file for further examples of syntax. Also check the example role snippets in /usr/share/openstack-tripleo-heat-templates/roles.
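A role definition like the Controller example above can also be verified programmatically. The following illustrative Python sketch (not a director tool) checks that a role from a parsed roles_data file lists the networks you expect; the role dict and the required-network sets are example assumptions taken from your own environment.

```python
# Illustrative check that a role from roles_data includes the
# networks you expect it to carry. The role dict mirrors the
# Controller example above; the required sets are your own policy.

def missing_networks(role, required):
    """Return the required networks absent from the role definition."""
    return sorted(set(required) - set(role.get("networks", [])))

controller = {
    "name": "Controller",
    "networks": ["External", "InternalApi", "Storage", "StorageMgmt", "Tenant"],
}

print(missing_networks(controller, ["InternalApi", "Storage"]))   # []
print(missing_networks(controller, ["InternalApi", "Management"]))  # ['Management']
```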
The following table provides a mapping of composable networks to custom standalone roles:
Role | Networks Required |
---|---|
Ceph Storage Monitor | Storage, StorageMgmt |
Ceph Storage OSD | Storage, StorageMgmt |
Ceph Storage RadosGW | Storage, StorageMgmt |
Cinder API | InternalApi |
Compute | InternalApi, Tenant, Storage |
Controller | External, InternalApi, Storage, StorageMgmt, Tenant |
Database | InternalApi |
Glance | InternalApi |
Heat | InternalApi |
Horizon | InternalApi |
Ironic | None required. Uses the Provisioning/Control Plane network for API. |
Keystone | InternalApi |
Load Balancer | External, InternalApi, Storage, StorageMgmt, Tenant |
Manila | InternalApi |
Message Bus | InternalApi |
Networker | InternalApi, Tenant |
Neutron API | InternalApi |
Nova | InternalApi |
OpenDaylight | External, InternalApi, Tenant |
Redis | InternalApi |
Sahara | InternalApi |
Swift API | Storage |
Swift Storage | StorageMgmt |
Telemetry | InternalApi, Storage |
In previous versions, the *NetName parameters (e.g. InternalApiNetName) changed the names of the default networks. This is no longer supported. Use a custom composable network file. For more information, see "Using Composable Networks" in the Advanced Overcloud Customization guide.
5.9. Preparing for Ceph Storage or HCI node upgrades
Due to the upgrade to containerized services, the method for installing and updating Ceph Storage nodes has changed. Ceph Storage configuration now uses a set of playbooks in the ceph-ansible
package, which you install on the undercloud.
- Important
- If you are using a hyperconverged deployment, see Section 6.7, “Upgrading hyperconverged nodes” for how to upgrade.
- If you are using a mixed hyperconverged deployment, see Section 6.8, “Upgrading mixed hyperconverged nodes” for how to upgrade.
Procedure
If you are using a director-managed or external Ceph Storage cluster, install the ceph-ansible package:
- Enable the Ceph Tools repository on the undercloud:
[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
- Install the ceph-ansible package on the undercloud:
[stack@director ~]$ sudo yum install -y ceph-ansible
Check your Ceph-specific environment files and ensure your Ceph-specific heat resources use containerized services:
- For director-managed Ceph Storage clusters, ensure that the resources in the resource_registry point to the templates in docker/services/ceph-ansible:
resource_registry:
  OS::TripleO::Services::CephMgr: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-mgr.yaml
  OS::TripleO::Services::CephMon: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-osd.yaml
  OS::TripleO::Services::CephClient: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-client.yaml
Important
This configuration is included in the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file, which you can include with all future deployment commands with -e.
Note
If the environment or template file that you want to use in an environment is not present in the /usr/share directory, you must include the absolute path to the file.
- For external Ceph Storage clusters, make sure the resource in the resource_registry points to the template in docker/services/ceph-ansible:
resource_registry:
  OS::TripleO::Services::CephExternal: /usr/share/openstack-tripleo-heat-templates/docker/services/ceph-ansible/ceph-external.yaml
Important
This configuration is included in the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml environment file, which you can include with all future deployment commands with -e.
For director-managed Ceph Storage clusters, use the new CephAnsibleDisksConfig parameter to define how your disks are mapped. Previous versions of Red Hat OpenStack Platform used the ceph::profile::params::osds hieradata to define the OSD layout. Convert this hieradata to the structure of the CephAnsibleDisksConfig parameter. The following examples show how to convert the hieradata in the case of collocated and non-collocated Ceph journal disks.
Important
You must set osd_scenario. If you leave osd_scenario unset, it can result in a failed deployment.
In a scenario where the Ceph journal disks are collocated, if your hieradata contains the following:
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_journal_size: 512
    ceph::profile::params::osds:
      '/dev/sdb': {}
      '/dev/sdc': {}
      '/dev/sdd': {}
Convert the hieradata in the following way with the CephAnsibleDisksConfig parameter and set ceph::profile::params::osds to {}:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    journal_size: 512
    osd_scenario: collocated
  ExtraConfig:
    ceph::profile::params::osds: {}
In a scenario where the journals are on faster dedicated devices and are non-collocated, if the hieradata contains the following:
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_journal_size: 512
    ceph::profile::params::osds:
      '/dev/sdb':
        journal: '/dev/sdn'
      '/dev/sdc':
        journal: '/dev/sdn'
      '/dev/sdd':
        journal: '/dev/sdn'
Convert the hieradata in the following way with the CephAnsibleDisksConfig parameter and set ceph::profile::params::osds to {}:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    dedicated_devices:
      - /dev/sdn
      - /dev/sdn
      - /dev/sdn
    journal_size: 512
    osd_scenario: non-collocated
  ExtraConfig:
    ceph::profile::params::osds: {}
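The conversion in these two examples follows a mechanical pattern, sketched below in Python for illustration (this is not a supported migration tool): OSDs with per-device journals become a non-collocated layout with dedicated_devices, and OSDs without journals become a collocated layout.

```python
# Illustrative conversion of ceph::profile::params::osds hieradata
# into the CephAnsibleDisksConfig structure. Journal devices listed
# per OSD become dedicated_devices (non-collocated scenario);
# otherwise the journals are collocated with the OSDs.

def to_ceph_ansible_disks(osds, journal_size):
    devices = sorted(osds)
    journals = [osds[d].get("journal") for d in devices if osds[d].get("journal")]
    config = {
        "devices": devices,
        "journal_size": journal_size,
        "osd_scenario": "non-collocated" if journals else "collocated",
    }
    if journals:
        config["dedicated_devices"] = journals
    return config

# Collocated example from above
print(to_ceph_ansible_disks({"/dev/sdb": {}, "/dev/sdc": {}, "/dev/sdd": {}}, 512))

# Non-collocated example from above
print(to_ceph_ansible_disks(
    {"/dev/sdb": {"journal": "/dev/sdn"},
     "/dev/sdc": {"journal": "/dev/sdn"},
     "/dev/sdd": {"journal": "/dev/sdn"}}, 512))
```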
For a full list of OSD disk layout options used in ceph-ansible, view the sample file in /usr/share/ceph-ansible/group_vars/osds.yml.sample.
Include the new Ceph configuration environment files with future deployment commands using the -e option. This includes the following files:
Director-managed Ceph Storage:
- /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
- The environment file with the Ansible-based disk mapping.
- Any additional environment files with Ceph Storage customization.
External Ceph Storage:
- /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml
- Any additional environment files with Ceph Storage customization.
5.10. Updating environment variables for Ceph or HCI nodes with non-homogeneous disks
For HCI nodes, use the old syntax for disks during the compute service upgrade and the new syntax for disks during the storage service upgrade. For more information, see Section 5.9, “Preparing for Ceph Storage or HCI node upgrades”. However, it might also be necessary to update the syntax for non-homogeneous disks.
If the disks on the nodes you are upgrading are not the same, then they are non-homogeneous. For example, the disks on a mix of HCI nodes and Ceph Storage nodes may be non-homogeneous.
OpenStack Platform 12 and later introduced the use of ceph-ansible, which changed the syntax for updating mixed nodes with non-homogeneous disks. This means that starting in OpenStack Platform 12, you cannot use the composable role syntax of RoleExtraConfig to denote disks. See the following example.
The following example does not work for OpenStack Platform 12 or later:
CephStorageExtraConfig:
  ceph::profile::params::osds:
    '/dev/sda'
    '/dev/sdb'
    '/dev/sdc'
    '/dev/sdd'

ComputeHCIExtraConfig:
  ceph::profile::params::osds:
    '/dev/sda'
    '/dev/sdb'
For OpenStack Platform 12 and later, you must update the templates before you upgrade. For more information about how to update the templates for non-homogeneous disks, see Configuring Ceph Storage Cluster Setting in the Deploying an Overcloud with Containerized Red Hat Ceph guide.
5.11. Increasing the restart delay for large Ceph clusters
During the upgrade, each Ceph monitor and OSD is stopped sequentially. The migration does not continue until the same service that was stopped is successfully restarted. Ansible waits 15 seconds (the delay) and checks 5 times for the service to start (the retries). If the service does not restart, the migration stops so the operator can intervene.
Depending on the size of the Ceph cluster, you may need to increase the retry or delay values. The exact names of these parameters and their defaults are as follows:
health_mon_check_retries: 5
health_mon_check_delay: 15
health_osd_check_retries: 5
health_osd_check_delay: 15
You can update the default values for these parameters. For example, to make the cluster check 30 times and wait 40 seconds between each check for the Ceph OSDs, and check 20 times and wait 10 seconds between each check for the Ceph MONs, pass the following parameters in a yaml file with -e using the openstack overcloud deploy command:
parameter_defaults:
  CephAnsibleExtraConfig:
    health_osd_check_delay: 40
    health_osd_check_retries: 30
    health_mon_check_delay: 10
    health_mon_check_retries: 20
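Ansible waits at most roughly retries × delay seconds per service before the migration stops for operator intervention. As a quick check of the example values described above:

```python
# Worst-case wait before Ansible gives up on a service restart,
# computed as retries x delay, in seconds.

def max_wait(retries, delay):
    return retries * delay

print(max_wait(30, 40))  # OSD checks: 1200 seconds
print(max_wait(20, 10))  # MON checks: 200 seconds
print(max_wait(5, 15))   # defaults: 75 seconds
```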
5.12. Preparing Storage Backends
Some storage backends have changed from using configuration hooks to their own composable service. If using a custom storage backend, check the associated environment file in the environments directory for new parameters and resources. Update any custom environment files for your backends. For example:
- For the NetApp Block Storage (cinder) backend, use the new environments/cinder-netapp-config.yaml in your deployment.
- For the Dell EMC Block Storage (cinder) backend, use the new environments/cinder-dellsc-config.yaml in your deployment.
- For the Dell EqualLogic Block Storage (cinder) backend, use the new environments/cinder-dellps-config.yaml in your deployment.
For example, the NetApp Block Storage (cinder) backend used the following resources for these respective versions:
- OpenStack Platform 10 and below:
OS::TripleO::ControllerExtraConfigPre: ../puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml
- OpenStack Platform 11 and above:
OS::TripleO::Services::CinderBackendNetApp: ../puppet/services/cinder-backend-netapp.yaml
As a result, you now use the new OS::TripleO::Services::CinderBackendNetApp
resource and its associated service template for this backend.
5.13. Preparing Access to the Undercloud’s Public API over SSL/TLS
The overcloud requires access to the undercloud’s OpenStack Object Storage (swift) Public API during the upgrade. If your undercloud uses a self-signed certificate, you need to add the undercloud’s certificate authority to each overcloud node.
Prerequisites
- The undercloud uses SSL/TLS for its Public API
Procedure
The director’s dynamic Ansible script has been updated to the OpenStack Platform 12 version, which uses the RoleNetHostnameMap Heat parameter in the overcloud plan to define the inventory. However, the overcloud currently uses the OpenStack Platform 11 template versions, which do not have the RoleNetHostnameMap parameter. This means you need to create a temporary static inventory file, which you can generate with the following command:
$ openstack server list -c Networks -f value | cut -d"=" -f2 > overcloud_hosts
Create an Ansible playbook (undercloud-ca.yml) that contains the following:
---
- name: Add undercloud CA to overcloud nodes
  hosts: all
  user: heat-admin
  become: true
  vars:
    ca_certificate: /etc/pki/ca-trust/source/anchors/cm-local-ca.pem
  tasks:
    - name: Copy undercloud CA
      copy:
        src: "{{ ca_certificate }}"
        dest: /etc/pki/ca-trust/source/anchors/
    - name: Update trust
      command: "update-ca-trust extract"
    - name: Get the swift endpoint
      shell: |
        sudo hiera swift::keystone::auth::public_url | awk -F/ '{print $3}'
      register: swift_endpoint
      delegate_to: 127.0.0.1
      become: yes
      become_user: stack
    - name: Verify URL
      uri:
        url: https://{{ swift_endpoint.stdout }}/healthcheck
        return_content: yes
      register: verify
    - name: Report output
      debug:
        msg: "{{ ansible_hostname }} can access the undercloud's Public API"
      when: verify.content == "OK"
This playbook contains multiple tasks that perform the following on each node:
- Copy the undercloud’s certificate authority file to the overcloud node. If generated by the undercloud, the default location is /etc/pki/ca-trust/source/anchors/cm-local-ca.pem.
- Execute the command to update the certificate authority trust database on the overcloud node.
- Check the undercloud’s Object Storage Public API from the overcloud node and report if successful.
Run the playbook with the following command:
$ ansible-playbook -i overcloud_hosts undercloud-ca.yml
This uses the temporary inventory to provide Ansible with your overcloud nodes.
If using a custom certificate authority file, you can change the ca_certificate variable to point to its location. For example:
$ ansible-playbook -i overcloud_hosts undercloud-ca.yml -e ca_certificate=/home/stack/ssl/ca.crt.pem
The resulting Ansible output should show a debug message for each node. For example:
ok: [192.168.24.100] => { "msg": "overcloud-controller-0 can access the undercloud's Public API" }
Related Information
- For more information on running Ansible automation on your overcloud, see "Running the dynamic inventory script" in the Director Installation and Usage guide.
5.14. Configuring registration for fast forward upgrades
The fast forward upgrade process uses a new method to switch repositories. This means you need to remove the old rhel-registration
environment files from your deployment command. For example:
- environment-rhel-registration.yaml
- rhel-registration-resource-registry.yaml
The fast forward upgrade process uses a script to change repositories during each stage of the upgrade. This script is included as part of the OS::TripleO::Services::TripleoPackages composable service (puppet/services/tripleo-packages.yaml) using the FastForwardCustomRepoScriptContent parameter. This is the script:
#!/bin/bash
set -e
case $1 in
  ocata)
    subscription-manager repos --disable=rhel-7-server-openstack-10-rpms
    subscription-manager repos --enable=rhel-7-server-openstack-11-rpms
    ;;
  pike)
    subscription-manager repos --disable=rhel-7-server-openstack-11-rpms
    subscription-manager repos --enable=rhel-7-server-openstack-12-rpms
    ;;
  queens)
    subscription-manager repos --disable=rhel-7-server-openstack-12-rpms
    subscription-manager release --set=7.9
    subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
    subscription-manager repos --disable=rhel-7-server-rhceph-2-osd-rpms
    subscription-manager repos --disable=rhel-7-server-rhceph-2-mon-rpms
    subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms
    subscription-manager repos --disable=rhel-7-server-rhceph-2-tools-rpms
    subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
    subscription-manager repos --enable=rhel-7-server-openstack-13-deployment-tools-rpms
    ;;
  *)
    echo "unknown release $1" >&2
    exit 1
esac
The director passes the upstream codename of each OpenStack Platform version to the script:
Codename | Version |
---|---|
ocata | OpenStack Platform 11 |
pike | OpenStack Platform 12 |
queens | OpenStack Platform 13 |
The change to queens
also disables Ceph Storage 2 repositories and enables the Ceph Storage 3 MON and Tools repositories. The change does not enable the Ceph Storage 3 OSD repositories because these are now containerized.
In some situations, you might need to use a custom script. For example:
- Using Red Hat Satellite with custom repository names.
- Using a disconnected repository with custom names.
- Additional commands to execute at each stage.
In these situations, include your custom script by setting the FastForwardCustomRepoScriptContent
parameter:
parameter_defaults:
  FastForwardCustomRepoScriptContent: |
    [INSERT UPGRADE SCRIPT HERE]
For example, use the following script to change repositories with a set of Satellite 6 activation keys:
parameter_defaults:
  FastForwardCustomRepoScriptContent: |
    set -e
    URL="satellite.example.com"
    case $1 in
      ocata)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp11 --org=Default_Organization
        ;;
      pike)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp12 --org=Default_Organization
        ;;
      queens)
        subscription-manager register --baseurl=https://$URL --force --activationkey=rhosp13 --org=Default_Organization
        ;;
      *)
        echo "unknown release $1" >&2
        exit 1
    esac
Later examples in this guide include a custom_repositories_script.yaml environment file that includes your custom script.
5.15. Checking custom Puppet parameters
If you use the ExtraConfig
interfaces for customizations of Puppet parameters, Puppet might report duplicate declaration errors during the upgrade. This is due to changes in the interfaces provided by the puppet modules themselves.
This procedure shows how to check for any custom ExtraConfig
hieradata parameters in your environment files.
Procedure
Select an environment file and the check if it has an
ExtraConfig
parameter:

$ grep ExtraConfig ~/templates/custom-config.yaml
-
If the results show an
ExtraConfig
parameter for any role (for example,
ControllerExtraConfig
) in the chosen file, check the full parameter structure in that file. If the parameter contains any Puppet hieradata with a
SECTION/parameter
syntax followed by a
value
, it might have been replaced with a parameter from an actual Puppet class. For example:

parameter_defaults:
  ExtraConfig:
    neutron::config::dhcp_agent_config:
      'DEFAULT/dnsmasq_local_resolv':
        value: 'true'
Check the director’s Puppet modules to see if the parameter now exists within a Puppet class. For example:
$ grep dnsmasq_local_resolv
If so, change to the new interface.
The following are examples to demonstrate the change in syntax:
Example 1:
parameter_defaults:
  ExtraConfig:
    neutron::config::dhcp_agent_config:
      'DEFAULT/dnsmasq_local_resolv':
        value: 'true'
Changes to:
parameter_defaults:
  ExtraConfig:
    neutron::agents::dhcp::dnsmasq_local_resolv: true
Example 2:
parameter_defaults:
  ExtraConfig:
    ceilometer::config::ceilometer_config:
      'oslo_messaging_rabbit/rabbit_qos_prefetch_count':
        value: '32'
Changes to:
parameter_defaults:
  ExtraConfig:
    oslo::messaging::rabbit::rabbit_qos_prefetch_count: '32'
5.16. Converting network interface templates to the new structure
Previously, the network interface structure used an OS::Heat::StructuredConfig
resource to configure interfaces:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            [NETWORK INTERFACE CONFIGURATION HERE]
The templates now use an OS::Heat::SoftwareConfig
resource for configuration:
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
                [NETWORK INTERFACE CONFIGURATION HERE]
This configuration takes the interface configuration stored in the $network_config
variable and injects it as a part of the run-os-net-config.sh
script.
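Conceptually, str_replace behaves like an ordinary textual substitution. The following sketch is illustrative only: the real substitution is performed by Heat before the script runs, and the script line and JSON value below are invented for the example:

```shell
# Illustrative only: Heat's str_replace works like a textual substitution,
# swapping the $network_config placeholder in the script template for the
# serialized interface configuration. The values below are made up.
echo 'json_data=$network_config' \
  | sed 's/\$network_config/{"network_config": []}/'
# prints: json_data={"network_config": []}
```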
You must update your network interface templates to use this new structure and check that your network interface templates still conform to the syntax. Failure to do so can cause the fast forward upgrade process to fail.
The director’s Heat template collection contains a script to help convert your templates to this new format. This script is located in /usr/share/openstack-tripleo-heat-templates/tools/yaml-nic-config-2-script.py
. For an example of usage:
$ /usr/share/openstack-tripleo-heat-templates/tools/yaml-nic-config-2-script.py \
    --script-dir /usr/share/openstack-tripleo-heat-templates/network/scripts \
    [NIC TEMPLATE] [NIC TEMPLATE] ...
Ensure your templates do not contain any commented lines when using this script. Commented lines can cause errors when parsing the old template structure.
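If your templates do contain comment lines, you can strip them before conversion. A minimal sketch with fabricated input, assuming full-line # comments only (inline comments and # characters inside quoted strings need more care):

```shell
# Fabricated example: remove full-line YAML comments (lines whose first
# non-space character is '#') before passing a template to the script.
printf '# a comment\nnetwork_config: []\n  # indented comment\n' \
  | grep -v '^[[:space:]]*#'
# prints: network_config: []
```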
For more information, see "Network isolation".
5.17. Checking DPDK and SR-IOV configuration
This section is for overclouds using NFV technologies, such as Data Plane Development Kit (DPDK) integration and Single Root Input/Output Virtualization (SR-IOV). If your overcloud does not use these features, ignore this section.
In Red Hat OpenStack Platform 10, it is not necessary to replace the first-boot scripts file with host-config-and-reboot.yaml
, a template for OpenStack Platform 13. Maintaining the first-boot scripts throughout the upgrade avoids an additional reboot.
5.17.1. Upgrading a DPDK environment
For environments with DPDK, check the specific service mappings to ensure a successful transition to a containerized environment.
Procedure
The fast forward upgrade for DPDK services occurs automatically due to the conversion to containerized services. If using custom environment files for DPDK, manually adjust these environment files to map to the containerized service.
OS::TripleO::Services::ComputeNeutronOvsDpdk: /usr/share/openstack-tripleo-heat-templates/docker/services/neutron-ovs-dpdk-agent.yaml
Note: Alternatively, use the latest NFV environment file
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml
.

Map the OpenStack Network (Neutron) agent service to the appropriate containerized template:
If you are using the default
Compute
role for DPDK, map the
ComputeNeutronOvsAgent
service to the
neutron-ovs-dpdk-agent.yaml
file in the
docker/services
directory of the core heat template collection.

resource_registry:
  OS::TripleO::Services::ComputeNeutronOvsAgent: /usr/share/openstack-tripleo-heat-templates/docker/services/neutron-ovs-dpdk-agent.yaml
-
If you are using a custom role for DPDK, then a custom composable service, for example
ComputeNeutronOvsDpdkAgentCustom
, should exist. Map this service to the
neutron-ovs-dpdk-agent.yaml
file in the docker directory.
Add the following services and extra parameters to the DPDK role definition:
RoleParametersDefault:
  VhostuserSocketGroup: "hugetlbfs"
  TunedProfileName: "cpu-partitioning"
ServicesDefault:
  - OS::TripleO::Services::ComputeNeutronOvsDPDK
Remove the following services:
ServicesDefault:
  - OS::TripleO::Services::NeutronLinuxbridgeAgent
  - OS::TripleO::Services::NeutronVppAgent
  - OS::TripleO::Services::Tuned
5.17.2. Upgrading an SR-IOV environment
For environments with SR-IOV, check the following service mappings to ensure a successful transition to a containerized environment.
Procedure
The fast forward upgrade for SR-IOV services occurs automatically due to the conversion to containerized services. If you are using custom environment files for SR-IOV, ensure that these services map to the containerized service correctly.
OS::TripleO::Services::NeutronSriovAgent: /usr/share/openstack-tripleo-heat-templates/docker/services/neutron-sriov-agent.yaml
OS::TripleO::Services::NeutronSriovHostConfig: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-sriov-host-config.yaml
Note: Alternatively, use the latest NFV environment file
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml
.

Ensure the
roles_data.yaml
file contains the required SR-IOV services. If you are using the default
Compute
role for SR-IOV, include the appropriate services in this role in OpenStack Platform 13.
Copy the
roles_data.yaml
file from
/usr/share/openstack-tripleo-heat-templates
to your custom templates directory, for example,
/home/stack/templates
. Add the following services to the default Compute role:
- OS::TripleO::Services::NeutronSriovAgent
- OS::TripleO::Services::NeutronSriovHostConfig
Remove the following services from the default Compute role:
- OS::TripleO::Services::NeutronLinuxbridgeAgent
- OS::TripleO::Services::Tuned
If you are using a custom
Compute
role for SR-IOV, the
NeutronSriovAgent
service should be present. Add the
NeutronSriovHostConfig
service, which is introduced in Red Hat OpenStack Platform 13.

Note: Include the
roles_data.yaml
file when running the
ffwd-upgrade
prepare and converge commands in the following sections.
5.18. Preparing for pre-provisioned nodes upgrade
Pre-provisioned nodes are nodes created outside of the director’s management. An overcloud using pre-provisioned nodes requires some additional steps prior to upgrading.
Prerequisites
- The overcloud uses pre-provisioned nodes.
Procedure
Run the following commands to save a list of node IP addresses in the
OVERCLOUD_HOSTS
environment variable:

$ source ~/stackrc
$ export OVERCLOUD_HOSTS=$(openstack server list -f value -c Networks | cut -d "=" -f 2 | tr '\n' ' ')
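To see what the pipeline produces, you can run it against sample output. The addresses below are fabricated; this sketch only assumes that each line of openstack server list -f value -c Networks has the form network=<IP>:

```shell
# Demonstration with fabricated sample data: extract the IP after "=" from
# each line and join the results into one space-separated string, exactly
# as the OVERCLOUD_HOSTS pipeline above does.
printf 'ctlplane=192.168.24.10\nctlplane=192.168.24.11\n' \
  | cut -d "=" -f 2 | tr '\n' ' '
# prints: 192.168.24.10 192.168.24.11 (with a trailing space)
```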
Run the following script:
$ /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh
Proceed with the upgrade.
-
When using the
openstack overcloud upgrade run
command with pre-provisioned nodes, include the --ssh-user tripleo-admin
parameter. When upgrading Compute or Object Storage nodes, use the following:
-
Use the
-U
option with the upgrade-non-controller.sh script and specify the stack user. This is because the default user for pre-provisioned nodes is stack and not heat-admin.
. Use the node’s IP address with the
--upgrade
option. This is because the nodes are not managed with the director’s Compute (nova) and Bare Metal (ironic) services and do not have a node name. For example:
$ upgrade-non-controller.sh -U stack --upgrade 192.168.24.100
Related Information
- For more information on pre-provisioned nodes, see "Configuring a Basic Overcloud using Pre-Provisioned Nodes" in the Director Installation and Usage guide.
5.19. Next Steps
The overcloud preparation stage is complete. You can now perform an upgrade of the overcloud from 10 to 13 using the steps in Chapter 6, Upgrading the overcloud.