Upgrading OpenStack
Upgrading Red Hat Enterprise Linux OpenStack Platform
Chapter 1. Introduction
1.1. Upgrade Methods Comparison
Table 1.1. Upgrade Methods
| Method | Description | Pros | Cons |
|---|---|---|---|
| All at Once | In this method, you take down all of the OpenStack services at the same time, perform the upgrade, and then bring all services back up after the upgrade process is complete. For more information, see Chapter 3, Upgrade OpenStack All at Once. | This upgrade process is simple. Because everything is down, no orchestration is required. Although services will be down, VM workloads can be kept running if there is no requirement to move from one version of Red Hat Enterprise Linux to another (that is, from v7.0 to v7.1). | All of your services will be unavailable at the same time. In a large environment, the upgrade can result in potentially extensive downtime while you wait for database-schema upgrades to complete. Downtime can be mitigated by proper dry runs of your upgrade procedure in a test environment, as well as by scheduling a specific downtime window for the upgrade. |
| Service by Service with Live Compute Upgrade | This method is a variation of the service-by-service upgrade, with a change in how the Compute service is upgraded. With this method, you can take advantage of Red Hat Enterprise Linux OpenStack Platform 7 features that allow older compute nodes to run in parallel with upgraded compute nodes. For more information, see Chapter 4, Upgrade OpenStack by Updating Each Service Individually, with Live Compute. | This method minimizes interruptions to your Compute service, with only a few minutes of downtime for the smaller services and a longer migration interval for the workloads moving to newly upgraded Compute hosts. Existing workloads can run indefinitely, and you do not need to wait for a database migration. | Additional hardware resources may be required to bring up the Red Hat Enterprise Linux OpenStack Platform 7 (Kilo) Compute nodes. |
- Ensure you have subscribed to the correct channels for this release on all hosts.
- The upgrade will involve some service interruptions.
- Running instances will not be affected by the upgrade process unless you either reboot a Compute node or explicitly shut down an instance.
- To upgrade OpenStack Networking, you will need to set the correct libvirt_vif_driver in /etc/nova/nova.conf, as the old hybrid driver is now deprecated. To do so, run the following on your Compute API host:
#openstack-config --set /etc/nova/nova.conf \
  DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtGenericVIFDriver
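A quick way to confirm that a host is subscribed to the Red Hat Enterprise Linux OpenStack Platform 7 channel (the repository names are listed in Chapter 2) is to list the enabled repositories; the grep pattern here is only an illustration:
#yum repolist enabled | grep openstack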
Warning
The following upgrade paths are not supported:
- Upgrading any Beta release of Red Hat Enterprise Linux OpenStack Platform to any supported release (for example, to 6 or 7).
- Upgrading Compute Networking (nova-networking) to OpenStack Networking (neutron) in Red Hat Enterprise Linux OpenStack Platform 7. The only supported networking upgrade is between versions of OpenStack Networking (neutron), from Red Hat Enterprise Linux OpenStack Platform version 6 to version 7.
Chapter 2. Prerequisites
Warning
The channels for earlier Red Hat Enterprise Linux OpenStack Platform releases must not remain enabled when you upgrade to version 7:
- Red Hat Enterprise Linux OpenStack Platform 4 (Havana) -- rhel-6-server-openstack-4.0-rpms
- Red Hat Enterprise Linux OpenStack Platform 5 (Icehouse) -- rhel-7-server-openstack-5.0-rpms
- Red Hat Enterprise Linux OpenStack Platform 6 (Juno) -- rhel-7-server-openstack-6.0-rpms
Note
The Red Hat Enterprise Linux 7 Server - RH Common channel provides the cloud-init package. Enable it on all hosts:
#subscription-manager repos --enable=rhel-7-server-rh-common-rpms
2.1. Configure Content Delivery Network (CDN) Channels
Each system in your Red Hat Enterprise Linux OpenStack Platform environment must be configured with subscription-manager to use the correct channels. To enable a channel, run:
#subscription-manager repos --enable=[reponame]
To disable a channel, run:
#subscription-manager repos --disable=[reponame]
The following tables outline the channels for Red Hat Enterprise Linux 7.
Table 2.1. Required Channels
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server (RPMS) | rhel-7-server-rpms |
| Red Hat OpenStack 7.0 for Server 7 (RPMS) | rhel-7-server-openstack-7.0-rpms |
| Red Hat Enterprise Linux 7 Server - RH Common (RPMs) | rhel-7-server-rh-common-rpms |
Table 2.2. Optional Channels
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server - Optional | rhel-7-server-optional-rpms |
The following table outlines the channels for the Red Hat Enterprise Linux OpenStack Platform Director.
Table 2.3. Required Channels
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux OpenStack Platform Director 7.0 (RPMs) | rhel-7-server-openstack-7.0-director-rpms |
| Red Hat Enterprise Linux 7 Server (RPMS) | rhel-7-server-rpms |
| Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server | rhel-server-rhscl-7-rpms |
| Images on CDN (Optional) | rhel-7-server-openstack-7.0-files |
| Red Hat Enterprise Linux OpenStack Platform 7.0 Operational Tools | rhel-7-server-openstack-7.0-optools-rpms |
The following table outlines the channels you must disable to ensure Red Hat Enterprise Linux OpenStack Platform 7 functions correctly.
Table 2.4. Disable Channels
| Channel | Repository Name |
|---|---|
| Red Hat CloudForms Management Engine | "cf-me-*" |
| Red Hat CloudForms Tools for RHEL 6 | "rhel-6-server-cf-*" |
| Red Hat Enterprise Virtualization | "rhel-6-server-rhev*" |
| Red Hat Enterprise Linux 6 Server - Extended Update Support | "*-eus-rpms" |
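For example, on a host that will run OpenStack services, the required channels from Table 2.1 can be enabled in a single subscription-manager invocation, and the channel of the release you are upgrading from disabled at the same time (the Juno repository name is shown here; substitute the one that matches your current release):
#subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-openstack-7.0-rpms \
  --enable=rhel-7-server-rh-common-rpms
#subscription-manager repos --disable=rhel-7-server-openstack-6.0-rpms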
Chapter 3. Upgrade OpenStack All at Once
3.1. Upgrade All OpenStack Services Simultaneously
Procedure 3.1. Upgrading OpenStack components on a host
- Install the yum repository for Red Hat Enterprise Linux OpenStack Platform 7 (Kilo).
- Check that the openstack-utils package is installed:#yum install openstack-utils
- Take down all OpenStack services on all the nodes. This step depends on how your services are distributed among your nodes.
- In a non-HA environment: To stop all the OpenStack services running on a node, log in to the node and run:#openstack-service stop
- In an HA environment:
- To stop all the OpenStack services running on a node, log in to the node and run:#openstack-service stop
- Disable all Pacemaker-managed resources by setting the stop-all-resources property on the cluster. Run the following on a single member of your Pacemaker cluster:#pcs property set stop-all-resources=true
Then wait until the output of pcs status shows that all resources have stopped.
- Perform a complete upgrade of all packages, and then flush expired tokens in the Identity service (this might decrease the time required to synchronize the database):#yum upgrade#keystone-manage token_flush
- Perform the necessary configuration updates on each of your services.
- Identity service: In the RHEL OpenStack Platform 7 (Kilo) release, the location of the token persistence backends has changed. You need to update the driver option in the [token] section of keystone.conf. To do this, replace any instance of keystone.token.backends with keystone.token.persistence.backends:#sed -i 's/keystone.token.backends/keystone.token.persistence.backends/g' \
  /etc/keystone/keystone.conf
Package updates may include new systemd unit files, so confirm that systemd is aware of any updated files:#systemctl daemon-reload
- OpenStack Networking service: Once you have completed upgrading the OpenStack Networking service, you need to edit the rootwrap dhcp.filter configuration file. To do so, in the /usr/share/neutron/rootwrap/dhcp.filters file, replace the value of dnsmasq. For example, replace:
dnsmasq: EnvFilter, env, root, CONFIG_FILE=, NETWORK_ID=, dnsmasq
with:
dnsmasq: CommandFilter, dnsmasq, root
- Upgrade the database schema for each service that uses the database. To do so, log in to the node hosting the service and run:#openstack-db --service SERVICENAME --update
Use the service's project name as the SERVICENAME. For example, to upgrade the database schema of the Identity service:#openstack-db --service keystone --update
Table 3.1. Project name of each OpenStack service that uses the database
| Service | Project name |
|---|---|
| Identity | keystone |
| Block Storage | cinder |
| Image Service | glance |
| Compute | nova |
| Networking | neutron |
| Orchestration | heat |
Certain services require additional database maintenance as part of the Juno to Kilo upgrade that is not covered by the openstack-db command:
- Identity service: Earlier versions of the installer may not have configured your system to automatically purge expired Keystone tokens, so your token table may have a large number of expired entries. This can dramatically increase the time it takes to complete the database schema upgrade. You can alleviate this problem by running the following command before beginning the Keystone database upgrade process:#keystone-manage token_flush
This will flush expired tokens from the database. You should arrange to run this command periodically (for example, daily) using cron; an illustrative cron entry follows this step.
- Compute: After fully upgrading to Kilo (that is, all nodes running Kilo), you should start a background migration of flavor information. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. Run the following command as the nova user:#runuser -u nova -- nova-manage db migrate_flavor_data
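A minimal sketch of the periodic token flush mentioned above, written as an /etc/cron.d entry (for example, /etc/cron.d/keystone-token-flush); the file name, the keystone user, and the log path are illustrative assumptions rather than part of the documented procedure:
@daily keystone /usr/bin/keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1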
- Review the resulting configuration files. The upgraded packages will have installed .rpmnew files appropriate to the Red Hat Enterprise Linux OpenStack Platform 7 version of the service. New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for any deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated, and deprecated configuration options for each service, see the Configuration Reference available from the Red Hat Enterprise Linux OpenStack Platform Documentation Suite.
- If the package upgrades you performed require a reboot (for example, if a new kernel was installed as part of the upgrade), reboot the affected hosts now while the OpenStack services are still disabled.
- In a non-HA environment: To restart the OpenStack services, run the following command on each node:#openstack-service start
- In an HA environment:
- Allow Pacemaker to restart your resources by resetting the stop-all-resources property. On a single member of your Pacemaker cluster, run:#pcs property set stop-all-resources=false
Then wait until the output of pcs status shows that all resources have started (this may take several minutes).
- Restart OpenStack services on the compute nodes. On each compute node, run:
#openstack-service start
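As an optional check after the restart, the openstack-status utility from the openstack-utils package installed earlier in this procedure prints a summary of OpenStack service states:
#openstack-status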
Chapter 4. Upgrade OpenStack by Updating Each Service Individually, with Live Compute
4.1. Upgrading OpenStack by Updating Each Service Individually, with Live Compute in a Non-HA Environment
- Pre-upgrade tasks: On all of your hosts:
- Install the yum repository for Red Hat Enterprise Linux OpenStack Platform 7 (Kilo).
- Upgrade the openstack-selinux package, if available:#yum upgrade openstack-selinux
This is necessary to ensure that the upgraded services will run correctly on a system with SELinux enabled.
- Upgrade each of your services: The following steps provide specific instructions for each service, and the order in which they should be upgraded.
- Identity (keystone): Earlier versions of the installer may not have configured your system to automatically purge expired Keystone tokens, so your token table may have a large number of expired entries. This can dramatically increase the time it takes to complete the database schema upgrade. To alleviate the problem, use the keystone-manage command to flush expired tokens from the database before running the Identity database upgrade. You can arrange to run this command periodically (for example, daily) using cron.
On your Identity host, run:#openstack-service stop keystone#yum -d1 -y upgrade \*keystone\*#keystone-manage token_flush#openstack-db --service keystone --update#openstack-service start keystone
- Object Storage (swift): On your Object Storage hosts, run:#openstack-service stop swift#yum -d1 -y upgrade \*swift\*#openstack-service start swift
- Image Service (glance): On your Image Service host, run:#openstack-service stop glance#yum -d1 -y upgrade \*glance\*#openstack-db --service glance --update#openstack-service start glance
- Block Storage (cinder): On your Block Storage host, run:#openstack-service stop cinder#yum -d1 -y upgrade \*cinder\*#openstack-db --service cinder --update#openstack-service start cinder
- Orchestration (heat): On your Orchestration host, run:#openstack-service stop heat#yum -d1 -y upgrade \*heat\*#openstack-db --service heat --update#openstack-service start heat
- Telemetry (ceilometer)
- On all nodes hosting Telemetry component services, run:
#openstack-service stop ceilometer#yum -d1 -y upgrade \*ceilometer\*
- On the controller node, where the database is installed, run:#ceilometer-dbsync
This command is required when MySQL is used as the back end for the Telemetry service. For a list of Telemetry component services, refer to Launch the Telemetry API and Agents.
- After completing the package upgrade, restart the Telemetry service by running the following command on all nodes hosting Telemetry component services:
#openstack-service start ceilometer
- Compute (nova)
- If you are performing a rolling upgrade of your compute hosts, you need to set explicit API version limits to ensure compatibility between your Juno and Kilo environments. Before starting Kilo controller or compute services, you need to set the compute option in the [upgrade_levels] section of nova.conf to juno:#crudini --set /etc/nova/nova.conf upgrade_levels compute juno
You need to make this change on your controllers and on your compute hosts. You should undo this operation after upgrading all of your compute hosts to OpenStack Kilo. (For reference, the resulting nova.conf section is shown after this Compute procedure.)
- On your Compute host, run:#openstack-service stop nova#yum -d1 -y upgrade \*nova\*#openstack-db --service nova --update
- After fully upgrading to Kilo (that is, all nodes are running Kilo), you should start a background migration of flavor information. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. Run the following command as the nova user:#runuser -u nova -- nova-manage db migrate_flavor_data
- After you have upgraded all of your hosts to Kilo, you will want to remove the API limits configured in the previous step. On all of your hosts:#crudini --del /etc/nova/nova.conf upgrade_levels compute
- Restart the Compute service on all the compute hosts and controllers:#openstack-service start nova
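For reference, the crudini --set command used in the rolling-upgrade step above writes the following section into /etc/nova/nova.conf; you can verify it is present, and confirm it is gone again after the crudini --del step, by inspecting the file:
[upgrade_levels]
compute = juno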
- OpenStack Networking (neutron)
- On your OpenStack Networking host, run:
#openstack-service stop neutron#yum -d1 -y upgrade \*neutron\*#openstack-db --service neutron --update
- Once you have completed upgrading the OpenStack Networking service, you need to edit the rootwrap dhcp.filter configuration file. To do so, in the /usr/share/neutron/rootwrap/dhcp.filters file, replace the value of dnsmasq. For example, replace:
dnsmasq: EnvFilter, env, root, CONFIG_FILE=, NETWORK_ID=, dnsmasq
with:
dnsmasq: CommandFilter, dnsmasq, root
- Restart the OpenStack Networking service:
#openstack-service start neutron
- Dashboard (horizon): On your Dashboard host, run:
#yum -y upgrade \*horizon\* \*openstack-dashboard\*#yum -d1 -y upgrade \*horizon\* \*python-django\*#systemctl restart httpd
- Post-upgrade tasks:
- After completing all of your individual service upgrades, you should perform a complete package upgrade on all of your systems:
#yum upgrade
This will ensure that all packages are up-to-date. You may want to schedule a restart of your OpenStack hosts at a future date to ensure that all running processes are using updated versions of the underlying binaries.
- Review the resulting configuration files. The upgraded packages will have installed .rpmnew files appropriate to the Red Hat Enterprise Linux OpenStack Platform 7 version of the service. New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for any deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated, and deprecated configuration options for each service, see the Configuration Reference available from the Red Hat Enterprise Linux OpenStack Platform Documentation Suite.
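One simple way to locate the .rpmnew files mentioned above is to search /etc, where the OpenStack configuration files live; this is a convenience suggestion rather than part of the documented procedure:
#find /etc -name '*.rpmnew'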
4.2. Upgrading OpenStack by Updating Each Service Individually, with Live Compute in an HA Environment
- Pre-upgrade tasks: On all of your hosts:
- If you are running Puppet as configured by Staypuft, you must disable it:#systemctl stop puppet#systemctl disable puppet
This ensures that the Staypuft-configured Puppet will not revert changes made as part of the upgrade process.
- Install the yum repository for Red Hat Enterprise Linux OpenStack Platform 7 (Kilo).
- Manually upgrade all the python packages.
#yum upgrade python*
- Upgrade the openstack-selinux package, if available:#yum upgrade openstack-selinux
This is necessary to ensure that the upgraded services will run correctly on a system with SELinux enabled.
- Service upgrades: Upgrade each of your services. The following is a reasonable order in which to perform the upgrades on your controllers.
Upgrade MariaDB: Perform the following steps on each host running MariaDB. Complete the steps on one host before starting the process on another host.
- Stop the service from running on the local node:
#pcs resource ban galera-master $(crm_node -n) - Wait until
pcs status shows that the service is no longer running on the local node. This may take a few minutes. The local node will first transition to slave mode:
Master/Slave Set: galera-master [galera]
Masters: [ pcmk-mac525400aeb753 pcmk-mac525400bab8ae ]
Slaves: [ pcmk-mac5254004bd62f ]
It will eventually transition to stopped:
Master/Slave Set: galera-master [galera]
Masters: [ pcmk-mac525400aeb753 pcmk-mac525400bab8ae ]
Stopped: [ pcmk-mac5254004bd62f ]
- Upgrade the relevant packages.
#yum upgrade '*mariadb*' '*galera*' - Allow Pacemaker to schedule the
galera resource on the local node:#pcs resource clear galera-master
- Wait until pcs status shows that the galera resource is running on the local node as a master. The output from pcs status should include something like:
Master/Slave Set: galera-master [galera]
Masters: [ pcmk-mac5254004bd62f pcmk-mac525400aeb753 pcmk-mac525400bab8ae ]
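Many of the following steps ask you to wait until pcs status reports a particular state. One convenient, optional way to do this is to keep the output refreshing with watch; the 10-second interval is arbitrary:
#watch -n 10 pcs status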
Upgrade MongoDB:
- Remove the mongod resource from Pacemaker's control:#pcs resource unmanage mongod-clone
- Stop the service on all of your controllers. On each controller, run:
#systemctl stop mongod - Upgrade the relevant packages:
#yum upgrade 'mongodb*' 'python-pymongo*' - Reload
systemd to account for updated unit files:#systemctl daemon-reload
- Restart the mongod service on your controllers by running, on each controller:#systemctl start mongod
- Clean up the resource in Pacemaker:
#pcs resource cleanup mongod-clone - Return the resource to Pacemaker control:
#pcs resource manage mongod-clone - Wait until the output of
pcs status shows that the above resources are running.
Upgrade Identity service (keystone):- Remove Identity service from Pacemaker's control:
#pcs resource unmanage keystone-clone - Stop the Identity service by running the following on each of your controllers:
#systemctl stop openstack-keystone - Upgrade the relevant packages:
#yum upgrade 'openstack-keystone*' 'python-keystone*' - Reload
systemd to account for updated unit files:#systemctl daemon-reload
- In the RHEL OpenStack Platform 7 (Kilo) release, the location of the token persistence backends has changed. You need to update the driver option in the [token] section of keystone.conf. To do this, replace any instance of keystone.token.backends with keystone.token.persistence.backends:#sed -i 's/keystone.token.backends/keystone.token.persistence.backends/g' \
  /etc/keystone/keystone.conf
Package updates may include new systemd unit files, so ensure that systemd is aware of any updated files:#systemctl daemon-reload
- Earlier versions of the installer may not have configured your system to automatically purge expired Keystone tokens, so your token table may have a large number of expired entries. This can dramatically increase the time it takes to complete the database schema upgrade. To alleviate the problem, use the keystone-manage command to flush expired tokens from the database before running the Identity database upgrade. You can arrange to run this command periodically (for example, daily) using cron.#keystone-manage token_flush
- Update the Identity service database schema:#openstack-db --service keystone --update
- Restart the service by running the following on each of your controllers:#systemctl start openstack-keystone
- Clean up the Identity service using Pacemaker:#pcs resource cleanup keystone-clone
- Return the resource to Pacemaker control:#pcs resource manage keystone-clone
- Wait until the output of pcs status shows that the above resources are running.
Upgrade Image service (glance):- Stop the Image service resources in Pacemaker:
#pcs resource disable glance-registry-clone#pcs resource disable glance-api-clone - Wait until the output of
pcs status shows that both services have stopped running.
- Upgrade the relevant packages:
#yum upgrade 'openstack-glance*' 'python-glance*' - Reload
systemd to account for updated unit files:#systemctl daemon-reload
- Update the Image service database schema:
#openstack-db --service glance --update - Clean up the Image service using Pacemaker:
#pcs resource cleanup glance-api-clone#pcs resource cleanup glance-registry-clone - Restart Image service resources in Pacemaker:
#pcs resource enable glance-api-clone#pcs resource enable glance-registry-clone - Wait until the output of
pcs status shows that the above resources are running.
Upgrade Block Storage service (cinder):- Stop all Block Storage service resources in Pacemaker:
#pcs resource disable cinder-api-clone#pcs resource disable cinder-scheduler-clone#pcs resource disable cinder-volume - Wait until the output of
pcs status shows that the above services have stopped running.
- Upgrade the relevant packages:
#yum upgrade 'openstack-cinder*' 'python-cinder*' - Reload
systemd to account for updated unit files:#systemctl daemon-reload
- Update the Block Storage service database schema:
#openstack-db --service cinder --update - Clean up the Block Storage service using Pacemaker:
#pcs resource cleanup cinder-volume#pcs resource cleanup cinder-scheduler-clone#pcs resource cleanup cinder-api-clone - Restart all Block Storage service resources in Pacemaker:
#pcs resource enable cinder-volume#pcs resource enable cinder-scheduler-clone#pcs resource enable cinder-api-clone - Wait until the output of
pcs status shows that the above resources are running.
Upgrade Orchestration (heat):- Stop Orchestration resources in Pacemaker:
#pcs resource disable heat-api-clone#pcs resource disable heat-api-cfn-clone#pcs resource disable heat-api-cloudwatch-clone#pcs resource disable heat - Wait until the output of
pcs status shows that the above services have stopped running.
- Upgrade the relevant packages:
#yum upgrade 'openstack-heat*' 'python-heat*' - Reload
systemd to account for updated unit files:#systemctl daemon-reload
- Update the Orchestration database schema:
#openstack-db --service heat --update - Clean up the Orchestration service using Pacemaker:
#pcs resource cleanup heat#pcs resource cleanup heat-api-cloudwatch-clone#pcs resource cleanup heat-api-cfn-clone#pcs resource cleanup heat-api-clone - Restart Orchestration resources in Pacemaker:
#pcs resource enable heat#pcs resource enable heat-api-cloudwatch-clone#pcs resource enable heat-api-cfn-clone#pcs resource enable heat-api-clone - Wait until the output of
pcs status shows that the above resources are running.
Upgrade Telemetry (ceilometer):- Stop all Telemetry resources in Pacemaker:
#pcs resource disable openstack-ceilometer-central#pcs resource disable openstack-ceilometer-api-clone#pcs resource disable openstack-ceilometer-alarm-evaluator-clone#pcs resource disable openstack-ceilometer-collector-clone#pcs resource disable openstack-ceilometer-notification-clone#pcs resource disable openstack-ceilometer-alarm-notifier-clone#pcs resource disable ceilometer-delay-clone - Wait until the output of
pcs status shows that the above services have stopped running.
- Upgrade the relevant packages:
#yum upgrade 'openstack-ceilometer*' 'python-ceilometer*' - Reload
systemd to account for updated unit files:#systemctl daemon-reload
- If you are using the MySQL backend for Telemetry, update the Telemetry database schema (a sketch for checking which backend is in use follows this Telemetry procedure):#openstack-db --service ceilometer --update
Note
This step is not necessary if you are using the MongoDB backend.
- Clean up the Telemetry service using Pacemaker:
#pcs resource cleanup ceilometer-delay-clone#pcs resource cleanup openstack-ceilometer-alarm-notifier-clone#pcs resource cleanup openstack-ceilometer-notification-clone#pcs resource cleanup openstack-ceilometer-collector-clone#pcs resource cleanup openstack-ceilometer-alarm-evaluator-clone#pcs resource cleanup openstack-ceilometer-api-clone#pcs resource cleanup openstack-ceilometer-central - Restart all Telemetry resources in Pacemaker:
#pcs resource enable ceilometer-delay-clone#pcs resource enable openstack-ceilometer-alarm-notifier-clone#pcs resource enable openstack-ceilometer-notification-clone#pcs resource enable openstack-ceilometer-collector-clone#pcs resource enable openstack-ceilometer-alarm-evaluator-clone#pcs resource enable openstack-ceilometer-api-clone#pcs resource enable openstack-ceilometer-central - Wait until the output of
pcs status shows that the above resources are running.
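One way to check which Telemetry backend is in use, as mentioned in the database schema step above, is to read the connection string from the [database] section of /etc/ceilometer/ceilometer.conf (assuming the default configuration layout); a mysql:// URL indicates MySQL, while a mongodb:// URL indicates MongoDB:
#crudini --get /etc/ceilometer/ceilometer.conf database connection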
Upgrade Compute (nova):- Stop all Compute resources in Pacemaker:
#pcs resource disable openstack-nova-novncproxy-clone#pcs resource disable openstack-nova-consoleauth-clone#pcs resource disable openstack-nova-conductor-clone#pcs resource disable openstack-nova-api-clone#pcs resource disable openstack-nova-scheduler-clone - Wait until the output of
pcs status shows that the above services have stopped running.
- Upgrade the relevant packages:
#yum upgrade 'openstack-nova*' 'python-nova*' - Reload
systemd to account for updated unit files:#systemctl daemon-reload
- Update the Compute database schema:
#openstack-db --service nova --update
After fully upgrading to Kilo (that is, all nodes are running Kilo), you should start a background migration of flavor information. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. Run the following command as the nova user:#runuser -u nova -- nova-manage db migrate_flavor_data
- If you are performing a rolling upgrade of your compute hosts, you need to set explicit API version limits to ensure compatibility between your Juno and Kilo environments. Before starting Kilo controller or compute services, you need to set the compute option in the [upgrade_levels] section of nova.conf to juno:#crudini --set /etc/nova/nova.conf upgrade_levels compute juno
You will need to first unmanage the Compute resources by running pcs resource unmanage on one of your controllers:#pcs resource unmanage openstack-nova-novncproxy-clone#pcs resource unmanage openstack-nova-consoleauth-clone#pcs resource unmanage openstack-nova-conductor-clone#pcs resource unmanage openstack-nova-api-clone#pcs resource unmanage openstack-nova-scheduler-clone
Restart all the services on all controllers:#openstack-service restart nova
You should return control to Pacemaker after upgrading all of your compute hosts to OpenStack Kilo:#pcs resource manage openstack-nova-scheduler-clone#pcs resource manage openstack-nova-api-clone#pcs resource manage openstack-nova-conductor-clone#pcs resource manage openstack-nova-consoleauth-clone#pcs resource manage openstack-nova-novncproxy-clone
- Clean up all Compute resources in Pacemaker:#pcs resource cleanup openstack-nova-scheduler-clone#pcs resource cleanup openstack-nova-api-clone#pcs resource cleanup openstack-nova-conductor-clone#pcs resource cleanup openstack-nova-consoleauth-clone#pcs resource cleanup openstack-nova-novncproxy-clone
- Restart all Compute resources in Pacemaker:#pcs resource enable openstack-nova-scheduler-clone#pcs resource enable openstack-nova-api-clone#pcs resource enable openstack-nova-conductor-clone#pcs resource enable openstack-nova-consoleauth-clone#pcs resource enable openstack-nova-novncproxy-clone
- Wait until the output of pcs status shows that the above resources are running.
Upgrade OpenStack Networking (neutron):- Prevent Pacemaker from triggering the OpenStack Networking cleanup scripts:
#pcs resource unmanage neutron-ovs-cleanup-clone#pcs resource unmanage neutron-netns-cleanup-clone - Stop OpenStack Networking resources in Pacemaker:
#pcs resource disable neutron-server-clone#pcs resource disable neutron-openvswitch-agent-clone#pcs resource disable neutron-dhcp-agent-clone#pcs resource disable neutron-l3-agent-clone#pcs resource disable neutron-metadata-agent-clone - Upgrade the relevant packages
#yum upgrade 'openstack-neutron*' 'python-neutron*'
- Install packages for the advanced OpenStack Networking services enabled in the neutron.conf file, for example, openstack-neutron-vpnaas, openstack-neutron-fwaas, and openstack-neutron-lbaas:#yum install openstack-neutron-vpnaas#yum install openstack-neutron-fwaas#yum install openstack-neutron-lbaas
Installing these packages will create the corresponding configuration files.
- For the VPNaaS and LBaaS service entries in the neutron.conf file, copy the service_provider entries to the corresponding neutron-*aas.conf file located in /etc/neutron, and comment out these entries in the neutron.conf file. For the FWaaS service entry, the service_provider parameters should remain in the neutron.conf file. (A quick way to locate these entries is shown after this OpenStack Networking procedure.)
- On every node that runs the LBaaS agents, install the openstack-neutron-lbaas package:#yum install openstack-neutron-lbaas
- Reload systemd to account for updated unit files:#systemctl daemon-reload
- Update the OpenStack Networking database schema:
#openstack-db --service neutron --update - Once you have completed upgrading the OpenStack Networking service, you need to edit the
rootwrap dhcp.filter configuration file. To do so, in the /usr/share/neutron/rootwrap/dhcp.filters file, replace the value of dnsmasq. For example, replace:
dnsmasq: EnvFilter, env, root, CONFIG_FILE=, NETWORK_ID=, dnsmasq
with:
dnsmasq: CommandFilter, dnsmasq, root
- Clean up OpenStack Networking resources in Pacemaker:
#pcs resource cleanup neutron-metadata-agent-clone#pcs resource cleanup neutron-l3-agent-clone#pcs resource cleanup neutron-dhcp-agent-clone#pcs resource cleanup neutron-openvswitch-agent-clone#pcs resource cleanup neutron-server-clone - Restart OpenStack Networking resources in Pacemaker:
#pcs resource enable neutron-metadata-agent-clone#pcs resource enable neutron-l3-agent-clone#pcs resource enable neutron-dhcp-agent-clone#pcs resource enable neutron-openvswitch-agent-clone#pcs resource enable neutron-server-clone - Return the cleanup agents to Pacemaker control:
#pcs resource manage neutron-ovs-cleanup-clone#pcs resource manage neutron-netns-cleanup-clone - Wait until the output of
pcs status shows that the above resources are running.
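As a quick aid for the service_provider step above, you can list the relevant entries and their line numbers before copying them into the neutron-*aas.conf files; the grep pattern is only an illustration:
#grep -n 'service_provider' /etc/neutron/neutron.conf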
Upgrade Dashboard (horizon):- Stop the Dashboard resource in Pacemaker:
#pcs resource disable horizon-clone - Wait until the output of
pcs status shows that the service has stopped running.
- Upgrade the relevant packages:
#yum upgrade httpd 'openstack-dashboard*' 'python-django*' - Reload
systemdto account for updated unit files:#systemctl daemon-reload - Correct the Dashboard configuration:Fix Apache Configuration:The
openstack-dashboardpackage installs/etc/httpd/conf.d/openstack-dashboard.conffile, but the Staypuft installer replaces this with the/etc/httpd/conf.d/15-horizon_vhost.conffile. After upgrading horizon, you will have the following configuration files:15-horizon_vhost.confopenstack-dashboard.confopenstack-dashboard.conf.rpmnew
Ensure you make the following changes:- Remove the
openstack-dashboard.conf.rpmnewfile:#rm openstack-dashboard.conf.rpmnew - Modify the
15-horizon_vhost.conffile by replacing:Alias /static "/usr/share/openstack-dashboard/static"
withAlias /dashboard/static "/usr/share/openstack-dashboard/static"
Fix Dashboard Configuration:Theopenstack-dashboardpackage installs the/etc/openstack-dashboard/local_settingsfile. After an upgrade, you will find the following configuration files:/etc/openstack-dashboard/local_settings/etc/openstack-dashboard/local_settings.rpmnew
Ensure you make the following changes:- Backup your existing
local_settingsfile:#cp local_settings local_settings.old - Rename the
local_settings.rpmnewfile tolocal_settingsfile:#mv local_settings.rpmnew local_settings - Replace the following configuration options with the corresponding value from your
local_settings.oldfile:- ALLOWED_HOSTS
- SECRET_KEY
- CACHES
- OPENSTACK_KEYSTONE_URL
- Restart the web server on all your controllers to apply all changes:
#service httpd restart
- Clean up the Dashboard resource in Pacemaker:
#pcs resource cleanup horizon-clone - Restart the Dashboard resource in Pacemaker:
#pcs resource enable horizon-clone - Wait until the output of
pcs status shows that the above resource is running.
Upgrade Compute hosts (nova):On each compute host:- Stop all OpenStack services on the host:
#openstack-service stop - Upgrade all packages:
#yum upgrade - If you are performing a rolling upgrade of your compute hosts you need to set explicit API version limits to ensure compatibility between your Juno and Kilo environments.Before starting Kilo controller or compute services, you need to set the
compute option in the [upgrade_levels] section of nova.conf to juno:#crudini --set /etc/nova/nova.conf upgrade_levels compute juno
You need to make this change on your controllers and on your compute hosts.
- Start all OpenStack services on the host:#openstack-service start
- After you have upgraded all of your hosts to Kilo, you will want to remove the API limits configured in the previous step. On all of your hosts:
#crudini --del /etc/nova/nova.conf upgrade_levels compute
Post-upgrade tasks:- After completing all of your individual service upgrades, you should perform a complete package upgrade on all of your systems:
#yum upgrade
This will ensure that all packages are up-to-date. You may want to schedule a restart of your OpenStack hosts at a future date to ensure that all running processes are using updated versions of the underlying binaries.
- Review the resulting configuration files. The upgraded packages will have installed .rpmnew files appropriate to the Red Hat Enterprise Linux OpenStack Platform 7 version of the service. New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for any deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated, and deprecated configuration options for each service, see the Configuration Reference available from the Red Hat Enterprise Linux OpenStack Platform Documentation Suite.
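If you want to identify processes that are still running against pre-upgrade binaries before scheduling that restart, the needs-restarting utility from the yum-utils package can help; this is an optional sketch, not part of the documented procedure:
#yum -y install yum-utils
#needs-restarting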
