Chapter 6. Downgrading OpenShift
6.1. Overview
Following an OpenShift Container Platform upgrade, it may be desirable in extreme cases to downgrade your cluster to a previous version. The following sections outline the required steps for each system in a cluster to perform a downgrade along the OpenShift Container Platform 3.5 to 3.4 downgrade path.
These steps are currently only supported for RPM-based installations of OpenShift Container Platform and assume downtime of the entire cluster.
6.2. Verifying Backups
The Ansible playbook used during the upgrade process should have created a backup of the master-config.yaml file and the etcd data directory. Ensure these exist on your masters and etcd members:
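As a sketch of that verification, the following checks for timestamped backups under the default RPM-install paths; the exact paths and the have_backup helper are illustrative assumptions, not part of any OpenShift tooling:

```shell
# Illustrative check for timestamped backups (default RPM paths assumed).
# have_backup is a hypothetical helper, not an OpenShift command.
have_backup() {
    ls "$1".* >/dev/null 2>&1
}

have_backup /etc/origin/master/master-config.yaml \
    && echo "master-config.yaml backup found" \
    || echo "WARNING: no master-config.yaml backup on this host"
```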
Also, back up the node-config.yaml file on each node (including masters, which have the node component on them) with a timestamp:
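A minimal way to take such a timestamped copy, assuming the default node configuration path (the backup_with_timestamp helper name is illustrative):

```shell
# Copy a config file aside with a timestamp suffix, preserving permissions.
backup_with_timestamp() {
    cp -p "$1" "$1.$(date +%Y%m%d%H%M%S)"
}

# Default node config path; adjust if your installation differs.
if [ -f /etc/origin/node/node-config.yaml ]; then
    backup_with_timestamp /etc/origin/node/node-config.yaml
fi
```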
If you use a separate etcd cluster instead of a single embedded etcd instance, the backup is likely created on all etcd members, though only one is required for the recovery process. You can run a separate etcd instance that is co-located with your master nodes.
The RPM downgrade process in a later step should create .rpmsave backups of the following files, but it may be a good idea to keep a separate copy regardless:
/etc/sysconfig/atomic-openshift-master
/etc/etcd/etcd.conf 1

1 - Only required if using a separate etcd cluster.
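One way to keep those separate copies is sketched below; the keep_copy helper and the .downgrade-backup suffix are illustrative choices, not a documented convention:

```shell
# Copy each config file aside before the RPM downgrade; files that do not
# exist on a given host (for example etcd.conf on non-etcd hosts) are skipped.
keep_copy() {
    [ -f "$1" ] && cp -p "$1" "$1.downgrade-backup" || :
}

keep_copy /etc/sysconfig/atomic-openshift-master
keep_copy /etc/etcd/etcd.conf
```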
6.3. Shutting Down the Cluster
On all masters, nodes, and etcd members (if you use a separate etcd cluster running on dedicated hosts), ensure the relevant services are stopped.
On the master in a single master cluster:
# systemctl stop atomic-openshift-master
On each master in a multi-master cluster:
# systemctl stop atomic-openshift-master-api
# systemctl stop atomic-openshift-master-controllers
On all master and node hosts:
# systemctl stop atomic-openshift-node
On any etcd hosts for a separate etcd cluster:
# systemctl stop etcd
6.4. Removing RPMs
The *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file when installed. Run the following commands on each host to remove the atomic-openshift and docker packages from the exclude list:
# atomic-openshift-excluder unexclude
# atomic-openshift-docker-excluder unexclude
On all masters, nodes, and etcd members (if you use a separate etcd cluster running on dedicated hosts), remove the following packages:
# yum remove atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
If you use a separate etcd cluster, also remove the etcd package:
# yum remove etcd
If using the embedded etcd, leave the etcd package installed. It is required for running the etcdctl command to issue operations in later steps.
6.5. Downgrading Docker
Both OpenShift Container Platform 3.4 and 3.5 require Docker 1.12, so Docker does not need to be downgraded.
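To confirm this on each host, you can check that the running Docker daemon reports a 1.12.x version; the docker_ok helper below is an illustrative sketch, not an OpenShift tool:

```shell
# Return success only for Docker 1.12.x versions.
docker_ok() {
    case "$1" in
        1.12*) return 0 ;;
        *)     return 1 ;;
    esac
}

# Query the running Docker daemon; falls back to "unknown" if unavailable.
ver=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo unknown)
docker_ok "$ver" && echo "Docker $ver: OK" || echo "Docker $ver: expected 1.12.x"
```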
6.6. Reinstalling RPMs
Disable the OpenShift Container Platform 3.5 repositories, and re-enable the 3.4 repositories:
# subscription-manager repos \
    --disable=rhel-7-server-ose-3.5-rpms \
    --enable=rhel-7-server-ose-3.4-rpms
On each master, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
On each node, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-node \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
If you use a separate etcd cluster, install the following package on each etcd member:
# yum install etcd
6.7. Restoring etcd
See Backup and Restore.
6.8. Bringing OpenShift Container Platform Services Back Online
See Backup and Restore.
6.9. Verifying the Downgrade
To verify the downgrade, first check that all nodes are marked as Ready:
# oc get nodes
NAME                 STATUS                     AGE
master.example.com   Ready,SchedulingDisabled   165d
node1.example.com    Ready                      165d
node2.example.com    Ready                      165d
Then, verify that you are running the expected versions of the docker-registry and router images, if deployed:
# oc get -n default dc/docker-registry -o json | grep \"image\"
    "image": "openshift3/ose-docker-registry:v3.4.1.2-2",
# oc get -n default dc/router -o json | grep \"image\"
    "image": "openshift3/ose-haproxy-router:v3.4.1.2-2",
You can use the diagnostics tool on the master to look for common issues and provide suggestions:
# oc adm diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.