Upgrading OpenShift Dedicated
Chapter 1. Preparing to upgrade OpenShift Dedicated to 4.9
Because the latest version of Kubernetes removes a significant number of deprecated APIs, upgrading your OpenShift Dedicated clusters to OpenShift 4.9 requires you to evaluate your API usage and migrate affected components.
Before you can upgrade your OpenShift Dedicated clusters, you must update the required tools to the appropriate version.
1.1. Administrator acknowledgment when upgrading to OpenShift 4.9
OpenShift Dedicated 4.9 uses Kubernetes 1.22, which removed a significant number of deprecated v1beta1 APIs.
OpenShift Dedicated 4.8.14 introduced a requirement that an administrator must provide a manual acknowledgment before the cluster can be upgraded from OpenShift Dedicated 4.8 to 4.9. This is to help prevent issues after upgrading to OpenShift Dedicated 4.9, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this is done, the administrator can provide the administrator acknowledgment.
All OpenShift Dedicated 4.8 clusters require this administrator acknowledgment before they can be upgraded to OpenShift Dedicated 4.9.
1.2. Removed Kubernetes APIs
OpenShift Dedicated 4.9 uses Kubernetes 1.22, which removed the following deprecated v1beta1 APIs. You must migrate manifests and API clients to use the v1 API version. For more information about migrating removed APIs, see the Kubernetes documentation.
Table 1.1. v1beta1 APIs removed from Kubernetes 1.22
Resource | API | Notable changes
---|---|---
APIService | apiregistration.k8s.io/v1beta1 | No
CertificateSigningRequest | certificates.k8s.io/v1beta1 | Yes
ClusterRole | rbac.authorization.k8s.io/v1beta1 | No
ClusterRoleBinding | rbac.authorization.k8s.io/v1beta1 | No
CSIDriver | storage.k8s.io/v1beta1 | No
CSINode | storage.k8s.io/v1beta1 | No
CustomResourceDefinition | apiextensions.k8s.io/v1beta1 | Yes
Ingress | extensions/v1beta1 | Yes
Ingress | networking.k8s.io/v1beta1 | Yes
IngressClass | networking.k8s.io/v1beta1 | No
Lease | coordination.k8s.io/v1beta1 | No
LocalSubjectAccessReview | authorization.k8s.io/v1beta1 | Yes
MutatingWebhookConfiguration | admissionregistration.k8s.io/v1beta1 | Yes
PriorityClass | scheduling.k8s.io/v1beta1 | No
Role | rbac.authorization.k8s.io/v1beta1 | No
RoleBinding | rbac.authorization.k8s.io/v1beta1 | No
SelfSubjectAccessReview | authorization.k8s.io/v1beta1 | Yes
StorageClass | storage.k8s.io/v1beta1 | No
SubjectAccessReview | authorization.k8s.io/v1beta1 | Yes
TokenReview | authentication.k8s.io/v1beta1 | No
ValidatingWebhookConfiguration | admissionregistration.k8s.io/v1beta1 | Yes
VolumeAttachment | storage.k8s.io/v1beta1 | No
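As an illustration of the kind of migration the table calls for, the following is a minimal before/after sketch for the Ingress removal. The resource and service names are invented for the example; the v1 schema moves the backend under service.name and service.port and makes pathType required:

```yaml
# Before: served from the removed networking.k8s.io/v1beta1 API.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example            # illustrative name
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc
          servicePort: 8080
---
# After: the same Ingress expressed in networking.k8s.io/v1.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix   # required in v1
        backend:
          service:
            name: example-svc
            port:
              number: 8080
```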
1.3. Evaluating your cluster for removed APIs
There are several methods to help administrators identify where APIs that will be removed are in use. However, OpenShift Dedicated cannot identify all instances, especially idle workloads or external tools. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs.
1.3.1. Reviewing alerts to identify uses of removed APIs
The APIRemovedInNextReleaseInUse alert tells you that there are removed APIs in use on your cluster. If this alert is firing in your cluster, review it and take action to clear it by migrating manifests and API clients to use the new API version. You can use the APIRequestCount API to get more information about which APIs are in use and which workloads are using removed APIs.
1.3.2. Using APIRequestCount to identify uses of removed APIs
You can use the APIRequestCount API to track API requests and review whether any of them are using one of the removed APIs.
Prerequisites

- You must have access to the cluster as a user with the cluster-admin role.
Procedure

Run the following command and examine the REMOVEDINRELEASE column of the output to identify the removed APIs that are currently in use:

$ oc get apirequestcounts

Example output

NAME                                         REMOVEDINRELEASE  REQUESTSINCURRENTHOUR  REQUESTSINLAST24H
cloudcredentials.v1.operator.openshift.io                      32                     111
ingresses.v1.networking.k8s.io                                 28                     110
ingresses.v1beta1.extensions                 1.22              16                     66
ingresses.v1beta1.networking.k8s.io          1.22              0                      1
installplans.v1alpha1.operators.coreos.com                     93                     167
...
Note: You can safely ignore the following entries that appear in the results:

- system:serviceaccount:kube-system:generic-garbage-collector appears in the results because it walks through all registered APIs searching for resources to remove.
- system:kube-controller-manager appears in the results because it walks through all resources to count them while enforcing quotas.
You can also use -o jsonpath to filter the results:

$ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'

Example output

1.22    certificatesigningrequests.v1beta1.certificates.k8s.io
1.22    ingresses.v1beta1.extensions
1.22    ingresses.v1beta1.networking.k8s.io
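The jsonpath filter works directly against the cluster. If you have instead saved the plain tabular output of oc get apirequestcounts to a file, a small awk filter can pull out only the rows flagged for removal. This is an illustrative sketch, not part of the product tooling; the sample file below simply mirrors the example output earlier in this section (rows with a REMOVEDINRELEASE value have four fields, rows without it have three):

```shell
# Sample of `oc get apirequestcounts` output, saved to a file.
cat <<'EOF' > apirequestcounts.txt
NAME                                         REMOVEDINRELEASE  REQUESTSINCURRENTHOUR  REQUESTSINLAST24H
cloudcredentials.v1.operator.openshift.io                      32                     111
ingresses.v1.networking.k8s.io                                 28                     110
ingresses.v1beta1.extensions                 1.22              16                     66
ingresses.v1beta1.networking.k8s.io          1.22              0                      1
installplans.v1alpha1.operators.coreos.com                     93                     167
EOF

# Keep only data rows that carry a REMOVEDINRELEASE value (4 fields),
# printing the API name and the release it is removed in.
awk 'NR > 1 && NF == 4 { print $1, $2 }' apirequestcounts.txt
```

On the sample data this prints the two v1beta1 Ingress entries along with the 1.22 release marker.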
1.3.3. Using APIRequestCount to identify which workloads are using the removed APIs
You can examine the APIRequestCount resource for a given API version to help identify which workloads are using the API.
Prerequisites

- You must have access to the cluster as a user with the cluster-admin role.
Procedure

Run the following command and examine the username and userAgent fields to help identify the workloads that are using the API:

$ oc get apirequestcounts <resource>.<version>.<group> -o yaml

For example:

$ oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o yaml
You can also use -o jsonpath to extract the username values from an APIRequestCount resource:

$ oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o jsonpath='{range ..username}{$}{"\n"}{end}' | sort | uniq

Example output

user1
user2
app:serviceaccount:delta
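Because the same client can appear many times in the username stream, it can also help to rank clients by how often they show up. The following sketch runs the same sort/uniq pipeline with a count; the sample usernames stand in for live output, and against a real cluster you would pipe the oc jsonpath command above into the identical pipeline:

```shell
# Sample username stream, as produced by the jsonpath query above.
cat <<'EOF' > usernames.txt
user1
user2
user1
app:serviceaccount:delta
user1
EOF

# Count occurrences per client and list the heaviest users first.
sort usernames.txt | uniq -c | sort -rn
```

On the sample data, user1 appears first with a count of 3, which is the client you would investigate first.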
1.4. Migrating instances of removed APIs
For information on how to migrate removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation.
Chapter 2. OpenShift Dedicated cluster upgrades
You can schedule automatic or manual upgrade policies to update the version of your OpenShift Dedicated clusters. Upgrading OpenShift Dedicated clusters can be done through Red Hat OpenShift Cluster Manager or OpenShift Cluster Manager CLI.
Red Hat Site Reliability Engineers (SREs) monitor upgrade progress and remedy any issues encountered.
2.1. Understanding OpenShift Dedicated cluster upgrades
When upgrades are made available for your OpenShift Dedicated cluster, you can upgrade to the newest version through Red Hat OpenShift Cluster Manager or OpenShift Cluster Manager CLI. You can set your upgrade policies on existing clusters or during cluster creation, and upgrades can be scheduled to occur automatically or manually.
Red Hat Site Reliability Engineers (SREs) provide a curated list of available versions for your OpenShift Dedicated clusters. For each cluster, you can review the full list of available releases, as well as the corresponding release notes. OpenShift Cluster Manager enables installation of clusters at the latest supported versions, and upgrades can be canceled at any time.
You can also set a grace period for how long PodDisruptionBudget-protected workloads are respected during upgrades. After this grace period, any workloads protected by PodDisruptionBudget that have not been successfully drained from a node are forcibly deleted.
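For reference, a workload is "protected" in this sense when a PodDisruptionBudget selects its pods. A minimal sketch, with illustrative names, of a budget that keeps at least two replicas running while nodes are drained during an upgrade:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb        # illustrative name
spec:
  minAvailable: 2          # node drains may not take the app below 2 running pods
  selector:
    matchLabels:
      app: example         # matches the pods this budget protects
```

If a drain cannot satisfy this budget within the configured grace period, the remaining pods are forcibly deleted as described above.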
All Kubernetes objects and PVs in each OpenShift Dedicated cluster are backed up as part of the OpenShift Dedicated service. Application and application data backups are not a part of the OpenShift Dedicated service. Ensure you have a backup policy in place for your applications and application data prior to scheduling upgrades.
When following a scheduled upgrade policy, there might be a delay of an hour or more before the upgrade process begins, even if it is an immediate upgrade. Additionally, the duration of the upgrade might vary based on your workload configuration.
2.1.1. Recurring upgrades
Upgrades can be scheduled to occur automatically on a day and time specified by the cluster owner or administrator. Upgrades occur on a weekly basis, unless an upgrade is unavailable for that week.
If you select recurring updates for your cluster, you must provide an administrator’s acknowledgment. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment. For information about administrator acknowledgment, see Administrator acknowledgment when upgrading to OpenShift 4.9.
Recurring upgrade policies are optional; if none is set, the upgrade policy defaults to individual upgrades.
2.1.2. Individual upgrades
If you opt for individual upgrades, you are responsible for updating your cluster. If you select an update version that requires approval, you must provide an administrator’s acknowledgment. For information about administrator acknowledgment, see Administrator acknowledgment when upgrading to OpenShift 4.9.
If your cluster version becomes outdated, it will transition to a limited support status. For more information on OpenShift life cycle policies, see OpenShift Dedicated update life cycle.
2.1.3. Upgrade notifications
From the OpenShift Cluster Manager console, you can view your cluster's history on the Overview tab. Upgrade states can be viewed in the service log under the Cluster history heading.
Every change of state also triggers an email notification to the cluster owner and subscribed users. You will receive email notifications for the following events:
- An upgrade has been scheduled.
- An upgrade has started.
- An upgrade has completed.
- An upgrade has been canceled.
For recurring upgrades, you will also receive email notifications before the upgrade occurs based on the following cadence:
- 2 week notice
- 1 week notice
- 1 day notice
Additional resources
- For more information about the service log and adding cluster notification contacts, see Accessing the service logs for OpenShift Dedicated clusters.
2.2. Scheduling recurring upgrades for your cluster
You can use OpenShift Cluster Manager to schedule recurring, automatic upgrades for z-stream patch versions for your OpenShift Dedicated cluster. Based on upstream changes, there might be times when no updates are released. Therefore, no upgrade occurs for that week.
Procedure
- From OpenShift Cluster Manager, select your cluster from the clusters list.
- Click the Upgrade settings tab to access the upgrade operator.
- To schedule recurring upgrades, select Recurring updates.
- Provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- Specify the day of the week and the time you want your cluster to upgrade.
- Click Save.
- Optional: Set a grace period for node draining by selecting a designated amount of time from the drop-down list. A 1-hour grace period is set by default.
- To edit an existing recurring upgrade policy, edit the preferred day or start time from the Upgrade Settings tab. Click Save.
- To cancel a recurring upgrade policy, switch the upgrade method to individual from the Upgrade Settings tab. Click Save.
On the Upgrade settings tab, the Upgrade status box indicates that an upgrade is scheduled. The date and time of the next scheduled update is listed.
2.3. Scheduling individual upgrades for your cluster
You can use OpenShift Cluster Manager to manually upgrade your OpenShift Dedicated cluster one time.
Procedure
- From OpenShift Cluster Manager, select your cluster from the clusters list.
- Click the Upgrade settings tab to access the upgrade operator. You can also update your cluster from the Overview tab by clicking Update next to the cluster version under the Details heading.
- To schedule an individual upgrade, select Individual updates.
- Click Update in the Update Status box.
- Select the version you want to upgrade your cluster to. Recommended cluster upgrades appear in the UI. To learn more about each available upgrade version, click View release notes.
- If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Click Next.
To schedule your upgrade:
- Click Upgrade now to upgrade within the next hour.
- Click Schedule a different time and specify the date and time that you want the cluster to upgrade.
- Click Next.
- Review the upgrade policy and click Confirm upgrade.
- A confirmation appears when the cluster upgrade has been scheduled. Click Close.
- Optional: Set a grace period for node draining by selecting a designated amount of time from the drop-down list. A 1-hour grace period is set by default.
From the Overview tab, next to the cluster version, the UI notates that the upgrade has been scheduled. Click View details to view the upgrade details. If you need to cancel the scheduled upgrade, you can click Cancel this upgrade from the View Details pop-up.
The same upgrade details are available on the Upgrade settings tab under the Upgrade status box. If you need to cancel the scheduled upgrade, you can click Cancel this upgrade from the Upgrade status box.
In the event that a CVE or other critical issue affecting OpenShift Dedicated is found, all clusters are upgraded within 48 hours of the fix being released. You are notified when the fix is available and informed that the cluster will be automatically upgraded at your latest preferred start time before the 48-hour window closes. You can also upgrade manually at any time before the automatic upgrade starts.