
Upgrading

OpenShift Dedicated 4

Upgrading OpenShift Dedicated

Red Hat OpenShift Documentation Team

Abstract

This document provides information about upgrading OpenShift Dedicated clusters.

Chapter 1. Preparing to upgrade OpenShift Dedicated to 4.9

Upgrading your OpenShift Dedicated clusters to OpenShift 4.9 requires you to evaluate and migrate your APIs, because the latest version of Kubernetes removes a significant number of APIs.

Before you can upgrade your OpenShift Dedicated clusters, you must update the required tools to the appropriate version.

1.1. Administrator acknowledgment when upgrading to OpenShift 4.9

OpenShift Dedicated 4.9 uses Kubernetes 1.22, which removed a significant number of deprecated v1beta1 APIs.

OpenShift Dedicated 4.8.14 introduced a requirement that an administrator must provide a manual acknowledgment before the cluster can be upgraded from OpenShift Dedicated 4.8 to 4.9. This is to help prevent issues after upgrading to OpenShift Dedicated 4.9, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this is done, the administrator can provide the administrator acknowledgment.

All OpenShift Dedicated 4.8 clusters require this administrator acknowledgment before they can be upgraded to OpenShift Dedicated 4.9.

1.2. Removed Kubernetes APIs

OpenShift Dedicated 4.9 uses Kubernetes 1.22, which removed the following deprecated v1beta1 APIs. You must migrate manifests and API clients to use the v1 API version. For more information about migrating removed APIs, see the Kubernetes documentation.

Table 1.1. v1beta1 APIs removed from Kubernetes 1.22

Resource                         API                                    Notable changes

APIService                       apiregistration.k8s.io/v1beta1         No
CertificateSigningRequest        certificates.k8s.io/v1beta1            Yes
ClusterRole                      rbac.authorization.k8s.io/v1beta1      No
ClusterRoleBinding               rbac.authorization.k8s.io/v1beta1      No
CSIDriver                        storage.k8s.io/v1beta1                 No
CSINode                          storage.k8s.io/v1beta1                 No
CustomResourceDefinition         apiextensions.k8s.io/v1beta1           Yes
Ingress                          extensions/v1beta1                     Yes
Ingress                          networking.k8s.io/v1beta1              Yes
IngressClass                     networking.k8s.io/v1beta1              No
Lease                            coordination.k8s.io/v1beta1            No
LocalSubjectAccessReview         authorization.k8s.io/v1beta1           Yes
MutatingWebhookConfiguration     admissionregistration.k8s.io/v1beta1   Yes
PriorityClass                    scheduling.k8s.io/v1beta1              No
Role                             rbac.authorization.k8s.io/v1beta1      No
RoleBinding                      rbac.authorization.k8s.io/v1beta1      No
SelfSubjectAccessReview          authorization.k8s.io/v1beta1           Yes
StorageClass                     storage.k8s.io/v1beta1                 No
SubjectAccessReview              authorization.k8s.io/v1beta1           Yes
TokenReview                      authentication.k8s.io/v1beta1          No
ValidatingWebhookConfiguration   admissionregistration.k8s.io/v1beta1   Yes
VolumeAttachment                 storage.k8s.io/v1beta1                 No

1.3. Evaluating your cluster for removed APIs

There are several methods to help administrators identify where APIs that will be removed are in use. However, OpenShift Dedicated cannot identify all instances, especially workloads that are idle or external tools that are used. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs.

1.3.1. Reviewing alerts to identify uses of removed APIs

The APIRemovedInNextReleaseInUse alert tells you that there are removed APIs in use on your cluster. If this alert is firing in your cluster, review it and take action to clear it by migrating manifests and API clients to use the new API version. You can use the APIRequestCount API to get more information about which APIs are in use and which workloads are using removed APIs.

1.3.2. Using APIRequestCount to identify uses of removed APIs

You can use the APIRequestCount API to track API requests and review if any of them are using one of the removed APIs.

Prerequisites

  • You must have access to the cluster as a user with the cluster-admin role.

Procedure

  • Run the following command and examine the REMOVEDINRELEASE column of the output to identify the removed APIs that are currently in use:

    $ oc get apirequestcounts

    Example output

    NAME                                        REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
    cloudcredentials.v1.operator.openshift.io                      32                      111
    ingresses.v1.networking.k8s.io                                 28                      110
    ingresses.v1beta1.extensions                1.22               16                      66
    ingresses.v1beta1.networking.k8s.io         1.22               0                       1
    installplans.v1alpha1.operators.coreos.com                     93                      167
    ...

    Note

    You can safely ignore the following entries that appear in the results:

    • system:serviceaccount:kube-system:generic-garbage-collector appears in the results because it walks through all registered APIs searching for resources to remove.
    • system:kube-controller-manager appears in the results because it walks through all resources to count them while enforcing quotas.

    You can also use -o jsonpath to filter the results:

    $ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'

    Example output

    1.22    certificatesigningrequests.v1beta1.certificates.k8s.io
    1.22    ingresses.v1beta1.extensions
    1.22    ingresses.v1beta1.networking.k8s.io

1.3.3. Using APIRequestCount to identify which workloads are using the removed APIs

You can examine the APIRequestCount resource for a given API version to help identify which workloads are using the API.

Prerequisites

  • You must have access to the cluster as a user with the cluster-admin role.

Procedure

  • Run the following command and examine the username and userAgent fields to help identify the workloads that are using the API:

    $ oc get apirequestcounts <resource>.<version>.<group> -o yaml

    For example:

    $ oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o yaml

    You can also use -o jsonpath to extract the username values from an APIRequestCount resource:

    $ oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o jsonpath='{range ..username}{$}{"\n"}{end}' | sort | uniq

    Example output

    user1
    user2
    app:serviceaccount:delta
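In the full `-o yaml` output, the per-user details appear under the resource's status stanza. The following abbreviated sketch shows the general shape of that stanza; the field names follow the APIRequestCount API, but the node, user, and count values are illustrative:

```yaml
status:
  removedInRelease: "1.22"
  requestCount: 14
  currentHour:
    requestCount: 2
    byNode:
    - nodeName: ip-10-0-0-1.example.internal   # illustrative node name
      requestCount: 2
      byUser:
      - username: user1                        # who made the requests
        userAgent: kubectl/v1.21.0             # illustrative client user agent
        requestCount: 2
        byVerb:
        - verb: list                           # which verb was used
          requestCount: 2
```

The username and userAgent fields in byUser are what the procedure above examines to trace a removed API back to a specific workload or tool.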

1.4. Migrating instances of removed APIs

For information on how to migrate removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation.
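As an example of the kind of change involved, an Ingress manifest that uses the removed v1beta1 API can be rewritten for networking.k8s.io/v1 roughly as follows. The resource and service names are illustrative; in the v1 API, pathType is required and the backend service reference is nested under service:

```yaml
# Before: uses networking.k8s.io/v1beta1, removed in Kubernetes 1.22
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example            # illustrative name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc
          servicePort: 80
---
# After: equivalent manifest using networking.k8s.io/v1
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix   # pathType is required in v1
        backend:
          service:         # backend service reference is now nested
            name: example-svc
            port:
              number: 80
```

The Deprecated API Migration Guide lists the equivalent field changes for each removed API in Table 1.1.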

Chapter 2. OpenShift Dedicated cluster upgrades

You can schedule automatic or manual upgrade policies to update the version of your OpenShift Dedicated clusters. Upgrading OpenShift Dedicated clusters can be done through Red Hat OpenShift Cluster Manager or the OpenShift Cluster Manager CLI.

2.1. Understanding OpenShift Dedicated cluster upgrades

When upgrades are made available for your OpenShift Dedicated cluster, you can upgrade to the newest version through Red Hat OpenShift Cluster Manager or the OpenShift Cluster Manager CLI. You can set your upgrade policies on existing clusters or during cluster creation, and upgrades can be scheduled to occur automatically or manually.

Red Hat Site Reliability Engineers (SRE) provide a curated list of available versions for your OpenShift Dedicated clusters. For each cluster, you can review the full list of available releases, as well as the corresponding release notes. OpenShift Cluster Manager enables installation of clusters at the latest supported versions, and upgrades can be canceled at any time.

You can also set a grace period for how long PodDisruptionBudget-protected workloads are respected during upgrades. After this grace period, any workloads protected by a PodDisruptionBudget that have not been successfully drained from a node are forcibly deleted.
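For reference, a PodDisruptionBudget that would interact with this grace period looks like the following minimal sketch. The names and label selector are illustrative; during a node drain, eviction of pods matching the selector is blocked whenever fewer than minAvailable replicas would remain:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb        # illustrative name
spec:
  minAvailable: 2          # keep at least 2 pods running during a drain
  selector:
    matchLabels:
      app: example         # illustrative workload label
```

If the protected pods cannot be rescheduled elsewhere before the grace period expires, the upgrade proceeds and the remaining pods are deleted.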

Note

All Kubernetes objects and PVs in each OpenShift Dedicated cluster are backed up as part of the OpenShift Dedicated service. Application and application data backups are not a part of the OpenShift Dedicated service. Ensure you have a backup policy in place for your applications and application data prior to scheduling upgrades.

2.1.1. Recurring upgrades

Upgrades can be scheduled to occur automatically on a day and time specified by the cluster owner or administrator. Upgrades occur on a weekly basis, unless an upgrade is unavailable for that week.

If you select recurring updates for your cluster, you must provide an administrator’s acknowledgment. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment. For information about administrator acknowledgment, see Administrator acknowledgment when upgrading to OpenShift 4.9.

Note

Recurring upgrade policies are optional and if they are not set, the upgrade policies default to individual.

2.1.2. Individual upgrades

If you opt for individual upgrades, you are responsible for updating your cluster. If you select an update version that requires approval, you must provide an administrator’s acknowledgment. For information about administrator acknowledgment, see Administrator acknowledgment when upgrading to OpenShift 4.9.

If your cluster version becomes outdated, it will transition to a limited support status. For more information on OpenShift life cycle policies, see OpenShift Dedicated update life cycle.

2.1.3. Upgrade notifications

From the OpenShift Cluster Manager console, you can view your cluster’s history from the Overview tab. Upgrade states can be viewed in the service log under the Cluster history heading.

Every change of state also triggers an email notification to the cluster owner and subscribed users. You will receive email notifications for the following events:

  • An upgrade has been scheduled.
  • An upgrade has started.
  • An upgrade has completed.
  • An upgrade has been canceled.
Note

For recurring upgrades, you will also receive email notifications before the upgrade occurs based on the following cadence:

  • 2 week notice
  • 1 week notice
  • 1 day notice

2.2. Scheduling recurring upgrades for your cluster

You can use OpenShift Cluster Manager to schedule recurring, automatic upgrades for z-stream patch versions of your OpenShift Dedicated cluster. Based on upstream changes, there might be weeks when no updates are released; in that case, no upgrade occurs for that week.

Procedure

  1. From OpenShift Cluster Manager, select your cluster from the clusters list.
  2. Click the Upgrade settings tab to access the upgrade operator.
  3. To schedule recurring upgrades, select Recurring updates.
  4. Provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
  5. Specify the day of the week and the time you want your cluster to upgrade.
  6. Click Save.
  7. Optional: Set a grace period for node draining by selecting a designated amount of time from the drop-down list. A one-hour grace period is set by default.
  8. To edit an existing recurring upgrade policy, edit the preferred day or start time from the Upgrade Settings tab. Click Save.
  9. To cancel a recurring upgrade policy, switch the upgrade method to individual from the Upgrade Settings tab. Click Save.

On the Upgrade settings tab, the Upgrade status box indicates that an upgrade is scheduled, and the date and time of the next scheduled update are listed.

2.3. Scheduling individual upgrades for your cluster

You can use OpenShift Cluster Manager to manually upgrade your OpenShift Dedicated cluster one time.

Procedure

  1. From OpenShift Cluster Manager, select your cluster from the clusters list.
  2. Click the Upgrade settings tab to access the upgrade operator. You can also update your cluster from the Overview tab by clicking Update next to the cluster version under the Details heading.
  3. To schedule an individual upgrade, select Individual updates.
  4. Click Update in the Update Status box.
  5. Select the version you want to upgrade your cluster to. Recommended cluster upgrades appear in the UI. To learn more about each available upgrade version, click View release notes.
  6. If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
  7. Click Next.
  8. To schedule your upgrade:

    • Click Upgrade now to upgrade within the next hour.
    • Click Schedule a different time and specify the date and time that you want the cluster to upgrade.
  9. Click Next.
  10. Review the upgrade policy and click Confirm upgrade.
  11. A confirmation appears when the cluster upgrade has been scheduled. Click Close.
  12. Optional: Set a grace period for node draining by selecting a designated amount of time from the drop-down list. A one-hour grace period is set by default.

From the Overview tab, next to the cluster version, the UI indicates that the upgrade has been scheduled. Click View details to view the upgrade details. If you need to cancel the scheduled upgrade, you can click Cancel this upgrade from the View Details pop-up.

The same upgrade details are available on the Upgrade settings tab under the Upgrade status box. If you need to cancel the scheduled upgrade, you can click Cancel this upgrade from the Upgrade status box.

Warning

If a CVE or other critical issue affecting OpenShift Dedicated is found, all clusters are upgraded within 48 hours of the fix being released. You are notified when the fix is available and informed that the cluster will be automatically upgraded at your latest preferred start time before the 48-hour window closes. You can also upgrade manually at any time before the recurring upgrade starts.

Legal Notice

Copyright © 2022 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.