Worker nodes with a different version after an upgrade in OpenShift 4

Issue

  • After an OpenShift upgrade, the worker nodes report a different version from the master nodes (see the checks below):
$ oc get nodes
NAME              STATUS                     ROLES    AGE    VERSION
compute-0         Ready                      worker   274d   v1.14.6-152-g117ba1f
compute-1         Ready                      worker   274d   v1.14.6-152-g117ba1f
compute-2         Ready                      worker   274d   v1.14.6-152-g117ba1f
compute-3         Ready,SchedulingDisabled   worker   244d   v1.14.6+0a21dd3b3
control-plane-0   Ready                      master   274d   v1.16.2
control-plane-1   Ready                      master   274d   v1.16.2
control-plane-2   Ready                      master   225d   v1.16.2
  • The machine config pool for the worker nodes is in a "Degraded" state:
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT
master   rendered-master-25be7f6fd1d6ec4cee0ace77a2c64856   True      False      False      3              3                   3                     0
worker   rendered-worker-4b5261aab73d037696d8e2f576847e77   False     True       True       4              0                   0                     1
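
Two quick checks can help pin down where the rollout is stuck (a sketch; the node name compute-3 is taken from the output above, so substitute the node flagged in your own pool). The degraded pool's status conditions usually name the failing node and the reason, and the stuck node's MachineConfig annotations show which rendered config it is running versus the one it should reach:

$ oc get mcp worker -o jsonpath='{range .status.conditions[?(@.status=="True")]}{.type}{": "}{.message}{"\n"}{end}'
$ oc get node compute-3 -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/currentConfig}{"\n"}{.metadata.annotations.machineconfiguration\.openshift\.io/desiredConfig}{"\n"}'

If the two annotations differ, the node never finished pivoting to the new rendered config.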

Taking a look at the Cluster Operator status after the OpenShift update, the operators do not show any issue:

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.18    True        False         False      89d
cloud-credential                           4.3.18    True        False         False      271d
cluster-autoscaler                         4.3.18    True        False         False      271d
console                                    4.3.18    True        False         False      7h26m
dns                                        4.3.18    True        False         False      69d
image-registry                             4.3.18    True        False         False      25d
ingress                                    4.3.18    True        False         False      69d
insights                                   4.3.18    True        False         False      89d
kube-apiserver                             4.3.18    True        False         False      271d
kube-controller-manager                    4.3.18    True        False         False      271d
kube-scheduler                             4.3.18    True        False         False      271d
machine-api                                4.3.18    True        False         False      271d
machine-config                             4.3.18    True        False         False      20d
marketplace                                4.3.18    True        False         False      7h33m
monitoring                                 4.3.18    True        False         False      87m
network                                    4.3.18    True        False         False      271d
node-tuning                                4.3.18    True        False         False      7h34m
openshift-apiserver                        4.3.18    True        False         False      7h25m
openshift-controller-manager               4.3.18    True        False         False      69d
openshift-samples                          4.3.18    True        False         False      7h57m
operator-lifecycle-manager                 4.3.18    True        False         False      271d
operator-lifecycle-manager-catalog         4.3.18    True        False         False      271d
operator-lifecycle-manager-packageserver   4.3.18    True        False         False      7h27m
service-ca                                 4.3.18    True        False         False      271d
service-catalog-apiserver                  4.3.18    True        False         False      271d
service-catalog-controller-manager         4.3.18    True        False         False      271d
storage                                    4.3.18    True        False         False      8h
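
Since the list is long, a quick filter can confirm at a glance that no operator is unavailable, progressing, or degraded (a convenience only, assuming the standard oc get co column order of AVAILABLE, PROGRESSING and DEGRADED in columns 3-5; empty output means all operators are healthy):

$ oc get co --no-headers | awk '$3!="True" || $4!="False" || $5!="False"'
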
  • The "machine-config-daemon-host.service" logs on the worker node is reporting:
$ oc debug node/<worker node>
# chroot /host
# journalctl -u machine-config-daemon-host.service --no-pager
...
May 05 16:27:02 compute-3 machine-config-daemon[469898]: I0505 16:27:02.440757  469898 rpm-ostree.go:356] Running captured: podman create --net=none --name ostree-container-pivot quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cd521fb34c0d362205a1e55ad8c9c8dd6c7365b71a357ef705692ed80f7b112
May 05 16:27:02 compute-3 machine-config-daemon[469898]: Error: error creating container storage: the container name "ostree-container-pivot" is already in use by "083d4738f3bee56724b4d28bd8f0d176080ae2870daad6a77df2d2e79a59e7be". You have to remove that container to be able to reuse that name.: that name is already in use
...
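
The error text itself points at a leftover podman container from an earlier, interrupted pivot attempt. Below is a minimal sketch of clearing it from the same chrooted debug shell so that the machine-config daemon can recreate the container on its next attempt (the container name comes from the log above; verify it against your own logs, and note that the supported resolution for your exact version may differ):

# podman ps -a --filter name=ostree-container-pivot
# podman rm ostree-container-pivot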

Environment

  • OpenShift Container Platform (OCP) 4.1-4.3
