Do nodes become NotReady during a minor upgrade in OpenShift 4?
Issue
- During an upgrade, the SDN pods restart, the web console becomes unavailable, and nodes briefly report NotReady (commands for watching this transition are sketched after the log excerpt below):
NAME                          STATUS     ROLES           AGE     VERSION
master-0.server.example.com   Ready      master,worker   4h25m   v1.14.6+9fb2d5cf9
worker-0.server.example.com   Ready      worker          4h25m   v1.14.6+9fb2d5cf9
worker-1.server.example.com   NotReady   worker          4h25m   v1.14.6+9fb2d5cf9
- This causes a brief outage, recorded in the node's journal as NotReady messages (a way to collect them is shown below):
Sep 21 03:00:28 ip-10-0-140-173 hyperkube[1346]: I0921 03:00:28.784508 1346 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-21 03:00:28.784452389 +0000 UTC m=+71.300956518 LastTransitionTime:2019-09-21 03:00:28.784452389 +0000 UTC m=+71.300956518 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network}
Sep 21 03:00:28 ip-10-0-140-173 hyperkube[1346]: I0921 03:00:28.785392 1346 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-140-173.ec2.internal", UID:"ip-10-0-140-173.ec2.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node ip-10-0-140-173.ec2.internal status is now: NodeNotReady
....
Sep 21 03:00:58 ip-10-0-140-173 hyperkube[1346]: I0921 03:00:58.812503 1346 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-21 03:00:58.812452071 +0000 UTC m=+101.328956195 LastTransitionTime:2019-09-21 03:00:58.812452071 +0000 UTC m=+101.328956195 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network}
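To watch the transition as the upgrade rolls through the cluster, node and SDN pod status can be checked with standard oc commands. This is a minimal sketch; the node name is taken from the example output above:

# Watch node conditions change as the upgrade progresses
oc get nodes -w

# Inspect the Ready condition of an affected node in detail
oc describe node worker-1.server.example.com | grep -A 10 'Conditions:'

# Confirm the SDN pods are restarting on the affected nodes
oc get pods -n openshift-sdn -o wide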
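The kubelet journal entries shown above can also be collected without logging in to the node, assuming the node name from the messages:

# Pull the kubelet journal for the node through the API server
oc adm node-logs ip-10-0-140-173.ec2.internal -u kubelet | grep 'Node became not ready'

Alternatively, from a debug shell or SSH session on the node itself:

# Query the kubelet unit journal directly on the node
journalctl -u kubelet | grep 'Node became not ready'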
Environment
- OpenShift Container Platform 4.x