OpenShift cluster not starting after node restart

Hi,

We restarted the OpenShift cluster nodes, both masters and workers. After the restart we are no longer able to log in to the cluster. The kubelet is running on all nodes, but when we try to log in we get an error like "Unable to connect to the server: EOF".
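For reference, a client-side trace like the one below can be captured by raising the oc client log level during login; roughly along these lines (the API URL is from our environment, and the log level shown is just an assumed value, not necessarily the exact command we ran):

# log in with verbose client logging so each HTTP round trip is printed
oc login https://api.ocp4-cluster-001.int.dhdigital.co.in:6443 --loglevel=8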

I0329 06:33:48.223880   21050 loader.go:375] Config loaded from file:  /root/.kube/config
I0329 06:33:48.224429   21050 round_trippers.go:423] curl -k -v -XHEAD  'https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/'
I0329 06:33:48.252979   21050 round_trippers.go:443] HEAD https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/  in 28 milliseconds
I0329 06:33:48.253026   21050 round_trippers.go:449] Response Headers:
I0329 06:33:48.253101   21050 round_trippers.go:423] curl -k -v -XHEAD  'https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/'
I0329 06:33:48.259049   21050 round_trippers.go:443] HEAD https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/ 403 Forbidden in 5 milliseconds
I0329 06:33:48.259066   21050 round_trippers.go:449] Response Headers:
I0329 06:33:48.259070   21050 round_trippers.go:452]     Content-Type: application/json
I0329 06:33:48.259074   21050 round_trippers.go:452]     X-Content-Type-Options: nosniff
I0329 06:33:48.259077   21050 round_trippers.go:452]     X-Kubernetes-Pf-Flowschema-Uid: 17003f8a-fa21-447b-a853-9863dbd328f7
I0329 06:33:48.259080   21050 round_trippers.go:452]     X-Kubernetes-Pf-Prioritylevel-Uid: e31b76b8-c325-4019-bc47-293e5f940792
I0329 06:33:48.259084   21050 round_trippers.go:452]     Content-Length: 186
I0329 06:33:48.259088   21050 round_trippers.go:452]     Date: Mon, 29 Mar 2021 10:33:48 GMT
I0329 06:33:48.259092   21050 round_trippers.go:452]     Audit-Id: f1594e82-af43-4e0a-9cd2-bf108f9f8e69
I0329 06:33:48.259095   21050 round_trippers.go:452]     Cache-Control: no-cache, private
I0329 06:33:48.259111   21050 request_token.go:86] GSSAPI Enabled
I0329 06:33:48.259144   21050 round_trippers.go:423] curl -k -v -XGET  -H "X-Csrf-Token: 1" 'https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/.well-known/oauth-authorization-server'
I0329 06:33:48.259659   21050 round_trippers.go:443] GET https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/.well-known/oauth-authorization-server 200 OK in 0 milliseconds
I0329 06:33:48.259667   21050 round_trippers.go:449] Response Headers:
I0329 06:33:48.259670   21050 round_trippers.go:452]     Audit-Id: 60eeb747-9b25-47a8-b984-66c7dddef020
I0329 06:33:48.259673   21050 round_trippers.go:452]     Cache-Control: no-cache, private
I0329 06:33:48.259675   21050 round_trippers.go:452]     Content-Type: application/json
I0329 06:33:48.259678   21050 round_trippers.go:452]     X-Kubernetes-Pf-Flowschema-Uid: 17003f8a-fa21-447b-a853-9863dbd328f7
I0329 06:33:48.259680   21050 round_trippers.go:452]     X-Kubernetes-Pf-Prioritylevel-Uid: e31b76b8-c325-4019-bc47-293e5f940792
I0329 06:33:48.259683   21050 round_trippers.go:452]     Content-Length: 648
I0329 06:33:48.259685   21050 round_trippers.go:452]     Date: Mon, 29 Mar 2021 10:33:48 GMT
I0329 06:33:48.261108   21050 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: oc/openshift (linux/amd64) kubernetes/b66f2d3" 'https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/api/v1/namespaces/openshift/configmaps/motd'
I0329 06:33:48.261621   21050 round_trippers.go:443] GET https://api.ocp4-cluster-001.int.dhdigital.co.in:6443/api/v1/namespaces/openshift/configmaps/motd 403 Forbidden in 0 milliseconds
I0329 06:33:48.261629   21050 round_trippers.go:449] Response Headers:
I0329 06:33:48.261632   21050 round_trippers.go:452]     Cache-Control: no-cache, private
I0329 06:33:48.261634   21050 round_trippers.go:452]     Content-Type: application/json
I0329 06:33:48.261637   21050 round_trippers.go:452]     X-Content-Type-Options: nosniff
I0329 06:33:48.261640   21050 round_trippers.go:452]     X-Kubernetes-Pf-Flowschema-Uid: 17003f8a-fa21-447b-a853-9863dbd328f7
I0329 06:33:48.261642   21050 round_trippers.go:452]     X-Kubernetes-Pf-Prioritylevel-Uid: e31b76b8-c325-4019-bc47-293e5f940792
I0329 06:33:48.261645   21050 round_trippers.go:452]     Content-Length: 303
I0329 06:33:48.261648   21050 round_trippers.go:452]     Date: Mon, 29 Mar 2021 10:33:48 GMT
I0329 06:33:48.261650   21050 round_trippers.go:452]     Audit-Id: bb1c2ecb-76aa-4703-9bad-520fe1d697c3
I0329 06:33:48.261671   21050 request.go:1068] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"motd\" is forbidden: User \"system:anonymous\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift\"","reason":"Forbidden","details":{"name":"motd","kind":"configmaps"},"code":403}
I0329 06:33:48.261974   21050 helpers.go:234] Connection error: Head https://oauth-openshift.apps.ocp4-cluster-001.int.dhdigital.co.in: EOF
F0329 06:33:48.261986   21050 helpers.go:115] Unable to connect to the server: EOF
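The last two lines suggest the client can reach the API server on port 6443 but fails when it is handed off to the OAuth route (oauth-openshift.apps...). A direct probe of that route, using the hostname from the trace above, might look like this (purely a diagnostic sketch, not a fix):

# HEAD the OAuth route directly; an immediate EOF or connection reset here points at the
# ingress routers or the oauth-openshift pods rather than at the kube-apiserver itself
curl -kvI https://oauth-openshift.apps.ocp4-cluster-001.int.dhdigital.co.in/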

Kindly provide some way to resolve this issue.

Responses

Farzam Alam,

I'm not familiar with OpenShift or the specific error you ran into, so this link is a bit of a wild stab at the issue.

Is this a Red Hat-supported OpenShift product? I imagine you have already done some initial searches along these lines to find a link such as the one above?

I hope this helps,

Regards,
RJ

Hi RJ,

Thank you for the reply. Yes, we are using the Red Hat OpenShift product. The problem does not seem to be related to a stale kubeconfig file, as we are also getting errors in the kubelet logs on the master nodes. Please find the trace below.
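On the control-plane nodes the kubelet runs as a systemd service, so a trace like the one below can be pulled from its journal roughly as follows (run as root on the master, or through a debug/SSH session; the tail length is arbitrary):

# dump the kubelet journal for the current boot on the master node
journalctl -b -u kubelet --no-pager | tail -n 200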

Mar 17 06:16:35 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:35.873831    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:16:35 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:35.873841    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:16:35 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:35.873863    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:16:35 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:35.873867    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:36.193126    4997 kubelet_getters.go:173] status for pod kube-apiserver-control-plane-0.ocp4-cluster-001.int.dhdigital.co.in updated to {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-15 07:15:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-15 07:15:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-15 07:15:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-15 07:15:34 +0000 UTC  }]    192.168.10.204 192.168.10.204 [{192.168.10.204}] 2021-03-15 07:15:34 +0000 UTC [{setup {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2021-03-15 07:10:25 +0000 UTC,FinishedAt:2021-03-15 07:10:25 +0000 UTC,ContainerID:cri-o://0b4325f045c6ec358ed6f7c4e712142e5e7927f2b495748b33c3580a070495ad,}} {nil nil nil} true 0 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:971636b331fd2003ec1db1016bcfafa607dc88b4cd002f66de87b0a8beb4218e quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:971636b331fd2003ec1db1016bcfafa607dc88b4cd002f66de87b0a8beb4218e cri-o://0b4325f045c6ec358ed6f7c4e712142e5e7927f2b495748b33c3580a070495ad <nil>}] [{kube-apiserver {nil &ContainerStateRunning{StartedAt:2021-03-15 07:10:49 +0000 UTC,} nil} {nil nil &ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:ct: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:44.242368       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.205:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.205:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:44.299775       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.204:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.204:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:44.919253       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.205:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.205:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:45.109832       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:45.325447       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.204:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.204:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:45.807267       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:46.153999       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.206:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.206:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: Error: context deadline exceeded
Mar 17 06:16:36 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:36.193173    4997 kubelet_getters.go:173] status for pod kube-controller-manager-control-plane-0.ocp4-cluster-001.int.dhdigital.co.in updated to {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-15 07:15:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-17 06:16:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-17 06:16:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-15 07:15:34 +0000 UTC  }]    192.168.10.204 192.168.10.204 [{192.168.10.204}] 2021-03-15 07:15:34 +0000 UTC [] [{cluster-policy-controller {nil &ContainerStateRunning{StartedAt:2021-03-17 06:15:55 +0000 UTC,} nil} {nil nil &ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Error,Message:e server is currently unable to handle the request (get rangeallocations.security.openshift.io scc-uid)
Mar 17 06:16:39 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: E0317 06:16:39.234626    4997 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "control-plane-0.ocp4-cluster-001.int.dhdigital.co.in" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Mar 17 06:16:42 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:42.879805    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:16:42 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:42.879818    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:16:42 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:42.879821    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:16:42 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:42.879851    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: E0317 06:16:46.236107    4997 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "control-plane-0.ocp4-cluster-001.int.dhdigital.co.in" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:46.478094    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:46.478111    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:46.478114    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:46.478133    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:46.478284    4997 status_manager.go:435] Ignoring same status for pod "kube-apiserver-control-plane-0.ocp4-cluster-001.int.dhdigital.co.in_openshift-kube-apiserver(323581182ced278b72e4002aed53fbb0)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-15 07:15:34 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-15 07:15:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-15 07:15:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-15 07:15:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.10.204 PodIP:192.168.10.204 PodIPs:[{IP:192.168.10.204}] StartTime:2021-03-15 07:15:34 +0000 UTC InitContainerStatuses:[{Name:setup State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2021-03-15 07:10:25 +0000 UTC,FinishedAt:2021-03-15 07:10:25 +0000 UTC,ContainerID:cri-o://0b4325f045c6ec358ed6f7c4e712142e5e7927f2b495748b33c3580a070495ad,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:971636b331fd2003ec1db1016bcfafa607dc88b4cd002f66de87b0a8beb4218e ImageID:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:971636b331fd2003ec1db1016bcfafa607dc88b4cd002f66de87b0a8beb4218e ContainerID:cri-o://0b4325f045c6ec358ed6f7c4e712142e5e7927f2b495748b33c3580a070495ad Started:<nil>}] ContainerStatuses:[{Name:kube-apiserver State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-03-15 07:10:49 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:ct: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:44.242368       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.205:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.205:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:44.299775       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.204:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.204:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:44.919253       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.205:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.205:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:45.109832       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:45.325447       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.204:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.204:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:45.807267       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: W0315 07:10:46.153999       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://192.168.10.206:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.10.206:2379: connect: connection refused". Reconnecting...
Mar 17 06:16:46 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: Error: context deadline exceeded
Mar 17 06:16:49 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:49.886052    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:16:49 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:49.886069    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:16:49 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:49.886080    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:16:49 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:49.886083    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:16:53 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: E0317 06:16:53.237563    4997 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "control-plane-0.ocp4-cluster-001.int.dhdigital.co.in" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.477308    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.477410    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.477492    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.477555    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.478132    4997 status_manager.go:435] Ignoring same status for pod "kube-controller-manager-control-plane-0.ocp4-cluster-001.int.dhdigital.co.in_openshift-kube-controller-manager(870e0878b8237e4bddd00c1e3311b8c0)", status: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-15 07:15:34 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-17 06:16:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-17 06:16:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-15 07:15:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.10.204 PodIP:192.168.10.204 PodIPs:[{IP:192.168.10.204}] StartTime:2021-03-15 07:15:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:cluster-policy-controller State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-03-17 06:15:55 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Error,Message:e server is currently unable to handle the request (get rangeallocations.security.openshift.io scc-uid)
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.891676    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.891690    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.891694    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:16:56 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:16:56.891698    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:17:00 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: E0317 06:17:00.239100    4997 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "control-plane-0.ocp4-cluster-001.int.dhdigital.co.in" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Mar 17 06:17:03 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:03.897184    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:17:03 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:03.897192    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:17:03 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:03.897196    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:17:03 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:03.897200    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:17:07 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: E0317 06:17:07.240493    4997 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "control-plane-0.ocp4-cluster-001.int.dhdigital.co.in" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Mar 17 06:17:10 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:10.903158    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:17:10 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:10.903171    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:17:10 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:10.903188    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:17:10 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:10.903192    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:17:14 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: E0317 06:17:14.242114    4997 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "control-plane-0.ocp4-cluster-001.int.dhdigital.co.in" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Mar 17 06:17:17 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:17.908948    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:17:17 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:17.909084    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:17:17 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:17.909134    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:17:17 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:17.909182    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:17:18 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:18.477944    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:17:18 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:18.477955    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:17:18 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:18.477970    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:17:18 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:18.477974    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
Mar 17 06:17:21 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: E0317 06:17:21.243890    4997 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "control-plane-0.ocp4-cluster-001.int.dhdigital.co.in" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Mar 17 06:17:23 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:23.477634    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/gce-pd
Mar 17 06:17:23 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:23.477648    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/cinder
Mar 17 06:17:23 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:23.477672    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/azure-disk
Mar 17 06:17:23 control-plane-0.ocp4-cluster-001.int.dhdigital.co.in hyperkube[4997]: I0317 06:17:23.477676    4997 setters.go:777] Error getting volume limit for plugin kubernetes.io/aws-ebs
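From this trace, two things stand out: the kube-apiserver container keeps failing with "connection refused" against etcd on port 2379, and the kubelet is being treated as system:anonymous, which usually means its client credentials are not being accepted. Some generic checks we could run on a control-plane node to narrow this down (standard RHCOS/oc tooling, not a confirmed root cause for this cluster):

# is an etcd container running on this master, and is anything listening on 2379?
sudo crictl ps -a | grep etcd
sudo ss -tlnp | grep 2379

# once the API is reachable with a working admin kubeconfig, look for node CSRs stuck in Pending
oc get csr
oc adm certificate approve <csr-name>   # only for CSRs that should legitimately be approved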

Could you please share any observations?