Making Cloud Configuration Changes in OpenShift Results in Errors, Nodes Set to NotReady (on OpenStack, AWS)
Issue
- I changed the node configuration as described in the documentation, adding the following to the kubelet arguments; the config file is found and read:
kubeletArguments:
  cloud-provider:
    - "openstack"
  cloud-config:
    - "/etc/origin/node/cloud.conf"
- Change node config, restart node.
- Logs go into a loop:
Jul 12 20:08:00 node1.example.com atomic-openshift-node[98369]: I0712 20:08:00.243409 98435 kubelet.go:2770] Recording NodeHasSufficientDisk event message for node node1.example.com
Jul 12 20:08:00 node1.example.com atomic-openshift-node[98369]: I0712 20:08:00.243459 98435 kubelet.go:1134] Attempting to register node node1.example.com
Jul 12 20:08:00 node1.example.com atomic-openshift-node[98369]: E0712 20:08:00.822027 98435 kubelet.go:1173] Previously "node1.example.com" had externalID "[id]"; now it is "node1.example.com"; will delete and recreate.
Jul 12 20:08:00 node1.example.com atomic-openshift-node[98369]: E0712 20:08:00.823340 98435 kubelet.go:1175] Unable to delete old node: User "system:node:node1.example.com" cannot delete nodes at the cluster scope
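The second error shows the kubelet trying to delete and recreate its own Node object (because the recorded externalID no longer matches) while lacking cluster-scope permission to delete nodes. One way to inspect this from a master with cluster-admin rights is sketched below; this is only a sketch, not the article's resolution, and deleting the Node object is appropriate only if letting the kubelet recreate it is acceptable.
# Run on a master as cluster-admin; node1.example.com is the affected node.
oc get node node1.example.com -o yaml | grep externalID   # externalID stored before the change
# Optional: remove the stale Node object so the restarted kubelet can re-register it.
oc delete node node1.example.com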
- Updated as instructed in the documentation.
Status of nodes prior to change:
NAME                STATUS                      AGE
node1.example.com   Ready,SchedulingDisabled    23h
node2.example.com   Ready                       23h
node3.example.com   Ready                       23h
Status after the node was restarted:
NAME                STATUS                        AGE
node1.example.com   NotReady,SchedulingDisabled   1d
node2.example.com   Ready                         1d
node3.example.com   Ready                         1d
Removed the changes from the node configuration and restarted both the master and the node; the status went back to Ready.
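A sketch of that revert, assuming the default OpenShift Container Platform 3.2 service names and node config path (adjust the unit names and path if your installation differs):
# 1. Remove the cloud-provider / cloud-config entries from /etc/origin/node/node-config.yaml.
# 2. Restart the services and confirm the node recovers:
systemctl restart atomic-openshift-master   # on the master
systemctl restart atomic-openshift-node     # on the affected node
oc get nodes                                # node should return to Ready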
Environment
- Red Hat OpenShift Container Platform
- 3.2.0
