SDN pod in CrashLoopBackOff state after upgrading OpenShift cluster from 4.4 to 4.5 with multiple clusterCIDR values

Solution In Progress

Issue

  • After upgrading from 4.4 to 4.5, the sdn-config ConfigMap contains an empty clusterCIDR field (clusterCIDR: ""), which the SDN pod fails to parse (see the diagnostic commands after the configuration below).

  • sdn pod logs

Aug 25 20:56:13 worker01.xxxx hyperkube[1424]: F0825 20:55:08.999094 2999122 proxy.go:112] Unable to configure local traffic detector: invalid CIDR address:

  • sdn-config

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: ""
  qps: 0
clusterCIDR: ""       ---------------> empty string                  
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 0
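
The empty clusterCIDR in the rendered kube-proxy configuration is tied to the multiple clusterCIDR values mentioned in the title: the clusterCIDR field is single-valued and cannot carry more than one range. As a minimal diagnostic sketch (assuming the ConfigMap shown above is named sdn-config and lives in the openshift-sdn namespace), the configured cluster network CIDRs and the generated proxy configuration can be inspected with:

# List the clusterNetwork CIDRs defined in the cluster network configuration
oc get network.config/cluster -o jsonpath='{.spec.clusterNetwork}{"\n"}'

# Re-inspect the kube-proxy configuration rendered into the sdn-config ConfigMap
oc -n openshift-sdn get configmap sdn-config -o yaml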

Environment

  • Red Hat OpenShift Container Platform
    • 4.4, 4.5, and 4.6
