Pod token-refresher logs timeout messages in ROSA/OSD


Environment

  • Red Hat OpenShift Service on AWS (ROSA)
    • 4
  • Red Hat OpenShift Dedicated (OSD)
    • 4

Issue

  • The token-refresher pod is in a CrashLoopBackOff state.
  • The proxy network requirements are misconfigured.
  • Message in the logs:
$ oc logs token-refresher-55988d94dc-g98l6
2023/10/25 08:40:54 http: proxy error: dial tcp x.x.x.x:443: i/o timeout

Resolution

The issue is harmless: the token-refresher pod's only function is to allow Red Hat SRE to forward Telemetry metrics, and there is no further impact on the cluster itself.

Workaround

  • Exclude the sso.redhat.com URL from proxy re-encryption.
  • Currently, the token-refresher pod does not support communicating through a proxy, so configure the required URLs to bypass the proxy as described in the documentation.
  • An internal RFE is in progress to allow the token-refresher pod to use a proxy.
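
On clusters where the egress proxy is configured through the cluster-wide Proxy object, the bypass can be sketched by adding sso.redhat.com to spec.noProxy. This fragment is illustrative only: the proxy hostnames are placeholders, and on ROSA/OSD the proxy settings are managed, so verify the supported change procedure (for example via OCM or the rosa CLI) in the documentation before editing the object directly.

```yaml
# Cluster-wide Proxy object (proxy/cluster); only the relevant fields shown.
# httpProxy/httpsProxy values below are examples, not real endpoints.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: sso.redhat.com   # bypass the proxy for Red Hat SSO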

Root Cause

Currently, the token-refresher pod does not support communicating through a proxy.

Diagnostic Steps

  • Check the status of the token-refresher pod
$ oc get po -n openshift-monitoring
token-refresher-xxxxxxxxxx-xxxxx                        0/1     CrashLoopBackOff   268 (3m7s ago)   24h
  • Check the logs of the token-refresher pod
$ oc logs token-refresher-xxxxxxxxxx-xxxxx
level=info name=token-refresher ts=20xx-xx-xxT12:09:48.262932741Z caller=main.go:138 msg=token-refresher
20xx/xx/xx 12:10:18 OIDC provider initialization failed: Get "https://sso.redhat.com/auth/realms/redhat-external/.well-known/openid-configuration": dial tcp xx.xxx.xxx.xxx:443: i/o timeout
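
When scanning longer logs, the endpoint that is timing out can be pulled out of the Go-style proxy error lines shown above. The helper below is a hypothetical sketch for triage, not part of the product; it only matches the `dial tcp <addr>:<port>: i/o timeout` pattern seen in these logs.

```python
import re

# Matches Go net/http dial errors such as:
#   "dial tcp x.x.x.x:443: i/o timeout"
# (addresses in KCS articles are masked with "x", so "x" is allowed too)
TIMEOUT_RE = re.compile(r"dial tcp ([\w.]+):(\d+): i/o timeout")

def failing_endpoint(log_line):
    """Return (host, port) of the endpoint that timed out, or None."""
    m = TIMEOUT_RE.search(log_line)
    if not m:
        return None
    return m.group(1), int(m.group(2))

line = "2023/10/25 08:40:54 http: proxy error: dial tcp x.x.x.x:443: i/o timeout"
print(failing_endpoint(line))  # → ('x.x.x.x', 443)
```

If the helper returns an endpoint, check that host and port against the cluster's firewall and proxy allowlist requirements.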

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
