After upgrading to OpenShift 4.9+, oauth-proxy is unable to load the OpenShift configuration: unable to retrieve authentication information for tokens: the server could not find the requested resource


Environment

  • Red Hat OpenShift Container Platform (RHOCP)
    • 4.9+
  • Red Hat OpenShift Service on AWS (ROSA)
    • 4.9+
  • Red Hat OpenShift Dedicated (OSD)
    • 4.9+
  • Azure Red Hat OpenShift (ARO)
    • 4.9+

Issue

A pod that runs an oauth-proxy container as a sidecar stays in CrashLoopBackOff after the cluster is upgraded to 4.9+ and reports the following errors:

2023/02/20 07:22:24 provider.go:290: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates. 
2023/02/20 07:22:24 main.go:138: Invalid configuration:   unable to load OpenShift configuration: unable to retrieve authentication information for tokens: the server could not find the requested resource

Resolution

Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.

  • Replace the container image openshift/oauth-proxy:latest with registry.redhat.io/openshift4/ose-oauth-proxy (see the sketch after this list).

  • Modify the related command-line options.

  • Modify RBAC if needed.
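
For example, assuming the sidecar container is named oauth-proxy inside a Deployment called proxy in the default namespace (hypothetical names that match the example manifests below), the image can be swapped in place:

# Point the sidecar at the supported image from the Red Hat registry
oc set image deployment/proxy oauth-proxy=registry.redhat.io/openshift4/ose-oauth-proxy -n default

# Watch the rollout and confirm the new pod stays Running
oc rollout status deployment/proxy -n default

Note that swapping the image alone may not be enough: the command-line options and RBAC may also need the adjustments described below.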

Command line options: https://github.com/openshift/oauth-proxy#command-line-options

openshift4/ose-oauth-proxy: https://catalog.redhat.com/software/containers/openshift4/ose-oauth-proxy/5cdb2133bed8bd5717d5ae64

Reference: https://github.com/openshift/oauth-proxy/issues/179

Example:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth-service-account-access
  namespace: default
data:
  description: Fake ConfigMap to give all authenticated users access to a given service behind openshift-oauth-proxy.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: oauth-service-account-access
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  resourceNames:
  - oauth-service-account-access
---
# allow all authenticated users (including all service accounts) to read the configmap oauth-service-account-access
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oauth-service-account-access
  namespace: default
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: oauth-service-account-access
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: proxy
  namespace: default
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.primary: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"proxy"}}'
# Create a secure connection to the proxy via a route
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: proxy-can-create-token-reviews
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: proxy
  namespace: default
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: proxy
  namespace: default
spec:
  to:
    kind: Service
    name: proxy
  tls:
    termination: reencrypt
---
apiVersion: v1
kind: Service
metadata:
  name: proxy
  namespace: default
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: proxy-tls
spec:
  ports:
  - name: proxy
    port: 443
    targetPort: 8443
  selector:
    app: proxy
# Launch a proxy as a sidecar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy
  template:
    metadata:
      labels:
        app: proxy
    spec:
      serviceAccountName: proxy
      containers:
      - name: oauth-proxy
        image: registry.redhat.io/openshift4/ose-oauth-proxy
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
        ports:
        - containerPort: 8443
          name: public
        args:
        - --https-address=:8443
        - --provider=openshift
        - --openshift-service-account=proxy
        - --upstream=http://localhost:8080
        - --tls-cert=/etc/tls/private/tls.crt
        - --tls-key=/etc/tls/private/tls.key
        - --openshift-delegate-urls={"/":{"group":"","resource":"configmaps","verb":"get","namespace":"default","name":"oauth-service-account-access"}}
        - --cookie-secret=SECRET
        volumeMounts:
        - mountPath: /etc/tls/private
          name: proxy-tls

      - name: app
        image: openshift/hello-openshift:latest
      volumes:
      - name: proxy-tls
        secret:
          secretName: proxy-tls
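
After applying the manifests above (saved, for example, to proxy.yaml, a hypothetical file name), a quick end-to-end check is to look up the route host and request it. Getting the proxy's sign-in response instead of a crash-looping pod indicates the delegation now works:

oc apply -f proxy.yaml

# The router assigns the route hostname; look it up instead of guessing
oc get route proxy -n default -o jsonpath='{.spec.host}'

# Substitute the reported hostname; the proxy should answer with its sign-in flow
curl -kI https://<route-host>/

# The sidecar should no longer restart
oc get pod -n default -l app=proxy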

Root Cause

The outdated oauth-proxy image from Docker Hub still performs v1beta1 TokenReview requests, but the authentication.k8s.io/v1beta1 API was removed in Kubernetes 1.22, so it is no longer served by OpenShift 4.9+.
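
This can be confirmed against the cluster: on 4.9+ only the v1 authentication API remains, so the v1beta1 endpoint the old proxy calls answers with the same "could not find the requested resource" error that appears in the pod logs:

# Only authentication.k8s.io/v1 is served on Kubernetes 1.22+ / OpenShift 4.9+
oc api-versions | grep authentication.k8s.io

# Querying the removed API group version reproduces the error from the sidecar
oc get --raw /apis/authentication.k8s.io/v1beta1
# Error from server (NotFound): the server could not find the requested resource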

The openshift/oauth-proxy:latest image on Docker Hub has not been updated in over four years.

openshift/oauth-proxy:latest on Docker Hub: https://hub.docker.com/r/openshift/oauth-proxy/tags

Diagnostic Steps

Check which images the affected pod is running:

oc get pod $pod_name -o json | jq '.spec.containers[].image'

Make sure the oauth-proxy container uses registry.redhat.io/openshift4/ose-oauth-proxy instead of openshift/oauth-proxy:latest.
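
To locate every affected workload at once, a cluster-wide scan for the outdated image can help (a sketch; it assumes jq is available, as in the command above):

# Print namespace/pod and image for each container still pulling openshift/oauth-proxy
oc get pods -A -o json \
  | jq -r '.items[] | . as $p | .spec.containers[] | select(.image | test("openshift/oauth-proxy")) | "\($p.metadata.namespace)/\($p.metadata.name)\t\(.image)"'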

