Redirect from OpenShift console to OpenShift Web Console is not working

Environment

  • OpenShift Container Platform
    • 3.11

Issue

  • Selecting the OpenShift Web Console from the drop-down menu in the console attempts a redirect to the Web Console, but the redirect fails.
  • Both the console pod and the Web Console pod appear to be running fine.
  • Logs may point to a certificate issue:
14:04:24 http: TLS handshake error from [::1]:41646: remote error: tls: bad certificate
14:04:24 auth: unable to verify auth code with issuer: Post https://node.webconsole.example.com:8443/oauth/token: x509: certificate is valid for console.openshift-console.svc, console.openshift-console.svc.cluster.local, not node.webconsole.example.com
2019/08/13 14:04:24 server: authentication failed: unauthenticated
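
The messages above come from the console pod. One way to retrieve them, assuming the default openshift-console project and using the same placeholder pod name as in the curl example further below:
# oc logs console-asdfasdf1p-podID -n openshift-console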

Resolution

  • Look at the hosts and resolv.conf files on the master node where the web console is running to check for DNS conflicts:
# cat /etc/hosts

#127.0.0.1 node.example.com osmaster
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
169.62.153.89    node.example.com osmaster
169.62.153.73    node1.example.com osnode1
169.62.153.91    node2.example.com osnode2
169.62.153.78   node3.example.com osnode3
169.62.153.83   node4.example.com osinfra

# The following lines are desirable for IPv6 capable hosts
::1 node.webconsole.example.com osmaster
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
  • Comment out the conflicting ::1 entry so that it reads:
#::1 node.webconsole.example.com osmaster
  • Restart dnsmasq on the node host so it picks up the change:
# systemctl restart dnsmasq
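
Once dnsmasq has restarted, you can confirm from the node that the web console hostname no longer resolves to the IPv6 loopback. A minimal check; the output shown is hypothetical, and the address returned depends on what your external DNS serves for the console hostname:
# getent hosts node.webconsole.example.com
169.62.153.89   node.webconsole.example.com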

Root Cause

DNS may be correct outside of the cluster, but if the entries on the node hosts are incorrect, pods resolve names according to the locally defined hosts and resolv.conf files on the node hosts before they ever consult name servers outside of the cluster.
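
This ordering comes from the glibc resolver configuration on the node: with the usual hosts: files dns entry in /etc/nsswitch.conf, /etc/hosts is consulted before any name server. A quick check on the node (typical RHEL default shown):
# grep ^hosts /etc/nsswitch.conf
hosts:      files dns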

In the example above, a curl to the web console URL from a node host resolves to ::1 (the IPv6 loopback) first because of the locally defined hosts entry:

[root@master.node.example.com ~]# curl -v https://node.webconsole.example.com:8443/oauth/token
* About to connect() to node.webconsole.example.com port 8443 (#0)
*   Trying ::1...

If you attempt the same curl from within the console pod, the results will be the same:

[root@master.node.example.com ~]# oc exec -it console-asdfasdf1p-podID bash
bash-4.1$ curl -v https://node.webconsole.example.com:8443/oauth/token
* About to connect() to node.webconsole.example.com port 8443 (#0)
*   Trying ::1...
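
The pod behaves the same way because, on OpenShift 3.11, the pod's /etc/resolv.conf points at dnsmasq on its node, and dnsmasq serves the entries from the node's /etc/hosts (which is why dnsmasq must be restarted after the edit above). A hypothetical check from inside the same pod; the search domains and nameserver address are placeholders for the node's actual values:
bash-4.1$ cat /etc/resolv.conf
search openshift-console.svc.cluster.local svc.cluster.local cluster.local example.com
nameserver 169.62.153.89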
