Stuck at Authentication & Console: OCP 4.7 with Libvirt on a cloud server
Hi all, I have hit a wall getting OCP working in my dev environment and wondered if somebody could get me past this sticking point. I have been trying to build a demo lab using OCP and my IBM storage setup, but I just can't get the OCP console to work.
I appreciate it is probably (almost certainly) network-related, but that's all I can say.
Quick overview of the environment:
- 16 vCPU / 64 GB server in IBM Cloud running CentOS 8.3
- Libvirt guests all running RHCOS (a version that supports OCP 4.7)
- OCP 4.7
- 3 masters that are also workers
Nodes report ready:
~~~
oc get nodes
NAME           STATUS   ROLES           AGE   VERSION
okd4-master1   Ready    master,worker   16h   v1.20.0+87cc9a4
okd4-master2   Ready    master,worker   16h   v1.20.0+87cc9a4
okd4-master3   Ready    master,worker   16h   v1.20.0+87cc9a4
~~~
Cluster operators (abridged):
~~~
oc get clusteroperators
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED
authentication             False       True          True
baremetal        4.7.19    True        False         False
console          4.7.19    False       True          True
ingress          4.7.19    True        False         True
~~~
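In case it helps anyone suggest a direction: the authentication operator typically degrades when it cannot reach its own OAuth route through the ingress router, so here is roughly what I have been running to dig into it (standard `oc` commands, nothing cluster-specific assumed):

~~~shell
# Show the status conditions and message for the degraded operator
oc describe clusteroperator authentication

# The OAuth route the operator needs to reach, and the pods behind it
oc get route oauth-openshift -n openshift-authentication
oc get pods -n openshift-authentication

# Console operator logs for the same style of failure
oc logs -n openshift-console deployment/console
~~~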
DNS is working, as is the SDN, and the pods can all resolve names in the outside world:
~~~
oc get all -n openshift-dns
NAME                    READY   STATUS    RESTARTS   AGE
pod/dns-default-lflg4   3/3     Running   0          16h
pod/dns-default-pzf2c   3/3     Running   0          15h
pod/dns-default-sw7j4   3/3     Running   0          15h

pod/dns-default-lflg4 querying kubernetes.default.svc.cluster.local to 10.128.0.36 -> 172.30.0.1
pod/dns-default-pzf2c querying kubernetes.default.svc.cluster.local to 10.128.0.36 -> 172.30.0.1
pod/dns-default-lflg4 querying redhat.com to 10.128.0.36 -> 209.132.183.105
pod/dns-default-pzf2c querying redhat.com to 10.128.0.36 -> 209.132.183.105
~~~
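The other thing I have been checking is whether the console and OAuth endpoints resolve and answer from the machine I browse from, since both sit behind the `*.apps` wildcard. Something like this, where `okd4.example.com` is a placeholder for my actual cluster/base domain:

~~~shell
# Both endpoints live under the *.apps wildcard record
dig +short console-openshift-console.apps.okd4.example.com
dig +short oauth-openshift.apps.okd4.example.com

# Hit the router directly from the machine that should reach the console
curl -kIv https://console-openshift-console.apps.okd4.example.com
~~~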
I don't need a proxy to reach the outside world, but the libvirt guests don't have public IPs of their own; the only public IP is attached to the CentOS 8 server.
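Since only the CentOS host has the public IP, I assume traffic for the console has to be forwarded from the host to the ingress routers on the guests. My current attempt is an HAProxy TCP passthrough on the host along these lines (a sketch, not my exact config; the `192.168.122.x` addresses are placeholders for the libvirt guest IPs, and a matching `frontend`/`backend` pair on port 6443 covers the API):

~~~
# /etc/haproxy/haproxy.cfg fragment on the CentOS host
frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https

backend ingress-https
    mode tcp
    balance roundrobin
    server master1 192.168.122.11:443 check
    server master2 192.168.122.12:443 check
    server master3 192.168.122.13:443 check
~~~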
Any clues? I will take anything at this stage. CodeReady Containers works on this host, but I wanted to use the full product to demonstrate Spectrum Protect and the IBM storage integration.
Thanks in advance.