At last! Authentication & Console. OCP4.7 with Libvirt on a cloud server


Hi all, I have hit a wall getting OCP working in my dev environment and wondered if somebody could get me past this sticking point. I have been trying to create a demo lab using OCP and my IBM storage setup, but I just can't get the OCP console to work.
I appreciate it is probably (almost certainly) network related, but that's all I can say.
Quick overview of the environment:
- 16 vCPU/64 GB server in IBM Cloud running CentOS 8.3
- Libvirt guests all running RHCOS (the version that supports OCP 4.7)
- OCP 4.7
- 3 masters that are also workers

Nodes report ready:

~~~
oc get nodes
NAME           STATUS   ROLES           AGE   VERSION
okd4-master1   Ready    master,worker   16h   v1.20.0+87cc9a4
okd4-master2   Ready    master,worker   16h   v1.20.0+87cc9a4
okd4-master3   Ready    master,worker   16h   v1.20.0+87cc9a4
~~~

Cluster operators:

~~~
oc get clusteroperators
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED
authentication             False       True          True
baremetal        4.7.19    True        False         False
console          4.7.19    False       True          True
ingress          4.7.19    True        False         True
~~~
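When operators report Degraded like this, their status conditions usually name the failing step. A minimal sketch of how to pull them out (assuming a working KUBECONFIG; `oc get clusteroperator` with a jsonpath template is standard `oc` usage):

```shell
# Print each status condition of a cluster operator: type, status, message.
# The message on the Degraded=True condition usually points at the root cause.
co_conditions() {
  local co="$1"
  oc get clusteroperator "$co" \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'
}

# Only run against a live cluster; skip gracefully otherwise
if command -v oc >/dev/null 2>&1; then
  co_conditions authentication
  co_conditions console
fi
```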


DNS is working, as is the SDN, and the pods can all resolve to the outside world:

~~~
oc get all -n openshift-dns
NAME                    READY   STATUS    RESTARTS   AGE
pod/dns-default-lflg4   3/3     Running   0          16h
pod/dns-default-pzf2c   3/3     Running   0          15h
pod/dns-default-sw7j4   3/3     Running   0          15h

pod/dns-default-lflg4 querying kubernetes.default.svc.cluster.local to 10.128.0.36 -> 172.30.0.1
pod/dns-default-pzf2c querying kubernetes.default.svc.cluster.local to 10.128.0.36 -> 172.30.0.1

pod/dns-default-lflg4 querying redhat.com to 10.128.0.36 -> 209.132.183.105
pod/dns-default-pzf2c querying redhat.com to 10.128.0.36 -> 209.132.183.105
~~~
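For reference, checks like the ones above can be reproduced with `oc exec` into the dns-default pods. This is a hedged sketch: the pod name comes from this cluster's output and will differ per cluster, and the `dns` container name plus the presence of `nslookup` in the image are assumptions.

```shell
# Run a lookup from inside a dns-default pod to confirm in-cluster resolution.
dns_check() {
  local pod="$1" name="$2"
  oc -n openshift-dns exec "$pod" -c dns -- nslookup "$name"
}

if command -v oc >/dev/null 2>&1; then
  dns_check dns-default-lflg4 kubernetes.default.svc.cluster.local   # service DNS
  dns_check dns-default-lflg4 redhat.com                             # external DNS
fi
```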

I don't need a proxy to access the outside world, but the libvirt guests don't actually have a public IP, as that is attached to the CentOS 8 server.
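Since the public IP sits on the CentOS 8 host rather than on the libvirt guests, console traffic has to be forwarded from the host into the libvirt network somehow. One possible sketch uses firewalld forward-ports; the VIP value here is a placeholder, not from this cluster, and whether forward-ports alone suffice depends on the libvirt network's masquerading setup. The script only prints the commands so they can be reviewed before running.

```shell
# Placeholder ingress VIP -- replace with the ingressVIP from install-config.yaml
INGRESS_VIP="192.168.100.151"   # hypothetical value

# Emit (not execute) a firewalld rule that DNATs a host port to the ingress VIP
forward_port() {
  local port="$1"
  echo firewall-cmd --zone=public \
    "--add-forward-port=port=${port}:proto=tcp:toport=${port}:toaddr=${INGRESS_VIP}"
}

forward_port 443   # console and *.apps routes
forward_port 80    # plain-HTTP routes
```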

Any clues? I will take anything at this stage. CodeReady Containers works on this host, but I wanted to use the full package to demonstrate Spectrum Protect and the IBM storage integration.
Thanks in advance.

Responses

Just to update: I was able to get this all working by tearing everything down and using the Red Hat Assisted Installer. Once the cluster was built, I recreated the install-config.yaml and found there was a lot missing that would have taken another month to work out. Big thanks to the Red Hat team.
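For anyone wanting to make the same comparison: the install-config the installer actually used is kept in the `cluster-config-v1` ConfigMap in `kube-system`, so it can be pulled back out of the running cluster (assuming cluster-admin access).

```shell
# Print the install-config.yaml the installer recorded in the cluster
recover_install_config() {
  oc get configmap cluster-config-v1 -n kube-system \
    -o jsonpath='{.data.install-config}'
}

if command -v oc >/dev/null 2>&1; then
  recover_install_config
fi
```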

This was the part that was different from a basic install-config.yaml:

~~~
platform:
  baremetal:
    apiVIP: 192.168.100.150          # api.<cluster> resolves to this IP
    externalBridge: baremetal
    hosts:
    - bmc:
        address: ""
        disableCertificateVerification: false
        password: ""
        username: ""
      bootMACAddress: xx:xx:xx:xx:xx:xx   # insert your own MAC if you need to
      bootMode: legacy
      hardwareProfile: unknown
      name: openshift-master-0
      role: master
    - bmc:
        address: ""
        disableCertificateVerification: false
        password: ""
        username: ""
      bootMACAddress: xx:xx:xx:xx:xx:xx   # insert your own MAC if you need to
      bootMode: legacy
      hardwareProfile: unknown
      name: openshift-master-1
      role: master
    - bmc:
        address: ""
        disableCertificateVerification: false
        password: ""
        username: ""
      bootMACAddress: xx:xx:xx:xx:xx:xx   # insert your own MAC if you need to
      bootMode: legacy
      hardwareProfile: unknown
      name: openshift-master-2
      role: master
    ingressVIP: 192.168.100.xx       # *.apps.<cluster> resolves to this IP
    libvirtURI: qemu:///system
    provisioningNetwork: Disabled
    provisioningNetworkInterface: ""
~~~
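A quick way to confirm the two VIPs are wired into DNS before pointing an installer at them. `CLUSTER_DOMAIN` and both addresses below are placeholders for this sketch; substitute your own cluster domain and the apiVIP/ingressVIP from install-config.yaml.

```shell
# Placeholders -- substitute your own <cluster>.<base-domain> and VIPs
CLUSTER_DOMAIN="okd4.example.com"   # hypothetical
API_VIP="192.168.100.150"
INGRESS_VIP="192.168.100.151"       # hypothetical

# Compare what a name resolves to against the expected VIP
check_record() {
  local name="$1" expected="$2"
  local got
  got=$(getent hosts "$name" | awk '{print $1; exit}')
  if [ "$got" = "$expected" ]; then
    echo "OK:   $name -> $got"
  else
    echo "WARN: $name -> ${got:-<no answer>} (expected $expected)"
  fi
}

check_record "api.$CLUSTER_DOMAIN" "$API_VIP"
check_record "test.apps.$CLUSTER_DOMAIN" "$INGRESS_VIP"   # exercises the *.apps wildcard
```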