Challenges running CodeReady Containers

Latest response

A few issues I am continuously facing:

1] I am never able to successfully execute crc start. Most of the time it stalls at
"Waiting for kube-apiserver availability... [takes around 2min]"

When run in debug mode, I get the following messages:


level=debug msg="retry loop: attempt 9"
level=debug msg="Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig"
level=debug msg="SSH command results: err: Process exited with status 1, output: "
level=debug msg="The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?\n"
level=debug msg="error: Temporary error: ssh command error:\ncommand : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig\nerr : Process exited with status 1\n - sleeping 1s"


Is there any mechanism to increase the timeout in "timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig"?
When executed separately (without the timeout), the "oc get nodes" command works fine and its output is:


 crc-pkjt4-master-0   Ready    master,worker   18d   v1.20.0+df9c838

2] Even when crc is up and I am able to log in, the build never succeeds.
In fact, I am able to log in to the host api.crc.testing using the command
ssh -i ~/.crc/machines/crc/id_ecdsa core@192.168.130.11
and can successfully resolve github.com from that shell. See the output below, run on the command line of 192.168.130.11:


host github.com
github.com has address 13.234.210.38
github.com mail is handled by 1 aspmx.l.google.com.
github.com mail is handled by 5 alt1.aspmx.l.google.com.
github.com mail is handled by 5 alt2.aspmx.l.google.com.
github.com mail is handled by 10 alt3.aspmx.l.google.com.
github.com mail is handled by 10 alt4.aspmx.l.google.com.

But then why does the build fail with this error?


Cloning "https://github.com/jboss-openshift/openshift-quickstarts" ...
error: fatal: unable to access 'https://github.com/jboss-openshift/openshift-quickstarts/': Could not resolve host: github.com


Btw, I am running CentOS 7 on a system with 32 GB of memory and an Intel® Core™ i5-4440S CPU @ 2.80GHz × 4.
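One thing worth checking: the VM resolving github.com does not guarantee the build pod can, since builds run in a container that uses the cluster DNS rather than the VM's shell environment directly. A hedged way to test resolution from inside the cluster, where the build actually runs (the pod name and image below are illustrative, not from the failing build):

```shell
# Check name resolution from inside a throwaway pod in the cluster.
# "dns-check" and the ubi8 image are placeholders; any image with getent works.
oc run dns-check --image=registry.access.redhat.com/ubi8/ubi --restart=Never \
  --command -- getent hosts github.com
sleep 10                  # give the pod a moment to pull the image and run
oc logs dns-check         # prints github.com's address if cluster DNS works
oc delete pod dns-check
```

If this fails while `host github.com` succeeds on the VM, the problem is in the cluster DNS path rather than the VM's /etc/resolv.conf.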

Responses

Hi Vivek,

Not sure if this works in your case too, but I could resolve it by running the following commands. :)

crc stop
crc delete

sudo chmod 666 /etc/hosts

crc setup
crc start

Regards,
Christian

Thanks.

I am just modifying my response.

I said "partially working" because even now, "crc start" sometimes terminates after "Waiting for kube-apiserver availability... [takes around 2min]". I am not sure whether I need to follow the above "stop-delete-setup-start" cycle (losing my earlier work) every time I switch off my machine and restart the next day.

Also, builds sometimes fail with the error: "error":"server_error","error_description":"The authorization server encountered an unexpected condition that prevented it from fulfilling the request.","state":"7bbf7767"

In brief, I am still waiting for a good resolution of these issues.

Btw, I used "crc start -n 8.8.8.8".

Is there any option to finish the rest of the tasks associated with "crc start"?

Btw, I tried suspending the command (^Z) and resuming it, but that does not work; the "crc start" process hangs after printing:

level=debug msg="RetryAfter timeout after 59 tries"

level=warning msg="Wrapper Docker Machine process exiting due to closed plugin server (read tcp 127.0.0.1:41628->127.0.0.1:42554: read: connection reset by peer)"
Sometimes crc start also fails with:

level=debug msg="retry loop: attempt 11"
level=debug msg="Running SSH command: timeout 5s oc get clusterversion --context admin --cluster crc --kubeconfig /opt/kubeconfig"
level=debug msg="SSH command results: err: Process exited with status 124, output: "
level=debug
level=debug msg="error: Temporary error: ssh command error:\ncommand : timeout 5s oc get clusterversion --context admin --cluster crc --kubeconfig /opt/kubeconfig\nerr : Process exited with status 124\n - sleeping 1s"

Btw, I also want to explore the possibility that some servers (the Docker repository, OpenShift ...) are inaccessible from India during the daytime, causing 'crc start' to fail.

My observation is that "crc start" generally works after 7am Eastern Time, i.e. after about 5pm IST (UTC+5:30). That may be a coincidence, but I want to validate it.

A strange observation: in both scenarios I completely cleared down the environments and downloaded the latest CRC.

Scenario 1 uses VMware, and Scenario 2 uses VirtualBox. Scenario 1 always works OK, but Scenario 2 fails.

Snippet from "crc start" debug from VMware

DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: NAME                 STATUS   ROLES           AGE   VERSION
crc-pkjt4-master-0   Ready    master,worker   18d   v1.20.0+df9c838 
DEBU NAME                 STATUS   ROLES           AGE   VERSION
crc-pkjt4-master-0   Ready    master,worker   18d   v1.20.0+df9c838 
DEBU Waiting for availability of resource type 'secret' 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: timeout 5s oc get secret --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: NAME                       TYPE                                  DATA   AGE
builder-dockercfg-9rsz6    kubernetes.io/dockercfg               1      18d
builder-token-6vx46        kubernetes.io/service-account-token   4      18

Snippet from "crc start" debug from VirtualBox

DEBU SSH command results: err: Process exited with status 1, output:  
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port? 
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s 
DEBU retry loop: attempt 16                       
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: Process exited with status 1, output:  
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port? 
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s 
DEBU retry loop: attempt 17    

Hi Benet!

I also get such errors sometimes (as I said above); I suspect there are some race conditions. In brief, though, my success rate with "crc start" has increased since following Christian Labisch's suggestion.

In brief:
1] I use the memory setting: crc config set memory 10240
2] I ensure that /opt/kubeconfig exists and is the same as ~/.crc/machines/crc/kubeconfig << interestingly, it appears to be required but is not copied.
3] I have now set chmod 0666 /etc/hosts, and have also followed a few other settings for the PATH variable and the firewall (as suggested in Getting Started).
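These steps can be sketched as commands. This is only a consolidation of the workarounds described in this thread; the VM IP (192.168.130.11) and the idea of copying the kubeconfig into the VM over ssh are assumptions based on earlier posts, not documented CRC procedure:

```shell
# 1] Memory setting (from this thread's workaround)
crc config set memory 10240

# 3] World-writable hosts file (workaround suggested earlier in the thread)
sudo chmod 0666 /etc/hosts

# 2] Make /opt/kubeconfig inside the VM match the host copy.
#    The IP and the scp-then-copy approach are assumptions for illustration.
scp -i ~/.crc/machines/crc/id_ecdsa \
    ~/.crc/machines/crc/kubeconfig core@192.168.130.11:/tmp/kubeconfig
ssh -i ~/.crc/machines/crc/id_ecdsa core@192.168.130.11 \
    'sudo cp /tmp/kubeconfig /opt/kubeconfig'
```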

"crc start" finishes successfully after multiple attempts (sometimes just 2). I have yet to find the exact reason for the failures, but I suspect the "timeout" on a few commands as one cause and race conditions as another.

Hi Vivek,

For what it's worth ... with the following settings I'm having a smooth experience. :)

crc config set cpus 8
crc config set memory 16384

sudo chmod 666 /etc/hosts

crc setup
crc start -c 8 -m 16384

Regards,
Christian

Hi Christian,

I am also curious about changing the mode of /etc/hosts from the default 644 to 666. Do you happen to know which process needs to update the hosts file, and how you came across the need for this?

Once again, many thanks.

Benet.

Hi Benet,

The CRC installer adds some relevant entries to the /etc/hosts file and 644 doesn't seem to be
sufficient. A kind Red Hat TAM asked the OpenShift team and pointed me to this workaround. :)

Regards,
Christian

Thanks Christian,

Very interesting. This makes me wonder, as a final thought before I go away and set this up on my environment too: if the goal is to let the non-root account that runs the CRC install write to the "/etc/hosts" file, perhaps we could use "setfacl" to grant access to just the required user via a file access control list (of course we'd need root or a sudoer to set that up). Would the OpenShift team consider that a slightly more secure workaround, or is it not worth it given this is for DEV anyway? :-) I think I know the answer :-)) but thought I'd ask anyway.

Many thanks, Benet.

You're welcome, Benet ! Well, I really don't know what the OpenShift team would think of this. :)
Workarounds are just what the word says : workarounds. Maybe they find another solution later.

Regards,
Christian

Hi Christian,

Thank you for the suggestion, but I'm afraid I still get the same problem. It's very strange. I have the same setup on 2 different systems, except that the VMware VM works OK and the VirtualBox VM has the issue. I've even set the VirtualBox machine to use "NAT Network" rather than "NAT" so that it can use a 192.168.x.x IP address; the VirtualBox help docs explain the difference between "NAT" and "NAT Network". Both VMs are running the same RHEL 7.9 release.

Regards,

Benet.

Hi Benet,

This is not a big surprise to me, because Oracle VirtualBox is known to have various issues.
By the way, it is not recommended to run multiple virtualization solutions on the same machine.
I myself am using KVM (nothing else), and everything works out of the box without issues. :)

Regards,
Christian

Hi Christian,

No worries. I am not running VMware and VirtualBox on the same machine. That just sounds like asking for trouble. :-)

Regards,

Benet.

Yes - exactly, Benet ... good decision ! :)

If it helps:

When "crc start" hangs, I get a Virtual Machine Manager crashed error,

ssh.py:27:<module>:  File "/usr/lib64/python2.7/site-packages/bcrypt/__init__.py", line 57

version - virt-manager-1.5.0-7.el7

Hi Christian,

Thank you for your reply. Sorry I was not clear about my environment. I have 2 separate laptops: one with VMware Workstation 15 Pro (v15.5.7 build-17171714) and the other with VirtualBox 6.1.22 r144080. Both run Windows 10 and are the same make and model. I would really like to know what it is about VirtualBox that makes "crc start" fail at the same point every time, while on VMware it works.

To be fair, I have not tried running "crc" directly on Windows yet.

Regards,

Benet.

Hi Benet,

I don't have any idea, other than that it might be related to a difference in the network setup of VMware and VirtualBox. :)

Regards,
Christian

Hi Christian,

Also, maybe it's because VMware Workstation Pro costs about $200 new while VirtualBox is free. There is that old saying that comes to mind: "You get what you pay for." :-))

Kind regards,

Benet.

Hahaha ... nice ... it's always good to meet someone with a great sense of humor! :D :D :D

Cheers :)
Christian

Thanks for your help Christian. Much appreciated. :-))

Kind regards,

Benet.

You're welcome, Benet ! :)

There is an issue raised at https://github.com/code-ready/crc/issues/2220 which is also related to the points I raised.

I also raised https://github.com/code-ready/crc/issues/2511

Btw, it appears Benet is executing crc within a VirtualBox VM rather than executing crc.exe directly on Windows. For my part, I am executing crc on a physical machine (OS: CentOS Linux release 7.9.2009).

Hi Vivek,

You are correct. I am running "crc" within a VM. I just don't know how or why it fails on VirtualBox and not VMware... but then again, Christian did mention VirtualBox having network issues, which I have also heard about; it's something of a "known ongoing unwanted feature".

Ok... I feel like I've been bitten by "Troubleshooting 101 of Firewalls", as happens with anything that has a network or connectivity issue. I am now trying VMware Player 16 free edition, and for some reason the "crc" environment would not start up here either, with a similar message. It was then that I went rogue and decided to drop the firewall (I do not encourage doing this as a workaround or fix, as disabling the firewall is a security "NO NO!!", but this was a test environment and I was just investigating). After temporarily stopping the "firewalld" service, the "crc" environment started up fine.

My VM guest had 16 GiB RAM and 120 GiB of disk space, with the CPU config using all 6 physical cores of my host system (64 GiB RAM and 2 TiB of disk). My usual problem of not being able to log into the RHOCP cluster GUI was gone. I am still investigating, since I would like to keep the firewall service running and configure it properly. I am now beginning to wonder whether VirtualBox VMs would also work if the firewall were temporarily disabled; I might check that at a later date if there's time.
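Rather than stopping firewalld outright, one hedged approach is to inspect the active zones and then trust only the CRC subnet. This is a sketch, not documented CRC guidance; the 192.168.130.0/24 subnet is taken from the libvirt network seen earlier in this thread, and the zone names may differ per distribution:

```shell
# Inspect first: which zones are active, and what does the libvirt zone allow?
sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=libvirt --list-all      # zone name is an assumption

# Trust traffic from the CRC/libvirt subnet instead of disabling the firewall.
# 192.168.130.0/24 comes from this thread; adjust for your environment.
sudo firewall-cmd --permanent --zone=trusted --add-source=192.168.130.0/24
sudo firewall-cmd --reload
```

If crc starts with firewalld running after this, the earlier failures point at a blocked host-to-VM path rather than at crc itself.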

Ok... maybe not. I am still getting the issues where CRC does not start up, so I am looking into why this is intermittent. I have 2 identical host machines. The one running VMware Workstation 15 Pro seems to have a lot more success than the one running VMware Player 16 (and VirtualBox, although I will have to try VirtualBox again at a later date).

I have tried again today using the latest "crc" download to date, with the command ...

crc start -c 4 -m 16384

... and it ran first time. Cluster up and accessible, no issues.

Not sure why this is intermittent.

Thanks for your updates, Benet! :) I can confirm the same ... it just works. There must have been
many improvements implemented; the new version is running OCP 4.8.2, and as we can see, the
installer size has increased a lot. I'm glad to read that you no longer experience issues, Benet!

Regards,
Christian

Hi Christian. Thank you for your reply. I would really like to get to the bottom of making this stable. It was rather strange, to be honest: it worked for a while, but then, as I was using the RHOCP web GUI, the networking would just stop; after a while it would come back, and then go again. It's still not quite there yet.

Kind regards,

Benet.

Hi Benet,

There's one thing you have to know: OCP is not exactly meant to be used this way. :)
CRC is a single-node test cluster; it is not intended for production use.

Regards,
Christian

I'm having the same issue on Windows Server 2019.

DEBU CodeReady Containers version: 1.30.1+376cc3c1
DEBU OpenShift version: 4.8.2 (not embedded in executable)
DEBU Running 'crc start'
DEBU Total memory of system is 17169760256 bytes
DEBU No new version available. The latest version is 1.30.1
DEBU Running '( Hyper-V\Get-VM crc ).state'
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
DEBU Checking if an older admin-helper executable is installed
DEBU No older admin-helper executable found
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
DEBU Total memory of system is 17169760256 bytes
INFO Checking if running in a shell with administrator rights
WARN Skipping above check...
INFO Checking Windows 10 release
DEBU Running '(Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion" -Name ReleaseId).ReleaseId'
INFO Checking Windows edition
DEBU Running '(Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion").EditionID'
DEBU Running on Windows ServerStandard edition
INFO Checking if Hyper-V is installed and operational
DEBU Running '@(Get-Wmiobject Win32_ComputerSystem).HypervisorPresent'
DEBU Running '@(Get-Service vmms).Status'
INFO Checking if crc-users group exists
DEBU Running 'Get-LocalGroup -Name crc-users'
INFO Checking if current user is in Hyper-V group and crc-users group
WARN Skipping above check...
INFO Checking if Hyper-V service is enabled
DEBU Running '@(Get-Service vmms).Status'
INFO Checking if the Hyper-V virtual switch exists
DEBU Running 'Get-VMSwitch crc | ForEach-Object { $_.Name }'
INFO Found Virtual Switch to use: crc
INFO Checking if admin-helper daemon is installed
INFO Checking if vsock is correctly configured
DEBU Running 'Get-Item -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestCommunicationServices\00000400-FACB-11E6-BD58-64006A7986D3"'
DEBU Checking file: C:\Users\redhat\.crc\machines\crc\.crc-exist
DEBU Running '( Hyper-V\Get-VM crc ).state'
DEBU Copying 'C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\oc.exe' to 'C:\Users\redhat\.crc\bin\oc\oc.exe'
DEBU Copying 'C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\podman.exe' to 'C:\Users\redhat\.crc\bin\oc\podman.exe'
INFO Starting CodeReady Containers VM for OpenShift 4.8.2...
DEBU Updating CRC VM configuration
DEBU Running 'Hyper-V\Start-VM crc'
DEBU Waiting for machine to be running, this may take a few minutes...
DEBU retry loop: attempt 0
DEBU Running '( Hyper-V\Get-VM crc ).state'
DEBU Machine is up and running!
DEBU Running '( Hyper-V\Get-VM crc ).state'
INFO CodeReady Containers instance is running with IP 127.0.0.1
DEBU Waiting until ssh is available
DEBU retry loop: attempt 0
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:60860->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:60860->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:60862->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:60862->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:60864->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:60864->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 3
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:60866->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:60866->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 4
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:62120->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:62120->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 5
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:63936->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:63936->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 6
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:63938->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:63938->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 7
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:53704->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:53704->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 8
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:63414->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:63414->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 9
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:64300->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:64300->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 10
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:64302->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:64302->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
DEBU retry loop: attempt 11
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: <nil>, output:
INFO CodeReady Containers VM is running
DEBU Running SSH command: cat /home/core/.ssh/authorized_keys
DEBU SSH command results: err: <nil>, output: ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAHjZ0vWtOrJAH1uzj/EMAFncXcQPP6XgF6u8IOzrhtY4NhOwW6AVkEFS9eVfRPmR2sqfud85ghTGHh7Us+zRSZ0fQEjb83sTs/JzCbUj1yQPUlboJ9lwmkMTNSVriYJgAPr938NZj7kOUwqXxow44qFxQgt0pXYSmNoL/wMIqTsWA4Haw== core
INFO Updating authorized keys...
DEBU Creating /home/core/.ssh/authorized_keys with permissions 0644 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Running SSH command: realpath /dev/disk/by-label/root
DEBU SSH command results: err: <nil>, output: /dev/sda4
DEBU Using root access: Growing /dev/sda4 partition
DEBU Running SSH command: sudo /usr/bin/growpart /dev/sda 4
DEBU SSH command results: err: Process exited with status 1, output: NOCHANGE: partition 4 is size 63961055. it cannot be grown
DEBU No free space after /dev/sda4, nothing to do
DEBU Running SSH command: cat /etc/resolv.conf
DEBU SSH command results: err: <nil>, output: # Generated by CRC
search crc.testing
nameserver 192.168.127.1

INFO Adding 8.8.8.8 as nameserver to the instance...
DEBU Running SSH command: NS=8.8.8.8; cat /etc/resolv.conf |grep -i "^nameserver $NS" || echo "nameserver $NS" | sudo tee -a /etc/resolv.conf
DEBU SSH command results: err: <nil>, output: nameserver 8.8.8.8
DEBU Using root access: make root Podman socket accessible
DEBU Running SSH command: sudo chmod 777 /run/podman/ /run/podman/podman.sock
DEBU SSH command results: err: <nil>, output:
DEBU Creating /etc/resolv.conf with permissions 0644 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU retry loop: attempt 0
DEBU Running SSH command: host -R 3 foo.apps-crc.testing
DEBU SSH command results: err: <nil>, output: foo.apps-crc.testing has address 192.168.127.2
INFO Check internal and public DNS query...
DEBU Running SSH command: host -R 3 quay.io
DEBU SSH command results: err: <nil>, output: quay.io has address 54.208.59.220
quay.io has address 54.156.10.58
quay.io has address 44.197.21.192
quay.io has address 3.216.152.103
quay.io has address 34.224.196.162
quay.io has address 44.193.101.5
quay.io has address 3.233.133.41
quay.io has address 3.213.173.170
INFO Check DNS query from host...
DEBU api.crc.testing resolved to [127.0.0.1]
WARN Failed to query DNS from host: lookup foo.apps-crc.testing: no such host
INFO Verifying validity of the kubelet certificates...
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-22T11:28:06+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-server-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-22T11:28:43+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-22T11:30:18+00:00
INFO Starting OpenShift kubelet service
DEBU Using root access: Executing systemctl daemon-reload command
DEBU Running SSH command: sudo systemctl daemon-reload
DEBU SSH command results: err: <nil>, output:
DEBU Using root access: Executing systemctl start kubelet
DEBU Running SSH command: sudo systemctl start kubelet
DEBU SSH command results: err: <nil>, output:
INFO Waiting for kube-apiserver availability... [takes around 2min]
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 3
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 4
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 5
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
[... attempts 6 through 21 repeat identically, each exiting with status 124 ...]
DEBU retry loop: attempt 22
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME                 STATUS   ROLES           AGE   VERSION
crc-txps5-master-0   Ready    master,worker   15d   v1.21.1+051ac4f
DEBU NAME                 STATUS   ROLES           AGE   VERSION
crc-txps5-master-0   Ready    master,worker   15d   v1.21.1+051ac4f
DEBU Waiting for availability of resource type 'secret'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get secret --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get secret --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: timeout 5s oc get secret --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME                       TYPE                                  DATA   AGE
builder-dockercfg-dklgw    kubernetes.io/dockercfg               1      15d
builder-token-8zxxn        kubernetes.io/service-account-token   4      15d
builder-token-xpz24        kubernetes.io/service-account-token   4      15d
default-dockercfg-rpcm6    kubernetes.io/dockercfg               1      15d
default-token-ls5mc        kubernetes.io/service-account-token   4      15d
default-token-rzvz2        kubernetes.io/service-account-token   4      15d
deployer-dockercfg-g6zlb   kubernetes.io/dockercfg               1      15d
deployer-token-5mv84       kubernetes.io/service-account-token   4      15d
deployer-token-dmxl9       kubernetes.io/service-account-token   4      15d
DEBU NAME                       TYPE                                  DATA   AGE
builder-dockercfg-dklgw    kubernetes.io/dockercfg               1      15d
builder-token-8zxxn        kubernetes.io/service-account-token   4      15d
builder-token-xpz24        kubernetes.io/service-account-token   4      15d
default-dockercfg-rpcm6    kubernetes.io/dockercfg               1      15d
default-token-ls5mc        kubernetes.io/service-account-token   4      15d
default-token-rzvz2        kubernetes.io/service-account-token   4      15d
deployer-dockercfg-g6zlb   kubernetes.io/dockercfg               1      15d
deployer-token-5mv84       kubernetes.io/service-account-token   4      15d
deployer-token-dmxl9       kubernetes.io/service-account-token   4      15d
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Waiting for availability of resource type 'machineconfigs'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
[... attempts 3 through 18 repeat identically, each refused with status 1 ...]
DEBU RetryAfter timeout after 19 tries
Failed to update ssh public key to machine config: Temporary error: ssh command error:
command : timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n (x2)
Temporary error: ssh command error:
command : timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n (x17)
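For anyone wanting to reproduce this wait by hand with a longer per-try timeout than crc's hard-coded 5s, here is a minimal sketch of the same retry pattern the log shows. The 30s per-try timeout and 60-attempt budget below are illustrative values, not crc defaults:

```shell
#!/usr/bin/env bash
# Retry a command under a per-try timeout, pausing 1s between failures --
# the same pattern crc's "retry loop: attempt N" log lines come from.
retry() {
  local attempts=$1 per_try=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if timeout "$per_try" "$@"; then
      return 0
    fi
    echo "attempt $i failed; sleeping 1s" >&2
    sleep 1
  done
  return 1
}

# Example (run on the CRC node, as in the log above):
#   retry 60 30s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
```

This only changes how long each probe waits; if the apiserver genuinely refuses connections, a longer timeout will not help by itself.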

Regards, Suhaib

Hi Suhaib,

To make your output more readable, just surround the code with three tildes ... check/click "Formatting Help". :)

Regards,
Christian

Hi Christian, here is the status info of my installation:

C:\Users\redhat> crc status --log-level debug
DEBU CodeReady Containers version: 1.30.1+376cc3c1
DEBU OpenShift version: 4.8.2 (not embedded in executable)
DEBU Running 'crc status'
DEBU Checking file: C:\Users\redhat\.crc\machines\crc\.crc-exist
DEBU Checking file: C:\Users\redhat\.crc\machines\crc\.crc-exist
DEBU Running '( Hyper-V\Get-VM crc ).state'
DEBU Running SSH command: df -B1 --output=size,used,target /sysroot | tail -1
DEBU Using ssh private keys: [C:\Users\redhat\.crc\machines\crc\id_ecdsa C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:61290->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU Cannot get root partition usage: ssh command error:
command : df -B1 --output=size,used,target /sysroot | tail -1
err     : ssh: handshake failed: read tcp 127.0.0.1:61290->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n
DEBU cannot get OpenShift status: Get "https://api.crc.testing:6443/apis/config.openshift.io/v1/clusteroperators": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
CRC VM:          Running
OpenShift:       Unreachable (v4.8.2)
Disk Usage:      0B of 0B (Inside the CRC VM)
Cache Usage:     15.23GB
Cache Directory: C:\Users\redhat\.crc\cache

In my Hyper-v console I can see that the crc VM is up and running and it has crc adapter (external) assigned with IP address.

I have tried installing and uninstalling multiple times, but the end result is the same: I'm not able to get past the SSH issue that I reported earlier.

DEBU Using ssh private keys: [C:\Users\redhat\.crc\cache\crc_hyperv_4.8.2\id_ecdsa_crc C:\Users\redhat\.crc\machines\crc\id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:60860->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host., output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:60860->127.0.0.1:2222: wsarecv: An existing connection was forcibly closed by the remote host.\n - sleeping 1s
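When the handshake to 127.0.0.1:2222 keeps being reset like this, a first step is to check whether anything is listening at all. A hedged sketch using bash's `/dev/tcp` pseudo-device (so it needs no extra tools; the host and port values are the ones from the log above):

```shell
#!/usr/bin/env bash
# Probe a TCP endpoint with a short timeout. bash's /dev/tcp
# pseudo-device means no dependency on nc or telnet.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port NOT reachable"
  fi
}

check_port 127.0.0.1 2222        # crc's SSH forward into the VM
check_port api.crc.testing 6443  # kube-apiserver endpoint
```

If 2222 is not reachable, the problem is below SSH entirely (Hyper-V switch, firewall, or the crc daemon), not the key exchange.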

Regards, Suhaib

Hi Suhaib,

A Windows (Hyper-V) related networking issue might be the root cause in this case.
Unfortunately I can't provide you with suggestions - because I don't use Windows. :)

Regards,
Christian

Hi Christian,

This sounds like a long shot, but yesterday evening I had my laptop plugged into the mains when I ran the "crc start" command, and it ran like a charm; the process went through with no problem. Regarding my post from earlier in the day about the network behaviour dropping in and out, and the RHOCP Web GUI not functioning fully all the time, my laptop was running on battery then, and therefore not in "Best Performance" mode. Given that apps like CRC use a fair amount of resources, perhaps they need the CPU to be delivering more power. I would prefer to have some logs from somewhere to back this up, so I will need to dig around if and when this happens again. I'll do some checks to see whether there is any correlation between these scenarios.

Kind regards,

Benet.

Hi Benet,

We have to bear in mind that OpenShift is a very complex environment - and under constant development.
And yes, it is quite "resource hungry", as I said earlier: OCP is not exactly meant to run on one node. :)

Regards,
Christian

Hi Christian,

Sounds crazy, but I was thinking I could create three (or four) VMs, each with RHEL 7 or 8 and 8GiB RAM, and have a mini cluster. The host system has 64GiB of physical RAM and two NVMe drives. Do you know if this is possible for a development-only scenario? I would be curious to try it. I could also try giving CRC 32GiB of RAM to play with, given the subject of resource usage.

Kind regards, Benet.

Hi Benet,

As I said, CRC is a one-node testing solution. What you can do is request a trial of OCP.
Then go to https://console.redhat.com/openshift/install/metal and start the installation.

Cheers :)
Christian

Hey Christian,

I know it sounds crazy, but check out this link here within these chats. From my testing, it is very possible that "crc start" works fine, but more reliably when the power is connected from the mains, or when the power settings are at "Best Performance".

Best regards,

Benet.

Hi Benet,

Very interesting finding from you indeed. I'm running the Balanced profile on Fedora, and mostly
the notebook is connected to the mains. What you say in the other discussion is true ... CRC
is resource hungry. I have quite a powerful box; maybe that's why it just works? Well, IDK ... :)

Regards,
Christian

Hi Christian,

Thank you for your reply. :-D I'm fully aware CRC is not meant for production use (it states this in the docs and Web GUI). I am using it for learning prior to taking the Red Hat EX180 and EX280 exams, and for various scenarios outside of this learning. I think this dev tool (CRC) is really great for learning about the RHOCP environment, and personally I would like to see it thrive, which is one of the reasons I am in pursuit of its stability.

Best regards,

Benet.

Absolutely, Benet ... I fully agree with you here, CRC is a great testing offering from Red Hat.
And your attitude and your contributions to improving the stability of the product are much appreciated. :)

Regards,
Christian

Hi Christian,

I know that running the CRC virtual machine on another hypervisor (such as VMware or VirtualBox) is not supported, but I like to know that it's possible :-)) For those of you who have also tried this (if you are interested), it appears that VirtualBox does not work with CRC, and it always seems to stop with the output below on host images running RHEL 7.9 server and RHEL 8.4 server....

DEBU RetryAfter timeout after 37 tries            
DEBU Making call to close driver server           
DEBU (crc) Calling .Close                         
DEBU Successfully made call to close driver server 
DEBU Making call to close connection to plugin binary 
Error waiting for apiserver: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n (x12)
Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n (x3)
Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n (x2)
Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n (x20)

So I moved on to VMware Player 16, which is still a little flaky. Using RHEL 8.4 server, I've been able to use it for a full day, but occasionally, on starting up CRC again, it has the issue where you cannot connect to the cluster....

INFO 2 operators are progressing: network, operator-lifecycle-manager-packageserver 
ERRO Cluster is not ready: cluster operators are still not stable after 10m4.701662871s 
INFO Adding crc-admin and crc-developer contexts to kubeconfig... 
ERRO Cannot update kubeconfig: Head "https://oauth-openshift.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 
Started the OpenShift cluster.
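When the cluster starts but operators are "still not stable after 10m" like this, one workaround is to keep polling them yourself after `crc start` returns, rather than relying on crc's fixed budget. A hedged sketch (column positions follow `oc get clusteroperators --no-headers` output; the 30-minute default budget here is an arbitrary choice, not a crc setting):

```shell
#!/usr/bin/env bash
# Poll cluster operators until all report AVAILABLE=True and
# PROGRESSING=False, or until the time budget runs out.
wait_for_operators() {
  local budget=${1:-1800} interval=${2:-30}
  local deadline=$(( $(date +%s) + budget )) out
  while (( $(date +%s) < deadline )); do
    out=$(oc get clusteroperators --no-headers 2>/dev/null) || out=""
    if [ -n "$out" ] && echo "$out" | \
       awk '$3 != "True" || $4 != "False" { bad = 1 } END { exit bad }'; then
      echo "all cluster operators stable"
      return 0
    fi
    sleep "$interval"
  done
  echo "operators still unstable after ${budget}s" >&2
  return 1
}

# Usage after `crc start`:
#   eval "$(crc oc-env)" && wait_for_operators 1800 30
```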

So for now I will muddle on with VMware Player. I would be intrigued to look inside the VM that CRC is running in; if possible I would like to access it to check where it is having problems, but this is an area I would need to read up on more to learn how to do.
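For reference, the VM can be reached over SSH with the key crc generates, as shown earlier in this thread. A small cheat sheet of starting points, printed as a script (the key path is crc's default; the VM IP 192.168.130.11 comes from the libvirt setup quoted above and may differ under other hypervisors):

```shell
#!/usr/bin/env bash
# Print a cheat sheet of commands for poking around inside the CRC VM.
# These are suggestions, not an exhaustive diagnostic procedure.
CRC_VM_CHEATSHEET='
ssh -i ~/.crc/machines/crc/id_ecdsa core@192.168.130.11
sudo systemctl status kubelet                       # is the kubelet running?
sudo crictl ps -a                                   # container states on the node
sudo journalctl -u kubelet --no-pager -n 50         # recent kubelet logs
oc get clusteroperators --kubeconfig /opt/kubeconfig
'
echo "$CRC_VM_CHEATSHEET"
```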

Kind regards,

Benet.

Hi Benet,

Thanks for providing these details. As I said earlier, I am using KVM (nothing else) and everything works without
issues out-of-the-box on Fedora and RHEL. For those using Windows ... well, they might have to "fiddle" a bit. :)

Regards,
Christian

Hi Christian,

Referring to your comment from earlier in this chat, sorry to ask (a silly question), but when you say you're using KVM, do you mean you're running Linux on a physical machine? I just want to get a clear picture of your environment. Thanks.

Kind regards,

Benet.

Hi Benet,

Yes - exactly ... all my physical machines are running the latest editions of Fedora Linux or RHEL. :)

Regards,
Christian