Getting Started Guide

Red Hat CodeReady Containers 1.25.0

Quick-start guide to using and developing with CodeReady Containers

Red Hat Developer Group Documentation Team

Abstract

This guide shows how to get up to speed using CodeReady Containers. The included instructions and examples guide you through your first steps developing containerized applications using Red Hat OpenShift Container Platform 4 from a host workstation (Microsoft Windows, macOS, or Red Hat Enterprise Linux).

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Introducing Red Hat CodeReady Containers

1.1. About CodeReady Containers

Red Hat CodeReady Containers brings a minimal OpenShift 4 cluster to your local computer. This cluster provides a minimal environment for development and testing purposes. CodeReady Containers is mainly targeted at running on developers' desktops. For other use cases, such as headless or multi-developer setups, use the full OpenShift installer.

Refer to the OpenShift documentation for a full introduction to OpenShift.

CodeReady Containers includes the crc command-line interface (CLI) to interact with the CodeReady Containers virtual machine running the OpenShift cluster.

1.2. Differences from a production OpenShift installation

Red Hat CodeReady Containers is a regular OpenShift installation with the following notable differences:

  • The CodeReady Containers OpenShift cluster is ephemeral and is not intended for production use.
  • It uses a single node which behaves as both a master and worker node.
  • It disables the machine-config and monitoring Operators by default.

    • These disabled Operators cause the corresponding parts of the web console to be non-functional.
    • For the same reason, there is no upgrade path to newer OpenShift versions.
  • The OpenShift instance runs in a virtual machine. This may cause other differences, particularly with external networking.

CodeReady Containers also includes the following non-customizable cluster settings. These settings should not be modified:

  • Use of the *.crc.testing domain.
  • The address range used for internal cluster communication.

    • The cluster uses the 172 address range. This can cause issues when, for example, a proxy is run in the same address space.
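A quick prefix check of your proxy settings can catch this address-range conflict early. A minimal sketch, where `check_proxy` is an illustrative helper name and only the 172 prefix mentioned above is matched:

```shell
# Flag proxy URLs whose host falls in the 172 address range used internally.
check_proxy() {
  case "$1" in
    *//172.*) echo "conflict: $1" ;;
    *) echo "ok: $1" ;;
  esac
}

check_proxy "http://10.0.0.1:3128"    # ok
check_proxy "http://172.30.1.1:3128"  # conflict
```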

Chapter 2. Installation

2.1. Minimum system requirements

CodeReady Containers has the following minimum hardware and operating system requirements.

2.1.1. Hardware requirements

CodeReady Containers requires the following system resources:

  • 4 virtual CPUs (vCPUs)
  • 9 GB of free memory
  • 35 GB of storage space
Note

The OpenShift cluster requires these minimum resources to run in the CodeReady Containers virtual machine. Some workloads may require more resources. To assign more resources to the CodeReady Containers virtual machine, see Configuring the virtual machine.

2.1.2. Operating system requirements

CodeReady Containers requires the following minimum version of a supported operating system:

2.1.2.1. Microsoft Windows

  • On Microsoft Windows, CodeReady Containers requires the Windows 10 Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.

2.1.2.2. macOS

  • On macOS, CodeReady Containers requires macOS 10.14 Mojave or newer. CodeReady Containers does not work on earlier versions of macOS.

2.1.2.3. Linux

  • On Linux, CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer (including 8.x versions) and on the latest two stable Fedora releases.
  • When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
  • Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.
  • See Required software packages to install the required packages for your Linux distribution.

2.2. Required software packages for Linux

CodeReady Containers requires the libvirt and NetworkManager packages to run on Linux. Consult the following table to find the command used to install these packages for your Linux distribution:

Table 2.1. Package installation commands by distribution

  • Fedora:

    $ sudo dnf install NetworkManager
  • Red Hat Enterprise Linux/CentOS:

    $ su -c 'yum install NetworkManager'
  • Debian/Ubuntu:

    $ sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager
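For scripting, the table above can be expressed as a small helper. A minimal sketch: `pick_install_cmd` is a hypothetical name, and the IDs mirror common /etc/os-release values:

```shell
# Print the package installation command for a given distribution ID.
pick_install_cmd() {
  case "$1" in
    fedora) echo "sudo dnf install NetworkManager" ;;
    rhel|centos) echo "su -c 'yum install NetworkManager'" ;;
    debian|ubuntu) echo "sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager" ;;
    *) echo "unsupported distribution: $1" >&2; return 1 ;;
  esac
}

pick_install_cmd fedora
```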

2.3. Installing CodeReady Containers

Procedure

  1. Download the latest release of CodeReady Containers for your platform and extract the contents of the archive to a location in your PATH.
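A hedged sketch of the extract-and-PATH step. The archive name and install directory are placeholders, and the tar line is commented out because it depends on the file you downloaded:

```shell
set -e
INSTALL_DIR="$HOME/bin"   # any directory; must end up on your PATH
mkdir -p "$INSTALL_DIR"

# Extract the crc executable from the downloaded archive (name varies by
# platform and release; shown commented as an illustration):
# tar -C "$INSTALL_DIR" --strip-components=1 -xf crc-linux-amd64.tar.xz

# Add the directory to PATH for this shell if it is not already there:
case ":$PATH:" in
  *":$INSTALL_DIR:"*) echo "already on PATH" ;;
  *) export PATH="$INSTALL_DIR:$PATH"; echo "added to PATH" ;;
esac
```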

2.4. Upgrading CodeReady Containers

Newer versions of the crc executable require manual setup to prevent incompatibilities with data from earlier versions.

Procedure

  1. Download the latest release of CodeReady Containers.
  2. Delete the existing CodeReady Containers virtual machine:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the CodeReady Containers virtual machine. Save any desired information stored in the virtual machine before running this command.

  3. Replace the earlier crc executable with the executable of the latest release. Verify that the new crc executable is in use by checking its version:

    $ crc version
  4. Set up the new CodeReady Containers release:

    $ crc setup
  5. Start the new CodeReady Containers virtual machine:

    $ crc start

Chapter 3. Using CodeReady Containers

3.1. Setting up CodeReady Containers

The crc setup command performs operations to set up the environment of your host machine for the CodeReady Containers virtual machine.

This procedure will create the ~/.crc directory if it does not already exist.

Prerequisites

  • On Linux or macOS, ensure that your user account has permission to use the sudo command. On Microsoft Windows, ensure that your user account can elevate to Administrator privileges.
Note
  • Do not run the crc executable as root (or Administrator). Always run the crc executable with your user account.
  • If you are setting up a new version, save any changes made to the virtual machine before setting up the new CodeReady Containers release.

Procedure

  1. Set up your host machine for CodeReady Containers:

    $ crc setup

Consent for telemetry data collection

The crc setup command prompts you for optional, anonymous usage data collection to assist with development. No personally identifiable information is collected.

  • To manually enable telemetry, run the following command:

    $ crc config set consent-telemetry yes
  • To manually disable telemetry, run the following command:

    $ crc config set consent-telemetry no

For more information about collected data, see the Red Hat Telemetry data collection notice.

3.2. Starting the virtual machine

The crc start command starts the CodeReady Containers virtual machine and OpenShift cluster.

Prerequisites

  • To avoid networking-related issues, ensure that you are not connected to a VPN and that your network connection is reliable.
  • You set up the host machine through the crc setup command. For more information, see Setting up CodeReady Containers.
  • On Microsoft Windows, ensure that your user account can elevate to Administrator privileges.
  • You have a valid OpenShift user pull secret. Copy or download the pull secret from the Pull Secret section of the Install on Laptop: Red Hat CodeReady Containers page on cloud.redhat.com.

    Note

    Accessing the user pull secret requires a Red Hat account.

Procedure

  1. Start the CodeReady Containers virtual machine:

    $ crc start
  2. When prompted, supply your user pull secret.
Note
  • The cluster takes a minimum of four minutes to start the necessary containers and Operators before serving a request.
  • If you see errors during crc start, check the Troubleshooting CodeReady Containers section for potential solutions.

3.3. Accessing the OpenShift cluster

Access the OpenShift cluster running in the CodeReady Containers virtual machine through the OpenShift web console or client executable (oc).

3.3.1. Accessing the OpenShift web console

Procedure

To access the OpenShift web console, follow these steps:

  1. Run crc console. This will open your web browser and direct it to the web console.
  2. Log in as the developer user with the password printed in the output of the crc start command.

    Note
    • You can also view the password for the developer and kubeadmin users by running crc console --credentials.
    • You can initially access the cluster through either the kubeadmin or developer user. Use the developer user for creating projects or OpenShift applications and for application deployment. Only use the kubeadmin user for administrative tasks such as creating new users, setting roles, and so on.

See Troubleshooting CodeReady Containers if you cannot access the CodeReady Containers OpenShift cluster.

3.3.2. Accessing the OpenShift cluster with oc

Procedure

To access the OpenShift cluster through the oc command, follow these steps:

  1. Run the crc oc-env command to print the command needed to add the cached oc executable to your PATH:

    $ crc oc-env
  2. Run the printed command.
  3. Log in as the developer user:

    $ oc login -u developer https://api.crc.testing:6443
    Note

    The crc start command prints the password for the developer user. You can also view it by running the crc console --credentials command.

  4. You can now use oc to interact with your OpenShift cluster. For example, to verify that the OpenShift cluster Operators are available, log in as the kubeadmin user and run the following command:

    $ oc config use-context crc-admin
    $ oc whoami
    kubeadmin
    $ oc get co
    Note
    • CodeReady Containers disables the machine-config and monitoring Operators by default.

See Troubleshooting CodeReady Containers if you cannot access the CodeReady Containers OpenShift cluster.
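Steps 1 and 2 above are commonly combined as `eval $(crc oc-env)`. A sketch of the mechanism using a stand-in function, since `crc oc-env` prints an export statement for the current shell (the path below is hypothetical):

```shell
# Stand-in for `crc oc-env`; the real command prints a similar export line.
fake_oc_env() {
  echo 'export PATH="$HOME/.crc/bin/oc:$PATH"'
}

# eval runs the printed statement in the current shell:
eval "$(fake_oc_env)"
case ":$PATH:" in
  *":$HOME/.crc/bin/oc:"*) echo "oc is on PATH" ;;
esac
```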

3.3.3. Accessing the internal OpenShift registry

The OpenShift cluster running in the CodeReady Containers virtual machine includes an internal container image registry by default. This internal container image registry can be used as a publication target for locally developed container images. To access the internal OpenShift registry, follow these steps.

Procedure

  1. Check which user is logged in to the cluster:

    $ oc whoami
    Note

    For demonstration purposes, the current user is assumed to be kubeadmin.

  2. Log in to the registry as that user with its token:

    $ podman login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing --tls-verify=false
  3. Create a new project:

    $ oc new-project demo
  4. Pull an example container image:

    $ podman pull quay.io/libpod/alpine
  5. Tag the image, including namespace details:

    $ podman tag alpine:latest default-route-openshift-image-registry.apps-crc.testing/demo/alpine:latest
  6. Push the container image to the internal registry:

    $ podman push default-route-openshift-image-registry.apps-crc.testing/demo/alpine:latest --tls-verify=false
  7. Get imagestreams and verify that the pushed image is listed:

    $ oc get is
  8. Enable image lookup in the imagestream:

    $ oc set image-lookup alpine

    This setting allows the imagestream to be the source of images without having to provide the full URL to the internal registry.

  9. Create a pod using the recently pushed image:

    $ oc run demo --image=alpine --command -- sleep 600s
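The image reference used in steps 5 and 6 follows a fixed registry/namespace/name:tag shape. A sketch composing it from variables, with values matching the example above:

```shell
# Compose the internal-registry reference: <registry-route>/<project>/<image>:<tag>
REGISTRY="default-route-openshift-image-registry.apps-crc.testing"
PROJECT="demo"
IMAGE="alpine"
TAG="latest"

REF="$REGISTRY/$PROJECT/$IMAGE:$TAG"
echo "$REF"
# podman tag alpine:latest "$REF"          # step 5
# podman push "$REF" --tls-verify=false    # step 6
```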

3.4. Deploying a sample application with odo

You can use odo to create OpenShift projects and applications from the command line. This procedure deploys a sample application to the OpenShift cluster running in the CodeReady Containers virtual machine.

Procedure

To deploy a sample application through odo, follow these steps:

  1. Log in to the running CodeReady Containers OpenShift cluster as the developer user:

    $ odo login -u developer -p developer
  2. Create a project for your application:

    $ odo project create sample-app
  3. Create a directory for your components:

    $ mkdir sample-app
    $ cd sample-app
  4. Create a component from a sample application on GitHub:

    $ odo create nodejs --s2i --git https://github.com/openshift/nodejs-ex
    Note

    Creating a component from a remote Git repository will rebuild the application each time you run the odo push command. To create a component from a local Git repository, see Creating a single-component application with odo in the odo documentation.

  5. Create a URL and add an entry to the local configuration file:

    $ odo url create --port 8080
  6. Push the changes:

    $ odo push

    Your component is now deployed to the cluster with an accessible URL.

  7. List the URLs and check the desired URL for the component:

    $ odo url list
  8. View the deployed application using the generated URL.
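The steps above can be collected into one script. A sketch only: the odo commands are commented out here because they require a running cluster.

```shell
set -e
APP="sample-app"

# Step 3: a directory for the component sources
mkdir -p "$APP"
cd "$APP"
pwd

# Steps 1-2 and 4-7, to be run against a live cluster:
# odo login -u developer -p developer
# odo project create "$APP"
# odo create nodejs --s2i --git https://github.com/openshift/nodejs-ex
# odo url create --port 8080
# odo push
# odo url list
```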

3.5. Stopping the virtual machine

The crc stop command stops the running CodeReady Containers virtual machine and OpenShift cluster. The stopping process will take a few minutes while the cluster shuts down.

Procedure

  • Stop the CodeReady Containers virtual machine and OpenShift cluster:

    $ crc stop

3.6. Deleting the virtual machine

The crc delete command deletes an existing CodeReady Containers virtual machine.

Procedure

  • Delete the CodeReady Containers virtual machine:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the CodeReady Containers virtual machine. Save any desired information stored in the virtual machine before running this command.

Chapter 4. Configuring CodeReady Containers

4.1. About CodeReady Containers configuration

Use the crc config command to configure both the crc executable and the CodeReady Containers virtual machine. The crc config command requires a subcommand to act on the configuration. The available subcommands are get, set, unset, and view. The get, set, and unset subcommands operate on named configurable properties. Run the crc config --help command to list the available properties.

You can also use the crc config command to configure the behavior of the startup checks for the crc start and crc setup commands. By default, startup checks report an error and stop execution when their conditions are not met. Set the value of a property starting with skip-check to true to skip the check.

4.2. Viewing CodeReady Containers configuration

The CodeReady Containers executable provides commands to view configurable properties and the current CodeReady Containers configuration.

Procedure

  • To view the available configurable properties:

    $ crc config --help
  • To view the values for a configurable property:

    $ crc config get <property>
  • To view the complete current configuration:

    $ crc config view
    Note

    The crc config view command does not return any information if the configuration consists of default values.

4.3. Configuring the virtual machine

Use the cpus and memory properties to configure the default number of vCPUs and amount of memory available to the CodeReady Containers virtual machine, respectively.

Alternatively, the number of vCPUs and amount of memory can be assigned using the --cpus and --memory flags to the crc start command, respectively.

Important

You cannot change the configuration of a running CodeReady Containers virtual machine. To enable configuration changes, you must stop the running virtual machine and start it again.

Procedure

  • To configure the number of vCPUs available to the virtual machine:

    $ crc config set cpus <number>

    The default value for the cpus property is 4. The number of vCPUs to assign must be greater than or equal to the default.

  • To start the virtual machine with the desired number of vCPUs:

    $ crc start --cpus <number>
  • To configure the memory available to the virtual machine:

    $ crc config set memory <number-in-mib>
    Note

    Values for available memory are set in mebibytes (MiB). One gibibyte (GiB) of memory is equal to 1024 MiB.

    The default value for the memory property is 9216. The amount of memory to assign must be greater than or equal to the default.

  • To start the virtual machine with the desired amount of memory:

    $ crc start --memory <number-in-mib>
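The MiB value can be computed with shell arithmetic rather than by hand. A quick sketch; 14 GiB is the amount recommended later for cluster monitoring:

```shell
# GiB -> MiB: one GiB is 1024 MiB
GIB=14
MIB=$((GIB * 1024))
echo "$MIB"                      # prints 14336
# crc config set memory "$MIB"   # then apply it (requires crc installed)
```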

Chapter 5. Networking

5.1. DNS configuration details

5.1.1. General DNS setup

The OpenShift cluster managed by CodeReady Containers uses two DNS domain names, crc.testing and apps-crc.testing. The crc.testing domain is for core OpenShift services. The apps-crc.testing domain is for accessing OpenShift applications deployed on the cluster.

For example, the OpenShift API server will be exposed as api.crc.testing while the OpenShift console is accessed through console-openshift-console.apps-crc.testing. These DNS domains are served by a dnsmasq DNS container running inside the CodeReady Containers virtual machine.

Running crc setup will detect and adjust your system DNS configuration so that it can resolve these domains. Additional checks are done to verify DNS is properly configured when running crc start.

5.1.2. Linux

On Linux, depending on your distribution, CodeReady Containers expects the following DNS configuration:

5.1.2.1. NetworkManager + systemd-resolved

This configuration is used by default on Fedora 33 or newer, and on Ubuntu Desktop editions.

  • CodeReady Containers expects NetworkManager to manage networking.
  • CodeReady Containers configures systemd-resolved to forward requests for the testing domain to the 192.168.130.11 DNS server. 192.168.130.11 is the IP of the CodeReady Containers virtual machine.
  • systemd-resolved configuration is done through a NetworkManager dispatcher script in /etc/NetworkManager/dispatcher.d/99-crc.sh:

    #!/bin/sh
    
    export LC_ALL=C
    
    systemd-resolve --interface crc --set-dns 192.168.130.11 --set-domain ~testing
    
    exit 0
Note

systemd-resolved is also available as an unsupported Technology Preview on Red Hat Enterprise Linux and CentOS 8.3. After configuring the host to use systemd-resolved, stop any running clusters and rerun crc setup.

5.1.2.2. NetworkManager + dnsmasq

This configuration is used by default on Fedora 32 or older, on Red Hat Enterprise Linux, and on CentOS.

  • CodeReady Containers expects NetworkManager to manage networking.
  • NetworkManager uses dnsmasq through the /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf configuration file.
  • The configuration file for this dnsmasq instance is /etc/NetworkManager/dnsmasq.d/crc.conf:

    server=/crc.testing/192.168.130.11
    server=/apps-crc.testing/192.168.130.11
    • The NetworkManager dnsmasq instance forwards requests for the crc.testing and apps-crc.testing domains to the 192.168.130.11 DNS server.

5.1.3. macOS

On macOS, CodeReady Containers expects the following DNS configuration:

  • CodeReady Containers creates a /etc/resolver/testing file which instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine.
  • CodeReady Containers also adds an api.crc.testing entry to /etc/hosts pointing at the VM IP address. The oc executable requires this entry. See OpenShift issue #23266 for more information.

5.2. Reserved IP subnets

The CodeReady Containers OpenShift cluster reserves IP subnets for internal use which should not collide with your host network. Ensure that the following IP subnets are available for use:

Reserved IP subnets

  • 10.217.0.0/22
  • 10.217.4.0/23
  • 192.168.126.0/24

In addition, each host hypervisor reserves a further IP subnet depending on the host operating system. On Microsoft Windows, the hypervisor reserves a randomly generated IP subnet that cannot be determined ahead of time. The reserved subnets for Linux and macOS are as follows:

Linux
192.168.130.0/24
macOS
192.168.64.0/24
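A rough conflict check for the fixed /24 subnets above can be done with a shell prefix match. A sketch only: it does not handle the /22 and /23 ranges or the random Windows subnet, and `in_reserved_24` is an illustrative name:

```shell
# True when the address sits in one of the fixed /24 subnets listed above.
in_reserved_24() {
  case "$1" in
    192.168.126.*|192.168.130.*|192.168.64.*) return 0 ;;
    *) return 1 ;;
  esac
}

in_reserved_24 "192.168.130.42" && echo "conflict"
in_reserved_24 "10.0.0.5" || echo "clear"
```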

5.3. Starting CodeReady Containers behind a proxy

Prerequisites

  • To use an existing oc executable on your host machine, include the .testing domain in the no_proxy environment variable.
  • The embedded oc executable does not require manual settings. For more information about using the embedded oc executable, see Accessing the OpenShift cluster with oc.

Procedure

  1. Define a proxy using the http_proxy and https_proxy environment variables or using the crc config set command as follows:

    $ crc config set http-proxy http://proxy.example.com:<port>
    $ crc config set https-proxy http://proxy.example.com:<port>
    $ crc config set no-proxy <comma-separated-no-proxy-entries>
  2. If the proxy uses a custom CA certificate file, set it as follows:

    $ crc config set proxy-ca-file <path-to-custom-ca-file>

The crc executable will be able to use the defined proxy once set through environment variables or CodeReady Containers configuration.

Note
  • Proxy-related values set in the configuration for CodeReady Containers have priority over values set through environment variables.
  • SOCKS proxies are not supported by OpenShift Container Platform.
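The environment-variable alternative mentioned in step 1 looks like the following. The proxy host and port 3128 are placeholders, and `.testing` is included in no_proxy as the prerequisites require:

```shell
# Placeholder proxy endpoint; substitute your proxy's host and port.
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
# Keep cluster traffic off the proxy by excluding the .testing domain:
export no_proxy="localhost,127.0.0.1,.testing"
echo "$no_proxy"
```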

5.4. Setting up CodeReady Containers on a remote server

Follow this procedure to configure a remote server to run a CodeReady Containers OpenShift cluster.

Note
  • It is strongly advised to perform this procedure on a local network. Exposing an insecure server on the internet comes with many security implications.
  • All of the commands in this procedure must be run on the remote server.
  • This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS server.

Procedure

  1. Start the cluster:

    $ crc start

    Ensure that the cluster remains running throughout this procedure.

  2. Install the haproxy package and other utilities:

    $ sudo dnf install haproxy policycoreutils-python-utils
  3. Modify the firewall to allow communication with the cluster:

    $ sudo systemctl start firewalld
    $ sudo firewall-cmd --add-port=80/tcp --permanent
    $ sudo firewall-cmd --add-port=6443/tcp --permanent
    $ sudo firewall-cmd --add-port=443/tcp --permanent
    $ sudo systemctl restart firewalld
  4. For SELinux, allow listening to TCP port 6443:

    $ sudo semanage port -a -t http_port_t -p tcp 6443
  5. Create a backup of the default haproxy configuration:

    $ sudo cp /etc/haproxy/haproxy.cfg{,.bak}
  6. Configure haproxy for use with the cluster:

    $ export CRC_IP=$(crc ip)
    $ sudo tee /etc/haproxy/haproxy.cfg &>/dev/null <<EOF
    global
        log /dev/log local0
    
    defaults
        balance roundrobin
        log global
        maxconn 100
        mode tcp
        timeout connect 5s
        timeout client 500s
        timeout server 500s
    
    listen apps
        bind 0.0.0.0:80
        server crcvm $CRC_IP:80 check
    
    listen apps_ssl
        bind 0.0.0.0:443
        server crcvm $CRC_IP:443 check
    
    listen api
        bind 0.0.0.0:6443
        server crcvm $CRC_IP:6443 check
    EOF
  7. Start the haproxy service:

    $ sudo systemctl start haproxy

5.5. Connecting to a remote CodeReady Containers instance

Follow this procedure to connect a client machine to a remote server running a CodeReady Containers OpenShift cluster.

Note
  • It is strongly advised to connect to a server that is only exposed on your local network.
  • All of the commands in this procedure must be run on the client.
  • This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS client.

Procedure

  1. Install the dnsmasq package:

    $ sudo dnf install dnsmasq
  2. Enable the use of dnsmasq for DNS resolution in NetworkManager:

    $ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
    [main]
    dns=dnsmasq
    EOF
  3. Add DNS entries for CodeReady Containers to the dnsmasq configuration:

    $ sudo tee /etc/NetworkManager/dnsmasq.d/external-crc.conf &>/dev/null <<EOF
    address=/apps-crc.testing/SERVER_IP_ADDRESS
    address=/api.crc.testing/SERVER_IP_ADDRESS
    EOF
    Note

    Comment out any existing entries in /etc/NetworkManager/dnsmasq.d/crc.conf. These entries are created by running a local instance of CodeReady Containers and will conflict with the entries for the remote cluster.

  4. Reload the NetworkManager service:

    $ sudo systemctl reload NetworkManager
  5. Log in to the remote cluster as the developer user with oc:

    $ oc login -u developer -p developer https://api.crc.testing:6443

    The remote OpenShift Web Console is available at https://console-openshift-console.apps-crc.testing.
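The configuration in step 3 can also be generated from a variable instead of editing SERVER_IP_ADDRESS by hand. A sketch: 192.0.2.10 is a documentation placeholder address, and the sudo/tee step is left commented:

```shell
# Placeholder server address (TEST-NET-1); substitute your server's IP.
SERVER_IP="192.0.2.10"

printf 'address=/apps-crc.testing/%s\naddress=/api.crc.testing/%s\n' \
  "$SERVER_IP" "$SERVER_IP"
# Pipe the output through: sudo tee /etc/NetworkManager/dnsmasq.d/external-crc.conf
```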

Chapter 6. Administrative tasks

6.1. Starting Monitoring, Alerting, and Telemetry

To make sure CodeReady Containers can run on a typical laptop, some resource-heavy services are disabled by default. One of these is Prometheus, together with the related monitoring, alerting, and telemetry functionality. Telemetry is responsible for listing your cluster in the Red Hat OpenShift Cluster Manager.

Prerequisites

  • You must assign additional memory to the CodeReady Containers virtual machine. At least 14 GiB of memory (a value of 14336) is recommended for core functionality. Increased workloads will require more memory. For more information, see Configuring the virtual machine.

Procedure

  1. Set the enable-cluster-monitoring configurable property to true:

    $ crc config set enable-cluster-monitoring true
  2. Start the virtual machine:

    $ crc start
    Warning

Cluster monitoring cannot be disabled on a running cluster. To remove monitoring, alerting, and telemetry, set the enable-cluster-monitoring configurable property to false and delete the existing CodeReady Containers virtual machine.

Chapter 7. Troubleshooting Red Hat CodeReady Containers

Note

The goal of Red Hat CodeReady Containers is to deliver an OpenShift environment for development and testing purposes. Issues occurring during installation or usage of specific OpenShift applications are outside of the scope of CodeReady Containers. Report such issues to the relevant project. For example, OpenShift tracks issues on GitHub.

7.1. Getting shell access to the OpenShift cluster

Direct access to the OpenShift cluster is not needed for regular use and is strongly discouraged. To access the cluster for troubleshooting or debugging purposes, follow this procedure.

Procedure

  1. Run oc get nodes. The output will be similar to this:

    $ oc get nodes
    NAME                 STATUS   ROLES           AGE    VERSION
    crc-shdl4-master-0   Ready    master,worker   7d7h   v1.14.6+7e13ab9a7
  2. Run oc debug nodes/<node> where <node> is the name of the node printed in the previous step.

7.2. Troubleshooting expired certificates

Note

As of the CodeReady Containers 1.10.0 release, the certificate renewal process is not working as intended. Follow this procedure to avoid potential errors due to certificate expiration.

The system bundle in each released crc executable expires 30 days after the release. This expiration is due to certificates embedded in the OpenShift cluster. As a result, using an older crc executable or system bundle can result in an expired certificates error.

Starting from CodeReady Containers 1.2.0, the embedded certificates can be automatically renewed by crc. The crc start command triggers the certificate renewal process when needed. Certificate renewal can add up to five minutes to the start time of the cluster.

Procedure

To resolve expired certificate errors that cannot be automatically renewed:

  1. Download the latest CodeReady Containers release and place the crc executable in your $PATH.
  2. Remove the cluster with certificate errors using the crc delete command:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the CodeReady Containers virtual machine. Save any desired information stored in the virtual machine before running this command.

  3. Set up the new release:

    $ crc setup
  4. Start the new virtual machine:

    $ crc start

7.3. Troubleshooting bundle version mismatch

CodeReady Containers virtual machines store bundle information and instance data when they are created. This information is not updated when you set up a new CodeReady Containers release, because the earlier instance data may contain customizations. The mismatch between the new release and the old instance data leads to errors when running the crc start command:

$ crc start
...
FATA Bundle 'crc_hyperkit_4.2.8.crcbundle' was requested, but the existing VM is using
'crc_hyperkit_4.2.2.crcbundle'

Procedure

  1. Issue the crc delete command before attempting to start the instance:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the CodeReady Containers virtual machine. Save any desired information stored in the virtual machine before running this command.

7.4. Troubleshooting unknown issues

Resolve most issues by restarting CodeReady Containers with a clean state. This involves stopping the virtual machine, deleting it, reverting changes made by the crc setup command, reapplying those changes, and restarting the virtual machine.

Procedure

To troubleshoot CodeReady Containers, perform the following steps:

  1. Stop the CodeReady Containers virtual machine:

    $ crc stop
  2. Delete the CodeReady Containers virtual machine:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the CodeReady Containers virtual machine. Save any desired information stored in the virtual machine before running this command.

  3. Clean up remaining changes from the crc setup command:

    $ crc cleanup
    Note

    The crc cleanup command removes an existing CodeReady Containers virtual machine and reverts changes to DNS entries created by the crc setup command. On macOS, the crc cleanup command also removes the system tray.

  4. Set up your host machine to reapply the changes:

    $ crc setup
  5. Start the CodeReady Containers virtual machine:

    $ crc start
    Note

    The cluster takes a minimum of four minutes to start the necessary containers and Operators before serving a request.

If your issue is not resolved by this procedure, perform the following steps:

  1. Search open issues for the issue that you are encountering.
  2. If no existing issue addresses the encountered issue, create an issue and attach the ~/.crc/crc.log file to the created issue. The ~/.crc/crc.log file has detailed debugging and troubleshooting information which can help diagnose the problem that you are experiencing.

Legal Notice

Copyright © 2021 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.