Getting Started Guide

Red Hat OpenShift Local 2.2

Quick-start guide to using and developing with Red Hat OpenShift Local

Red Hat Developer Group Documentation Team

Abstract

This guide shows how to get up to speed using Red Hat OpenShift Local. The included instructions and examples guide you through the first steps of developing containerized applications using Red Hat OpenShift Container Platform 4 from a host workstation (Microsoft Windows, macOS, or Red Hat Enterprise Linux).

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Introducing Red Hat OpenShift Local

1.1. About Red Hat OpenShift Local

Red Hat OpenShift Local brings a minimal OpenShift Container Platform 4 cluster and Podman container runtime to your local computer. These runtimes provide minimal environments for development and testing purposes. Red Hat OpenShift Local is mainly targeted at running on developers' desktops. For other OpenShift Container Platform use cases, such as headless or multi-developer setups, use the full OpenShift installer.

See the OpenShift Container Platform documentation for a full introduction to OpenShift Container Platform.

Red Hat OpenShift Local includes the crc command-line interface (CLI) to interact with the Red Hat OpenShift Local instance using the desired container runtime.

1.2. Differences from a production OpenShift Container Platform installation

The OpenShift preset for Red Hat OpenShift Local provides a regular OpenShift Container Platform installation with the following notable differences:

  • The OpenShift Container Platform cluster is ephemeral and is not intended for production use.
  • Red Hat OpenShift Local does not have a supported upgrade path to newer OpenShift Container Platform versions. Upgrading the OpenShift Container Platform version may cause issues that are difficult to reproduce.
  • It uses a single node, which behaves as both a control plane and a worker node.
  • It disables the Cluster Monitoring Operator by default. With this Operator disabled, the corresponding part of the web console is non-functional.
  • The OpenShift Container Platform cluster runs in a virtual machine known as an instance. This may cause other differences, particularly with external networking.

The OpenShift Container Platform cluster provided by Red Hat OpenShift Local also includes the following non-customizable cluster settings. These settings should not be modified:

  • Use of the *.crc.testing domain.
  • The address range used for internal cluster communication.

    • The cluster uses an internal 172.x.x.x address range. This can cause issues when, for example, a proxy is run in the same address space.

Chapter 2. Installation

2.1. Minimum system requirements

Red Hat OpenShift Local has the following minimum hardware and operating system requirements.

2.1.1. Hardware requirements

Red Hat OpenShift Local is supported only on AMD64 and Intel 64 processor architectures. Red Hat OpenShift Local does not support the ARM-based M1 architecture. Red Hat OpenShift Local does not support nested virtualization.

Depending on the desired container runtime, Red Hat OpenShift Local requires the following system resources:

2.1.1.1. For OpenShift Container Platform

  • 4 physical CPU cores
  • 9 GB of free memory
  • 35 GB of storage space
Note

The OpenShift Container Platform cluster requires these minimum resources to run in the Red Hat OpenShift Local instance. Some workloads may require more resources. To assign more resources to the Red Hat OpenShift Local instance, see Configuring the instance.

2.1.1.2. For the Podman container runtime

  • 2 physical CPU cores
  • 2 GB of free memory
  • 35 GB of storage space

2.1.2. Operating system requirements

Red Hat OpenShift Local requires the following minimum version of a supported operating system:

2.1.2.1. Microsoft Windows

  • On Microsoft Windows, Red Hat OpenShift Local requires the Windows 10 Fall Creators Update (version 1709) or later. Red Hat OpenShift Local does not work on earlier versions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.

2.1.2.2. macOS

  • On macOS, Red Hat OpenShift Local requires macOS 10.14 Mojave or later. Red Hat OpenShift Local does not work on earlier versions of macOS.

2.1.2.3. Linux

  • On Linux, Red Hat OpenShift Local is supported only on Red Hat Enterprise Linux/CentOS 7.5 or later (including 8.x versions) and on the latest two stable Fedora releases.
  • When using Red Hat Enterprise Linux, the machine running Red Hat OpenShift Local must be registered with the Red Hat Customer Portal.
  • Ubuntu 18.04 LTS or later and Debian 10 or later are not supported and may require manual setup of the host machine.
  • See Required software packages to install the required packages for your Linux distribution.

2.2. Required software packages for Linux

Red Hat OpenShift Local requires the libvirt and NetworkManager packages to run on Linux. Consult the following table to find the command used to install these packages for your Linux distribution:

Table 2.1. Package installation commands by distribution

  Linux Distribution                Installation command
  Fedora                            sudo dnf install NetworkManager
  Red Hat Enterprise Linux/CentOS   su -c 'yum install NetworkManager'
  Debian/Ubuntu                     sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

2.3. Installing Red Hat OpenShift Local

Red Hat OpenShift Local is available as a portable executable for Red Hat Enterprise Linux. On Microsoft Windows and macOS, Red Hat OpenShift Local is available using a guided installer.

Prerequisites

Procedure

  1. Download the latest release of Red Hat OpenShift Local for your platform.
  2. On Microsoft Windows, extract the contents of the archive.
  3. On macOS or Microsoft Windows, run the guided installer and follow the instructions.

    Note

    On Microsoft Windows, you must install Red Hat OpenShift Local to your local C:\ drive. You cannot run Red Hat OpenShift Local from a network drive.

    On Red Hat Enterprise Linux, assuming the archive is in the ~/Downloads directory, follow these steps:

    1. Extract the contents of the archive:

      $ cd ~/Downloads
      $ tar xvf crc-linux-amd64.tar.xz
    2. Create the ~/bin directory if it does not exist and copy the crc executable to it:

      $ mkdir -p ~/bin
      $ cp ~/Downloads/crc-linux-*-amd64/crc ~/bin
    3. Add the ~/bin directory to your $PATH:

      $ export PATH=$PATH:$HOME/bin
      $ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
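
After completing these steps, you can verify that the crc executable is found on your $PATH and check its version (a minimal verification using the crc version command referenced later in this guide):

    $ crc version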

2.4. About usage data collection

Red Hat OpenShift Local prompts you before use for optional, anonymous usage data collection to assist with development. No personally identifiable information is collected. You can grant or revoke consent for usage data collection at any time.

Additional resources

2.5. Configuring usage data collection

You can grant or revoke consent for usage data collection at any time by using the following configuration commands.

Note

Changes to telemetry consent do not modify a running instance. The change takes effect the next time you run the crc start command.

Procedure

  • To manually enable telemetry, run the following command:

    $ crc config set consent-telemetry yes
  • To manually disable telemetry, run the following command:

    $ crc config set consent-telemetry no
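
You can check the current consent setting at any time using the get subcommand described in Configuring Red Hat OpenShift Local:

    $ crc config get consent-telemetry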

Additional resources

2.6. Upgrading Red Hat OpenShift Local

Newer versions of the Red Hat OpenShift Local executable require manual setup to prevent potential incompatibilities with earlier versions.

Procedure

  1. Download the latest release of Red Hat OpenShift Local.
  2. Delete the existing Red Hat OpenShift Local instance:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the Red Hat OpenShift Local instance. Save any desired information stored in the instance before running this command.

  3. Replace the earlier crc executable with the executable of the latest release. Verify that the new crc executable is in use by checking its version:

    $ crc version
  4. Set up the new Red Hat OpenShift Local release:

    $ crc setup
  5. Start the new Red Hat OpenShift Local instance:

    $ crc start

Chapter 3. Using Red Hat OpenShift Local

3.1. About presets

Red Hat OpenShift Local presets represent a managed container runtime and the lower bounds of system resources required by the instance to run it. Red Hat OpenShift Local offers presets for OpenShift Container Platform and the Podman container runtime.

On Microsoft Windows and macOS, the Red Hat OpenShift Local guided installer prompts you for your desired preset. On Linux, the OpenShift Container Platform preset is selected by default. You can change this selection using the crc config command before running the crc setup command. You can change your selected preset from the system tray on Microsoft Windows and macOS or from the command line on all supported operating systems. Only one preset can be active at a time.
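
For example, to check which preset is currently selected from the command line, you can query the preset configuration property (a usage sketch based on the crc config get subcommand described in Configuring Red Hat OpenShift Local):

    $ crc config get preset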

Additional resources

3.2. Setting up Red Hat OpenShift Local

The crc setup command performs operations to set up the environment of your host machine for the Red Hat OpenShift Local instance.

The crc setup command creates the ~/.crc directory if it does not already exist.

Warning

Before setting up a new Red Hat OpenShift Local release, capture any changes made to the existing instance that you want to keep.

Prerequisites

  • On Linux or macOS, ensure that your user account has permission to use the sudo command. On Microsoft Windows, ensure that your user account can elevate to Administrator privileges.
Note

Do not run the crc executable as the root user or an administrator. Always run the crc executable with your user account.

Procedure

  1. (Optional) On Linux, the OpenShift Container Platform preset is selected by default. To select the Podman container runtime preset instead:

    $ crc config set preset podman
  2. Set up your host machine for Red Hat OpenShift Local:

    $ crc setup

Additional resources

  • For more information about the available container runtime presets, see About presets.

3.3. Starting the instance

The crc start command starts the Red Hat OpenShift Local instance and configured container runtime.

Prerequisites

  • To avoid networking-related issues, ensure that you are not connected to a VPN and that your network connection is reliable.
  • You set up the host machine using the crc setup command. For more information, see Setting up Red Hat OpenShift Local.
  • On Microsoft Windows, ensure that your user account can elevate to Administrator privileges.
  • For the OpenShift preset, ensure that you have a valid OpenShift user pull secret. Copy or download the pull secret from the Pull Secret section of the Red Hat OpenShift Local page on the Red Hat Hybrid Cloud Console.

    Note

    Accessing the user pull secret requires a Red Hat account.

Procedure

  1. Start the Red Hat OpenShift Local instance:

    $ crc start
  2. For the OpenShift preset, supply your user pull secret when prompted.

    Note

    The cluster takes a minimum of four minutes to start the necessary containers and Operators before serving a request.
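
To confirm that the instance and cluster are running, you can check the overall status (a minimal check; the crc status command is part of the same crc CLI, and its output depends on the selected preset):

    $ crc status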

Additional resources

3.4. Accessing the OpenShift cluster

Access the OpenShift Container Platform cluster running in the Red Hat OpenShift Local instance by using the OpenShift Container Platform web console or OpenShift CLI (oc).

3.4.1. Accessing the OpenShift web console

Access the OpenShift Container Platform web console by using your web browser.

Access the cluster by using either the kubeadmin or developer user. Use the developer user for creating projects or OpenShift applications and for application deployment. Use the kubeadmin user only for administrative tasks such as creating new users or setting roles.

Prerequisites

Procedure

  1. To access the OpenShift Container Platform web console with your default web browser, run the following command:

    $ crc console
  2. Log in as the developer user with the password printed in the output of the crc start command. You can also view the password for the developer and kubeadmin users by running the following command:

    $ crc console --credentials
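
If you only need the web console URL, for example to open it in a different browser, recent crc releases can print it instead of launching a browser (a usage sketch; check crc console --help if the flag is unavailable in your version):

    $ crc console --url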

See Troubleshooting Red Hat OpenShift Local if you cannot access the OpenShift Container Platform cluster managed by Red Hat OpenShift Local.

Additional resources

3.4.2. Accessing the OpenShift cluster with the OpenShift CLI

Access the OpenShift Container Platform cluster managed by Red Hat OpenShift Local by using the OpenShift CLI (oc).

Prerequisites

Procedure

  1. Run the crc oc-env command to print the command needed to add the cached oc executable to your $PATH:

    $ crc oc-env
  2. Run the printed command.
  3. Log in as the developer user:

    $ oc login -u developer https://api.crc.testing:6443
    Note

    The crc start command prints the password for the developer user. You can also view it by running the crc console --credentials command.

  4. You can now use oc to interact with your OpenShift Container Platform cluster. For example, to verify that the OpenShift Container Platform cluster Operators are available, log in as the kubeadmin user and run the following command:

    $ oc config use-context crc-admin
    $ oc whoami
    kubeadmin
    $ oc get co
    Note

    Red Hat OpenShift Local disables the Cluster Monitoring Operator by default.

See Troubleshooting Red Hat OpenShift Local if you cannot access the OpenShift Container Platform cluster managed by Red Hat OpenShift Local.

Additional resources

3.4.3. Accessing the internal OpenShift registry

The OpenShift Container Platform cluster running in the Red Hat OpenShift Local instance includes an internal container image registry by default. This internal container image registry can be used as a publication target for locally developed container images. To access the internal OpenShift Container Platform registry, follow these steps.

Prerequisites

Procedure

  1. Check which user is logged in to the cluster:

    $ oc whoami
    Note

    For demonstration purposes, the current user is assumed to be kubeadmin.

  2. Log in to the registry as that user with its token:

    $ podman login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing --tls-verify=false
  3. Create a new project:

    $ oc new-project demo
  4. Pull an example container image:

    $ podman pull quay.io/libpod/alpine
  5. Tag the image, including namespace details:

    $ podman tag alpine:latest default-route-openshift-image-registry.apps-crc.testing/demo/alpine:latest
  6. Push the container image to the internal registry:

    $ podman push default-route-openshift-image-registry.apps-crc.testing/demo/alpine:latest --tls-verify=false
  7. Get imagestreams and verify that the pushed image is listed:

    $ oc get is
  8. Enable image lookup in the imagestream:

    $ oc set image-lookup alpine

    This setting allows the imagestream to be the source of images without having to provide the full URL to the internal registry.

  9. Create a pod using the recently pushed image:

    $ oc run demo --image=alpine --command -- sleep 600s
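
To confirm that the pod created from the pushed image is running, check its status (a minimal verification using standard oc commands; the demo pod name comes from the previous step):

    $ oc get pods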

3.5. Deploying a sample application with odo

You can use odo to create OpenShift projects and applications from the command line. This procedure deploys a sample application to the OpenShift Container Platform cluster running in the Red Hat OpenShift Local instance.

Prerequisites

  • You have installed odo. For more information, see Installing odo in the odo documentation.
  • Red Hat OpenShift Local is configured to use the OpenShift preset. For more information, see Changing the selected preset.
  • The Red Hat OpenShift Local instance is running. For more information, see Starting the instance.

Procedure

  1. Log in to the running OpenShift Container Platform cluster managed by Red Hat OpenShift Local as the developer user:

    $ odo login -u developer -p developer
  2. Create a project for your application:

    $ odo project create sample-app
  3. Create a directory for your components:

    $ mkdir sample-app
    $ cd sample-app
  4. Create a component from a sample application on GitHub:

    $ odo create nodejs --s2i --git https://github.com/openshift/nodejs-ex
    Note

    Creating a component from a remote Git repository will rebuild the application each time you run the odo push command. To create a component from a local Git repository, see Creating a single-component application with odo in the odo documentation.

  5. Create a URL and add an entry to the local configuration file:

    $ odo url create --port 8080
  6. Push the changes:

    $ odo push

    Your component is now deployed to the cluster with an accessible URL.

  7. List the URLs and check the desired URL for the component:

    $ odo url list
  8. View the deployed application using the generated URL.
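
To review the state of the deployed component at any time, you can list the components in the project (a usage sketch; the odo list command is part of the same odo CLI used above):

    $ odo list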

Additional resources

3.6. Stopping the instance

The crc stop command stops the running Red Hat OpenShift Local instance and container runtime. The stopping process takes a few minutes while the cluster shuts down.

Procedure

  • Stop the Red Hat OpenShift Local instance and container runtime:

    $ crc stop

3.7. Deleting the instance

The crc delete command deletes an existing Red Hat OpenShift Local instance.

Procedure

  • Delete the Red Hat OpenShift Local instance:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the Red Hat OpenShift Local instance. Save any desired information stored in the instance before running this command.

Chapter 4. Configuring Red Hat OpenShift Local

4.1. About Red Hat OpenShift Local configuration

Use the crc config command to configure both the crc executable and the Red Hat OpenShift Local instance. The crc config command requires a subcommand to act on the configuration. The available subcommands are get, set, unset, and view. The get, set, and unset subcommands operate on named configurable properties. Run the crc config --help command to list the available properties.

You can also use the crc config command to configure the behavior of the startup checks for the crc start and crc setup commands. By default, startup checks report an error and stop execution when their conditions are not met. Set the value of a property starting with skip-check to true to skip the check.
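
For example, to skip a single startup check, set the corresponding property to true (a sketch using a placeholder property name; list the actual property names with crc config --help):

    $ crc config set skip-check-<name> true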

4.2. Viewing Red Hat OpenShift Local configuration

The Red Hat OpenShift Local executable provides commands to view configurable properties and the current Red Hat OpenShift Local configuration.

Procedure

  • To view the available configurable properties:

    $ crc config --help
  • To view the values for a configurable property:

    $ crc config get <property>
  • To view the complete current configuration:

    $ crc config view
    Note

    The crc config view command does not return any information if the configuration consists of default values.

4.3. Changing the selected preset

You can change the container runtime used for the Red Hat OpenShift Local instance by selecting the desired preset.

On Microsoft Windows and macOS, you can change the selected preset using the system tray or command line interface. On Linux, use the command line interface.

Important

You cannot change the preset of an existing Red Hat OpenShift Local instance. Preset changes are only applied when a Red Hat OpenShift Local instance is created. To enable preset changes, you must delete the existing instance and start a new one.

Procedure

  • Change the selected preset from the command line:

    $ crc config set preset <name>

    Valid preset names are openshift for OpenShift Container Platform and podman for the Podman container runtime.
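
Because preset changes only apply to newly created instances, a typical preset change combines commands described elsewhere in this guide (a sketch; the crc delete command discards all data stored in the existing instance):

    $ crc delete
    $ crc config set preset podman
    $ crc setup
    $ crc start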

Additional resources

4.4. Configuring the instance

Use the cpus and memory properties to configure the default number of vCPUs and amount of memory available to the Red Hat OpenShift Local instance, respectively.

Alternatively, the number of vCPUs and amount of memory can be assigned using the --cpus and --memory flags to the crc start command, respectively.

Important

You cannot change the configuration of a running Red Hat OpenShift Local instance. To enable configuration changes, you must stop the running instance and start it again.

Procedure

  • To configure the number of vCPUs available to the instance:

    $ crc config set cpus <number>

    The default value for the cpus property is 4. The number of vCPUs to assign must be greater than or equal to the default.

  • To start the instance with the desired number of vCPUs:

    $ crc start --cpus <number>
  • To configure the memory available to the instance:

    $ crc config set memory <number-in-mib>
    Note

    Values for available memory are set in mebibytes (MiB). One gibibyte (GiB) of memory is equal to 1024 MiB.

    The default value for the memory property is 9216. The amount of memory to assign must be greater than or equal to the default.

  • To start the instance with the desired amount of memory:

    $ crc start --memory <number-in-mib>
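
For example, to run the instance with 6 vCPUs and 12 GiB of memory (illustrative values; both must be greater than or equal to the defaults noted above):

    $ crc config set cpus 6
    $ crc config set memory 12288
    $ crc start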

Chapter 5. Networking

5.1. DNS configuration details

5.1.1. General DNS setup

The OpenShift Container Platform cluster managed by Red Hat OpenShift Local uses two DNS domain names, crc.testing and apps-crc.testing. The crc.testing domain is for core OpenShift Container Platform services. The apps-crc.testing domain is for accessing OpenShift applications deployed on the cluster.

For example, the OpenShift Container Platform API server is exposed as api.crc.testing while the OpenShift Container Platform console is accessed as console-openshift-console.apps-crc.testing. These DNS domains are served by a dnsmasq DNS container running inside the Red Hat OpenShift Local instance.

The crc setup command detects and adjusts your system DNS configuration so that it can resolve these domains. Additional checks are done to verify DNS is properly configured when running crc start.
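
After crc setup and crc start complete, you can confirm that the cluster domains resolve to the instance (a minimal check using a standard DNS lookup utility, if one is installed on your host; the reported address depends on your platform):

    $ host api.crc.testing
    $ host console-openshift-console.apps-crc.testing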

5.1.2. Linux

On Linux, depending on your distribution, Red Hat OpenShift Local expects the following DNS configuration:

5.1.2.1. NetworkManager + systemd-resolved

This configuration is used by default on Fedora 33 or newer, and on Ubuntu Desktop editions.

  • Red Hat OpenShift Local expects NetworkManager to manage networking.
  • Red Hat OpenShift Local configures systemd-resolved to forward requests for the testing domain to the 192.168.130.11 DNS server. 192.168.130.11 is the IP of the Red Hat OpenShift Local instance.
  • systemd-resolved configuration is done with a NetworkManager dispatcher script in /etc/NetworkManager/dispatcher.d/99-crc.sh:

    #!/bin/sh
    
    export LC_ALL=C
    
    systemd-resolve --interface crc --set-dns 192.168.130.11 --set-domain ~testing
    
    exit 0
Note

systemd-resolved is also available as an unsupported Technology Preview on Red Hat Enterprise Linux and CentOS 8.3. After configuring the host to use systemd-resolved, stop any running clusters and rerun crc setup.

5.1.2.2. NetworkManager + dnsmasq

This configuration is used by default on Fedora 32 or older, on Red Hat Enterprise Linux, and on CentOS.

  • Red Hat OpenShift Local expects NetworkManager to manage networking.
  • NetworkManager uses dnsmasq with the /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf configuration file.
  • The configuration file for this dnsmasq instance is /etc/NetworkManager/dnsmasq.d/crc.conf:

    server=/crc.testing/192.168.130.11
    server=/apps-crc.testing/192.168.130.11
    • The NetworkManager dnsmasq instance forwards requests for the crc.testing and apps-crc.testing domains to the 192.168.130.11 DNS server.

5.2. Reserved IP subnets

The OpenShift Container Platform cluster managed by Red Hat OpenShift Local reserves IP subnets for internal use which should not collide with your host network. Ensure that the following IP subnets are available for use:

Reserved IP subnets

  • 10.217.0.0/22
  • 10.217.4.0/23
  • 192.168.126.0/24

Additionally, the host hypervisor may reserve another IP subnet depending on the host operating system. On Microsoft Windows, the hypervisor reserves a randomly generated IP subnet that cannot be determined ahead-of-time. No additional subnet is reserved on macOS. The additional reserved subnet for Linux is 192.168.130.0/24.
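
On Linux, one way to check whether any of these subnets are already in use on your host is to inspect the routing table (a sketch using standard tooling; adapt the pattern to the subnets listed above):

    $ ip route | grep -E '10\.217\.|192\.168\.126\.'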

5.3. Starting Red Hat OpenShift Local behind a proxy

You can start Red Hat OpenShift Local behind a defined proxy using environment variables or configurable properties.

Note

SOCKS proxies are not supported by OpenShift Container Platform.

Prerequisites

  • To use an existing OpenShift CLI (oc) executable on your host machine, export the .testing domain as part of the no_proxy environment variable. The embedded oc executable does not require manual settings. For more information about using the embedded oc executable, see Accessing the OpenShift cluster with the OpenShift CLI.

Procedure

  1. Define a proxy using the http_proxy and https_proxy environment variables or using the crc config set command as follows:

    $ crc config set http-proxy http://proxy.example.com:<port>
    $ crc config set https-proxy http://proxy.example.com:<port>
    $ crc config set no-proxy <comma-separated-no-proxy-entries>
  2. If the proxy uses a custom CA certificate file, set it as follows:

    $ crc config set proxy-ca-file <path-to-custom-ca-file>
Note

Proxy-related values set in the configuration for Red Hat OpenShift Local have priority over values set with environment variables.
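
As an alternative to the configuration properties, you can export the proxy environment variables before running crc start (a sketch mirroring the configuration above; include the .testing domain in no_proxy when using a host oc executable, as noted in the prerequisites):

    $ export http_proxy=http://proxy.example.com:<port>
    $ export https_proxy=http://proxy.example.com:<port>
    $ export no_proxy=.testing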

5.4. Setting up Red Hat OpenShift Local on a remote server

Configure a remote server to run an OpenShift Container Platform cluster provided by Red Hat OpenShift Local.

This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS server. Run every command in this procedure on the remote server.

Warning

Perform this procedure only on a local network. Exposing an insecure server on the internet has many security implications.

Prerequisites

Procedure

  1. Start the cluster:

    $ crc start

    Ensure that the cluster remains running during this procedure.

  2. Install the haproxy package and other utilities:

    $ sudo dnf install haproxy /usr/sbin/semanage
  3. Modify the firewall to allow communication with the cluster:

    $ sudo systemctl enable --now firewalld
    $ sudo firewall-cmd --add-service=http --permanent
    $ sudo firewall-cmd --add-service=https --permanent
    $ sudo firewall-cmd --add-service=kube-apiserver --permanent
    $ sudo firewall-cmd --reload
  4. For SELinux, allow HAProxy to listen on TCP port 6443 to serve kube-apiserver on this port:

    $ sudo semanage port -a -t http_port_t -p tcp 6443
  5. Create a backup of the default haproxy configuration:

    $ sudo cp /etc/haproxy/haproxy.cfg{,.bak}
  6. Configure haproxy for use with the cluster:

    $ export CRC_IP=$(crc ip)
    $ sudo tee /etc/haproxy/haproxy.cfg &>/dev/null <<EOF
    global
        log /dev/log local0
    
    defaults
        balance roundrobin
        log global
        maxconn 100
        mode tcp
        timeout connect 5s
        timeout client 500s
        timeout server 500s
    
    listen apps
        bind 0.0.0.0:80
        server crcvm $CRC_IP:80 check
    
    listen apps_ssl
        bind 0.0.0.0:443
        server crcvm $CRC_IP:443 check
    
    listen api
        bind 0.0.0.0:6443
        server crcvm $CRC_IP:6443 check
    EOF
  7. Start the haproxy service:

    $ sudo systemctl start haproxy
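
To verify that the haproxy service started successfully and is listening on the configured ports, check its status (a minimal check using standard systemd tooling):

    $ sudo systemctl status haproxy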

5.5. Connecting to a remote Red Hat OpenShift Local instance

Use dnsmasq to connect a client machine to a remote server running an OpenShift Container Platform cluster managed by Red Hat OpenShift Local.

This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS client. Run every command in this procedure on the client.

Important

Connect to a server that is only exposed on your local network.

Prerequisites

Procedure

  1. Install the dnsmasq package:

    $ sudo dnf install dnsmasq
  2. Enable the use of dnsmasq for DNS resolution in NetworkManager:

    $ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
    [main]
    dns=dnsmasq
    EOF
  3. Add DNS entries for Red Hat OpenShift Local to the dnsmasq configuration:

    $ sudo tee /etc/NetworkManager/dnsmasq.d/external-crc.conf &>/dev/null <<EOF
    address=/apps-crc.testing/SERVER_IP_ADDRESS
    address=/api.crc.testing/SERVER_IP_ADDRESS
    EOF
    Note

    Comment out any existing entries in /etc/NetworkManager/dnsmasq.d/crc.conf. These entries are created by running a local instance of Red Hat OpenShift Local and will conflict with the entries for the remote cluster.

  4. Reload the NetworkManager service:

    $ sudo systemctl reload NetworkManager
  5. Log in to the remote cluster as the developer user with oc:

    $ oc login -u developer -p developer https://api.crc.testing:6443

    The remote OpenShift Container Platform web console is available at https://console-openshift-console.apps-crc.testing.

Chapter 6. Administrative tasks

6.1. Starting monitoring

Red Hat OpenShift Local disables cluster monitoring by default to ensure that Red Hat OpenShift Local can run on a typical notebook. Monitoring is responsible for listing your cluster in the Red Hat Hybrid Cloud Console. Follow this procedure to enable monitoring for your cluster.

Prerequisites

  • You must assign additional memory to the Red Hat OpenShift Local instance. At least 14 GiB of memory, a value of 14336, is recommended for core functionality. Increased workloads will require more memory. For more information, see Configuring the instance.

Procedure

  1. Set the enable-cluster-monitoring configurable property to true:

    $ crc config set enable-cluster-monitoring true
  2. Start the instance:

    $ crc start
    Warning

    Cluster monitoring cannot be disabled. To remove monitoring, set the enable-cluster-monitoring configurable property to false and delete the existing Red Hat OpenShift Local instance.
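
Once the cluster is running with monitoring enabled, you can confirm that the monitoring components are starting (a minimal check using the oc CLI with kubeadmin access, as described in Accessing the OpenShift cluster):

    $ oc get pods -n openshift-monitoring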

Chapter 7. Troubleshooting Red Hat OpenShift Local

Note

The goal of Red Hat OpenShift Local is to deliver an OpenShift Container Platform environment for development and testing purposes. Issues occurring during installation or usage of specific OpenShift applications are outside of the scope of Red Hat OpenShift Local. Report such issues to the relevant project.

7.1. Getting shell access to the OpenShift cluster

To access the cluster for troubleshooting or debugging purposes, follow this procedure.

Note

Direct access to the OpenShift Container Platform cluster is not needed for regular use and is strongly discouraged.

Prerequisites

Procedure

  1. Run the oc get nodes command to identify the desired node. The output will be similar to this:

    $ oc get nodes
    NAME                 STATUS   ROLES           AGE    VERSION
    crc-shdl4-master-0   Ready    master,worker   7d7h   v1.14.6+7e13ab9a7
  2. Run oc debug nodes/<node> where <node> is the name of the node printed in the previous step.
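
For example, using the node name from the sample output above, you would start the debug session as follows (a sketch; your node name will differ):

    $ oc debug nodes/crc-shdl4-master-0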

7.2. Troubleshooting expired certificates

The system bundle in each released crc executable expires 30 days after the release. This expiration is due to certificates embedded in the OpenShift Container Platform cluster. The crc start command triggers an automatic certificate renewal process when needed. Certificate renewal can add up to five minutes to the start time of the cluster.

To avoid this additional startup time, or in case of failures in the certificate renewal process, use the following procedure:

Procedure

To resolve expired certificate errors that cannot be automatically renewed:

  1. Download the latest Red Hat OpenShift Local release and place the crc executable in your $PATH.
  2. Remove the cluster with certificate errors using the crc delete command:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the Red Hat OpenShift Local instance. Save any desired information stored in the instance before running this command.

  3. Set up the new release:

    $ crc setup
  4. Start the new instance:

    $ crc start

7.3. Troubleshooting bundle version mismatch

Created Red Hat OpenShift Local instances contain bundle information and instance data. Because the instance data may include customizations, it is not updated when you set up a new Red Hat OpenShift Local release. This version mismatch leads to errors when running the crc start command:

$ crc start
...
FATA Bundle 'crc_hyperkit_4.2.8.crcbundle' was requested, but the existing VM is using
'crc_hyperkit_4.2.2.crcbundle'

Procedure

  1. Issue the crc delete command before attempting to start the instance:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the Red Hat OpenShift Local instance. Save any desired information stored in the instance before running this command.

7.4. Troubleshooting unknown issues

Resolve most issues by restarting Red Hat OpenShift Local with a clean state. This involves stopping the instance, deleting it, reverting changes made by the crc setup command, reapplying those changes, and restarting the instance.

Prerequisites

  • You set up the host machine with the crc setup command. For more information, see Setting up Red Hat OpenShift Local.
  • You started Red Hat OpenShift Local with the crc start command. For more information, see Starting the instance.
  • You are using the latest Red Hat OpenShift Local release. Using a version earlier than Red Hat OpenShift Local 1.2.0 may result in errors related to expired x509 certificates. For more information, see Troubleshooting expired certificates.

Procedure

To troubleshoot Red Hat OpenShift Local, perform the following steps:

  1. Stop the Red Hat OpenShift Local instance:

    $ crc stop
  2. Delete the Red Hat OpenShift Local instance:

    $ crc delete
    Warning

    The crc delete command results in the loss of data stored in the Red Hat OpenShift Local instance. Save any desired information stored in the instance before running this command.

  3. Clean up remaining changes from the crc setup command:

    $ crc cleanup
    Note

    The crc cleanup command removes an existing Red Hat OpenShift Local instance and reverts changes to DNS entries created by the crc setup command. On macOS, the crc cleanup command also removes the system tray.

  4. Set up your host machine to reapply the changes:

    $ crc setup
  5. Start the Red Hat OpenShift Local instance:

    $ crc start
    Note

    The cluster takes a minimum of four minutes to start the necessary containers and Operators before serving a request.

If your issue is not resolved by this procedure, perform the following steps:

  1. Search open issues for the issue that you are encountering.
  2. If no existing issue addresses the encountered issue, create an issue and attach the ~/.crc/crc.log file to the created issue. The ~/.crc/crc.log file has detailed debugging and troubleshooting information which can help diagnose the problem that you are experiencing.

Legal Notice

Copyright © 2022 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.