Installation Guide
Installing Red Hat CodeReady Workspaces 2.1
Supriya Takkhi
Robert Kratky
rkratky@redhat.com
Michal Maléř
mmaler@redhat.com
Fabrice Flore-Thébault
ffloreth@redhat.com
Yana Hontyk
yhontyk@redhat.com
devtools-docs@redhat.com
Chapter 1. Installing CodeReady Workspaces on OpenShift Container Platform 4
1.1. Installing CodeReady Workspaces on OpenShift 4 from OperatorHub
Operators are a method of packaging, deploying, and managing an OpenShift application. They also provide the following:
- Repeatability of installation and upgrade.
- Constant health checks of every system component.
- Over-the-air (OTA) updates for OpenShift components and ISV content.
- A place to encapsulate knowledge from field engineers and spread it to all users.
On OpenShift, Red Hat CodeReady Workspaces can be installed using the OperatorHub Catalog present in the OpenShift web console.
It is possible to use the crwctl utility script for deploying CodeReady Workspaces on OpenShift Container Platform and OpenShift Dedicated version 4.4. This method is unofficial and serves as a backup installation method for situations where the installation method using OperatorHub is not available.
For information about how to use the crwctl utility script for deploying CodeReady Workspaces on OpenShift, see the Installing CodeReady Workspaces on OpenShift 3 using the Operator section.
The following steps are described:
- Section 1.1.1, “Creating the CodeReady Workspaces project in OpenShift 4 web console”.
- Section 1.1.2, “Installing the CodeReady Workspaces Operator in OpenShift 4 web console”.
- Section 1.1.3, “Installing CodeReady Workspaces using the CodeReady Workspaces Operator in OpenShift 4 web console”.
- Section 1.1.4, “Viewing the state of the CodeReady Workspaces instance deployment in OpenShift 4 web console”.
- Section 1.1.5, “Finding CodeReady Workspaces instance URL in OpenShift 4 web console”.
- Section 1.1.6, “Viewing the state of the CodeReady Workspaces cluster deployment using OpenShift 4 CLI tools”.
- Section 1.1.7, “Finding CodeReady Workspaces cluster URL using the OpenShift 4 CLI”.
- Section 1.1.8, “Enabling SSL on OpenShift 4”.
- Section 1.1.9, “Logging in to CodeReady Workspaces on OpenShift for the first time using OAuth”.
- Section 1.1.10, “Logging in to CodeReady Workspaces on OpenShift for the first time registering as a new user”.
1.1.1. Creating the CodeReady Workspaces project in OpenShift 4 web console
This section describes how to create the CodeReady Workspaces project in the OpenShift 4 web console.
Prerequisites
- An administrator account on a running instance of OpenShift 4.
Procedure
- Open the OpenShift web console.
- In the left panel, navigate to Projects.
- Click the Create Project button.
- Enter the project details:
  - In the Name field, type codeready.
  - In the Display Name field, type Red Hat CodeReady Workspaces.
- Click the Create button.
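The same project can also be created from the command line. This is a hedged sketch, assuming you are already logged in with the oc client as an administrator; it is shown with a leading echo so it can be previewed safely, and the echo can be dropped to apply it.

```shell
# Hypothetical CLI equivalent of the console steps above (run after 'oc login').
# Previewed with 'echo'; remove the echo to create the project for real.
echo oc new-project codeready --display-name='Red Hat CodeReady Workspaces'
```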
1.1.2. Installing the CodeReady Workspaces Operator in OpenShift 4 web console
This section describes how to install the CodeReady Workspaces Operator in OpenShift 4 web console.
Prerequisites
- An administrator account on a running instance of OpenShift 4.
- Administrative rights on an existing project named codeready on this instance of OpenShift 4. See Section 1.1.1, “Creating the CodeReady Workspaces project in OpenShift 4 web console”.
- The Red Hat CodeReady Workspaces 2.0 Operator is not installed.
Procedure
- Open the OpenShift web console.
- In the left panel, navigate to the Operators → OperatorHub section.
- In the Search by keyword field, type Red Hat CodeReady Workspaces.
- Click on the Red Hat CodeReady Workspaces tile.
- Click the Install button in the Red Hat CodeReady Workspaces pop-up window.
- In the A specific namespace on the cluster field, in the cluster drop-down list, select the codeready namespace created earlier.
- Click the Subscribe button.
- In the left panel, navigate to the Operators → Installed Operators section.
- Red Hat CodeReady Workspaces is displayed as an installed Operator with the InstallSucceeded status.
- Click on the Red Hat CodeReady Workspaces name in the list of installed operators.
- Navigate to the Overview tab.
- In the Conditions section at the bottom of the page, wait for this message: install strategy completed with no errors.
- Navigate to the Events tab.
- Wait for this message: install strategy completed with no errors.
1.1.3. Installing CodeReady Workspaces using the CodeReady Workspaces Operator in OpenShift 4 web console
This section describes how to install CodeReady Workspaces using the CodeReady Workspaces Operator in OpenShift 4 web console.
Prerequisites
- An administrator account on a running instance of OpenShift 4.
- At least one OAuth user provisioned on this instance of OpenShift 4.
- The CodeReady Workspaces Operator is installed on this instance of OpenShift 4. See Section 1.1.2, “Installing the CodeReady Workspaces Operator in OpenShift 4 web console”
Procedure
- Open the OpenShift web console.
- Navigate to the Operators → Installed Operators section.
- Click Red Hat CodeReady Workspaces in the list of installed operators.
- Click the Create Instance link in the Provided APIs section.
- The Create CodeReady Workspaces Cluster page is displayed.
- Leave the default values as they are.
- Click the Create button in the bottom-left corner of the window.
The codeready cluster is created.
1.1.4. Viewing the state of the CodeReady Workspaces instance deployment in OpenShift 4 web console
This section describes how to view the state of the CodeReady Workspaces instance deployment in OpenShift 4 web console.
Prerequisites
- An administrator account on a running instance of OpenShift 4.
- A CodeReady Workspaces instance is being deployed on this instance of OpenShift 4.
Procedure
- Open the OpenShift web console.
- Navigate to the Operators → Installed Operators section.
- Click Red Hat CodeReady Workspaces in the list of installed operators.
- Navigate to the CodeReady Workspaces Cluster tab.
- Click codeready-workspaces CheCluster in the table. The Overview tab is displayed.
- Watch the content of the Message field. The field contains error messages, if any. The expected content is None.
- Navigate to the Resources tab. The screen displays the state of the resources assigned to the CodeReady Workspaces deployment.
1.1.5. Finding CodeReady Workspaces instance URL in OpenShift 4 web console
This section describes how to find the CodeReady Workspaces instance URL in OpenShift 4 web console.
Prerequisites
- A running Red Hat CodeReady Workspaces instance.
Procedure
- Open the OpenShift web console.
- In the left panel, navigate to the Operators → Installed Operators section.
- Click the Red Hat CodeReady Workspaces Operator tile.
- Click codeready-workspaces CheCluster in the table. The Overview tab is displayed.
- Read the value of the CodeReady Workspaces URL field.
1.1.6. Viewing the state of the CodeReady Workspaces cluster deployment using OpenShift 4 CLI tools
This section describes how to view the state of the CodeReady Workspaces cluster deployment using OpenShift 4 CLI tools.
Prerequisites
- An installation of a Red Hat CodeReady Workspaces cluster. See Section 1.1.3, “Installing CodeReady Workspaces using the CodeReady Workspaces Operator in OpenShift 4 web console”.
Procedure
- Run the following command to select the crw project:
$ oc project <project_name>
- Run the following command to get the name and status of the Pods running in the selected project:
$ oc get pods
- Check that the status of all the Pods is Running.
NAME                                  READY  STATUS   RESTARTS  AGE
codeready-8495f4946b-jrzdc            0/1    Running  0         86s
codeready-operator-578765d954-99szc   1/1    Running  0         42m
keycloak-74fbfb9654-g9vp5             1/1    Running  0         4m32s
postgres-5d579c6847-w6wx5             1/1    Running  0         5m14s
- To see the state of the CodeReady Workspaces cluster deployment, run:
$ oc logs --tail=10 -f `oc get pods -o name | grep operator`
Example output of the command:
time="2019-07-12T09:48:29Z" level=info msg="Exec successfully completed"
time="2019-07-12T09:48:29Z" level=info msg="Updating eclipse-che CR with status: provisioned with OpenShift identity provider: true"
time="2019-07-12T09:48:29Z" level=info msg="Custom resource eclipse-che updated"
time="2019-07-12T09:48:29Z" level=info msg="Creating a new object: ConfigMap, name: che"
time="2019-07-12T09:48:29Z" level=info msg="Creating a new object: ConfigMap, name: custom"
time="2019-07-12T09:48:29Z" level=info msg="Creating a new object: Deployment, name: che"
time="2019-07-12T09:48:30Z" level=info msg="Updating eclipse-che CR with status: CodeReady Workspaces API: Unavailable"
time="2019-07-12T09:48:30Z" level=info msg="Custom resource eclipse-che updated"
time="2019-07-12T09:48:30Z" level=info msg="Waiting for deployment che. Default timeout: 420 seconds"
1.1.7. Finding CodeReady Workspaces cluster URL using the OpenShift 4 CLI
This section describes how to obtain the CodeReady Workspaces cluster URL using the OpenShift 4 CLI (command-line interface). The URL can be retrieved from the OpenShift logs or from the checluster Custom Resource.
Prerequisites
- An instance of Red Hat CodeReady Workspaces running on OpenShift.
- The current project is the CodeReady Workspaces installation namespace.
Procedure
- To retrieve the CodeReady Workspaces cluster URL from the checluster Custom Resource (CR), run:
$ oc get checluster --output jsonpath='{.items[0].status.cheURL}'
- Alternatively, to retrieve the CodeReady Workspaces cluster URL from the OpenShift logs, run:
$ oc logs --tail=10 `oc get pods -o name | grep operator` | grep "available at" | awk -F'available at: ' '{print $2}' | sed 's/"//'
1.1.8. Enabling SSL on OpenShift 4
Prerequisites
- A running Red Hat CodeReady Workspaces cluster.
Procedure
- Open the OpenShift web console.
- In the left panel, navigate to the Operators → Installed Operators section.
- Click on the Red Hat CodeReady Workspaces Operator tile.
- Click on eclipse-che in the table.
- Navigate to the Overview tab.
- Toggle the TLS MODE switch to True.
- Click Confirm change.
- Navigate to the Resources tab.
- Wait until the Pods are restarted.
- Navigate to the Overview tab.
- Click the Red Hat CodeReady Workspaces URL link.
- Notice that the link is redirected to HTTPS.
- The browser displays the Red Hat CodeReady Workspaces Dashboard using a valid Let’s Encrypt certificate.
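The same change can be sketched from the command line. This is an assumption-laden sketch: it supposes the CheCluster instance is named eclipse-che (as shown in the table above) and that the TLS MODE toggle maps to the spec.server.tlsSupport field of the Custom Resource; verify both against your cluster. It is previewed with echo, which can be removed to apply the patch.

```shell
# Hedged CLI alternative to the console TLS toggle.
# Assumptions: CR name 'eclipse-che', field 'spec.server.tlsSupport'.
# Previewed with 'echo'; remove the echo to apply.
echo oc patch checluster eclipse-che --type=merge -p '{"spec":{"server":{"tlsSupport":true}}}'
```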
1.1.9. Logging in to CodeReady Workspaces on OpenShift for the first time using OAuth
This section describes how to log in to CodeReady Workspaces on OpenShift for the first time using OAuth.
Prerequisites
- Contact the administrator of the OpenShift instance to obtain the Red Hat CodeReady Workspaces URL.
Procedure
- Navigate to the Red Hat CodeReady Workspaces URL to display the Red Hat CodeReady Workspaces login page.
- Choose the OpenShift OAuth option.
- The Authorize Access page is displayed.
- Click on the Allow selected permissions button.
- Update the account information: specify the Username, Email, First name and Last name fields and click the Submit button.
Validation steps
- The browser displays the Red Hat CodeReady Workspaces Dashboard.
1.1.10. Logging in to CodeReady Workspaces on OpenShift for the first time registering as a new user
This section describes how to log in to CodeReady Workspaces on OpenShift for the first time registering as a new user.
Prerequisites
- Contact the administrator of the OpenShift instance to obtain the Red Hat CodeReady Workspaces URL.
Procedure
- Navigate to the Red Hat CodeReady Workspaces URL to display the Red Hat CodeReady Workspaces login page.
- Choose the Register as a new user option.
- Update the account information: specify the Username, Email, First name and Last name fields and click the Submit button.
Validation steps
- The browser displays the Red Hat CodeReady Workspaces Dashboard.
1.2. Installing CodeReady Workspaces using CLI management tool
1.2.1. Installing the crwctl CLI management tool
This section describes how to install crwctl, the CodeReady Workspaces CLI management tool.
Procedure
- Navigate to https://developers.redhat.com/products/codeready-workspaces/download.
- Download the CodeReady Workspaces CLI management tool archive for version 2.1.
- Extract the archive.
- Place the extracted binary in your $PATH.
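The last two steps can be sketched as follows. The archive name and extraction directory are assumptions, not the actual file names from the download page; adjust them to match what you downloaded.

```shell
# A sketch of extracting crwctl and making it discoverable.
ARCHIVE=crwctl-linux-x64.tar.gz        # assumption: name of the downloaded archive
mkdir -p "$HOME/crwctl"
[ -f "$ARCHIVE" ] && tar -xzf "$ARCHIVE" -C "$HOME/crwctl"
export PATH="$HOME/crwctl/bin:$PATH"   # make the crwctl binary discoverable
```

After this, `crwctl --help` should resolve from any directory in the same shell session.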
1.2.2. Installing CodeReady Workspaces using CodeReady Workspaces CLI management tool
This section describes how to install CodeReady Workspaces using the CodeReady Workspaces CLI management tool.
Use the CodeReady Workspaces CLI management tool to install CodeReady Workspaces only if OperatorHub is not available. This method is not officially supported for OpenShift Container Platform 4.1 or later.
Prerequisites
- CodeReady Workspaces CLI management tool is installed.
- OpenShift Container Platform 4 CLI is installed.
- Access to an OpenShift Container Platform instance.
1.2.2.1. Installing with default settings
Procedure
Log in to OpenShift Container Platform 4:
$ oc login ${OPENSHIFT_API_URL} -u ${OPENSHIFT_USERNAME} -p ${OPENSHIFT_PASSWORD}
Run this command to install Red Hat CodeReady Workspaces with default settings:
$ crwctl server:start
Note: The crwctl default namespace is workspaces. If you use a namespace with a different name, run the command with the --chenamespace=<namespace> flag, for example:
$ crwctl server:start --chenamespace=codeready-workspaces
1.2.2.2. Installing with custom settings
Procedure
To override specific settings of the Red Hat CodeReady Workspaces installation, provide a dedicated Custom Resource when running the crwctl command:
- Download the default custom resource YAML file.
- Name the downloaded custom resource org_v1_che_cr.yaml, and copy it into the current directory.
- Modify the org_v1_che_cr.yaml file to override or add any field.
- Run the installation using the org_v1_che_cr.yaml file to override the CodeReady Workspaces CLI management tool defaults:
$ crwctl server:start --che-operator-cr-yaml=org_v1_che_cr.yaml
Note: Some basic installation settings can be overridden in a simpler way by using additional crwctl arguments. To display the list of available arguments, run:
$ crwctl server:start --help
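For illustration, a minimal override file might look like the following. This is a sketch, not the full default resource: the metadata name and the tlsSupport field are assumptions drawn from the CheCluster examples elsewhere in this guide, so check both against the default custom resource you downloaded before using it.

```yaml
# Example org_v1_che_cr.yaml override (field names are assumptions; verify
# against the downloaded default Custom Resource).
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  server:
    tlsSupport: true
```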
Chapter 2. Installing CodeReady Workspaces on OpenShift 3 using the Operator
Operators are a method of packaging, deploying, and managing an OpenShift application. They also provide the following:
- Repeatability of installation and upgrade.
- Constant health checks of every system component.
- Over-the-air (OTA) updates for OpenShift components and ISV content.
- A place to encapsulate knowledge from field engineers and spread it to all users.
This chapter describes how to install CodeReady Workspaces on OpenShift 3 using the CLI management tool and the Operator method.
2.1. Installing CodeReady Workspaces on OpenShift 3 using the Operator
This section describes how to use the CLI management tool to install CodeReady Workspaces on OpenShift 3 via the Operator, with SSL (HTTPS) enabled.
As of 2.1.1, SSL/TLS is enabled by default as it is required by the Che-Theia IDE.
Prerequisites
- A running instance of OpenShift 3.11.
- Administrator rights on this OpenShift 3 instance.
- The oc OpenShift 3.11 CLI management tool is installed and configured. See Installing the OpenShift 3.11 CLI. To check the version of the oc tool, use the oc version command.
- The crwctl CLI management tool is installed. See Installing the crwctl CLI management tool.
Procedure
Log in to OpenShift. See Basic Setup and Login.
$ oc login
Run the following command to create the CodeReady Workspaces instance:
$ crwctl server:start -n <openshift_namespace>
Note: To create the CodeReady Workspaces instance on OpenShift clusters that have not been configured with a valid certificate for the routes, run the crwctl command with the --self-signed-cert flag.
Verification steps
The output of the previous command ends with:
Command server:start has completed successfully.
- Navigate to the CodeReady Workspaces cluster instance: https://codeready-<openshift_deployment_name>.<domain_name>. The domain uses Let’s Encrypt ACME certificates.
Chapter 3. Installing CodeReady Workspaces in TLS mode with self-signed certificates
The following section describes the deployment and configuration of CodeReady Workspaces with self-signed certificates. Self-signed certificates are certificates that are not signed by a commonly trusted certificate authority (CA), but instead signed by a locally created CA. Self-signed certificates are not trusted by default. For example, when a website owner uses a self-signed certificate to provide HTTPS services, users who visit that website see a warning in their browser.
Self-signed certificates are usually used in development and evaluation environments. Use in production environments is not recommended.
3.1. Generating self-signed TLS certificates
This section describes how to prepare self-signed TLS certificates to use with CodeReady Workspaces on different platforms.
Prerequisites
- The expected domain name where the CodeReady Workspaces deployment is planned.
- The location of the openssl.cnf file on the target machine.
Table 3.1. Usual OpenSSL configuration file locations
Linux distribution                        File location
Fedora, Red Hat Enterprise Linux, CentOS  /etc/pki/tls/openssl.cnf
Debian, Ubuntu, Mint, Arch Linux          /etc/ssl/openssl.cnf
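Instead of looking the path up by distribution, the configuration directory can usually be queried from OpenSSL itself. This sketch assumes `openssl version -d` reports the directory in double quotes (the usual output format, e.g. `OPENSSLDIR: "/etc/pki/tls"`):

```shell
# Derive the openssl.cnf location from the OpenSSL build itself.
OPENSSL_CNF="$(openssl version -d | cut -d'"' -f2)/openssl.cnf"
echo "$OPENSSL_CNF"
```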
Procedure
Set the necessary environment variables:
$ CA_CN="Local Red Hat CodeReady Workspaces Signer"
$ DOMAIN=*.<expected.domain.com>
$ OPENSSL_CNF=<path_to_openssl.cnf>
Generate the root Certificate Authority (CA) key. Add the -des3 parameter to use a passphrase:
$ openssl genrsa -out ca.key 4096
Generate the root CA certificate:
$ openssl req -x509 \
    -new -nodes \
    -key ca.key \
    -sha256 \
    -days 1024 \
    -out ca.crt \
    -subj /CN="${CA_CN}" \
    -reqexts SAN \
    -extensions SAN \
    -config <(cat ${OPENSSL_CNF} \
        <(printf '[SAN]\nbasicConstraints=critical, CA:TRUE\nkeyUsage=keyCertSign, cRLSign, digitalSignature'))
Generate the domain key:
$ openssl genrsa -out domain.key 2048
Generate the certificate signing request for the domain:
$ openssl req -new -sha256 \
    -key domain.key \
    -subj "/O=Local Red Hat CodeReady Workspaces/CN=${DOMAIN}" \
    -reqexts SAN \
    -config <(cat ${OPENSSL_CNF} \
        <(printf "\n[SAN]\nsubjectAltName=DNS:${DOMAIN}\nbasicConstraints=critical, CA:FALSE\nkeyUsage=digitalSignature, keyEncipherment, keyAgreement, dataEncipherment\nextendedKeyUsage=serverAuth")) \
    -out domain.csr
Generate the domain certificate:
$ openssl x509 \
    -req \
    -sha256 \
    -extfile <(printf "subjectAltName=DNS:${DOMAIN}\nbasicConstraints=critical, CA:FALSE\nkeyUsage=digitalSignature, keyEncipherment, keyAgreement, dataEncipherment\nextendedKeyUsage=serverAuth") \
    -days 365 \
    -in domain.csr \
    -CA ca.crt \
    -CAkey ca.key \
    -CAcreateserial \
    -out domain.crt
This procedure allows you to use domain.crt and domain.key for TLS Routes and Ingresses, and ca.crt for importing into browsers.
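To sanity-check a generated chain, `openssl verify` can confirm that the domain certificate validates against the CA. The following self-contained sketch rehearses the procedure in miniature with throwaway names in a temporary directory, then verifies the result; it deliberately omits the SAN-in-config plumbing above and uses a plain extension file instead.

```shell
# Miniature rehearsal of the procedure above using throwaway names.
tmp=$(mktemp -d) && cd "$tmp"
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 1 \
    -out ca.crt -subj "/CN=Throwaway Signer"
openssl genrsa -out domain.key 2048
openssl req -new -sha256 -key domain.key \
    -subj "/O=Throwaway/CN=*.example.com" -out domain.csr
printf 'subjectAltName=DNS:*.example.com\n' > san.ext
openssl x509 -req -sha256 -extfile san.ext -days 1 \
    -in domain.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out domain.crt
openssl verify -CAfile ca.crt domain.crt    # prints 'domain.crt: OK' when valid
```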
3.2. Deploying CodeReady Workspaces with self-signed TLS certificates on OpenShift 4
This section describes how to deploy CodeReady Workspaces with self-signed TLS certificates on a local OpenShift 4 cluster.
CodeReady Workspaces uses a default router certificate to secure its endpoints. Therefore, it depends on the OpenShift cluster configuration whether a self-signed certificate is used or not. CodeReady Workspaces automatically detects if the OpenShift default router uses a self-signed certificate by analyzing its certificate chain.
Prerequisites
- A running OpenShift 4 instance, version 4.2 or higher.
- All required keys and certificates. See Section 3.1, “Generating self-signed TLS certificates”.
Procedure
Log in to the default OpenShift project:
$ oc login -u <username> -p <password>
Get the OpenShift 4 self-signed certificate:
$ oc get secret router-ca -n openshift-ingress-operator -o jsonpath="{.data.tls\.crt}" | base64 -d > ca.crt
Pre-create a namespace for CodeReady Workspaces:
$ oc create namespace workspaces
Create a secret from the CA certificate:
$ oc create secret generic self-signed-certificate --from-file=ca.crt -n=workspaces
Deploy CodeReady Workspaces using crwctl:
$ crwctl server:start --platform=openshift --installer=operator
Note: When using CodeReady Containers, substitute openshift in the above command with crc.
3.3. Deploying CodeReady Workspaces with self-signed TLS certificates on OpenShift 3
This section describes how to deploy CodeReady Workspaces with self-signed TLS certificates generated by the user on the OpenShift 3 platform.
This method involves reconfiguring the OpenShift router to use user-provided TLS certificates.
Prerequisites
- A running OpenShift 3 instance, version 3.11 or higher.
- All required keys and certificates. See Section 3.1, “Generating self-signed TLS certificates”.
Procedure
Log in to the default OpenShift project:
$ oc login -u system:admin --insecure-skip-tls-verify=true
$ oc project default
Reconfigure the router with the generated certificate:
$ oc delete secret router-certs
$ cat domain.crt domain.key > openshift.crt
$ oc create secret tls router-certs --key=domain.key --cert=openshift.crt
$ oc rollout latest router
Create a namespace for CodeReady Workspaces:
$ oc create namespace workspaces
Create a secret from the CA certificate:
$ oc create secret generic self-signed-certificate --from-file=ca.crt -n=workspaces
Deploy CodeReady Workspaces using crwctl. Red Hat CodeReady Workspaces is installed in TLS mode by default:
$ crwctl server:start --platform=openshift --installer=operator
3.4. Importing self-signed TLS certificates to browsers
This section describes how to import a root certificate authority into a web browser to use CodeReady Workspaces with self-signed TLS certificates.
When a TLS certificate is not trusted, the error message Authorization token is missing. Click here to reload page blocks the login process. To prevent this, add the public part of the self-signed CA certificate into the browser after installing CodeReady Workspaces.
3.4.1. Getting the self-signed CA certificate from CodeReady Workspaces deployment
When crwctl is used to deploy CodeReady Workspaces, it exports a self-signed CA certificate into a cheCA.crt file in the current user's home directory. To get the certificate, use one of the following two methods:
- Export the certificate using the crwctl command:
$ crwctl cacert:export
- Read the self-signed-certificate secret from the CodeReady Workspaces namespace:
$ oc get secret self-signed-certificate -n workspaces
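When reading the secret directly, the certificate is stored base64-encoded under a data key. The sketch below assumes that key is ca.crt, matching the file the secret was created from earlier in this guide; it is previewed with echo and the echo can be dropped to run it against a live cluster.

```shell
# Hedged sketch: extract and decode the CA certificate from the secret.
# Assumption: the data key is 'ca.crt'. Previewed with 'echo'.
echo "oc get secret self-signed-certificate -n workspaces -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt"
```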
3.4.2. Adding certificates to Google Chrome on Linux or Windows
Procedure
- Navigate to the URL where CodeReady Workspaces is deployed.
Save the certificate:
- Click the lock icon on the left of the address bar.
- Click Certificates and navigate to the Details tab.
Select the certificate to use and export it:
- On Linux, click the Export button.
- On Windows, click the Save to file button.
- Go to Google Chrome Settings.
- In the left panel, select Advanced and continue to Privacy and security.
- At the center of the screen, click Manage certificates and navigate to the Authorities tab.
- Click the Import button and open the saved certificate file.
- Select Trust this certificate for identifying websites and click the OK button.
- After adding the CodeReady Workspaces certificate to the browser, the address bar displays the closed lock icon next to the URL, indicating a secure connection.
3.4.3. Adding certificates to Google Chrome on macOS
Procedure
- Navigate to the URL where CodeReady Workspaces is deployed.
Save the certificate:
- Click the lock icon on the left of the address bar.
- Click Certificates.
- Select the certificate to use and drag and drop its displayed large icon to the desktop.
- Double-click the exported certificate to import it into Google Chrome.
3.4.4. Adding certificates to Keychain Access for use with Safari on macOS
Procedure
- Navigate to the URL where CodeReady Workspaces is deployed.
Save the certificate:
- Click the lock icon on the right of the window title bar.
- Select the certificate to use and drag and drop its displayed large icon to the desktop.
- Open the Keychain Access application.
- Select the System keychain and drag and drop the saved certificate file to it.
- Double-click the imported CA, then go to Trust and select When using this certificate: Always Trust.
- Restart Safari for the added certificate to take effect.
3.4.5. Adding certificates to Firefox
Procedure
- Navigate to the URL where CodeReady Workspaces is deployed.
Save the certificate:
- Click the lock icon on the left of the address bar.
- Click the > button next to the Connection not secure warning.
- Click the More information button.
- Click the View Certificate button on the Security tab.
- Click the PEM (cert) link and save the certificate.
-
Navigate to about:preferences, search for
certificates
, and click View Certificates. - Go to the Authorities tab, click the Import button, and open the saved certificate file.
- Check Trust this CA to identify websites and click OK.
- Restart Firefox for the added certificate to take effect.
- After adding the CodeReady Workspaces certificate to the browser, the address bar displays the closed lock icon next to the URL, indicating a secure connection.
Chapter 4. Installing CodeReady Workspaces in a restricted environment
By default, Red Hat CodeReady Workspaces uses various external resources, mainly container images available in public registries.
To deploy CodeReady Workspaces in an environment where these external resources are not available (for example, on a cluster that is not exposed to the public Internet):
- Identify the image registry used by the OpenShift cluster, and ensure you can push to it.
- Push all the images needed for running CodeReady Workspaces to this registry.
- Configure CodeReady Workspaces to use the images that have been pushed to the registry.
- Proceed to the CodeReady Workspaces installation.
The procedure for installing CodeReady Workspaces in restricted environments differs based on the installation method you use.
Notes on network connectivity in restricted environments
Restricted network environments range from a private subnet in a cloud provider to a separate network owned by a company, disconnected from the public Internet. Regardless of the network configuration, CodeReady Workspaces works provided that the Routes that are created for CodeReady Workspaces components (codeready-workspaces-server, identity provider, devfile and plugin registries) are accessible from inside the OpenShift cluster.
Take into account the network topology of the environment to determine how best to accomplish this. For example, on a network owned by a company or an organization, the network administrators must ensure that traffic bound from the cluster can be routed to Route hostnames. In other cases, for example, on AWS, create a proxy configuration allowing the traffic to leave the node to reach an external-facing Load Balancer.
When the restricted network involves a proxy, follow the instructions provided in Section 4.3, “Preparing CodeReady Workspaces Custom Resource for installing behind a proxy”.
4.1. Installing CodeReady Workspaces in a restricted environment using OperatorHub
Prerequisites
- A running OpenShift cluster. See the OpenShift Container Platform 4.3 documentation for instructions on how to install an OpenShift cluster on a restricted network.
- Access to the mirror registry used to install the OpenShift disconnected cluster in a restricted network. See the related OpenShift Container Platform 4.3 documentation about creating a mirror registry for installation in a restricted network.
On disconnected OpenShift 4 clusters running on restricted networks, an Operator can be successfully installed from OperatorHub only if it meets the additional requirements defined in Enabling your Operator for restricted network environments.
The CodeReady Workspaces operator meets these requirements and is therefore compatible with the official documentation about OLM on a restricted network.
Procedure
To install CodeReady Workspaces from OperatorHub:
- Build a redhat-operators catalog image. See Building an Operator catalog image.
- Configure OperatorHub to use this catalog image for operator installations. See Configuring OperatorHub for restricted networks.
- Proceed to the CodeReady Workspaces installation as usual, as described in Section 1.1, “Installing CodeReady Workspaces on OpenShift 4 from OperatorHub”.
4.2. Installing CodeReady Workspaces in a restricted environment using CLI management tool
Use the CodeReady Workspaces CLI management tool to install CodeReady Workspaces on restricted networks only if installation through OperatorHub is not available. This method is not officially supported for OpenShift Container Platform 4.1 or later.
Prerequisites
- A running OpenShift cluster. See the OpenShift Container Platform 4.2 documentation for instructions on how to install an OpenShift cluster.
4.2.1. Preparing an image registry for installing CodeReady Workspaces in a restricted environment
Prerequisites
- The oc tool is installed.
- An image registry that is accessible from the OpenShift cluster. Ensure you can push to it from a location that has, at least temporarily, access to the Internet.
- The podman tool is installed.
Note: When pushing images to a registry other than the OpenShift internal registry, and the podman tool fails to work, use the docker tool instead.
The following placeholders are used in this section.
Table 4.1. Placeholders used in examples
<internal-registry>  host name and port of the container-image registry accessible in the restricted environment
<organization>       organization of the container-image registry
For the OpenShift internal registry, the placeholder values are typically the following:
Table 4.2. Placeholders for the internal OpenShift registry
<internal-registry>  image-registry.openshift-image-registry.svc:5000
<organization>       the OpenShift project into which the images are pushed
See OpenShift documentation for more details.
Procedure
Define the environment variable with the external endpoint of the image registry:
- For the OpenShift internal registry, use:
$ REGISTRY_ENDPOINT=$(oc get route default-route --namespace openshift-image-registry --template='{{ .spec.host }}')
- For other registries, use the host name and port of the image registry:
$ REGISTRY_ENDPOINT=<internal-registry>
Log into the internal image registry:
$ podman login --username <user> --password <password> <internal-registry>
Note: When using the OpenShift internal registry, follow the steps described in the related OpenShift documentation to first expose the internal registry through a route, and then log in to it.
Download, tag, and push the necessary images. Repeat the step for every image in the following lists:
$ podman pull <image_name>:<image_tag>
$ podman tag <image_name>:<image_tag> ${REGISTRY_ENDPOINT}/<organization>/<image_name>:<image_tag>
$ podman push ${REGISTRY_ENDPOINT}/<organization>/<image_name>:<image_tag>
Essential images
The following infrastructure images are included in every workspace launch:
- registry.redhat.io/codeready-workspaces/crw-2-rhel8-operator:2.1
- registry.redhat.io/codeready-workspaces/server-rhel8:2.1
- registry.redhat.io/codeready-workspaces/pluginregistry-rhel8:2.1
- registry.redhat.io/codeready-workspaces/devfileregistry-rhel8:2.1
- registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8:2.1
- registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8:2.1
- registry.redhat.io/codeready-workspaces/jwtproxy-rhel8:2.1
- registry.redhat.io/codeready-workspaces/machineexec-rhel8:2.1
- registry.redhat.io/codeready-workspaces/theia-rhel8:2.1
- registry.redhat.io/codeready-workspaces/theia-dev-rhel8:2.1
- registry.redhat.io/codeready-workspaces/theia-endpoint-rhel8:2.1
- registry.redhat.io/rhscl/postgresql-96-rhel7:1-47
- registry.redhat.io/redhat-sso-7/sso73-openshift:1.0-15
- registry.redhat.io/ubi8-minimal:8.1-398
Workspace-specific images
These images are required to run a workspace. A workspace generally uses only a subset of the images below, so it is only necessary to mirror the images related to the required technology stacks.
- registry.redhat.io/codeready-workspaces/stacks-cpp-rhel8:2.1
- registry.redhat.io/codeready-workspaces/stacks-dotnet-rhel8:2.1
- registry.redhat.io/codeready-workspaces/stacks-golang-rhel8:2.1
- registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
- registry.redhat.io/codeready-workspaces/stacks-node-rhel8:2.1
- registry.redhat.io/codeready-workspaces/stacks-php-rhel8:2.1
- registry.redhat.io/codeready-workspaces/stacks-python-rhel8:2.1
- registry.redhat.io/codeready-workspaces/plugin-java11-rhel8:2.1
- registry.redhat.io/codeready-workspaces/plugin-openshift-rhel8:2.1
- registry.redhat.io/codeready-workspaces/plugin-kubernetes-rhel8:2.1
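The pull, tag, and push sequence above can be scripted over these image lists. The following sketch is a dry run that only prints the podman commands; REGISTRY_ENDPOINT, the crw organization name, and the two sample images are assumed placeholder values, not part of the official procedure:

```shell
#!/bin/sh
# Assumed placeholder values -- substitute your own registry and organization.
REGISTRY_ENDPOINT="registry.example.com:5000"
ORGANIZATION="crw"

# Map a source image reference to its mirrored reference:
# keep only the final <image_name>:<image_tag> component.
mirror_target() {
  name_tag="${1##*/}"                       # e.g. server-rhel8:2.1
  echo "${REGISTRY_ENDPOINT}/${ORGANIZATION}/${name_tag}"
}

# Two sample images from the lists above; extend as needed.
for image in \
  registry.redhat.io/codeready-workspaces/server-rhel8:2.1 \
  registry.redhat.io/rhscl/postgresql-96-rhel7:1-47
do
  target=$(mirror_target "$image")
  echo "podman pull $image"
  echo "podman tag $image $target"
  echo "podman push $target"
done
```

Remove the echo wrappers to execute the podman commands directly.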
4.2.2. Preparing CodeReady Workspaces Custom Resource for restricted environment
When installing CodeReady Workspaces in a restricted environment using crwctl or OperatorHub, provide a CheCluster Custom Resource with additional information.
4.2.2.1. Downloading the default CheCluster Custom Resource
Procedure
- Download the default custom resource YAML file.
- Name the downloaded custom resource org_v1_che_cr.yaml. Keep it for further modification and usage.
4.2.2.2. Customizing the CheCluster Custom Resource for restricted environment
Prerequisites
- All required images available in an image registry that is visible to the OpenShift cluster where CodeReady Workspaces is to be deployed. This is described in Section 4.2.1, “Preparing an image registry for installing CodeReady Workspaces in a restricted environment”, where the placeholders used in the following examples are also defined.
Procedure
In the CheCluster Custom Resource, which is managed by the CodeReady Workspaces Operator, add the fields used to facilitate deploying an instance of CodeReady Workspaces in a restricted environment:

# [...]
spec:
  server:
    airGapContainerRegistryHostname: '<internal-registry>'
    airGapContainerRegistryOrganization: '<organization>'
# [...]
Setting these fields in the Custom Resource uses <internal-registry> and <organization> for all images. This means, for example, that the Operator expects the offline plug-in and devfile registries to be available at:

<internal-registry>/<organization>/pluginregistry-rhel8:<ver>
<internal-registry>/<organization>/devfileregistry-rhel8:<ver>
For example, to use the OpenShift 4 internal registry as the image registry, define the following fields in the CheCluster Custom Resource:

# [...]
spec:
  server:
    airGapContainerRegistryHostname: 'image-registry.openshift-image-registry.svc:5000'
    airGapContainerRegistryOrganization: 'openshift'
# [...]
In the downloaded CheCluster Custom Resource, add the two fields described above with values that match the container-image registry holding all the mirrored container images.
4.2.3. Starting CodeReady Workspaces installation in a restricted environment using CodeReady Workspaces CLI management tool
This section describes how to start the CodeReady Workspaces installation in a restricted environment using the CodeReady Workspaces CLI management tool.
Prerequisites
- CodeReady Workspaces CLI management tool is installed.
- The oc tool is installed.
- Access to an OpenShift instance.
Procedure
Log in to OpenShift Container Platform:
$ oc login ${OPENSHIFT_API_URL} --username ${OPENSHIFT_USERNAME} \
    --password ${OPENSHIFT_PASSWORD}
Install CodeReady Workspaces with the customized Custom Resource to add fields related to restricted environment:
$ crwctl server:start \
    --che-operator-image=<image-registry>/<organization>/server-operator-rhel8:2.1 \
    --che-operator-cr-yaml=org_v1_che_cr.yaml
4.3. Preparing CodeReady Workspaces Custom Resource for installing behind a proxy
This procedure describes how to provide the necessary additional information to the CheCluster custom resource when installing CodeReady Workspaces behind a proxy.
Procedure
In the CheCluster Custom Resource, which is managed by the CodeReady Workspaces Operator, add the fields used to facilitate deploying an instance of CodeReady Workspaces behind a proxy:

# [...]
spec:
  server:
    proxyURL: '<URL of the proxy, with the http protocol, and without the port>'
    proxyPort: '<Port of proxy, typically 3128>'
# [...]
In addition to those basic settings, the proxy configuration usually requires adding the host of the external OpenShift cluster API URL to the list of hosts that CodeReady Workspaces accesses without using the proxy.
To retrieve this cluster API host, run the following command against the OpenShift cluster:
$ oc whoami --show-server | sed 's#https://##' | sed 's#:.*$##'
The corresponding field of the CheCluster Custom Resource is nonProxyHosts. If a host already exists in this field, use | as a delimiter to add the cluster API host:

# [...]
spec:
  server:
    nonProxyHosts: 'anotherExistingHost|<cluster api host>'
# [...]
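The steps above can be combined into a small sketch that derives the cluster API host with the same sed pipeline and appends it to an existing nonProxyHosts value. Here API_URL stands in for the output of oc whoami --show-server, and the host names are assumed examples:

```shell
#!/bin/sh
# Stand-in for: $(oc whoami --show-server)
API_URL="https://api.example-cluster.com:6443"

# Same pipeline as in the procedure: strip the scheme, then the port.
CLUSTER_API_HOST=$(echo "$API_URL" | sed 's#https://##' | sed 's#:.*$##')

# Append to a pre-existing value using | as the delimiter.
EXISTING_HOSTS="anotherExistingHost"
NON_PROXY_HOSTS="${EXISTING_HOSTS}|${CLUSTER_API_HOST}"
echo "$NON_PROXY_HOSTS"
```

The resulting value is what goes into the nonProxyHosts field of the CheCluster Custom Resource.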
Chapter 5. Upgrading CodeReady Workspaces
This chapter describes how to upgrade a CodeReady Workspaces instance to CodeReady Workspaces 2.1.
5.1. Upgrading CodeReady Workspaces using OperatorHub
This section describes how to upgrade from CodeReady Workspaces 2.0 to CodeReady Workspaces 2.1 on OpenShift 4 using the OpenShift web console. This method uses the Operator from OperatorHub.
Prerequisites
- An administrator account on an OpenShift 4 instance.
- An instance of CodeReady Workspaces 2.0, running on the same instance of OpenShift 4, installed using an Operator from OperatorHub.
Procedure
- Open the OpenShift web console.
- Navigate to the Operators → Installed Operators section.
- Click Red Hat CodeReady Workspaces in the list of installed operators.
Navigate to the Subscription tab and enable the following options:
- Channel: latest
- Approval: Automatic
Verification steps
- Log in to the CodeReady Workspaces instance.
- The 2.1 version number is visible at the bottom of the page.
5.2. Upgrading CodeReady Workspaces using CLI management tool on OpenShift 3
This section describes how to upgrade from CodeReady Workspaces 2.0 to CodeReady Workspaces 2.1 on OpenShift 3 using the CLI management tool.
Prerequisites
- An administrative account on an OpenShift 3 instance.
- An instance of Red Hat CodeReady Workspaces 2.0 running on OpenShift 3, installed using the CLI management tool.
- The crwctl management tool is installed.
Procedure
- In all running workspaces in the CodeReady Workspaces 2.0 instance, save and push changes to Git repositories.
Run the following command:
$ crwctl server:update
Verification steps
- Log in to the CodeReady Workspaces instance.
- The 2.1 version number is visible at the bottom of the page.
5.3. Upgrading CodeReady Workspaces from previous major version
This section describes how to perform an upgrade from the previous major version of Red Hat CodeReady Workspaces (1.2).
Chapter 6. Advanced configuration options
The following section describes advanced deployment and configuration methods for Red Hat CodeReady Workspaces.
6.1. CodeReady Workspaces configMaps and their behavior
The following section describes CodeReady Workspaces configMaps and how they behave.

A configMap is provided as an editable file that lists options to customize the CodeReady Workspaces environment. The type of configMaps available in your CodeReady Workspaces environment varies based on the method used for installing CodeReady Workspaces.
6.1.1. CodeReady Workspaces installed using an Operator
Operators are software extensions to OpenShift that use custom resources to manage applications and their components.
CodeReady Workspaces installed using the Operator provides the user with an automatically generated configMap called codeready.

The codeready configMap contains the main properties for the CodeReady Workspaces server, and is kept in sync with the information stored in the CheCluster Custom Resource. User modifications of the codeready configMap made after installing CodeReady Workspaces using the Operator are automatically overwritten by values that the Operator obtains from the CheCluster Custom Resource.
To edit the codeready configMap, edit the CheCluster Custom Resource manually: the configMap derives its values from the CheCluster fields. User modifications of a CheCluster Custom Resource field cause the Operator to change the corresponding attributes of the codeready configMap, and the configMap changes automatically trigger a restart of the CodeReady Workspaces Pod.
To add custom properties to the CodeReady Workspaces server, such as environment variables that are not automatically generated in the codeready configMap by the Operator, or to override automatically generated properties, the CheCluster Custom Resource has a customCheProperties field, which expects a map.

For example, to override the default memory limit for workspaces, add the CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB property to customCheProperties:
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: eclipse-che
  namespace: che
spec:
  server:
    cheImageTag: ''
    devfileRegistryImage: ''
    pluginRegistryImage: ''
    tlsSupport: true
    selfSignedCert: false
    customCheProperties:
      CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB: "2048"
  auth:
    ...
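The double underscores in the property name above follow the Eclipse Che convention for mapping configuration property names to environment variables: each original underscore becomes a double underscore, each dot becomes a single underscore, and the result is upper-cased. A minimal sketch of that conversion (the function name is illustrative, not part of the product):

```shell
#!/bin/sh
# Convert a Che property name to its environment-variable form:
# '_' -> '__', then '.' -> '_', then upper-case.
prop_to_env() {
  echo "$1" | sed 's/_/__/g' | sed 's/\./_/g' | tr '[:lower:]' '[:upper:]'
}

prop_to_env "che.workspace.default_memory_limit_mb"
```

The call prints CHE_WORKSPACE_DEFAULT__MEMORY__LIMIT__MB, the key used in customCheProperties above.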
Previous versions of the CodeReady Workspaces Operator had a configMap named custom to fulfill this role. If the CodeReady Workspaces Operator finds a configMap with the name custom, it adds the data it contains into the customCheProperties field, redeploys CodeReady Workspaces, and deletes the custom configMap.
6.2. Configuring namespace strategies
The term namespace (Kubernetes) is used interchangeably with project (OpenShift).
The namespace strategies are configured using the CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT environment variable.

CHE_INFRA_KUBERNETES_NAMESPACE and CHE_INFRA_OPENSHIFT_PROJECT are legacy variables. Keep these variables unset for new installations. Changing these variables during an update can lead to data loss.
6.2.1. One namespace per workspace strategy
The strategy creates a new namespace for each new workspace.
To use the strategy, the CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT variable value must contain the <workspaceID> identifier. It can be used alone or combined with other identifiers or any string.
Example 6.1. One namespace per workspace
To assign namespace names composed of a che-ws prefix and the workspace ID, set:
CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT=che-ws-<workspaceID>
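The substitution the server performs can be pictured as simple template expansion. The function below is illustrative only; the actual replacement happens inside the CodeReady Workspaces server, and the workspace ID is an assumed example:

```shell
#!/bin/sh
# Expand the <workspaceID> placeholder in a namespace template.
resolve_namespace() {
  template="$1"
  workspace_id="$2"
  echo "$template" | sed "s/<workspaceID>/$workspace_id/"
}

resolve_namespace "che-ws-<workspaceID>" "workspacea1b2c3"
```

With the template che-ws-<workspaceID> and workspace ID workspacea1b2c3, this yields the namespace che-ws-workspacea1b2c3.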
6.2.2. One namespace for all workspaces strategy
The strategy uses one predefined namespace for all workspaces.
To use the strategy, the CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT variable value must be the name of the desired namespace to use.
Example 6.2. One namespace for all workspaces
To have all workspaces created in the che-workspaces namespace, set:
CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT=che-workspaces
To run more than one workspace at a time when using this strategy together with the common PVC strategy, configure persistent volumes to use the ReadWriteMany access mode.
6.2.3. One namespace per user strategy
The strategy isolates each user in their own namespace.
To use the strategy, the CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT variable value must contain one or more user identifiers. Currently supported identifiers are <username> and <userId>.
Example 6.3. One namespace per user
To assign namespace names composed of a che-ws prefix and individual usernames (che-ws-user1, che-ws-user2), set:
CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT=che-ws-<username>
To run more than one workspace at a time when using this strategy together with the common PVC strategy, configure persistent volumes to use the ReadWriteMany access mode.
To limit the number of concurrently running workspaces per user to one, set the CHE_LIMITS_USER_WORKSPACES_RUN_COUNT environment variable to 1.

For Operator deployments, set the spec.server.customCheProperties.CHE_LIMITS_USER_WORKSPACES_RUN_COUNT property of the CheCluster Custom Resource (CR) to 1.
6.2.4. Allowing user-defined workspace namespaces
CodeReady Workspaces server can be configured to honor the user selection of a namespace when a workspace is created. This feature is disabled by default. To allow user-defined workspace namespaces:
For Operator deployments, set the allowUserDefinedWorkspaceNamespaces field in the CheCluster Custom Resource.
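A sketch of the resulting Custom Resource fragment, assuming the field sits under spec.server like the other server settings shown in this guide:

```yaml
# [...]
spec:
  server:
    allowUserDefinedWorkspaceNamespaces: true
# [...]
```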
6.3. Deploying CodeReady Workspaces with support for Git repositories with self-signed certificates
This procedure describes how to configure CodeReady Workspaces for deployment with support for Git operations on repositories that use self-signed certificates.
Prerequisites
- Git version 2 or later
Procedure
To configure support for self-signed Git repositories:
Create a new configMap with details about the Git server:
$ oc create configmap che-git-self-signed-cert --from-file=ca.crt \
    --from-literal=githost=<host:port> -n workspaces
In the command, substitute <host:port> for the host and port of the HTTPS connection on the Git server (optional).

Note:
- When githost is not specified, the given certificate is used for all HTTPS repositories.
- The certificate file must be named ca.crt.
Configure CodeReady Workspaces to use the self-signed certificate for Git operations:
Update the gitSelfSignedCert property. To do that, execute:

$ oc patch checluster codeready-workspaces -n workspaces --type=json \
    -p '[{"op": "replace", "path": "/spec/server/gitSelfSignedCert", "value": true}]'
Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The repository’s .git/config file contains information about the Git server host (its URL) and the path to the certificate in the http section (see the Git documentation about git-config). For example:

[http "https://10.33.177.118:3000"]
    sslCAInfo = /etc/che/git/cert/ca.crt
6.4. Adding self-signed SSL certificates to CodeReady Workspaces
When a CodeReady Workspaces user attempts to authenticate with RH-SSO that uses OpenShift OAuth, authentication fails if RH-SSO does not know the certificates needed for authorization.
To fix this problem, configure CodeReady Workspaces to authorize HTTPS communication with various components, such as identity and Git servers, by adding information about the self-signed SSL certificates to the CodeReady Workspaces configuration.
Prerequisites
- The OpenShift command-line tool, oc, is installed.
Procedure
- Save the desired self-signed certificates to a local file system.
Create a new configMap with the required self-signed SSL certificates:
$ oc create configmap <configMap-name> --from-file=<certificate-file-path> -n=<che-namespace-name>
To apply more than one certificate, add another --from-file=<certificate-file-path> option to the above command.
NoteUse these steps with existing instances of CodeReady Workspaces. To install a new instance of CodeReady Workspaces with self-signed SSL certificates, create a new Che Custom Resource or Helm Chart property, based on the installation method selected, instead of updating the existing configuration.
For a CodeReady Workspaces Operator deployment:

Define a name for the newly created configMap by editing the spec.server.serverTrustStoreConfigMapName Che Custom Resource property to match the previously created configMap:

$ oc patch checluster codeready-workspaces -n che --type=json \
    -p '[{"op": "replace", "path": "/spec/server/serverTrustStoreConfigMapName", "value": "<config-map-name>"}]'
Verification
If the certificates have been added correctly, the CodeReady Workspaces server starts and obtains RH-SSO configuration over HTTPS with a self-signed SSL certificate, allowing users to:
- Access the CodeReady Workspaces server.
- Log in using OpenShift OAuth.
- Clone from a Git repository that has a custom self-signed SSL certificate over HTTPS.
6.5. CodeReady Workspaces configMaps fields reference
6.5.1. server
settings related to the CodeReady Workspaces server
Property | Default value | Description |
---|---|---|
| omit | An optional host name or URL to an alternative container registry to pull images from. This value overrides the container registry host name defined in all default container images involved in a CodeReady Workspaces deployment. This is particularly useful to install CodeReady Workspaces in an air-gapped environment. |
| omit | Optional repository name of an alternative container registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in a CodeReady Workspaces deployment. This is particularly useful to install CodeReady Workspaces in an air-gapped environment. |
|
| Enables the debug mode for CodeReady Workspaces server. |
|
| Flavor of the installation. |
| The Operator automatically sets the value. | A public host name of the installed CodeReady Workspaces server. |
|
| Overrides the image pull policy used in CodeReady Workspaces deployment. |
| omit | Overrides the tag of the container image used in CodeReady Workspaces deployment. Omit it or leave it empty to use the default image tag provided by the Operator. |
| omit | Overrides the container image used in CodeReady Workspaces deployment. This does not include the container image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
|
|
Log level for the CodeReady Workspaces server: |
| omit | Custom cluster role bound to the user for the workspaces. Omit or leave empty to use the default roles. |
| omit |
Map of additional environment variables that will be applied in the generated |
| omit | Overrides the container image used in the Devfile registry deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
|
| Overrides the memory limit used in the Devfile registry deployment. |
|
| Overrides the memory request used in the Devfile registry deployment. |
|
| Overrides the image pull policy used in the Devfile registry deployment. |
| The Operator automatically sets the value. |
Public URL of the Devfile registry that serves sample, ready-to-use devfiles. Set it if you use an external devfile registry (see the |
|
|
Instructs the Operator to deploy a dedicated Devfile registry server. By default a dedicated devfile registry server is started. If |
|
|
Instructs the Operator to deploy a dedicated Plugin registry server. By default, a dedicated plug-in registry server is started. If |
| omit |
List of hosts that will not use the configured proxy. Use |
| omit | Overrides the container image used in the Plugin registry deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
|
| Overrides the memory limit used in the Plugin registry deployment. |
|
| Overrides the memory request used in the Plugin registry deployment. |
|
| Overrides the image pull policy used in the Plugin registry deployment. |
| the Operator sets the value automatically |
Public URL of the Plugin registry that serves sample ready-to-use devfiles. Set it only when using an external devfile registry (see the |
| omit | Password of the proxy server. Only use when proxy configuration is required. |
| omit |
Port of the proxy server. Only use when configuring a proxy is required (see also the |
| omit |
URL (protocol+host name) of the proxy server. This drives the appropriate changes in the |
| omit |
User name of the proxy server. Only use when configuring a proxy is required (see also the |
|
|
Enables the support of OpenShift clusters with routers that use self-signed certificates. When enabled, the Operator retrieves the default self-signed certificate of OpenShift routes and adds it to the Java trust store of the CodeReady Workspaces server. Required when activating the |
|
| Overrides the memory limit used in the CodeReady Workspaces server deployment. |
|
| Overrides the memory request used in the CodeReady Workspaces server deployment. |
|
|
Instructs the Operator to deploy CodeReady Workspaces in TLS mode. Enabling TLS requires enabling the |
6.5.2. database
configuration settings related to the database used by CodeReady Workspaces
Property | Default value | Description |
---|---|---|
|
| PostgreSQL database name that the CodeReady Workspaces server uses to connect to the database. |
| the Operator sets the value automatically |
PostgreSQL Database host name that the CodeReady Workspaces server uses to connect to. Defaults to |
| auto-generated value | PostgreSQL password that the CodeReady Workspaces server uses to connect to the database. |
|
|
PostgreSQL Database port that the CodeReady Workspaces server uses to connect to. Override this value only when using an external database (see field |
|
| PostgreSQL user that the CodeReady Workspaces server uses to connect to the database. |
|
|
Instructs the Operator to deploy a dedicated database. By default, a dedicated PostgreSQL database is deployed as part of the CodeReady Workspaces installation. If set to |
|
Always | Overrides the image pull policy used in the PostgreSQL database deployment. |
| omit | Overrides the container image used in the PostgreSQL database deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
6.5.3. auth
configuration settings related to authentication used by CodeReady Workspaces installation
Property | Default value | Description |
---|---|---|
|
|
By default, a dedicated Identity Provider server is deployed as part of the CodeReady Workspaces installation. But if |
|
| Overrides the name of the Identity Provider admin user. |
| omit |
Name of an Identity provider (Keycloak / RH SSO) |
|
| Overrides the image pull policy used in the Identity Provider (Keycloak / RH SSO) deployment. |
| omit | Overrides the container image used in the Identity Provider (Keycloak / RH SSO) deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. |
| omit |
Overrides the password of Keycloak admin user. Override it only when using an external Identity Provider (see the |
| the Operator sets the value automatically |
Password for The Identity Provider (Keycloak / RH SSO) to connect to the database. This is useful to override it ONLY if you use an external Identity Provider (see the |
| omit |
Name of an Identity provider (Keycloak / RH SSO) realm. Override it only when using an external Identity Provider (see the |
| the Operator sets the value automatically |
Instructs the Operator to deploy a dedicated Identity Provider (Keycloak or RH SSO instance). Public URL of the Identity Provider server (Keycloak / RH SSO server). Set it only when using an external Identity Provider (see the |
| the Operator sets the value automatically |
Name of the OpenShift |
| the Operator sets the value automatically |
Name of the secret set in the OpenShift |
|
|
Enables the integration of the identity provider (Keycloak / RHSSO) with OpenShift OAuth. This allows users to log in with their OpenShift login and have their workspaces created under personal OpenShift namespaces. The |
|
|
Forces the default |
6.5.4. storage
configuration settings related to persistent storage used by CodeReady Workspaces
Property | Default value | Description |
---|---|---|
| omit | Storage class for the Persistent Volume Claim dedicated to the PostgreSQL database. Omit or leave empty to use a default storage class. |
|
| Instructs the CodeReady Workspaces server to launch a special Pod to pre-create a subpath in the Persistent Volumes. Enable it according to the configuration of your K8S cluster. |
|
| Size of the persistent volume claim for workspaces. |
| omit |
Overrides the container image used to create sub-paths in the Persistent Volumes. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. See also the |
|
|
Available options:`common` (all workspaces PVCs in one volume), |
| omit | Storage class for the Persistent Volume Claims dedicated to the workspaces. Omit or leave empty to use a default storage class. |
6.5.5. k8s
configuration settings specific to CodeReady Workspaces installations on Kubernetes
Property | Default value | Description |
---|---|---|
|
| Ingress class that defines which controller manages ingresses. |
| omit |
Global ingress domain for a K8S cluster. This field must be explicitly specified. This drives the |
|
|
Strategy for ingress creation. This can be |
|
| FSGroup the CodeReady Workspaces Pod and Workspace Pods containers will run in. |
|
| ID of the user the CodeReady Workspaces Pod and Workspace Pods containers will run as. |
| omit |
Name of a secret that is used to set ingress TLS termination if TLS is enabled. See also the |
6.5.6. installation
defines the observed state of CodeReady Workspaces installation
Property | Description |
---|---|
|
Status of a CodeReady Workspaces installation. Can be |
| Public URL to the CodeReady Workspaces server. |
| Currently installed CodeReady Workspaces version. |
| Indicates whether a PostgreSQL instance has been correctly provisioned. |
| Public URL to the Devfile registry. |
| A URL to where to find help related to the current Operator status. |
| Indicates whether an Identity Provider instance (Keycloak / RH SSO) has been provisioned with realm, client and user. |
| Public URL to the Identity Provider server (Keycloak / RH SSO). |
| A human-readable message with details about why the Pod is in this state. |
| Indicates whether an Identity Provider instance (Keycloak / RH SSO) has been configured to integrate with the OpenShift OAuth. |
| Public URL to the Plugin registry. |
| A brief CamelCase message with details about why the Pod is in this state. |
6.5.7. Limits for workspaces
Property | Default value | Description |
---|---|---|
|
| The maximum amount of RAM that a user can allocate to a workspace when they create a new workspace. The RAM slider is adjusted to this maximum value. |
|
| The length of time that a user can be idle in their workspace before the system suspends the workspace and then stops it. Idleness is the length of time that the user has not interacted with the workspace, meaning that one of the agents has not received any interaction. Leaving a browser window open counts toward idleness. |
6.5.8. Limits for the workspaces of a user
Property | Default value | Description |
---|---|---|
|
| The total amount of RAM that a single user is allowed to allocate to running workspaces. A user can allocate this RAM to a single workspace or spread it across multiple workspaces. |
|
| The maximum number of workspaces that a user is allowed to create. The user will be presented with an error message if they try to create additional workspaces. This applies to the total number of both running and stopped workspaces. |
|
| The maximum number of running workspaces that a single user is allowed to have. If the user has reached this threshold and they try to start an additional workspace, they will be prompted with an error message. The user will need to stop a running workspace to activate another. |
6.5.9. Limits for the workspaces of an organization
Property | Default value | Description |
---|---|---|
|
| The total amount of RAM that a single organization (team) is allowed to allocate to running workspaces. An organization owner can allocate this RAM however they see fit across the team’s workspaces. |
|
| The maximum number of workspaces that an organization is allowed to own. The organization will be presented with an error message if they try to create additional workspaces. This applies to the total number of both running and stopped workspaces. |
|
| The maximum number of running workspaces that a single organization is allowed. If the organization has reached this threshold and they try to start an additional workspace, they will be prompted with an error message. The organization will need to stop a running workspace to activate another. |
Chapter 7. Uninstalling CodeReady Workspaces
This section describes uninstallation procedures for Red Hat CodeReady Workspaces installed on OpenShift. The uninstallation process leads to a complete removal of CodeReady Workspaces-related user data. The appropriate uninstallation method depends on what method was used to install the CodeReady Workspaces instance.
- For CodeReady Workspaces installed using OperatorHub, see Section 7.1, “Uninstalling CodeReady Workspaces after OperatorHub installation”.
- For CodeReady Workspaces installed using crwctl, see Section 7.2, “Uninstalling CodeReady Workspaces after crwctl installation”
7.1. Uninstalling CodeReady Workspaces after OperatorHub installation
Users have two options for uninstalling CodeReady Workspaces from an OpenShift cluster. The following sections describe these methods:
- Using the OpenShift Administrator Perspective web UI
- Using oc commands from the terminal
7.1.1. Uninstalling CodeReady Workspaces using the OpenShift web console
This section describes how to uninstall CodeReady Workspaces from a cluster using the OpenShift Administrator Perspective main menu.
Prerequisites
- CodeReady Workspaces was installed on an OpenShift cluster using OperatorHub.
Procedure: deleting the CodeReady Workspaces deployment
- Open the OpenShift web console.
- Navigate to the Operators → Installed Operators section.
- Click Red Hat CodeReady Workspaces in the list of installed operators.
- Navigate to the Red Hat CodeReady Workspaces Cluster tab.
- In the row that displays information about the specific CodeReady Workspaces cluster, delete the CodeReady Workspaces Cluster deployment using the drop-down menu illustrated as three horizontal dots situated on the right side of the screen.
- Alternatively, delete the CodeReady Workspaces deployment by clicking the displayed Red Hat CodeReady Workspaces Cluster, red-hat-codeready-workspaces, and selecting the Delete cluster option in the Actions drop-down menu on the top right.
Procedure: deleting the CodeReady Workspaces Operator
- Open the OpenShift web console.
- Navigate to the Operators → Installed Operators section in the OpenShift Administrator Perspective.
- In the row that displays information about the specific Red Hat CodeReady Workspaces Operator, uninstall the CodeReady Workspaces Operator using the drop-down menu illustrated as three horizontal dots situated on the right side of the screen.
- Accept the selected option, Also completely remove the Operator from the selected namespace.
- Alternatively, uninstall the Red Hat CodeReady Workspaces Operator by clicking the displayed Red Hat CodeReady Workspaces Operator, Red Hat CodeReady Workspaces, and selecting the Uninstall Operator option in the Actions drop-down menu on the top right.
7.1.2. Uninstalling CodeReady Workspaces using oc commands
This section provides instructions on how to uninstall a CodeReady Workspaces instance using oc commands.
Prerequisites
- CodeReady Workspaces was installed on an OpenShift cluster using OperatorHub.
- OpenShift command-line tools (oc) are installed on the local workstation.
Procedure
The following procedure provides command-line outputs as examples. Note that output in the user terminal may differ.
To uninstall a CodeReady Workspaces instance from a cluster:
Sign in to the cluster:
$ oc login -u <username> -p <password> <cluster_URL>
Switch to the project where the CodeReady Workspaces instance is deployed:
$ oc project <codeready-workspaces_project>
Obtain the CodeReady Workspaces cluster name. The following shows a cluster named red-hat-codeready-workspaces:

$ oc get checluster
NAME                           AGE
red-hat-codeready-workspaces   27m
Delete the CodeReady Workspaces cluster:
$ oc delete checluster red-hat-codeready-workspaces
checluster.org.eclipse.che "red-hat-codeready-workspaces" deleted
Obtain the name of the CodeReady Workspaces cluster service version (CSV) module. The following detects a CSV module named red-hat-codeready-workspaces.v2.1:

$ oc get csv
NAME                                DISPLAY                        VERSION   REPLACES                            PHASE
red-hat-codeready-workspaces.v2.1   Red Hat CodeReady Workspaces   2.1       red-hat-codeready-workspaces.v2.0   Succeeded
Delete the CodeReady Workspaces CSV:
$ oc delete csv red-hat-codeready-workspaces.v2.1
clusterserviceversion.operators.coreos.com "red-hat-codeready-workspaces.v2.1" deleted
7.2. Uninstalling CodeReady Workspaces after crwctl installation
This section describes how to uninstall an instance of Red Hat CodeReady Workspaces that was installed using the crwctl tool.
- For CodeReady Workspaces installed using the crwctl server:start command with the -n argument (custom namespace specified), use the -n argument also to uninstall the CodeReady Workspaces instance.
- For installations that did not use the -n argument, the created namespace is named workspaces by default.
Prerequisites
- CodeReady Workspaces was installed on an OpenShift cluster using crwctl.
- OpenShift command-line tools (oc) and crwctl are installed on the local workstation.
- The user is logged in to a CodeReady Workspaces cluster using oc.
Procedure
Stop the Red Hat CodeReady Workspaces Server:
$ crwctl server:stop
Obtain the name of the CodeReady Workspaces namespace:
$ oc get checluster --all-namespaces -o=jsonpath="{.items[*].metadata.namespace}"
Remove CodeReady Workspaces from the cluster:
$ crwctl server:delete -n <namespace>
This removes all CodeReady Workspaces installations from the cluster.