Applications

OpenShift Dedicated 4

Creating and managing applications on OpenShift Dedicated 4

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for the various ways to create and manage instances of user-provisioned applications running on OpenShift Dedicated.

Chapter 1. Projects

1.1. Working with projects

A project allows a community of users to organize and manage their content in isolation from other communities.

Note

Projects starting with openshift- and kube- are default projects. These projects host cluster components that run as Pods and other infrastructure components. As such, OpenShift Dedicated does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these Projects using the oc adm new-project command.
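
For example, a cluster administrator can create such a project with:

$ oc adm new-project <project_name>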

1.1.1. Creating a project using the web console

If allowed by your cluster administrator, you can create a new project.

Note

Projects starting with openshift- and kube- are considered critical by OpenShift Dedicated. As such, OpenShift Dedicated does not allow you to create Projects starting with openshift- using the web console.

Procedure

  1. Navigate to Home → Projects.
  2. Click Create Project.
  3. Enter your project details.
  4. Click Create.

1.1.2. Creating a project using the Developer perspective in the web console

You can use the Developer perspective in the OpenShift Dedicated web console to create a project in your namespace.

Note

Projects starting with openshift- and kube- host cluster components that run as Pods and other infrastructure components. As such, OpenShift Dedicated does not allow you to create Projects starting with openshift- or kube- using the Developer perspective. Cluster administrators can create these Projects using the oc adm new-project command.

Prerequisites

  • Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Dedicated.

Procedure

You can create a project using the Developer perspective, as follows:

  1. In the Add view, click the Project drop-down menu to see a list of all available projects. Select Create Project.

  2. In the Create Project dialog box, enter a unique name for the Name field. For example, enter myproject as the name of the project in the Name field.
  3. Optional: Add the Display Name and Description details for the Project.
  4. Click Create.
  5. Navigate to the Advanced → Project Details page to see the dashboard for your project.
  6. In the Project drop-down menu at the top of the screen, select all projects to list all of the projects in your cluster. If you have adequate permissions for a project, you can use the Options menu to edit or delete the project.

1.1.3. Creating a project using the CLI

If allowed by your cluster administrator, you can create a new project.

Note

Projects starting with openshift- and kube- are considered critical by OpenShift Dedicated. As such, OpenShift Dedicated does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these Projects using the oc adm new-project command.

Procedure

  1. Run:
$ oc new-project <project_name> \
    --description="<description>" --display-name="<display_name>"

For example:

$ oc new-project hello-openshift \
    --description="This is an example project" \
    --display-name="Hello OpenShift"
Note

The number of projects you are allowed to create may be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one.

1.1.4. Viewing a project using the web console

Procedure

  1. Navigate to Home → Projects.
  2. Select a project to view.

    On this page, click the Workloads button to see workloads in the project.

1.1.5. Viewing a project using the CLI

When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy.

Procedure

  1. To view a list of projects, run:

    $ oc get projects
  2. You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content:

    $ oc project <project_name>

1.1.6. Providing access permissions to your project using the Developer perspective

You can use the Project Access view in the Developer perspective to grant or revoke access permissions to your project.

Procedure

To add users to your project and provide Admin, View, or Edit access to them:

  1. In the Developer perspective, navigate to the Advanced → Project Access page.
  2. In the Project Access page, click Add Access to add a new row.

  3. Enter the user name, click the Select a role drop-down list, and select an appropriate role.
  4. Click Save.

You can also use:

  • The Select a role drop-down list, to modify the access permissions of an existing user.
  • The Remove Access icon, to completely remove the access permissions of an existing user to the project.
Note

Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective.
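
For reference, the same project roles can also be granted from the CLI. For example, to give a user edit access to your project:

$ oc policy add-role-to-user edit <user> -n <project_name>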

1.1.7. Adding to a project

Procedure

  1. Select Developer from the context selector at the top of the web console navigation menu.
  2. Click +Add.
  3. At the top of the page, select the name of the project that you want to add to.
  4. Click on a method for adding to your project, and then follow the workflow.

1.1.8. Checking project status using the web console

Procedure

  1. Navigate to Home → Projects.
  2. Select a project to see its status.

1.1.9. Checking project status using the CLI

Procedure

  1. Run:

    $ oc status

    This command provides a high-level overview of the current project, with its components and their relationships.

1.1.10. Deleting a project using the web console

You can delete a project by using the OpenShift Dedicated web console.

Note

If you do not have permissions to delete the project, the Delete Project option is not available.

Procedure

  1. Navigate to Home → Projects.
  2. Locate the project that you want to delete from the list of projects.
  3. On the far right side of the project listing, select Delete Project from the Options menu.
  4. When the Delete Project pane opens, enter the name of the project that you want to delete in the field.
  5. Click Delete.

1.1.11. Deleting a project using the CLI

When you delete a project, the server updates the project status to Terminating from Active. Then, the server clears all content from a project that is in the Terminating state before finally removing the project. While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console.

Procedure

  1. Run:

    $ oc delete project <project_name>

Chapter 2. Application life cycle management

2.1. Creating applications using the Developer perspective

The Developer perspective in the web console provides the following options in the Add view for creating applications and associated services and deploying them on OpenShift Dedicated:

  • From Git: Use this option to import an existing codebase in a Git repository to create, build, and deploy an application on OpenShift Dedicated.
  • Container Image: Use an existing image from an image stream or registry to deploy it on OpenShift Dedicated.
  • From Catalog: Explore the Developer Catalog to select the required applications, services, or source-to-image builders and add them to your project.
  • From Dockerfile: Import a Dockerfile from your Git repository to build and deploy an application.
  • YAML: Use the editor to add YAML or JSON definitions to create and modify resources.
  • Database: See the Developer Catalog to select the required database service and add it to your application.

Prerequisites

To create applications using the Developer perspective, ensure that:

  • You have logged in to the web console.
  • You are in the Developer perspective.
  • You have the appropriate roles and permissions to create applications and other workloads in OpenShift Dedicated.

2.1.1. Importing a codebase from Git to create an application

The following procedure walks you through the Import from Git option in the Developer perspective to create an application.

Create, build, and deploy an application on OpenShift Dedicated using an existing codebase in GitHub as follows:

Procedure

  1. In the Add view, click From Git to see the Import from Git form.

  2. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex. The URL is then validated.
  3. Optional: You can click Show Advanced Git Options to add details such as:

    • Git Reference to point to code in a specific branch, tag, or commit to be used to build the application.
    • Context Dir to specify the subdirectory for the application source code you want to use to build the application.
    • Source Secret to create a Secret Name with credentials for pulling your source code from a private repository.
  4. In the Builder section, after the URL is validated, an appropriate builder image is detected, indicated by a star, and automatically selected. For the https://github.com/sclorg/nodejs-ex Git URL, the Node.js builder image is selected by default. If required, you can change the version using the Builder Image Version drop-down list.
  5. In the General section:

    1. In the Application field, enter a unique name for the application grouping, for example, myapp. Ensure that the application name is unique in a namespace.
    2. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL.

      Note

      The resource name must be unique in a namespace. Modify the resource name if you get an error.

  6. In the Resources section, select:

    • Deployment, to create an application in plain Kubernetes style.
    • Deployment Config, to create an OpenShift style application.
    • Knative Service, to create a microservice.
    Note

    The Knative Service option is displayed in the Import from Git form only if the Serverless Operator is installed in your cluster. For further details, refer to the documentation on installing OpenShift Serverless.

  7. In the Advanced Options section, the Create a route to the application check box is selected by default so that you can access your application using a publicly available URL. You can clear the check box if you do not want to expose your application on a public route.
  8. Optional: You can use the following advanced options to further customize your application:

    Routing

    Click the Routing link to:

    • Customize the hostname for the route.
    • Specify the path the router watches.
    • Select the target port for the traffic from the drop-down list.
    • Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists.

      For serverless applications, the Knative Service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used.

    Build and Deployment Configuration
    Click the Build Configuration and Deployment Configuration links to see the respective configuration options. Some of the options are selected by default; you can customize them further by adding the necessary triggers and environment variables. For serverless applications, the Deployment Configuration option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig.
    Scaling

    Click the Scaling link to define the number of Pods or instances of the application you want to deploy initially.

    For serverless applications, you can:

    • Set the upper and lower limit for the number of pods that can be set by the autoscaler. If the lower limit is not specified, it defaults to zero.
    • Define the soft limit for the required number of concurrent requests per instance of the application at a given time. It is the recommended configuration for autoscaling. If not specified, it takes the value specified in the cluster configuration.
    • Define the hard limit for the number of concurrent requests allowed per instance of the application at a given time. This is configured in the revision template. If not specified, it defaults to the value specified in the cluster configuration.
    Resource Limit
    Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running.
    Labels
    Click the Labels link to add custom labels to your application.
  9. Click Create to create the application and see its build status in the Topology view.

2.2. Creating applications from installed Operators

Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Dedicated using Operators that have been installed by a cluster administrator.

This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Dedicated web console.

Additional resources

  • See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Dedicated.

2.2.1. Creating an etcd cluster using an Operator

This procedure walks through creating a new etcd cluster using the etcd Operator, managed by the Operator Lifecycle Manager (OLM).

Prerequisites

  • Access to an OpenShift Dedicated 4 cluster.
  • The etcd Operator already installed cluster-wide by an administrator.

Procedure

  1. Create a new project in the OpenShift Dedicated web console for this procedure. This example uses a project called my-etcd.
  2. Navigate to the Operators → Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of ClusterServiceVersions (CSVs). CSVs are used to launch and manage the software provided by the Operator.

    Tip

    You can get this list from the CLI using:

    $ oc get csv
  3. On the Installed Operators page, click Copied, and then click the etcd Operator to view more details and available actions:

    Figure 2.1. etcd Operator overview


    As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployments or ReplicaSets, but contain logic specific to managing etcd.

  4. Create a new etcd cluster:

    1. In the etcd Cluster API box, click Create New.
    2. The next screen allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster; an illustrative manifest is shown at the end of this section. For now, click Create to finalize. This triggers the Operator to start up the Pods, Services, and other components of the new etcd cluster.
  5. Click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator.

    Figure 2.2. etcd Operator resources


    Verify that a Kubernetes service has been created that allows you to access the database from other Pods in your project.

  6. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command:

    $ oc policy add-role-to-user edit <user> -n <target_project>

You now have an etcd cluster that will react to failures and rebalance data as Pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications.
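
For illustration only, a minimal EtcdCluster manifest resembles the following; the exact apiVersion and fields come from the template that the console pre-populates for the installed Operator version:

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example
spec:
  size: 3
  version: 3.2.13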

2.3. Creating applications using the CLI

You can create an OpenShift Dedicated application from components that include source or binary code, images, and templates by using the OpenShift Dedicated CLI.

The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates.

2.3.1. Creating an application from source code

With the new-app command you can create applications from source code in a local or remote Git repository.

The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a deployment configuration to deploy the new image, and a service to provide load-balanced access to the deployment running your image.

OpenShift Dedicated automatically detects whether the Pipeline or Source build strategy should be used, and in the case of Source builds, detects an appropriate language builder image.

2.3.1.1. Local

To create an application from a Git repository in a local directory:

$ oc new-app /<path to source code>
Note

If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Dedicated cluster. If there is no recognized remote, running the new-app command will create a binary build.

2.3.1.2. Remote

To create an application from a remote Git repository:

$ oc new-app https://github.com/sclorg/cakephp-ex

To create an application from a private remote Git repository:

$ oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret
Note

If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your BuildConfig to access the repository.

You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory:

$ oc new-app https://github.com/sclorg/s2i-ruby-container.git \
    --context-dir=2.0/test/puma-test-app

Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL:

$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4

2.3.1.3. Build strategy detection

If a Jenkinsfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Dedicated generates a Pipeline build strategy.

Otherwise, it generates a Source build strategy.

Override the build strategy by setting the --strategy flag to docker, pipeline, or source.

$ oc new-app /home/user/code/myapp --strategy=docker
Note

The oc command requires that files containing build source are available in a remote Git repository. For all source builds, you must have a Git remote configured; you can verify the configured remotes by running git remote -v.

2.3.1.4. Language Detection

If you use the Source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository:

Table 2.1. Languages Detected by new-app

Language    Files
dotnet      project.json, *.csproj
jee         pom.xml
nodejs      app.json, package.json
perl        cpanfile, index.pl
php         composer.json, index.php
python      requirements.txt, setup.py
ruby        Gemfile, Rakefile, config.ru
scala       build.sbt
golang      Godeps, main.go

After a language is detected, new-app searches the OpenShift Dedicated server for imagestreamtags that have a supports annotation matching the detected language, or an imagestream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name.

You can override the image the builder uses for a particular source repository by specifying the image, either an imagestream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out.

For example, to use the myproject/my-ruby imagestream with the source in a remote repository:

$ oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git

To use the openshift/ruby-20-centos7:latest container imagestream with the source in a local repository:

$ oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app
Note

Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax.

The -i <image> <repository> invocation requires that new-app attempt to clone the repository in order to determine what type of artifact it is, so this will fail if Git is not available.

The -i <image> --code <repository> invocation requires that new-app clone the repository in order to determine whether the image should be used as a builder for the source code, or deployed separately, as in the case of a database image.

2.3.2. Creating an application from an image

You can deploy an application from an existing image. Images can come from imagestreams in the OpenShift Dedicated server, images in a specific registry, or images in the local Docker server.

The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an imagestream using the -i|--image argument.

Note

If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Dedicated cluster nodes.

2.3.2.1. Docker Hub MySQL image

Create an application from the Docker Hub MySQL image, for example:

$ oc new-app mysql

2.3.2.2. Image in a private registry

To create an application using an image in a private registry, specify the full container image specification:

$ oc new-app myregistry:5000/example/myimage

2.3.2.3. Existing imagestream and optional imagestreamtag

Create an application from an existing imagestream and optional imagestreamtag:

$ oc new-app my-stream:v1

2.3.3. Creating an application from a template

You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application.

Create an application from a stored template, for example:

$ oc create -f examples/sample-app/application-template-stibuild.json
$ oc new-app ruby-helloworld-sample

To directly use a template in your local file system, without first storing it in OpenShift Dedicated, use the -f|--file argument. For example:

$ oc new-app -f examples/sample-app/application-template-stibuild.json

2.3.3.1. Template Parameters

When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template:

$ oc new-app ruby-helloworld-sample \
    -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword

You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=-:

$ cat helloworld.params
ADMIN_USERNAME=admin
ADMIN_PASSWORD=mypassword
$ oc new-app ruby-helloworld-sample --param-file=helloworld.params
$ cat helloworld.params | oc new-app ruby-helloworld-sample --param-file=-

2.3.4. Modifying application creation

The new-app command generates OpenShift Dedicated objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior.

Table 2.2. new-app output objects

BuildConfig

A BuildConfig is created for each source repository that is specified in the command line. The BuildConfig specifies the strategy to use, the source location, and the build output location.

ImageStreams

For BuildConfig, two ImageStreams are usually created. One represents the input image. With Source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app, then an imagestream is created for that image as well.

DeploymentConfig

A DeploymentConfig is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig.

Service

The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. In order to expose a different port, after new-app has completed, simply use the oc expose command to generate additional services.

Other

Other objects can be generated when instantiating templates, according to the template.

2.3.4.1. Specifying environment variables

When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time:

$ oc new-app openshift/postgresql-92-centos7 \
    -e POSTGRESQL_USER=user \
    -e POSTGRESQL_DATABASE=db \
    -e POSTGRESQL_PASSWORD=password

The variables can also be read from file using the --env-file argument:

$ cat postgresql.env
POSTGRESQL_USER=user
POSTGRESQL_DATABASE=db
POSTGRESQL_PASSWORD=password
$ oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env

Additionally, environment variables can be given on standard input by using --env-file=-:

$ cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-
Note

Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument.

2.3.4.2. Specifying build environment variables

When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time:

$ oc new-app openshift/ruby-23-centos7 \
    --build-env HTTP_PROXY=http://myproxy.net:1337/ \
    --build-env GEM_HOME=~/.gem

The variables can also be read from a file using the --build-env-file argument:

$ cat ruby.env
HTTP_PROXY=http://myproxy.net:1337/
GEM_HOME=~/.gem
$ oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env

Additionally, environment variables can be given on standard input by using --build-env-file=-:

$ cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-

2.3.4.3. Specifying labels

When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application.

$ oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world
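
Because the created objects share this label, you can later operate on them as a group. For example, to delete them all:

$ oc delete all -l name=hello-world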

2.3.4.4. Viewing the output without creation

To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Dedicated objects.

To output new-app artifacts to a file, edit them, then create them:

$ oc new-app https://github.com/openshift/ruby-hello-world \
    -o yaml > myapp.yaml
$ vi myapp.yaml
$ oc create -f myapp.yaml

2.3.4.5. Creating objects with different names

Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command:

$ oc new-app https://github.com/openshift/ruby-hello-world --name=myapp

2.3.4.6. Creating objects in a different project

Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument:

$ oc new-app https://github.com/openshift/ruby-hello-world -n myproject

2.3.4.7. Creating multiple objects

The new-app command allows you to create multiple applications by specifying multiple parameters on a single command line. Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images.

To create an application from a source repository and a Docker Hub image:

$ oc new-app https://github.com/openshift/ruby-hello-world mysql
Note

If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator.

2.3.4.8. Grouping images and source in a single Pod

The new-app command allows you to deploy multiple images together in a single Pod. To specify which images to group together, use the + separator. The --group command-line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with other images, specify its builder image in the group:

$ oc new-app ruby+mysql

To deploy an image built from source and an external image together:

$ oc new-app \
    ruby~https://github.com/openshift/ruby-hello-world \
    mysql \
    --group=ruby+mysql

2.3.4.9. Searching for images, templates, and other inputs

To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP:

$ oc new-app --search php

2.4. Viewing application composition using the Topology view

The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them.

Prerequisites

To view your applications in the Topology view and interact with them, ensure that:

  • You have logged in to the web console.
  • You are in the Developer perspective.
  • You have created and deployed an application on OpenShift Dedicated using the Developer perspective.

2.4.1. Viewing the topology of your application

You can navigate to the Topology view using the left navigation panel in the Developer perspective. After you create an application, you are directed automatically to the Topology view where you can see the status of the application Pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application.

A serverless application is visually indicated with the Knative symbol.

Note

Serverless applications take some time to load and display on the Topology view. When you create a serverless application, it first creates a service resource and then a revision. After that it is deployed and displayed on the Topology view. If it is the only workload, you might be redirected to the Add page. Once the revision is deployed, the serverless application is displayed on the Topology view.

The status or phase of the Pod is indicated by different colors and tooltips as Running, Not Ready, Warning, Failed, Pending, Succeeded, Terminating, or Unknown. For more information about pod status, see the Kubernetes documentation.

After you create an application and an image is deployed, the status is shown as Pending. After the application is built, it is displayed as Running.


The application resource name is appended with indicators for the different types of resource objects as follows:

  • DC: DeploymentConfig
  • D: Deployment
  • SS: StatefulSet
  • DS: DaemonSet

2.4.2. Interacting with the application and the components

The Topology view in the Developer perspective of the web console provides the following options to interact with the application and the components:

  • Click Open URL to see your application exposed by the route on a public URL.
  • Click Edit Source code to access your source code and modify it.

    Note

    This feature is available only when you create applications using the From Git, From Catalog, and the From Dockerfile options.

    If the Eclipse Che Operator is installed in your cluster, a Che workspace is created and you are directed to the workspace to edit your source code. If it is not installed, you will be directed to the Git repository your source code is hosted in.

  • Hover your cursor over the lower left icon on the Pod to see the name of the latest build and its status. The status of the application build is indicated as New, Pending, Running, Completed, Failed, or Canceled.
  • Use the Shortcuts menu listed on the upper-right of the screen to navigate components in the Topology view.
  • Use the List View icon to see a list of all your applications and use the Topology View icon to switch back to the Topology view.

2.4.3. Scaling application pods and checking builds and routes

The Topology view provides the details of the deployed components in the Overview panel. You can use the Overview and Resources tabs to scale the application Pods, check build status, services, and routes as follows:

  • Click on the component node to see the Overview panel to the right. Use the Overview tab to:

    • Scale your Pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the Pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic.
    • Check the Labels, Annotations, and Status of the application.
  • Click the Resources tab to:

    • See the list of all the Pods, view their status, access logs, and click on the Pod to see the Pod details.
    • See the builds, their status, access logs, and start a new build if needed.
    • See the services and routes used by the component.

    For serverless applications, the Resources tab provides information on the revision, routes, and the configurations used for that component.
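
For reference, the same scaling can also be done from the CLI; a minimal sketch, assuming a Deployment-backed workload:

$ oc scale deployment/<name> --replicas=<number>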

2.4.4. Grouping multiple components within an application

You can use the Add page to add multiple components or services to your project and use the Topology page to group applications and resources within an application group. The following procedure adds a MongoDB database service to an existing application with a Node.js component.

Prerequisites

  • Ensure that you have created and deployed a Node.js application on OpenShift Dedicated using the Developer perspective.

Procedure

  1. Create and deploy the MongoDB service to your project as follows:

    1. In the Developer perspective, navigate to the Add view and select the Database option to see the Developer Catalog, which has multiple options that you can add as components or services to your application.
    2. Click on the MongoDB option to see the details for the service.
    3. Click Instantiate Template to see an automatically populated template with details for the MongoDB service, and click Create to create the service.
  2. On the left navigation panel, click Topology to see the MongoDB service deployed in your project.
  3. To add the MongoDB service to the existing application group, select the mongodb Pod and drag it to the application; the MongoDB service is added to the existing application group.
  4. Dragging a component and adding it to an application group automatically adds the required labels to the component. Click on the MongoDB service node to see the label app.kubernetes.io/part-of=myapp added to the Labels section in the Overview Panel.


Alternatively, you can also add the component to an application as follows:

  1. To add the MongoDB service to your application, click on the mongodb Pod to see the Overview panel to the right.
  2. Click the Actions drop-down menu on the upper right of the panel and select Edit Application Grouping.
  3. In the Edit Application Grouping dialog box, click the Select an Application drop-down list, and select the appropriate application group.
  4. Click Save to see the MongoDB service added to the application group.

You can remove a component from an application group by selecting the component and using Shift+drag to drag it out of the application group.
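
The grouping itself is just a label, so you can also apply it from the CLI; an illustrative sketch, with placeholder resource names:

$ oc label <resource_type>/<resource_name> app.kubernetes.io/part-of=myapp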

2.4.5. Connecting components within an application and across applications

In addition to grouping multiple components within an application, you can also use the Topology view to connect components with each other. You can either use a binding connector or a visual one to connect components.

When an application is connected to a service using a binding connector, a ServiceBindingRequest is created with the necessary binding data. After the request is successful, the application is redeployed. A binding connection between the components can be established only if the target node is an Operator-backed service. This is indicated by the Create a binding connector tool-tip, which appears when you drag an arrow to such a target node.

A visual connector establishes only a visual connection between the components, depicting an intent to connect. No interaction between the components is established. If the target node is not a binding Operator-backed service, the Create a visual connector tool-tip is displayed when you drag an arrow to a target node.

2.4.5.1. Creating a visual connection between components

You can connect a MongoDB service with a Node.js application visually as follows:

Prerequisites

  • Ensure that you have created and deployed a Node.js application using the Developer perspective.
  • Ensure that you have created and deployed a MongoDB service using the Developer perspective.

Procedure

  1. Hover over the MongoDB service to see a dangling arrow on the node.

  2. Click and drag the arrow towards the Node.js component to connect the MongoDB service with it.
  3. Click on the MongoDB service to see the Overview Panel. In the Annotations section, click the edit icon to see the Key = app.openshift.io/connects-to and Value = nodejs-ex annotation added to the service.


    Similarly, you can create other applications and components and establish connections between them.


2.4.5.2. Creating a binding connection between components

Important

Service Binding is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Note

Currently, only service instances of a few specific Operators, such as the etcd Operator and the PostgreSQL Database Operator, are bindable.

To connect the Node.js component using a binding connector:

Prerequisite

  • Ensure that you have created and deployed a Node.js application using the Developer perspective.
  • Ensure that you have installed the Service Binding Operator from OperatorHub.

Procedure

  1. Install the DB Operator using a backing OperatorSource. A backing OperatorSource exposes the binding information in secrets, ConfigMaps, status, and spec attributes.

    1. In the Add view, click the YAML option to see the Import YAML screen.
    2. Add the following YAML file to apply the OperatorSource:

      apiVersion: operators.coreos.com/v1
      kind: OperatorSource
      metadata:
        name: db-operators
        namespace: openshift-marketplace
      spec:
        type: appregistry
        endpoint: https://quay.io/cnr
        registryNamespace: pmacik
    3. Click Create to create the OperatorSource in your cluster.
  2. Install the PostgreSQL Database Operator:

    1. In the Administrator perspective of the console, navigate to the Operators → OperatorHub.
    2. In the Database category, select the PostgreSQL Database Operator and install it.
  3. Create a database (DB) instance for the application:

    1. Switch to the Developer perspective and ensure that you are in the appropriate project.
    2. In the Add view, click the YAML option to see the Import YAML screen.
    3. Add the service instance YAML in the editor and click Create to deploy the service. Following is an example of what the service YAML will look like:

      apiVersion: postgresql.baiju.dev/v1alpha1
      kind: Database
      metadata:
        name: db-demo
        namespace: test-project
      spec:
        image: docker.io/postgres
        imageName: postgres
        dbName: db-demo

      A DB instance is now deployed in the Topology view.

  4. In the Topology view, hover over the Node.js component to see a dangling arrow on the node.
  5. Click and drag the arrow towards the db-demo-postgresql service to make a binding connection with the Node.js application. When an application is connected to a service using the binding connector, a ServiceBindingRequest is created, and the Service Binding Operator controller injects the DB connection information into the application Deployment as environment variables using an intermediate Secret called binding-request. After the request is successful, the application is redeployed.


2.4.6. Labels and annotations used for the Topology view

The Topology view uses the following labels and annotations:

Icon displayed in the node
Icons in the node are defined by looking for matching icons using the app.openshift.io/runtime label, followed by the app.kubernetes.io/name label. This matching is done using a predefined set of icons.
Link to the source code editor or the source
The app.openshift.io/vcs-uri annotation is used to create links to the source code editor.
Node Connector
The app.openshift.io/connects-to annotation is used to connect the nodes.
App grouping
The app.kubernetes.io/part-of=<appname> label is used to group the applications, services, and components.
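
For illustration, the metadata of a Deployment that the Topology view can render with an icon, a source-code link, a connector, and an application grouping might look like the following; all names and values here are examples only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-ex
  labels:
    app.kubernetes.io/name: nodejs
    app.kubernetes.io/part-of: myapp
    app.openshift.io/runtime: nodejs
  annotations:
    app.openshift.io/vcs-uri: 'https://github.com/sclorg/nodejs-ex'
    app.openshift.io/connects-to: mongodb
spec: { ... }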

For detailed information on the labels and annotations OpenShift Dedicated applications must use, see Guidelines for labels and annotations for OpenShift applications.

2.5. Deleting applications

You can delete applications created in your project.

2.5.1. Deleting applications using the Developer perspective

You can delete an application and all of its associated components using the Topology view in the Developer perspective:

  1. Click the application you want to delete to see the side panel with the resource details of the application.
  2. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box.
  3. Enter the name of the application and click Delete to delete it.

You can also right-click the application you want to delete and click Delete Application to delete it.

Chapter 3. Deployments

3.1. Understanding Deployments and DeploymentConfigs

Deployments and DeploymentConfigs in OpenShift Dedicated are API objects that provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects:

  • A DeploymentConfig or a Deployment, either of which describes the desired state of a particular component of the application as a Pod template.
  • DeploymentConfigs involve one or more ReplicationControllers, which contain a point-in-time record of the state of a DeploymentConfig as a Pod template. Similarly, Deployments involve one or more ReplicaSets, a successor of ReplicationControllers.
  • One or more Pods, which represent an instance of a particular version of an application.

3.1.1. Building blocks of a deployment

Deployments and DeploymentConfigs are enabled by the use of native Kubernetes API objects ReplicationControllers and ReplicaSets, respectively, as their building blocks.

Users do not have to manipulate ReplicationControllers, ReplicaSets, or Pods owned by DeploymentConfigs or Deployments. The deployment system ensures that changes are propagated appropriately.

Tip

If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a Custom deployment strategy.

The following sections provide further details on these objects.

3.1.1.1. ReplicationControllers

A ReplicationController ensures that a specified number of replicas of a Pod are running at all times. If Pods exit or are deleted, the ReplicationController acts to instantiate more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount.

A ReplicationController configuration consists of:

  • The number of replicas desired (which can be adjusted at runtime).
  • A Pod definition to use when creating a replicated Pod.
  • A selector for identifying managed Pods.

A selector is a set of labels assigned to the Pods that are managed by the ReplicationController. These labels are included in the Pod definition that the ReplicationController instantiates. The ReplicationController uses the selector to determine how many instances of the Pod are already running in order to adjust as needed.

The ReplicationController does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler.

The following is an example definition of a ReplicationController:

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1  1
  selector:    2
    name: frontend
  template:    3
    metadata:
      labels:  4
        name: frontend 5
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
1
The number of copies of the Pod to run.
2
The label selector of the Pod to run.
3
A template for the Pod the controller creates.
4
Labels on the Pod should include those from the label selector.
5
The maximum name length after expanding any parameters is 63 characters.
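
As noted above, the ReplicationController does not scale itself based on load; its replica count is adjusted externally. For example, to scale the frontend-1 ReplicationController manually:

$ oc scale rc frontend-1 --replicas=3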

3.1.1.2. ReplicaSets

Similar to a ReplicationController, a ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. The difference between a ReplicaSet and a ReplicationController is that a ReplicaSet supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements.

Note

Only use ReplicaSets if you require custom update orchestration or do not require updates at all. Otherwise, use Deployments. ReplicaSets can be used independently, but are used by Deployments to orchestrate pod creation, deletion, and updates. Deployments manage their ReplicaSets automatically and provide declarative updates to pods, so you do not have to manually manage the ReplicaSets that they create.

The following is an example ReplicaSet definition:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-1
  labels:
    tier: frontend
spec:
  replicas: 3
  selector: 1
    matchLabels: 2
      tier: frontend
    matchExpressions: 3
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
1
A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined.
2
Equality-based selector to specify resources with labels that match the selector.
3
Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend.

3.1.2. DeploymentConfigs

Building on ReplicationControllers, OpenShift Dedicated adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfigs. In the simplest case, a DeploymentConfig creates a new ReplicationController and lets it start up Pods.

However, OpenShift Dedicated deployments from DeploymentConfigs also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the ReplicationController.

The DeploymentConfig deployment system provides the following capabilities:

  • A DeploymentConfig, which is a template for running applications.
  • Triggers that drive automated deployments in response to events.
  • User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a Pod, commonly referred to as the deployment process.
  • A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment.
  • Versioning of your application in order to support rollbacks either manually or automatically in case of deployment failure.
  • Manual replication scaling and autoscaling.

When you create a DeploymentConfig, a ReplicationController is created representing the DeploymentConfig’s Pod template. If the DeploymentConfig changes, a new ReplicationController is created with the latest Pod template, and a deployment process runs to scale down the old ReplicationController and scale up the new one.

Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally.

The OpenShift Dedicated DeploymentConfig object defines the following details:

  1. The elements of a ReplicationController definition.
  2. Triggers for creating a new deployment automatically.
  3. The strategy for transitioning between deployments.
  4. Lifecycle hooks.

Each time a deployment is triggered, whether manually or automatically, a deployer Pod manages the deployment (including scaling down the old ReplicationController, scaling up the new one, and running hooks). The deployer Pod remains for an indefinite amount of time after it completes the deployment in order to retain its logs of the deployment. When a deployment is superseded by another, the previous ReplicationController is retained to enable easy rollback if needed.

Example DeploymentConfig definition

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    name: frontend
  template: { ... }
  triggers:
  - type: ConfigChange 1
  - imageChangeParams:
      automatic: true
      containerNames:
      - helloworld
      from:
        kind: ImageStreamTag
        name: hello-openshift:latest
    type: ImageChange  2
  strategy:
    type: Rolling      3

1
A ConfigChange trigger causes a new Deployment to be created any time the ReplicationController template changes.
2
An ImageChange trigger causes a new Deployment to be created each time a new version of the backing image is available in the named imagestream.
3
The default Rolling strategy makes a downtime-free transition between Deployments.

3.1.3. Deployments

Kubernetes provides a first-class, native API object type in OpenShift Dedicated called Deployments. Deployments serve as a descendant of the OpenShift Dedicated-specific DeploymentConfig.

Like DeploymentConfigs, Deployments describe the desired state of a particular component of an application as a Pod template. Deployments create ReplicaSets, which orchestrate Pod lifecycles.

For example, the following Deployment definition creates a ReplicaSet to bring up one hello-openshift Pod:

Deployment definition

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80

3.1.4. Comparing Deployments and DeploymentConfigs

Both Kubernetes Deployments and OpenShift Dedicated-provided DeploymentConfigs are supported in OpenShift Dedicated; however, it is recommended to use Deployments unless you need a specific feature or behavior provided by DeploymentConfigs.

The following sections go into more detail on the differences between the two object types to further help you decide which type to use.

3.1.4.1. Design

One important difference between Deployments and DeploymentConfigs is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfigs prefer consistency, whereas Deployments take availability over consistency.

For DeploymentConfigs, if a node running a deployer Pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding Pod. This means that you cannot delete the Pod to unstick the rollout, as the kubelet is responsible for deleting the associated Pod.

However, Deployment rollouts are driven by a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same Deployment at the same time, but this issue will be reconciled shortly after the failure occurs.

3.1.4.2. DeploymentConfigs-specific features

Automatic rollbacks

Currently, Deployments do not support automatically rolling back to the last successfully deployed ReplicaSet in case of a failure.

Triggers

Deployments have an implicit ConfigChange trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment:

$ oc rollout pause deployments/<name>
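
When you want pod template changes to trigger rollouts again, resume the paused Deployment, for example:

$ oc rollout resume deployments/<name>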
Lifecycle hooks

Deployments do not yet support any lifecycle hooks.

Custom strategies

Deployments do not support user-specified Custom deployment strategies yet.

3.1.4.3. Deployments-specific features

Rollover

The deployment process for Deployments is driven by a controller loop, in contrast to DeploymentConfigs which use deployer pods for every new rollout. This means that a Deployment can have as many active ReplicaSets as possible, and eventually the deployment controller will scale down all old ReplicaSets and scale up the newest one.

DeploymentConfigs can have at most one deployer pod running, otherwise multiple deployers end up conflicting while trying to scale up what they think should be the newest ReplicationController. Because of this, only two ReplicationControllers can be active at any point in time. Ultimately, this translates to faster rollouts for Deployments.

Proportional scaling

Because the Deployment controller is the sole source of truth for the sizes of new and old ReplicaSets owned by a Deployment, it is able to scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each ReplicaSet.

DeploymentConfigs cannot be scaled when a rollout is ongoing because the DeploymentConfig controller will end up having issues with the deployer process about the size of the new ReplicationController.

Pausing mid-rollout

Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. On the other hand, you cannot pause deployer pods currently, so if you try to pause a DeploymentConfig in the middle of a rollout, the deployer process will not be affected and will continue until it finishes.

3.2. Managing deployment processes

3.2.1. Managing DeploymentConfigs

DeploymentConfigs can be managed from the OpenShift Dedicated web console’s Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated.

3.2.1.1. Starting a deployment

You can start a rollout to begin the deployment process of your application.

Procedure

  1. To start a new deployment process from an existing DeploymentConfig, run the following command:

    $ oc rollout latest dc/<name>
    Note

    If a deployment process is already in progress, the command displays a message and a new ReplicationController will not be deployed.

3.2.1.2. Viewing a deployment

You can view a deployment to get basic information about all the available revisions of your application.

Procedure

  1. To show details about all recently created ReplicationControllers for the provided DeploymentConfig, including any currently running deployment process, run the following command:

    $ oc rollout history dc/<name>
  2. To view details specific to a revision, add the --revision flag:

    $ oc rollout history dc/<name> --revision=1
  3. For more detailed information about a deployment configuration and its latest revision, use the oc describe command:

    $ oc describe dc <name>

3.2.1.3. Retrying a deployment

If the current revision of your DeploymentConfig failed to deploy, you can restart the deployment process.

Procedure

  1. To restart a failed deployment process:

    $ oc rollout retry dc/<name>

    If the latest revision of the DeploymentConfig was deployed successfully, the command displays a message and the deployment process is not retried.

    Note

    Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted ReplicationController has the same configuration it had when it failed.

3.2.1.4. Rolling back a deployment

Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.

Procedure

  1. To roll back to the last successfully deployed revision of your configuration:

    $ oc rollout undo dc/<name>

    The DeploymentConfig’s template is reverted to match the deployment revision specified in the undo command, and a new ReplicationController is started. If no revision is specified with --to-revision, then the last successfully deployed revision is used.
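
    For example, to roll back to a specific earlier revision by number (revision 3 here is only an illustration), pass the revision explicitly:

    $ oc rollout undo dc/<name> --to-revision=3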

  2. Image change triggers on the DeploymentConfig are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete.

    To re-enable the image change triggers:

    $ oc set triggers dc/<name> --auto
Note

DeploymentConfigs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system, and it is up to users to fix their configurations.

3.2.1.5. Executing commands inside a container

You can add a command to a container, which modifies the container’s startup behavior by overruling the image’s ENTRYPOINT. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.

Procedure

  1. Add the command parameters to the spec field of the DeploymentConfig. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).

    spec:
      containers:
        - name: <container_name>
          image: 'image'
          command:
            - '<command>'
          args:
            - '<argument_1>'
            - '<argument_2>'
            - '<argument_3>'

    For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments:

    spec:
      containers:
        - name: example-spring-boot
          image: 'image'
          command:
            - java
          args:
            - '-jar'
            - /opt/app-root/springboots2idemo.jar

3.2.1.6. Viewing deployment logs

Procedure

  1. To stream the logs of the latest revision for a given DeploymentConfig:

    $ oc logs -f dc/<name>

    If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a Pod of your application.

  2. You can also view logs from older failed deployment processes, if and only if these processes (old ReplicationControllers and their deployer Pods) exist and have not been pruned or deleted manually:

    $ oc logs --version=1 dc/<name>

3.2.1.7. Deployment triggers

A DeploymentConfig can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.

Warning

If no triggers are defined on a DeploymentConfig, a ConfigChange trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
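
For example, a DeploymentConfig that defines an explicitly empty triggers field is only rolled out when you run oc rollout latest; a minimal sketch:

spec:
  triggers: []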

ConfigChange deployment triggers

The ConfigChange trigger results in a new ReplicationController whenever configuration changes are detected in the Pod template of the DeploymentConfig.

Note

If a ConfigChange trigger is defined on a DeploymentConfig, the first ReplicationController is automatically created soon after the DeploymentConfig itself is created and it is not paused.

ConfigChange deployment trigger

triggers:
  - type: "ConfigChange"

ImageChange deployment triggers

The ImageChange trigger results in a new ReplicationController whenever the content of an imagestreamtag changes (when a new version of the image is pushed).

ImageChange deployment trigger

triggers:
  - type: "ImageChange"
    imageChangeParams:
      automatic: true 1
      from:
        kind: "ImageStreamTag"
        name: "origin-ruby-sample:latest"
        namespace: "myproject"
      containerNames:
        - "helloworld"

1
If the imageChangeParams.automatic field is set to false, the trigger is disabled.

With the above example, when the latest tag value of the origin-ruby-sample imagestream changes and the new image value differs from the current image specified in the DeploymentConfig’s helloworld container, a new ReplicationController is created using the new image for the helloworld container.

Note

If an ImageChange trigger is defined on a DeploymentConfig (with a ConfigChange trigger and automatic=false, or with automatic=true) and the ImageStreamTag pointed to by the ImageChange trigger does not exist yet, then the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the ImageStreamTag.

3.2.1.7.1. Setting deployment triggers

Procedure

  1. You can set deployment triggers for a DeploymentConfig using the oc set triggers command. For example, to set an ImageChange trigger, use the following command:

    $ oc set triggers dc/<dc_name> \
        --from-image=<project>/<image>:<tag> -c <container_name>

3.2.1.8. Setting deployment resources

Note

This resource is available only if a cluster administrator has enabled the ephemeral storage technology preview. This feature is disabled by default.

A deployment is completed by a Pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, Pods consume unbounded node resources. However, if a project specifies default container limits, then Pods consume resources up to those limits.

You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the Recreate, Rolling, or Custom deployment strategies.

Procedure

  1. In the following example, each of resources, cpu, memory, and ephemeral-storage is optional:

    type: "Recreate"
    resources:
      limits:
        cpu: "100m" 1
        memory: "256Mi" 2
        ephemeral-storage: "1Gi" 3
    1
    cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
    2
    memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).
    3
    ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). This applies only if your cluster administrator enabled the ephemeral storage technology preview.

    However, if a quota has been defined for your project, one of the following two items is required:

    • A resources section set with an explicit requests:

        type: "Recreate"
        resources:
          requests: 1
            cpu: "100m"
            memory: "256Mi"
            ephemeral-storage: "1Gi"
      1
      The requests object contains the list of resources that correspond to the list of resources in the quota.
    • A limit range defined in your project, where the defaults from the LimitRange object apply to Pods created during the deployment process.

    To set deployment resources, choose one of the above options. Otherwise, deployer Pod creation fails, citing a failure to satisfy quota.
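
    The limit range option relies on project defaults. The following is a minimal LimitRange sketch that supplies default requests and limits for containers in the project; the object name and values are only an example:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: resource-defaults
    spec:
      limits:
        - type: Container
          default:            # default limits applied when a container defines none
            cpu: "200m"
            memory: "256Mi"
          defaultRequest:     # default requests applied when a container defines none
            cpu: "100m"
            memory: "256Mi"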

3.2.1.9. Scaling manually

In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them.

Note

Pods can also be autoscaled using the oc autoscale command.
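
For example, the following command autoscales the frontend DeploymentConfig between one and ten replicas, targeting 80% CPU utilization:

$ oc autoscale dc/frontend --min=1 --max=10 --cpu-percent=80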

Procedure

  1. To manually scale a DeploymentConfig, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig to 3.

    $ oc scale dc frontend --replicas=3

    The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig frontend.

3.2.1.10. Accessing private repositories from DeploymentConfigs

You can add a Secret to your DeploymentConfig so that it can access images from a private repository. This procedure shows the OpenShift Dedicated web console method.

Procedure

  1. Create a new project.
  2. From the Workloads page, create a Secret that contains credentials for accessing a private image repository.
  3. Create a DeploymentConfig.
  4. On the DeploymentConfig editor page, set the Pull Secret and save your changes.

3.2.1.11. Running a Pod with a different service account

You can run a Pod with a service account other than the default.

Procedure

  1. Edit the DeploymentConfig:

    $ oc edit dc/<deployment_config>
  2. Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:

    spec:
      securityContext: {}
      serviceAccount: <service_account>
      serviceAccountName: <service_account>

3.3. Using DeploymentConfig strategies

A deployment strategy is a way to change or upgrade an application. The aim is to make the change without downtime in a way that the user barely notices the improvements.

Because the end user usually accesses the application through a route handled by a router, the deployment strategy can focus on DeploymentConfig features or routing features. Strategies that focus on the DeploymentConfig impact all routes that use the application. Strategies that use router features target individual routes.

Many deployment strategies are supported through the DeploymentConfig, and some additional strategies are supported through router features. DeploymentConfig strategies are discussed in this section.

Choosing a deployment strategy

Consider the following when choosing a deployment strategy:

  • Long-running connections must be handled gracefully.
  • Database conversions can be complex and must be done and rolled back along with the application.
  • If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition.
  • You must have the infrastructure to do this.
  • If you have a non-isolated test environment, you can break both new and old versions.

A deployment strategy uses readiness checks to determine if a new Pod is ready for use. If a readiness check fails, the DeploymentConfig retries running the Pod until it times out. The default timeout is 10m, a value set by timeoutSeconds in dc.spec.strategy.*params.

3.3.1. Rolling strategy

A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The Rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig.

A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted.

When to use a Rolling deployment:

  • When you want to take no downtime during an application update.
  • When your application supports having old code and new code running at the same time.

A Rolling deployment means that both old and new versions of your code run at the same time. This typically requires that your application handle N-1 compatibility.

Example Rolling strategy definition

strategy:
  type: Rolling
  rollingParams:
    updatePeriodSeconds: 1 1
    intervalSeconds: 1 2
    timeoutSeconds: 120 3
    maxSurge: "20%" 4
    maxUnavailable: "10%" 5
    pre: {} 6
    post: {}

1
The time to wait between individual Pod updates. If unspecified, this value defaults to 1.
2
The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1.
3
The time to wait for a scaling event before giving up. Optional; the default is 600. Here, giving up means automatically rolling back to the previous complete deployment.
4
maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure.
5
maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure.
6
pre and post are both lifecycle hooks.

The Rolling strategy:

  1. Executes any pre lifecycle hook.
  2. Scales up the new ReplicationController based on the surge count.
  3. Scales down the old ReplicationController based on the max unavailable count.
  4. Repeats this scaling until the new ReplicationController has reached the desired replica count and the old ReplicationController has been scaled to zero.
  5. Executes any post lifecycle hook.
Important

When scaling down, the Rolling strategy waits for Pods to become ready so it can decide whether further scaling would affect availability. If scaled up Pods never become ready, the deployment process will eventually time out and result in a deployment failure.

The maxUnavailable parameter is the maximum number of Pods that can be unavailable during the update. The maxSurge parameter is the maximum number of Pods that can be scheduled above the original number of Pods. Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.

These parameters allow the deployment to be tuned for availability and speed. For example:

  • maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.
  • maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).
  • maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.

Generally, if you want fast rollouts, use maxSurge. If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable.
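
For example, the following rollingParams sketch corresponds to the first case above, maintaining full capacity while allowing a 20% surge; the values are only illustrative:

strategy:
  type: Rolling
  rollingParams:
    maxSurge: "20%"
    maxUnavailable: 0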

3.3.1.1. Canary deployments

All Rolling deployments in OpenShift Dedicated are canary deployments; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig will be automatically rolled back.

The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a Custom deployment or using a blue-green deployment strategy.

3.3.1.2. Creating a Rolling deployment

Rolling deployments are the default type in OpenShift Dedicated. You can create a Rolling deployment using the CLI.

Procedure

  1. Create an application based on the example deployment images found in DockerHub:

    $ oc new-app openshift/deployment-example
  2. If you have the router installed, make the application available via a route (or use the service IP directly):

    $ oc expose svc/deployment-example
  3. Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image.
  4. Scale the DeploymentConfig up to three replicas:

    $ oc scale dc/deployment-example --replicas=3
  5. Trigger a new deployment automatically by tagging a new version of the example as the latest tag:

    $ oc tag deployment-example:v2 deployment-example:latest
  6. In your browser, refresh the page until you see the v2 image.
  7. When using the CLI, the following command shows how many Pods are on version 1 and how many are on version 2. In the web console, the Pods are progressively added to v2 and removed from v1:

    $ oc describe dc deployment-example

During the deployment process, the new ReplicationController is incrementally scaled up. After the new Pods are marked as ready (by passing their readiness check), the deployment process continues.

If the Pods do not become ready, the process aborts, and the DeploymentConfig rolls back to its previous version.
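
You can also watch the rollout from the CLI until it completes or fails:

$ oc rollout status dc/deployment-example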

3.3.1.3. Starting a Rolling deployment using the Developer perspective

Prerequisites

  • Ensure that you are in the Developer perspective of the web console.
  • Ensure that you have created an application using the Add view and see it deployed in the Topology view.

Procedure

To start a rolling deployment to upgrade an application:

  1. In the Topology view of the Developer perspective, click on the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy.
  2. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one.

    Rolling Update

3.3.2. Recreate strategy

The Recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.

Example Recreate strategy definition

strategy:
  type: Recreate
  recreateParams: 1
    pre: {} 2
    mid: {}
    post: {}

1
recreateParams are optional.
2
pre, mid, and post are lifecycle hooks.

The Recreate strategy:

  1. Executes any pre lifecycle hook.
  2. Scales down the previous deployment to zero.
  3. Executes any mid lifecycle hook.
  4. Scales up the new deployment.
  5. Executes any post lifecycle hook.
Important

During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.

When to use a Recreate deployment:

  • When you must run migrations or other data transformations before your new code starts.
  • When you do not support having new and old versions of your application code running at the same time.
  • When you want to use an RWO volume, which cannot be shared between multiple replicas.

A Recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time.

3.3.3. Starting a Recreate deployment using the Developer perspective

You can switch the deployment strategy from the default Rolling update to a Recreate update using the Developer perspective in the web console.

Prerequisites

  • Ensure that you are in the Developer perspective of the web console.
  • Ensure that you have created an application using the Add view and see it deployed in the Topology view.

Procedure

To switch to a Recreate update strategy and to upgrade an application:

  1. In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application.
  2. In the YAML editor, change the spec.strategy.type to Recreate and click Save.
  3. In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate.
  4. Use the Actions drop-down menu to select Start Rollout to start an update using the Recreate strategy. The Recreate strategy first terminates Pods for the older version of the application and then spins up Pods for the new version.

    Recreate Update

3.3.4. Custom strategy

The Custom strategy allows you to provide your own deployment behavior.

Example Custom strategy definition

strategy:
  type: Custom
  customParams:
    image: organization/strategy
    command: [ "command", "arg1" ]
    environment:
      - name: ENV_1
        value: VALUE_1

In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image’s Dockerfile. The optional environment variables provided are added to the execution environment of the strategy process.

Additionally, OpenShift Dedicated provides the following environment variables to the deployment process:

Environment variable | Description

OPENSHIFT_DEPLOYMENT_NAME

The name of the new deployment (a ReplicationController).

OPENSHIFT_DEPLOYMENT_NAMESPACE

The namespace of the new deployment.

The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.

Alternatively, you can use customParams to inject custom deployment logic into the existing deployment strategies. Provide your custom logic as a shell script and call the openshift-deploy binary. Users do not have to supply their own custom deployer container image; in this case, the default OpenShift Dedicated deployer image is used instead:

strategy:
  type: Rolling
  customParams:
    command:
    - /bin/sh
    - -c
    - |
      set -e
      openshift-deploy --until=50%
      echo Halfway there
      openshift-deploy
      echo Complete

This results in the following deployment:

Started deployment #2
--> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-2 up to 1
--> Reached 50% (currently 50%)
Halfway there
--> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-1 down to 1
    Scaling custom-deployment-2 up to 2
    Scaling custom-deployment-1 down to 0
--> Success
Complete

If the custom deployment strategy process requires access to the OpenShift Dedicated API or the Kubernetes API, the container that executes the strategy can use the service account token available inside the container for authentication.
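
For example, a customParams command could read the service account token mounted into the container and query the Kubernetes API before handing control to the standard deployer logic. The following sketch uses the standard in-cluster service account mount paths and environment variables; the API query itself is only illustrative:

strategy:
  type: Rolling
  customParams:
    command:
    - /bin/sh
    - -c
    - |
      set -e
      # Read the service account credentials mounted into every Pod.
      SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
      TOKEN=$(cat ${SA_DIR}/token)
      NAMESPACE=$(cat ${SA_DIR}/namespace)
      # Illustrative API call: list Pods in the current namespace.
      curl -sS --cacert ${SA_DIR}/ca.crt \
        -H "Authorization: Bearer ${TOKEN}" \
        "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces/${NAMESPACE}/pods" > /dev/null
      # Hand off to the default deployer logic.
      openshift-deploy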

3.3.5. Lifecycle hooks

The Rolling and Recreate strategies support lifecycle hooks, or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy:

Example pre lifecycle hook

pre:
  failurePolicy: Abort
  execNewPod: {} 1

1
execNewPod is a Pod-based lifecycle hook.

Every hook has a failurePolicy, which defines the action the strategy should take when a hook failure is encountered:

Abort

The deployment process will be considered a failure if the hook fails.

Retry

The hook execution should be retried until it succeeds.

Ignore

Any hook failure should be ignored and the deployment should proceed.

Hooks have a type-specific field that describes how to execute the hook. Currently, Pod-based hooks are the only supported hook type, specified by the execNewPod field.

Pod-based lifecycle hook

Pod-based lifecycle hooks execute hook code in a new Pod derived from the template in a DeploymentConfig.

The following simplified example DeploymentConfig uses the Rolling strategy. Triggers and some other minor details are omitted for brevity:

kind: DeploymentConfig
apiVersion: v1
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
        - name: helloworld
          image: openshift/origin-ruby-sample
  replicas: 5
  selector:
    name: frontend
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: helloworld 1
          command: [ "/usr/bin/command", "arg1", "arg2" ] 2
          env: 3
            - name: CUSTOM_VAR1
              value: custom_value1
          volumes:
            - data 4
1
The helloworld name refers to spec.template.spec.containers[0].name.
2
This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image.
3
env is an optional set of environment variables for the hook container.
4
volumes is an optional set of volume references for the hook container.

In this example, the pre hook will be executed in a new Pod using the openshift/origin-ruby-sample image from the helloworld container. The hook Pod has the following properties:

  • The hook command is /usr/bin/command arg1 arg2.
  • The hook container has the CUSTOM_VAR1=custom_value1 environment variable.
  • The hook failure policy is Abort, meaning the deployment process fails if the hook fails.
  • The hook Pod inherits the data volume from the DeploymentConfig Pod.

3.3.5.1. Setting lifecycle hooks

You can set lifecycle hooks, or deployment hooks, for a DeploymentConfig using the CLI.

Procedure

  1. Use the oc set deployment-hook command to set the type of hook you want: --pre, --mid, or --post. For example, to set a pre-deployment hook:

    $ oc set deployment-hook dc/frontend \
        --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \
        -v data --failure-policy=abort -- /usr/bin/command arg1 arg2

3.4. Using route-based deployment strategies

Deployment strategies provide a way for the application to evolve. Some strategies use DeploymentConfigs to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with DeploymentConfigs to impact specific routes.

The most common route-based strategy is to use a blue-green deployment. The new version (the blue version) is brought up for testing and evaluation, while the users still use the stable version (the green version). When ready, the users are switched to the blue version. If a problem arises, you can switch back to the green version.

A common alternative strategy is to use A/B versions that are both active at the same time and some users use one version, and some users use the other version. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact a limited number of users.

A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies.

The route-based deployment strategies do not scale the number of Pods in the services. To maintain desired performance characteristics the deployment configurations might have to be scaled.

3.4.1. Proxy shards and traffic splitting

In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard, which forwards or splits the traffic it receives to a separate service or application running elsewhere.

In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send them to both a separate cluster and a local instance of the application, and then compare the results. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes.

Any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Dedicated router with proportional balancing capabilities.

3.4.2. N-1 compatibility

Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem.

This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user’s browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it.

For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional.

One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment.

3.4.3. Graceful termination

OpenShift Dedicated and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit.

On shutdown, OpenShift Dedicated sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM, should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code should then wait until all open connections are closed (or gracefully terminate individual connections at the next opportunity) before exiting.

After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a Pod or Pod template controls the graceful termination period (default 30 seconds) and may be customized per application as necessary.
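
For example, to give an application 60 seconds instead of the default 30 to drain its connections, set the attribute in the Pod template; the container name and value shown here are only illustrative:

spec:
  terminationGracePeriodSeconds: 60
  containers:
    - name: myapp
      image: 'image'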

3.4.4. Blue-green deployments

Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the green version) to the newer version (the blue version). You can use a Rolling strategy or switch services in a route.

Because many applications depend on persistent data, you must have an application that supports N-1 compatibility, which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer.

Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version.

3.4.4.1. Setting up a blue-green deployment

Blue-green deployments use two DeploymentConfigs. Both are running, and the one in production depends on the service the route specifies, with each DeploymentConfig exposed to a different service.

Note

Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications.

You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (blue) version is live.

If necessary, you can roll back to the older (green) version by switching the service back to the previous version.

Procedure

  1. Create two copies of the example application:

    $ oc new-app openshift/deployment-example:v1 --name=example-green
    $ oc new-app openshift/deployment-example:v2 --name=example-blue

    This creates two independent application components: one running the v1 image under the example-green service, and one using the v2 image under the example-blue service.

  2. Create a route that points to the old service:

    $ oc expose svc/example-green --name=bluegreen-example
  3. Browse to the application at example-green.<project>.<router_domain> to verify you see the v1 image.
  4. Edit the route and change the service name to example-blue:

    $ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-blue"}}}'
  5. To verify that the route has changed, refresh the browser until you see the v2 image.

3.4.5. A/B deployments

The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version.

Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of Pods in each service might have to be scaled as well to provide the expected performance.

In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user’s reaction to the different versions to inform design decisions.

For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together.

OpenShift Dedicated supports N-1 compatibility through the web console as well as the CLI.

3.4.5.1. Load balancing for A/B testing

The user sets up a route with multiple services. Each service handles a version of the application.

Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights. The weight for each service is distributed to the service’s endpoints so that the sum of the endpoint weights is the service weight.

The route can have up to four services. The weight for the service can be between 0 and 256. When the weight is 0, the service does not participate in load-balancing but continues to serve existing persistent connections. When the service weight is not 0, each endpoint has a minimum weight of 1. Because of this, a service with a lot of endpoints can end up with higher weight than desired. In this case, reduce the number of Pods to get the desired load balance weight.
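
For example, if a route has two services with weights 20 and 10, the first service receives 20/30 (about 67%) of the requests and the second receives 10/30 (about 33%). If the first service has four endpoints, each endpoint is assigned a weight of 5 so that the endpoint weights sum to the service weight of 20.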

Procedure

To set up the A/B environment:

  1. Create the two applications and give them different names. Each creates a DeploymentConfig. The applications are versions of the same program; one is usually the current production version and the other the proposed new version:

    $ oc new-app openshift/deployment-example --name=ab-example-a
    $ oc new-app openshift/deployment-example --name=ab-example-b

    Both applications are deployed and services are created.

  2. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version.

    $ oc expose svc/ab-example-a

    Browse to the application at ab-example-<project>.<router_domain> to verify that you see the desired version.

  3. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service under alternateBackends and adjusting the weights brings the A/B setup to life. This can be done by using the oc set route-backends command or by editing the route.

    Setting a service weight to 0 with oc set route-backends means that the service does not participate in load-balancing, but continues to serve existing persistent connections.

    Note

    Changes to the route just change the portion of traffic to the various services. You might have to scale the DeploymentConfigs to adjust the number of Pods to handle the anticipated loads.

    To edit the route, run:

    $ oc edit route <route_name>
    ...
    metadata:
      name: route-alternate-service
      annotations:
        haproxy.router.openshift.io/balance: roundrobin
    spec:
      host: ab-example.my-project.my-domain
      to:
        kind: Service
        name: ab-example-a
        weight: 10
      alternateBackends:
      - kind: Service
        name: ab-example-b
        weight: 15
    ...
3.4.5.1.1. Managing weights using the web console

Procedure

  1. Navigate to the Route details page (Applications/Routes).
  2. Select Edit from the Actions menu.
  3. Check Split traffic across multiple services.
  4. The Service Weights slider sets the percentage of traffic sent to each service.

    For traffic split between more than two services, the relative weights are specified by integers between 0 and 256 for each service.

    Traffic weightings are shown on the Overview in the expanded rows of the applications between which traffic is split.

3.4.5.1.2. Managing weights using the CLI

Procedure

  1. To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command:

    $ oc set route-backends ROUTENAME \
        [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]

    For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2:

    $ oc set route-backends ab-example ab-example-a=198 ab-example-b=2

    This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b.

    This command does not scale the DeploymentConfigs. You might be required to do so to have enough Pods to handle the request load.

  2. Run the command with no flags to verify the current configuration:

    $ oc set route-backends ab-example
    NAME                    KIND     TO           WEIGHT
    routes/ab-example       Service  ab-example-a 198 (99%)
    routes/ab-example       Service  ab-example-b 2   (1%)
  3. To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the change.

    For example:

    $ oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10
    $ oc set route-backends ab-example --adjust ab-example-b=5%
    $ oc set route-backends ab-example --adjust ab-example-b=+15%

    The --equal flag sets the weight of all services to 100:

    $ oc set route-backends ab-example --equal

    The --zero flag sets the weight of all services to 0. All requests then return with a 503 error.

    Note

    Not all routers may support multiple or weighted backends.

3.4.5.1.3. One service, multiple DeploymentConfigs

Procedure

  1. Create a new application, adding a label ab-example=true that will be common to all shards:

    $ oc new-app openshift/deployment-example --name=ab-example-a

    The application is deployed and a service is created. This is the first shard.

  2. Make the application available via a route (or use the service IP directly):

    $ oc expose svc/ab-example-a --name=ab-example
  3. Browse to the application at ab-example-<project>.<router_domain> to verify you see the v1 image.
  4. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables:

    $ oc new-app openshift/deployment-example:v2 \
        --name=ab-example-b --labels=ab-example=true \
        SUBTITLE="shard B" COLOR="red"
  5. At this point, both sets of Pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you.

    To force your browser to one or the other shard:

    1. Use the oc scale command to reduce replicas of ab-example-a to 0.

      $ oc scale dc/ab-example-a --replicas=0

      Refresh your browser to show v2 and shard B (in red).

    2. Scale ab-example-a to 1 replica and ab-example-b to 0:

      $ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0

      Refresh your browser to show v1 and shard A (in blue).

  6. If you trigger a deployment on either shard, only the Pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either DeploymentConfig:

    $ oc edit dc/ab-example-a

    or

    $ oc edit dc/ab-example-b

Chapter 4. Monitoring application health

In software systems, components can become unhealthy due to transient issues such as temporary connectivity loss, configuration errors, or problems with external dependencies. OpenShift Dedicated applications have a number of options to detect and handle unhealthy containers.

4.1. Understanding health checks

A probe is a Kubernetes action that periodically performs diagnostics on a running container. Currently, two types of probes exist, each serving a different purpose.

Readiness Probe
A Readiness check determines if the container in which it is scheduled is ready to service requests. If the readiness probe fails for a container, the endpoints controller ensures that the container's IP address is removed from the endpoints of all services. A readiness probe can be used to signal to the endpoints controller that even though a container is running, it should not receive any traffic from a proxy.

For example, a Readiness check can control which Pods are used. When a Pod is not ready, it is removed.

Liveness Probe
A Liveness check determines if the container in which it is scheduled is still running. If the liveness probe fails due to a condition such as a deadlock, the kubelet kills the container. The container then responds based on its restart policy.

For example, a liveness probe on a node with a restartPolicy of Always or OnFailure kills and restarts the container on the node.

Sample Liveness Check

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness-http
    image: k8s.gcr.io/liveness 1
    args:
    - /server
    livenessProbe: 2
      httpGet:   3
        # host: my-host
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15  4
      timeoutSeconds: 1   5

1
Specifies the image to use for the liveness probe.
2
Specifies the type of health check.
3
Specifies the type of Liveness check:
  • HTTP Checks. Specify httpGet.
  • Container Execution Checks. Specify exec.
  • TCP Socket Check. Specify tcpSocket.
4
Specifies the number of seconds before performing the first probe after the container starts.
5
Specifies the number of seconds between probes.

Sample Liveness check output with unhealthy container

$ oc describe pod pod1

....

FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
--------- --------    -----   ----            -------------           --------    ------      -------
37s       37s     1   {default-scheduler }                            Normal      Scheduled   Successfully assigned liveness-exec to worker0
36s       36s     1   {kubelet worker0}   spec.containers{liveness}   Normal      Pulling     pulling image "k8s.gcr.io/busybox"
36s       36s     1   {kubelet worker0}   spec.containers{liveness}   Normal      Pulled      Successfully pulled image "k8s.gcr.io/busybox"
36s       36s     1   {kubelet worker0}   spec.containers{liveness}   Normal      Created     Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s       36s     1   {kubelet worker0}   spec.containers{liveness}   Normal      Started     Started container with docker id 86849c15382e
2s        2s      1   {kubelet worker0}   spec.containers{liveness}   Warning     Unhealthy   Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

4.1.1. Understanding the types of health checks

Liveness checks and Readiness checks can be configured in three ways:

HTTP Checks
The kubelet uses a web hook to determine the healthiness of the container. The check is deemed successful if the HTTP response code is between 200 and 399.

An HTTP check is ideal for applications that return HTTP status codes when completely initialized.

Container Execution Checks
The kubelet executes a command inside the container. Exiting the check with status 0 is considered a success.
TCP Socket Checks
The kubelet attempts to open a socket to the container. The container is only considered healthy if the check can establish a connection. A TCP socket check is ideal for applications that do not start listening until initialization is complete.

4.2. Configuring health checks

To configure health checks, create a pod for each type of check you want.

Procedure

To create health checks:

  1. Create a Liveness Container Execution Check:

    1. Create a YAML file similar to the following:

      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          test: liveness
        name: liveness-exec
      spec:
        containers:
        - args:
          image: k8s.gcr.io/liveness
          livenessProbe:
            exec:  1
              command: 2
              - cat
              - /tmp/health
            initialDelaySeconds: 15 3
      ...
      1
      Specify a Liveness check and the type of Liveness check.
      2
      Specify the commands to use in the container.
      3
      Specify the number of seconds before performing the first probe after the container starts.
    2. Verify the state of the health check pod:

      $ oc describe pod liveness-exec
      
      Events:
        Type    Reason     Age   From                                  Message
        ----    ------     ----  ----                                  -------
        Normal  Scheduled  9s    default-scheduler                     Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal
        Normal  Pulling    2s    kubelet, ip-10-0-143-40.ec2.internal  pulling image "k8s.gcr.io/liveness"
        Normal  Pulled     1s    kubelet, ip-10-0-143-40.ec2.internal  Successfully pulled image "k8s.gcr.io/liveness"
        Normal  Created    1s    kubelet, ip-10-0-143-40.ec2.internal  Created container
        Normal  Started    1s    kubelet, ip-10-0-143-40.ec2.internal  Started container
      Note

      The timeoutSeconds parameter has no effect on the Readiness and Liveness probes for Container Execution Checks. You can implement a timeout inside the probe itself, as OpenShift Dedicated cannot time out on an exec call into the container. One way to implement a timeout in a probe is by using the timeout parameter to run your liveness or readiness probe:

      spec:
        containers:
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - '-c'
                - timeout 60 /opt/eap/bin/livenessProbe.sh 1
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
      1
      Timeout value and path to the probe script.
    3. Create the check:

      $ oc create -f <file-name>.yaml
  2. Create a Liveness TCP Socket Check:

    1. Create a YAML file similar to the following:

      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          test: liveness
        name: liveness-tcp
      spec:
        containers:
        - name: container1 1
          image: k8s.gcr.io/liveness
          ports:
          - containerPort: 8080 2
          livenessProbe:  3
            tcpSocket:
              port: 8080
            initialDelaySeconds: 15 4
            timeoutSeconds: 1  5
      1 2
      Specify the container name and port for the check to connect to.
      3
      Specify the Liveness health check and the type of Liveness check.
      4
      Specify the number of seconds before performing the first probe after the container starts.
      5
      Specify the number of seconds between probes.
    2. Create the check:

      $ oc create -f <file-name>.yaml
  3. Create a Readiness HTTP Check:

    1. Create a YAML file similar to the following:

      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          test: readiness
        name: readiness-http
      spec:
        containers:
        - args:
          image: k8s.gcr.io/readiness 1
          readinessProbe: 2
            httpGet:
              # host: my-host 3
              # scheme: HTTPS 4
              path: /healthz
              port: 8080
            initialDelaySeconds: 15  5
            timeoutSeconds: 1  6
      1
      Specify the image to use for the readiness probe.
      2
      Specify the Readiness health check and the type of Readiness check.
      3
      Specify a host IP address. When host is not defined, the PodIP is used.
      4
      Specify HTTP or HTTPS. When scheme is not defined, the HTTP scheme is used.
      5
      Specify the number of seconds before performing the first probe after the container starts.
      6
      Specify the number of seconds between probes.
    2. Create the check:

      $ oc create -f <file-name>.yaml

Chapter 5. Working with quotas

A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project.

An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources.

This guide describes how resource quotas work and how developers can work with and view them.
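
The following is a minimal ResourceQuota sketch that limits object counts in a project; the object name and values mirror the core-object-counts example shown in the next section and are only illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"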

5.1. Viewing a quota

You can view usage statistics related to any hard limits defined in a project’s quota by navigating in the web console to the project’s Quota page.

You can also use the CLI to view quota details.

Procedure

  1. Get the list of quotas defined in the project. For example, for a project called demoproject:

    $ oc get quota -n demoproject
    NAME                AGE
    besteffort          11m
    compute-resources   2m
    core-object-counts  29m
  2. Describe the quota you are interested in, for example the core-object-counts quota:

    $ oc describe quota core-object-counts -n demoproject
    Name:			core-object-counts
    Namespace:		demoproject
    Resource		Used	Hard
    --------		----	----
    configmaps		3	10
    persistentvolumeclaims	0	4
    replicationcontrollers	3	20
    secrets			9	10
    services		2	10

5.2. Resources managed by quotas

The following describes the set of compute resources and object types that can be managed by a quota.

Note

A pod is in a terminal state if status.phase in (Failed, Succeeded) is true.

Table 5.1. Compute resources managed by quota

Resource Name | Description

cpu

The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably.

memory

The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably.

ephemeral-storage

The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default.

requests.cpu

The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably.

requests.memory

The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably.

requests.ephemeral-storage

The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default.

limits.cpu

The sum of CPU limits across all pods in a non-terminal state cannot exceed this value.

limits.memory

The sum of memory limits across all pods in a non-terminal state cannot exceed this value.

limits.ephemeral-storage

The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default.

Table 5.2. Storage resources managed by quota

Resource Name | Description

requests.storage

The sum of storage requests across all persistent volume claims in any state cannot exceed this value.

persistentvolumeclaims

The total number of persistent volume claims that can exist in the project.

<storage-class-name>.storageclass.storage.k8s.io/requests.storage

The sum of storage requests across all persistent volume claims in any state that have a matching storage class cannot exceed this value.

<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims

The total number of persistent volume claims with a matching storage class that can exist in the project.

Table 5.3. Object counts managed by quota

Resource Name | Description

pods

The total number of pods in a non-terminal state that can exist in the project.

replicationcontrollers

The total number of ReplicationControllers that can exist in the project.

resourcequotas

The total number of resource quotas that can exist in the project.

services

The total number of services that can exist in the project.

services.loadbalancers

The total number of services of type LoadBalancer that can exist in the project.

services.nodeports

The total number of services of type NodePort that can exist in the project.

secrets

The total number of secrets that can exist in the project.

configmaps

The total number of ConfigMap objects that can exist in the project.

persistentvolumeclaims

The total number of persistent volume claims that can exist in the project.

openshift.io/imagestreams

The total number of imagestreams that can exist in the project.

5.3. Quota scopes

Each quota can have an associated set of scopes. A quota only measures usage for a resource if it matches the intersection of enumerated scopes.

Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error.

Scope | Description

Terminating

Match pods where spec.activeDeadlineSeconds >= 0.

NotTerminating

Match pods where spec.activeDeadlineSeconds is nil.

BestEffort

Match pods that have best effort quality of service for either cpu or memory.

NotBestEffort

Match pods that do not have best effort quality of service for cpu and memory.

A BestEffort scope restricts a quota to limiting the following resources:

  • pods

A Terminating, NotTerminating, and NotBestEffort scope restricts a quota to tracking the following resources:

  • pods
  • memory
  • requests.memory
  • limits.memory
  • cpu
  • requests.cpu
  • limits.cpu
  • ephemeral-storage
  • requests.ephemeral-storage
  • limits.ephemeral-storage
Note

Ephemeral storage requests and limits apply only if you enabled the ephemeral storage technology preview. This feature is disabled by default.
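
For example, the following ResourceQuota sketch uses the BestEffort scope described above so that it only counts best-effort pods; the object name and pod count are only illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort
spec:
  hard:
    pods: "1"
  scopes:
    - BestEffort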

5.4. Quota enforcement

After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics.

After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource.

When you delete a resource, your quota use is decremented during the next full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value.

If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system.

5.5. Requests versus limits

When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values.

If the quota has a value specified for requests.cpu or requests.memory, then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory, then it requires that every incoming container specify an explicit limit for those resources.
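
For example, a quota like the following sketch forces every container in the project to declare explicit CPU and memory requests and limits; the values are only illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi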

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.