Images

OpenShift Container Platform 4.9

Creating and managing images and imagestreams in OpenShift Container Platform

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for creating and managing images and imagestreams in OpenShift Container Platform. It also provides instructions on using templates.

Chapter 1. Overview of images

1.1. Understanding containers, images, and image streams

Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name.

1.2. Images

Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images. An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities.

You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image.

You can use the podman or docker CLI directly to build images, but OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images.

Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f…​, which is usually shortened to 12 characters, such as fd44297e2ddb.

You can create, manage, and use container images.

1.3. Image registry

An image registry is a content server that can store and serve container images. For example:

registry.redhat.io

A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io for subscribers. OpenShift Container Platform can also supply its own internal registry for managing custom container images.

1.4. Image repository

An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository:

docker.io/openshift/jenkins-2-centos7

1.5. Image tags

An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag:

registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2

You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest.

OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images.

1.6. Image IDs

An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example:

docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324

1.7. Containers

The basic units of OpenShift Container Platform applications are called containers. Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image.

Many application instances can be running in containers on a single host without visibility into each others' processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads.

The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations.

Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers.

Tools such as podman can be used to replace docker command-line tools for running and managing containers directly. Using podman, you can experiment with containers separately from OpenShift Container Platform.

1.8. Why use imagestreams

An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes.

Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository.

You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively.

For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image.

However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image.

The source images can be stored in any of the following:

  • OpenShift Container Platform’s integrated registry.
  • An external registry, for example registry.redhat.io or quay.io.
  • Other image streams in the OpenShift Container Platform cluster.

When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image.

The image stream metadata is stored in the etcd instance along with other cluster information.

Using image streams has several significant benefits:

  • You can tag, rollback a tag, and quickly deal with images, without having to re-push using the command line.
  • You can trigger builds and deployments when a new image is pushed to the registry. Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects.
  • You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration.
  • You can share images using fine-grained access control and quickly distribute images across your teams.
  • If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application do not break unexpectedly.
  • You can configure security around who can view and use the images through permissions on the image stream objects.
  • Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams.

You can manage image streams, use image streams with Kubernetes resources, and trigger updates on image stream updates.

1.9. Image stream tags

An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag.

1.10. Image stream images

An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier.

1.11. Image stream triggers

An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those.

1.12. How you can use the Cluster Samples Operator

During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace.

As a cluster administrator, you can use the Cluster Samples Operator to:

1.13. About templates

A template is a definition of an object to be replicated. You can use templates to build and deploy configurations.

1.14. How you can use Ruby on Rails

As a developer, you can use Ruby on Rails to:

  • Write your application:

    • Set up a database.
    • Create a welcome page.
    • Configure your application for OpenShift Container Platform.
    • Store your application in Git.
  • Deploy your application in OpenShift Container Platform:

    • Create the database service.
    • Create the frontend service.
    • Create a route for your application.

Chapter 2. Configuring the Cluster Samples Operator

The Cluster Samples Operator, which operates in the openshift namespace, installs and updates the Red Hat Enterprise Linux (RHEL)-based OpenShift Container Platform image streams and OpenShift Container Platform templates.

2.1. Understanding the Cluster Samples Operator

During installation, the Operator creates the default configuration object for itself and then creates the sample image streams and templates, including quick start templates.

Note

To facilitate image stream imports from other registries that require credentials, a cluster administrator can create any additional secrets that contain the content of a Docker config.json file in the openshift namespace needed for image import.

The Cluster Samples Operator configuration is a cluster-wide resource, and the deployment is contained within the openshift-cluster-samples-operator namespace.

The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. When each sample is created or updated, the Cluster Samples Operator includes an annotation that denotes the version of OpenShift Container Platform. The Operator uses this annotation to ensure that each sample matches the release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator, where that version annotation is modified or deleted, are reverted automatically.

Note

The Jenkins images are part of the image payload from installation and are tagged into the image streams directly.

The Cluster Samples Operator configuration resource includes a finalizer which cleans up the following upon deletion:

  • Operator managed image streams.
  • Operator managed templates.
  • Operator generated configuration resources.
  • Cluster status resources.

Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration.

2.1.1. Cluster Samples Operator’s use of management state

The Cluster Samples Operator is bootstrapped as Managed by default or if global proxy is configured. In the Managed state, the Cluster Samples Operator is actively managing its resources and keeping the component active in order to pull sample image streams and images from the registry and ensure that the requisite sample templates are installed.

Certain circumstances result in the Cluster Samples Operator bootstrapping itself as Removed including:

Note

For OpenShift Container Platform, the default image registry is registry.redhat.io.

However, if the Cluster Samples Operator detects that it is on an IPv6 network and an OpenShift Container Platform global proxy is configured, then IPv6 check supersedes all the checks. As a result, the Cluster Samples Operator bootstraps itself as Removed.

Important

IPv6 installations are not currently supported by registry.redhat.io. The Cluster Samples Operator pulls most of the sample image streams and images from registry.redhat.io.

2.1.1.1. Restricted network installation

Boostrapping as Removed when unable to access registry.redhat.io facilitates restricted network installations when the network restriction is already in place. Bootstrapping as Removed when network access is restricted allows the cluster administrator more time to decide if samples are desired, because the Cluster Samples Operator does not submit alerts that sample image stream imports are failing when the management state is set to Removed. When the Cluster Samples Operator comes up as Managed and attempts to install sample image streams, it starts alerting two hours after initial installation if there are failing imports.

2.1.1.2. Restricted network installation with initial network access

Conversely, if a cluster that is intended to be a restricted network or disconnected cluster is first installed while network access exists, the Cluster Samples Operator installs the content from registry.redhat.io since it can access it. If you want the Cluster Samples Operator to still bootstrap as Removed in order to defer samples installation until you have decided which samples are desired, set up image mirrors, and so on, then follow the instructions for using the Samples Operator with an alternate registry and customizing nodes, both linked in the additional resources section, to override the Cluster Samples Operator default configuration and initially come up as Removed.

You must put the following additional YAML file in the openshift directory created by openshift-install create manifest:

Example Cluster Samples Operator YAML file with managementState: Removed

apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  architectures:
  - x86_64
  managementState: Removed

2.1.2. Cluster Samples Operator’s tracking and error recovery of image stream imports

After creation or update of a samples image stream, the Cluster Samples Operator monitors the progress of each image stream tag’s image import.

If an import fails, the Cluster Samples Operator retries the import through the image stream image import API, which is the same API used by the oc import-image command, approximately every 15 minutes until it sees the import succeed, or if the Cluster Samples Operator’s configuration is changed such that either the image stream is added to the skippedImagestreams list, or the management state is changed to Removed.

Additional resources

  • If the Cluster Samples Operator is removed during installation, you can use the Cluster Samples Operator with an alternate registry so content can be imported, and then set the Cluster Samples Operator to Managed to get the samples.
  • To ensure the Cluster Samples Operator bootstraps as Removed in a restricted network installation with initial network access to defer samples installation until you have decided which samples are desired, follow the instructions for customizing nodes to override the Cluster Samples Operator default configuration and initially come up as Removed.

2.1.3. Cluster Samples Operator assistance for mirroring

During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.

The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.

During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.

Note

The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators’s objects to reach the services they require.

You can use this config map as a reference for which images need to be mirrored for your image streams to import.

  • While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.
  • Mirror the samples you want to the mirrored registry using the new config map as your guide.
  • Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
  • Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
  • Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored.

See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure.

2.2. Cluster Samples Operator configuration parameters

The samples resource offers the following configuration fields:

ParameterDescription

managementState

Managed: The Cluster Samples Operator updates the samples as the configuration dictates.

Unmanaged: The Cluster Samples Operator ignores updates to its configuration resource object and any image streams or templates in the openshift namespace.

Removed: The Cluster Samples Operator removes the set of Managed image streams and templates in the openshift namespace. It ignores new samples created by the cluster administrator or any samples in the skipped lists. After the removals are complete, the Cluster Samples Operator works like it is in the Unmanaged state and ignores any watch events on the sample resources, image streams, or templates.

samplesRegistry

Allows you to specify which registry is accessed by image streams for their image content. samplesRegistry defaults to registry.redhat.io for OpenShift Container Platform.

Note

Creation or update of RHEL content does not commence if the secret for pull access is not in place when either Samples Registry is not explicitly set, leaving an empty string, or when it is set to registry.redhat.io. In both cases, image imports work off of registry.redhat.io, which requires credentials.

Creation or update of RHEL content is not gated by the existence of the pull secret if the Samples Registry is overridden to a value other than the empty string or registry.redhat.io.

architectures

Placeholder to choose an architecture type.

skippedImagestreams

Image streams that are in the Cluster Samples Operator’s inventory but that the cluster administrator wants the Operator to ignore or not manage. You can add a list of image stream names to this parameter. For example, ["httpd","perl"].

skippedTemplates

Templates that are in the Cluster Samples Operator’s inventory, but that the cluster administrator wants the Operator to ignore or not manage.

Secret, image stream, and template watch events can come in before the initial samples resource object is created, the Cluster Samples Operator detects and re-queues the event.

2.2.1. Configuration restrictions

When the Cluster Samples Operator starts supporting multiple architectures, the architecture list is not allowed to be changed while in the Managed state.

To change the architectures values, a cluster administrator must:

  • Mark the Management State as Removed, saving the change.
  • In a subsequent change, edit the architecture and change the Management State back to Managed.

The Cluster Samples Operator still processes secrets while in Removed state. You can create the secret before switching to Removed, while in Removed before switching to Managed, or after switching to Managed state. There are delays in creating the samples until the secret event is processed if you create the secret after switching to Managed. This helps facilitate the changing of the registry, where you choose to remove all the samples before switching to insure a clean slate. Removing all samples before switching is not required.

2.2.2. Conditions

The samples resource maintains the following conditions in its status:

ConditionDescription

SamplesExists

Indicates the samples are created in the openshift namespace.

ImageChangesInProgress

True when image streams are created or updated, but not all of the tag spec generations and tag status generations match.

False when all of the generations match, or unrecoverable errors occurred during import, the last seen error is in the message field. The list of pending image streams is in the reason field.

This condition is deprecated in OpenShift Container Platform.

ConfigurationValid

True or False based on whether any of the restricted changes noted previously are submitted.

RemovePending

Indicator that there is a Management State: Removed setting pending, but the Cluster Samples Operator is waiting for the deletions to complete.

ImportImageErrorsExist

Indicator of which image streams had errors during the image import phase for one of their tags.

True when an error has occurred. The list of image streams with an error is in the reason field. The details of each error reported are in the message field.

MigrationInProgress

True when the Cluster Samples Operator detects that the version is different than the Cluster Samples Operator version with which the current samples set are installed.

This condition is deprecated in OpenShift Container Platform.

2.3. Accessing the Cluster Samples Operator configuration

You can configure the Cluster Samples Operator by editing the file with the provided parameters.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  • Access the Cluster Samples Operator configuration:

    $ oc edit configs.samples.operator.openshift.io/cluster -o yaml

    The Cluster Samples Operator configuration resembles the following example:

    apiVersion: samples.operator.openshift.io/v1
    kind: Config
    ...

2.4. Removing deprecated image stream tags from the Cluster Samples Operator

The Cluster Samples Operator leaves deprecated image stream tags in an image stream because users can have deployments that use the deprecated image stream tags.

You can remove deprecated image stream tags by editing the image stream with the oc tag command.

Note

Deprecated image stream tags that the samples providers have removed from their image streams are not included on initial installations.

Prerequisites

  • You installed the oc CLI.

Procedure

  • Remove deprecated image stream tags by editing the image stream with the oc tag command.

    $ oc tag -d <image_stream_name:tag>

    Example output

    Deleted tag default/<image_stream_name:tag>.

Additional resources

Chapter 3. Using the Cluster Samples Operator with an alternate registry

You can use the Cluster Samples Operator with an alternate registry by first creating a mirror registry.

Important

You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet.

3.1. About the mirror registry

You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift, a small-scale container registry included with OpenShift Container Platform subscriptions.

You can use any container registry that supports Docker v2-2, such as Red Hat Quay, the mirror registry for Red Hat OpenShift, Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry.

Important

The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.

If choosing a container registry that is not the mirror registry for Red Hat OpenShift, it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters.

When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.

For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location.

Note

Red Hat does not test third party registries with OpenShift Container Platform.

Additional information

For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source.

3.1.1. Preparing the mirror host

Before you create the mirror registry, you must prepare the mirror host.

3.1.2. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.9 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>
Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.9 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>
Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.9 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

3.2. Configuring credentials that allow images to be mirrored

Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror.

Prerequisites

  • You configured a mirror registry to use in your disconnected environment.

Procedure

Complete the following steps on the installation host:

  1. Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager and save it to a .json file.
  2. Generate the base64-encoded user name and password or token for your mirror registry:

    $ echo -n '<user_name>:<password>' | base64 -w0 1
    BGVtbYk3ZHAtqXs=
    1
    For <user_name> and <password>, specify the user name and password that you configured for your registry.
  3. Make a copy of your pull secret in JSON format:

    $ cat ./pull-secret.text | jq .  > <path>/<pull_secret_file_in_json>1
    1
    Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.
  4. Save the file either as ~/.docker/config.json or $XDG_RUNTIME_DIR/containers/auth.json.

    The contents of the file resemble the following example:

    {
      "auths": {
        "cloud.openshift.com": {
          "auth": "b3BlbnNo...",
          "email": "you@example.com"
        },
        "quay.io": {
          "auth": "b3BlbnNo...",
          "email": "you@example.com"
        },
        "registry.connect.redhat.com": {
          "auth": "NTE3Njg5Nj...",
          "email": "you@example.com"
        },
        "registry.redhat.io": {
          "auth": "NTE3Njg5Nj...",
          "email": "you@example.com"
        }
      }
    }
  5. Edit the new file and add a section that describes your registry to it:

      "auths": {
        "<mirror_registry>": { 1
          "auth": "<credentials>", 2
          "email": "you@example.com"
        }
      },
    1
    For <mirror_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443
    2
    For <credentials>, specify the base64-encoded user name and password for the mirror registry.

    The file resembles the following example:

    {
      "auths": {
        "registry.example.com": {
          "auth": "BGVtbYk3ZHAtqXs=",
          "email": "you@example.com"
        },
        "cloud.openshift.com": {
          "auth": "b3BlbnNo...",
          "email": "you@example.com"
        },
        "quay.io": {
          "auth": "b3BlbnNo...",
          "email": "you@example.com"
        },
        "registry.connect.redhat.com": {
          "auth": "NTE3Njg5Nj...",
          "email": "you@example.com"
        },
        "registry.redhat.io": {
          "auth": "NTE3Njg5Nj...",
          "email": "you@example.com"
        }
      }
    }

3.3. Mirroring the OpenShift Container Platform image repository

Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade.

Prerequisites

  • Your mirror host has access to the internet.
  • You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured.
  • You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository.
  • If you use self-signed certificates that do not set a Subject Alternative Name, you must precede the oc commands in this procedure with GODEBUG=x509ignoreCN=0. If you do not set this variable, the oc commands will fail with the following error:

    x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0

Procedure

Complete the following steps on the mirror host:

  1. Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page.
  2. Set the required environment variables:

    1. Export the release version:

      $ OCP_RELEASE=<release_version>

      For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4.

    2. Export the local registry name and host port:

      $ LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'

      For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

    3. Export the local repository name:

      $ LOCAL_REPOSITORY='<local_repository_name>'

      For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

    4. Export the name of the repository to mirror:

      $ PRODUCT_REPO='openshift-release-dev'

      For a production release, you must specify openshift-release-dev.

    5. Export the path to your registry pull secret:

      $ LOCAL_SECRET_JSON='<path_to_pull_secret>'

      For <path_to_pull_secret>, specify the absolute path to and file name of the pull secret for your mirror registry that you created.

    6. Export the release mirror:

      $ RELEASE_NAME="ocp-release"

      For a production release, you must specify ocp-release.

    7. Export the type of architecture for your server, such as x86_64:

      $ ARCHITECTURE=<server_architecture>
    8. Export the path to the directory to host the mirrored images:

      $ REMOVABLE_MEDIA_PATH=<path> 1
      1
      Specify the full path, including the initial forward slash (/) character.
  3. Mirror the version images to the mirror registry:

    • If your mirror host does not have internet access, take the following actions:

      1. Connect the removable media to a system that is connected to the internet.
      2. Review the images and configuration manifests to mirror:

        $ oc adm release mirror -a ${LOCAL_SECRET_JSON}  \
             --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
             --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
             --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run
      3. Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.
      4. Mirror the images to a directory on the removable media:

        $ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}
      5. Take the media to the restricted network environment and upload the images to the local container registry.

        $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} 1
        1
        For REMOVABLE_MEDIA_PATH, you must use the same path that you specified when you mirrored the images.
    • If the local container registry is connected to the mirror host, take the following actions:

      1. Directly push the release images to the local registry by using following command:

        $ oc adm release mirror -a ${LOCAL_SECRET_JSON}  \
             --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
             --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
             --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}

        This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster.

      2. Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.

        Note

        The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine.

  4. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release:

    • If your mirror host does not have internet access, run the following command:

      $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}"
    • If the local container registry is connected to the mirror host, run the following command:

      $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"
      Important

      To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content.

      You must perform this step on a machine with an active internet connection.

      If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.

  5. For clusters using installer-provisioned infrastructure, run the following command:

    $ openshift-install

3.4. Using Cluster Samples Operator image streams with alternate or mirrored registries

Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io.

Important

The jenkins, jenkins-agent-maven, and jenkins-agent-nodejs image streams come from the install payload and are managed by the Samples Operator.

Setting the samplesRegistry field in the Sample Operator configuration file to registry.redhat.io is redundant because it is already directed to registry.redhat.io for everything but Jenkins images and image streams.

Note

The cli, installer, must-gather, and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure.

Important

The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you have a mirrored registry.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • Create a pull secret for your mirror registry.

Procedure

  1. Access the images of a specific image stream to mirror, for example:

    $ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io
  2. Mirror images from registry.redhat.io associated with any image streams you need

    $ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
  3. Create the cluster’s image configuration object:

    $ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
  4. Add the required trusted CAs for the mirror in the cluster’s image configuration object:

    $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
  5. Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration:

    $ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator
    Note

    This is required because the image stream import process does not use the mirror or search mechanism at this time.

  6. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object.

    Note

    The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them.

    Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams.

3.4.1. Cluster Samples Operator assistance for mirroring

During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.

The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.

During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.

Note

The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators’s objects to reach the services they require.

You can use this config map as a reference for which images need to be mirrored for your image streams to import.

  • While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.
  • Mirror the samples you want to the mirrored registry using the new config map as your guide.
  • Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
  • Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
  • Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored.

See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure.

Chapter 4. Creating images

Learn how to create your own container images, based on pre-built images that are ready to help you. The process includes learning best practices for writing images, defining metadata for images, testing images, and using a custom builder workflow to create images to use with OpenShift Container Platform. After you create an image, you can push it to the internal registry.

4.1. Learning container best practices

When creating container images to run on OpenShift Container Platform there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on OpenShift Container Platform.

4.1.1. General container image guidelines

The following guidelines apply when creating a container image in general, and are independent of whether the images are used on OpenShift Container Platform.

Reuse images

Wherever possible, base your image on an appropriate upstream image using the FROM statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly.

In addition, use tags in the FROM instruction, for example, rhel:rhel7, to make it clear to users exactly which version of an image your image is based on. Using a tag other than latest ensures your image is not subjected to breaking changes that might go into the latest version of an upstream image.

Maintain compatibility within tags

When tagging your own images, try to maintain backwards compatibility within a tag. For example, if you provide an image named foo and it currently includes version 1.0, you might provide a tag of foo:v1. When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image foo:v1, and downstream consumers of this tag are able to get updates without being broken.

If you later release an incompatible update, then switch to a new tag, for example foo:v2. This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using foo:latest takes on the risk of any incompatible changes being introduced.

Avoid multiple processes

Do not start multiple services, such as a database and SSHD, inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. OpenShift Container Platform allows you to easily colocate and co-manage related images by grouping them into a single pod.

This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes.

Use exec in wrapper scripts

Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses exec so that the script’s process is replaced by your software. If you do not use exec, then signals sent by your container runtime go to your wrapper script instead of your software’s process. This is not what you want.

If you have a wrapper script that starts a process for some server. You start your container, for example, using podman run -i, which runs the wrapper script, which in turn starts your process. If you want to close your container with CTRL+C. If your wrapper script used exec to start the server process, podman sends SIGINT to the server process, and everything works as you expect. If you did not use exec in your wrapper script, podman sends SIGINT to the process for the wrapper script and your process keeps running like nothing happened.

Also note that your process runs as PID 1 when running in a container. This means that if your main process terminates, the entire container is stopped, canceling any child processes you launched from your PID 1 process.

Clean temporary files

Remove all temporary files you create during the build process. This also includes any files added with the ADD command. For example, run the yum clean command after performing yum install operations.

You can prevent the yum cache from ending up in an image layer by creating your RUN statement as follows:

RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y

Note that if you instead write:

RUN yum -y install mypackage
RUN yum -y install myotherpackage && yum clean all -y

Then the first yum invocation leaves extra files in that layer, and these files cannot be removed when the yum clean operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers.

The current container build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an rm command in a later layer, although the files are hidden it does not reduce the overall size of the image to be downloaded. Therefore, as with the yum clean example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer.

In addition, performing multiple commands in a single RUN statement reduces the number of layers in your image, which improves download and extraction time.

Place instructions in the proper order

The container builder reads the Dockerfile and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that rarely change at the top of your Dockerfile. Doing so ensures the next builds of the same image are very fast because the cache is not invalidated by upper layer changes.

For example, if you are working on a Dockerfile that contains an ADD command to install a file you are iterating on, and a RUN command to yum install a package, it is best to put the ADD command last:

FROM foo
RUN yum -y install mypackage && yum clean all -y
ADD myfile /test/myfile

This way each time you edit myfile and rerun podman build or docker build, the system reuses the cached layer for the yum command and only generates the new layer for the ADD operation.

If instead you wrote the Dockerfile as:

FROM foo
ADD myfile /test/myfile
RUN yum -y install mypackage && yum clean all -y

Then each time you changed myfile and reran podman build or docker build, the ADD operation would invalidate the RUN layer cache, so the yum operation must be rerun as well.

Mark important ports

The EXPOSE instruction makes a port in the container available to the host system and other containers. While it is possible to specify that a port should be exposed with a podman run invocation, using the EXPOSE instruction in a Dockerfile makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run:

  • Exposed ports show up under podman ps associated with containers created from your image.
  • Exposed ports are present in the metadata for your image returned by podman inspect.
  • Exposed ports are linked when you link one container to another.
Set environment variables

It is good practice to set environment variables with the ENV instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the Dockerfile. Another example is advertising a path on the system that could be used by another process, such as JAVA_HOME.

Avoid default passwords

Avoid setting default passwords. Many people extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords are configurable using an environment variable instead.

If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set.

Avoid sshd

It is best to avoid running sshd in your image. You can use the podman exec or docker exec command to access containers that are running on the local host. Alternatively, you can use the oc exec command or the oc rsh command to access containers that are running on the OpenShift Container Platform cluster. Installing and running sshd in your image opens up additional vectors for attack and requirements for security patching.

Use volumes for persistent data

Images use a volume for persistent data. This way OpenShift Container Platform mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content could not be preserved.

All data that needs to be preserved even after the container is destroyed must be written to a volume. Container engines support a readonly flag for containers, which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now makes it easier to take advantage of it later.

Explicitly defining volumes in your Dockerfile makes it easy for consumers of the image to understand what volumes they must define when running your image.

See the Kubernetes documentation for more information on how volumes are used in OpenShift Container Platform.

Note

Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster.

4.1.2. OpenShift Container Platform-specific guidelines

The following are guidelines that apply when creating container images specifically for use on OpenShift Container Platform.

4.1.2.1. Enable images for source-to-image (S2I)

For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the Source-to-Image (S2I) build tool. S2I is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.

4.1.2.2. Support arbitrary user ids

By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.

For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions.

Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:

RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory

Because the container user is always a member of the root group, the container user can read and write these files.

Warning

Care must be taken when altering the directories and file permissions of sensitive areas of a container, which is no different than to a normal system.

If applied to sensitive areas, such as /etc/passwd, this can allow the modification of such files by unintended users potentially exposing the container or host. CRI-O supports the insertion of arbitrary user IDs into the container’s /etc/passwd, so changing permissions is never required.

In addition, the processes running in the container must not listen on privileged ports, ports below 1024, since they are not running as a privileged user.

Important

If your S2I image does not include a USER declaration with a numeric user, your builds fail by default. To allow images that use either named users or the root 0 user to build in OpenShift Container Platform, you can add the project’s builder service account, system:serviceaccount:<your-project>:builder, to the anyuid security context constraint (SCC). Alternatively, you can allow all images to run as any user.

4.1.2.3. Use services for inter-image communication

For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image consumes an OpenShift Container Platform service. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests.

4.1.2.4. Provide common libraries

For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met.

4.1.2.5. Use environment variables for configuration

Users of your image are able to configure it without having to create a downstream image based on your image. This means that the runtime configuration is handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file.

It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry.

Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image.

For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present.

This topic is related to the Using Services for Inter-image Communication topic in that configuration like datasources are defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the OpenShift Container Platform environment without modifying the application image.

In addition, tuning is done by inspecting the cgroups settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. For example, Java-based images tune their heap based on the cgroup maximum memory parameter to ensure they do not exceed the limits and get an out-of-memory error.

4.1.2.6. Set image metadata

Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that are needed.

4.1.2.7. Clustering

You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks must share information to perform leader election or failover state; for example, in session replication.

Consider how your instances accomplish this communication when running in OpenShift Container Platform. Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic.

4.1.2.8. Logging

It is best to send all logging to standard out. OpenShift Container Platform collects standard out from containers and sends it to the centralized logging service where it can be viewed. If you must separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages.

If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file.

4.1.2.9. Liveness and readiness probes

Document example liveness and readiness probes that can be used with your image. These probes allow users to deploy your image with confidence that traffic is not be routed to the container until it is prepared to handle it, and that the container is restarted if the process gets into an unhealthy state.

4.1.2.10. Templates

Consider providing an example template with your image. A template gives users an easy way to quickly get your image deployed with a working configuration. Your template must include the liveness and readiness probes you documented with the image, for completeness.

4.2. Including metadata in images

Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed.

This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future.

4.2.1. Defining image metadata

You can use the LABEL instruction in a Dockerfile to define image metadata. Labels are similar to environment variables in that they are key value pairs attached to an image or a container. Labels are different from environment variable in that they are not visible to the running application and they can also be used for fast look-up of images and containers.

Docker documentation for more information on the LABEL instruction.

The label names are typically namespaced. The namespace is set accordingly to reflect the project that is going to pick up the labels and use them. For OpenShift Container Platform the namespace is set to io.openshift and for Kubernetes the namespace is io.k8s.

See the Docker custom metadata documentation for details about the format.

Table 4.1. Supported Metadata

VariableDescription

io.openshift.tags

This label contains a list of tags represented as a list of comma-separated string values. The tags are the way to categorize the container images into broad areas of functionality. Tags help UI and generation tools to suggest relevant container images during the application creation process.

LABEL io.openshift.tags   mongodb,mongodb24,nosql

io.openshift.wants

Specifies a list of tags that the generation tools and the UI uses to provide relevant suggestions if you do not have the container images with specified tags already. For example, if the container image wants mysql and redis and you do not have the container image with redis tag, then UI can suggest you to add this image into your deployment.

LABEL io.openshift.wants   mongodb,redis

io.k8s.description

This label can be used to give the container image consumers more detailed information about the service or functionality this image provides. The UI can then use this description together with the container image name to provide more human friendly information to end users.

LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support

io.openshift.non-scalable

An image can use this variable to suggest that it does not support scaling. The UI then communicates this to consumers of that image. Being not-scalable means that the value of replicas should initially not be set higher than 1.

LABEL io.openshift.non-scalable     true

io.openshift.min-memory and io.openshift.min-cpu

This label suggests how much resources the container image needs to work properly. The UI can warn the user that deploying this container image may exceed their user quota. The values must be compatible with Kubernetes quantity.

LABEL io.openshift.min-memory 16Gi
LABEL io.openshift.min-cpu     4

4.3. Creating images from source code with source-to-image

Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.

The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts.

4.3.1. Understanding the source-to-image build process

The build process consists of the following three fundamental elements, which are combined into a final container image:

  • Sources
  • Source-to-image (S2I) scripts
  • Builder image

S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah.

4.3.2. How to write source-to-image scripts

You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble/run/save-artifacts scripts. All of these locations are checked on each build in the following order:

  1. A script specified in the build configuration.
  2. A script found in the application source .s2i/bin directory.
  3. A script found at the default image URL with the io.openshift.s2i.scripts-url label.

Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms:

  • image:///path_to_scripts_dir: absolute path inside the image to a directory where the S2I scripts are located.
  • file:///path_to_scripts_dir: relative or absolute path to a directory on the host where the S2I scripts are located.
  • http(s)://path_to_scripts_dir: URL to a directory where the S2I scripts are located.

Table 4.2. S2I scripts

ScriptDescription

assemble

The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is:

  1. Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well.
  2. Place the application source in the desired location.
  3. Build the application artifacts.
  4. Install the artifacts into locations appropriate for them to run.

run

The run script executes your application. This script is required.

save-artifacts

The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example:

  • For Ruby, gems installed by Bundler.
  • For Java, .m2 contents.

These dependencies are gathered into a tar file and streamed to the standard output.

usage

The usage script allows you to inform the user how to properly use your image. This script is optional.

test/run

The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is:

  1. Build the image.
  2. Run the image to verify the usage script.
  3. Run s2i build to verify the assemble script.
  4. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality.
  5. Run the image to verify the test application is working.
Note

The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository.

Example S2I scripts

The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory.

assemble script:

#!/bin/bash

# restore build artifacts
if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then
    mv /tmp/s2i/artifacts/* $HOME/.
fi

# move the application source
mv /tmp/s2i/src $HOME/src

# build application artifacts
pushd ${HOME}
make all

# install the artifacts
make install
popd

run script:

#!/bin/bash

# run the application
/opt/application/run.sh

save-artifacts script:

#!/bin/bash

pushd ${HOME}
if [ -d deps ]; then
    # all deps contents to tar stream
    tar cf - deps
fi
popd

usage script:

#!/bin/bash

# inform the user how to use the image
cat <<EOF
This is a S2I sample builder image, to use it, install
https://github.com/openshift/source-to-image
EOF

Additional resources

4.4. About testing source-to-image images

As an Source-to-Image (S2I) builder image author, you can test your S2I image locally and use the OpenShift Container Platform build system for automated testing and continuous integration.

S2I requires the assemble and run scripts to be present to successfully run the S2I build. Providing the save-artifacts script reuses the build artifacts, and providing the usage script ensures that usage information is printed to console when someone runs the container image outside of the S2I.

The goal of testing an S2I image is to make sure that all of these described commands work properly, even if the base container image has changed or the tooling used by the commands was updated.

4.4.1. Understanding testing requirements

The standard location for the test script is test/run. This script is invoked by the OpenShift Container Platform S2I image builder and it could be a simple Bash script or a static Go binary.

The test/run script performs the S2I build, so you must have the S2I binary available in your $PATH. If required, follow the installation instructions in the S2I README.

S2I combines the application source code and builder image, so to test it you need a sample application source to verify that the source successfully transforms into a runnable container image. The sample application should be simple, but it should exercise the crucial steps of assemble and run scripts.

4.4.2. Generating scripts and tools

The S2I tooling comes with powerful generation tools to speed up the process of creating a new S2I image. The s2i create command produces all the necessary S2I scripts and testing tools along with the Makefile:

$ s2i create _<image name>_ _<destination directory>_

The generated test/run script must be adjusted to be useful, but it provides a good starting point to begin developing.

Note

The test/run script produced by the s2i create command requires that the sample application sources are inside the test/test-app directory.

4.4.3. Testing locally

The easiest way to run the S2I image tests locally is to use the generated Makefile.

If you did not use the s2i create command, you can copy the following Makefile template and replace the IMAGE_NAME parameter with your image name.

Sample Makefile

IMAGE_NAME = openshift/ruby-20-centos7
CONTAINER_ENGINE := $(shell command -v podman 2> /dev/null | echo docker)

build:
	${CONTAINER_ENGINE} build -t $(IMAGE_NAME) .

.PHONY: test
test:
	${CONTAINER_ENGINE} build -t $(IMAGE_NAME)-candidate .
	IMAGE_NAME=$(IMAGE_NAME)-candidate test/run

4.4.4. Basic testing workflow

The test script assumes you have already built the image you want to test. If required, first build the S2I image. Run one of the following commands:

  • If you use Podman, run the following command:

    $ podman build -t <builder_image_name>
  • If you use Docker, run the following command:

    $ docker build -t <builder_image_name>

The following steps describe the default workflow to test S2I image builders:

  1. Verify the usage script is working:

    • If you use Podman, run the following command:

      $ podman run <builder_image_name> .
    • If you use Docker, run the following command:

      $ docker run <builder_image_name> .
  2. Build the image:

    $ s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_
  3. Optional: if you support save-artifacts, run step 2 once again to verify that saving and restoring artifacts works properly.
  4. Run the container:

    • If you use Podman, run the following command:

      $ podman run <output_application_image_name>
    • If you use Docker, run the following command:

      $ docker run <output_application_image_name>
  5. Verify the container is running and the application is responding.

Running these steps is generally enough to tell if the builder image is working as expected.

4.4.5. Using OpenShift Container Platform for building the image

Once you have a Dockerfile and the other artifacts that make up your new S2I builder image, you can put them in a git repository and use OpenShift Container Platform to build and push the image. Define a Docker build that points to your repository.

If your OpenShift Container Platform instance is hosted on a public IP address, the build can be triggered each time you push into your S2I builder image GitHub repository.

You can also use the ImageChangeTrigger to trigger a rebuild of your applications that are based on the S2I builder image you updated.

Chapter 5. Managing images

5.1. Managing images overview

With OpenShift Container Platform you can interact with images and set up image streams, depending on where the registries of the images are located, any authentication requirements around those registries, and how you want your builds and deployments to behave.

5.1.1. Images overview

An image stream comprises any number of container images identified by tags. It presents a single virtual view of related images, similar to a container image repository.

By watching an image stream, builds and deployments can receive notifications when new images are added or modified and react by performing a build or deployment, respectively.

5.2. Tagging images

The following sections provide an overview and instructions for using image tags in the context of container images for working with OpenShift Container Platform image streams and their tags.

5.2.1. Image tags

An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag:

registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2

You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest.

OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images.

5.2.2. Image tag conventions

Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built.

If there is too much information embedded in a tag name, like v2.0.1-may-2019, the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed. In very large clusters, the schema of creating new tags for every revised image could eventually fill up the etcd datastore with excess tag metadata for images that are long outdated.

If the tag is named v2.0, image revisions are more likely. This results in longer tag history and, therefore, the image pruner is more likely to remove old and unused images.

Although tag naming convention is up to you, here are a few examples in the format <image_name>:<image_tag>:

Table 5.1. Image tag naming conventions

DescriptionExample

Revision

myimage:v2.0.1

Architecture

myimage:v2.0-x86_64

Base image

myimage:v1.2-centos7

Latest (potentially unstable)

myimage:latest

Latest stable

myimage:stable

If you require dates in tag names, periodically inspect old and unsupported images and istags and remove them. Otherwise, you can experience increasing resource usage caused by retaining old images.

5.2.3. Adding tags to image streams

An image stream in OpenShift Container Platform comprises zero or more container images identified by tags.

There are different types of tags available. The default behavior uses a permanent tag, which points to a specific image in time. If the permanent tag is in use and the source changes, the tag does not change for the destination.

A tracking tag means the destination tag’s metadata is updated during the import of the source tag.

Procedure

  • You can add tags to an image stream using the oc tag command:

    $ oc tag <source> <destination>

    For example, to configure the ruby image stream static-2.0 tag to always refer to the current image for the ruby image stream 2.0 tag:

    $ oc tag ruby:2.0 ruby:static-2.0

    This creates a new image stream tag named static-2.0 in the ruby image stream. The new tag directly references the image id that the ruby:2.0 image stream tag pointed to at the time oc tag was run, and the image it points to never changes.

  • To ensure the destination tag is updated when the source tag changes, use the --alias=true flag:

    $ oc tag --alias=true <source> <destination>
Note

Use a tracking tag for creating permanent aliases, for example, latest or stable. The tag only works correctly within a single image stream. Trying to create a cross-image stream alias produces an error.

  • You can also add the --scheduled=true flag to have the destination tag be refreshed, or re-imported, periodically. The period is configured globally at the system level.
  • The --reference flag creates an image stream tag that is not imported. The tag points to the source location, permanently.

    If you want to instruct OpenShift Container Platform to always fetch the tagged image from the integrated registry, use --reference-policy=local. The registry uses the pull-through feature to serve the image to the client. By default, the image blobs are mirrored locally by the registry. As a result, they can be pulled more quickly the next time they are needed. The flag also allows for pulling from insecure registries without a need to supply --insecure-registry to the container runtime as long as the image stream has an insecure annotation or the tag has an insecure import policy.

5.2.4. Removing tags from image streams

You can remove tags from an image stream.

Procedure

  • To remove a tag completely from an image stream run:

    $ oc delete istag/ruby:latest

    or:

    $ oc tag -d ruby:latest

5.2.5. Referencing images in imagestreams

You can use tags to reference images in image streams using the following reference types.

Table 5.2. Imagestream reference types

Reference typeDescription

ImageStreamTag

An ImageStreamTag is used to reference or retrieve an image for a given image stream and tag.

ImageStreamImage

An ImageStreamImage is used to reference or retrieve an image for a given image stream and image sha ID.

DockerImage

A DockerImage is used to reference or retrieve an image for a given external registry. It uses standard Docker pull specification for its name.

When viewing example image stream definitions you may notice they contain definitions of ImageStreamTag and references to DockerImage, but nothing related to ImageStreamImage.

This is because the ImageStreamImage objects are automatically created in OpenShift Container Platform when you import or tag an image into the image stream. You should never have to explicitly define an ImageStreamImage object in any image stream definition that you use to create image streams.

Procedure

  • To reference an image for a given image stream and tag, use ImageStreamTag:

    <image_stream_name>:<tag>
  • To reference an image for a given image stream and image sha ID, use ImageStreamImage:

    <image_stream_name>@<id>

    The <id> is an immutable identifier for a specific image, also called a digest.

  • To reference or retrieve an image for a given external registry, use DockerImage:

    openshift/ruby-20-centos7:2.0
    Note

    When no tag is specified, it is assumed the latest tag is used.

    You can also reference a third-party registry:

    registry.redhat.io/rhel7:latest

    Or an image with a digest:

    centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e

5.3. Image pull policy

Each container in a pod has a container image. After you have created an image and pushed it to a registry, you can then refer to it in the pod.

5.3.1. Image pull policy overview

When OpenShift Container Platform creates containers, it uses the container imagePullPolicy to determine if the image should be pulled prior to starting the container. There are three possible values for imagePullPolicy:

Table 5.3. imagePullPolicy values

ValueDescription

Always

Always pull the image.

IfNotPresent

Only pull the image if it does not already exist on the node.

Never

Never pull the image.

If a container imagePullPolicy parameter is not specified, OpenShift Container Platform sets it based on the image tag:

  1. If the tag is latest, OpenShift Container Platform defaults imagePullPolicy to Always.
  2. Otherwise, OpenShift Container Platform defaults imagePullPolicy to IfNotPresent.

5.4. Using image pull secrets

If you are using the OpenShift Container Platform internal registry and are pulling from image streams located in the same project, then your pod service account should already have the correct permissions and no additional action should be required.

However, for other scenarios, such as referencing images across OpenShift Container Platform projects or from secured registries, then additional configuration steps are required.

You can obtain the image pull secret from the Red Hat OpenShift Cluster Manager. This pull secret is called pullSecret.

You use this pull secret to authenticate with the services that are provided by the included authorities, Quay.io and registry.redhat.io, which serve the container images for OpenShift Container Platform components.

5.4.1. Allowing pods to reference images across projects

When using the internal registry, to allow pods in project-a to reference images in project-b, a service account in project-a must be bound to the system:image-puller role in project-b.

Note

When you create a pod service account or a namespace, wait until the service account is provisioned with a docker pull secret; if you create a pod before its service account is fully provisioned, the pod fails to access the OpenShift Container Platform internal registry.

Procedure

  1. To allow pods in project-a to reference images in project-b, bind a service account in project-a to the system:image-puller role in project-b:

    $ oc policy add-role-to-user \
        system:image-puller system:serviceaccount:project-a:default \
        --namespace=project-b

    After adding that role, the pods in project-a that reference the default service account are able to pull images from project-b.

  2. To allow access for any service account in project-a, use the group:

    $ oc policy add-role-to-group \
        system:image-puller system:serviceaccounts:project-a \
        --namespace=project-b

5.4.2. Allowing pods to reference images from other secured registries

The .dockercfg $HOME/.docker/config.json file for Docker clients is a Docker credentials file that stores your authentication information if you have previously logged into a secured or insecure registry.

To pull a secured container image that is not from OpenShift Container Platform’s internal registry, you must create a pull secret from your Docker credentials and add it to your service account.

The Docker credentials file and the associated pull secret can contain multiple references to the same registry, each with its own set of credentials.

Example config.json file

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io/repository-main":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

Example pull secret

apiVersion: v1
data:
  .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg==
kind: Secret
metadata:
  creationTimestamp: "2021-09-09T19:10:11Z"
  name: pull-secret
  namespace: default
  resourceVersion: "37676"
  uid: e2851531-01bc-48ba-878c-de96cfe31020
type: Opaque

Procedure

  • If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockercfg=<path/to/.dockercfg> \
        --type=kubernetes.io/dockercfg
  • Or if you have a $HOME/.docker/config.json file:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
        --type=kubernetes.io/dockerconfigjson
  • If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:

    $ oc create secret docker-registry <pull_secret_name> \
        --docker-server=<registry_server> \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>
  • To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default:

    $ oc secrets link default <pull_secret_name> --for=pull

5.4.2.1. Pulling from private registries with delegated authentication

A private registry can delegate authentication to a separate service. In these cases, image pull secrets must be defined for both the authentication and registry endpoints.

Procedure

  1. Create a secret for the delegated authentication server:

    $ oc create secret docker-registry \
        --docker-server=sso.redhat.com \
        --docker-username=developer@example.com \
        --docker-password=******** \
        --docker-email=unused \
        redhat-connect-sso
    
    secret/redhat-connect-sso
  2. Create a secret for the private registry:

    $ oc create secret docker-registry \
        --docker-server=privateregistry.example.com \
        --docker-username=developer@example.com \
        --docker-password=******** \
        --docker-email=unused \
        private-registry
    
    secret/private-registry

5.4.3. Updating the global cluster pull secret

You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret.

Important

To transfer your cluster to another owner, you must first initiate the transfer in OpenShift Cluster Manager, and then update the pull secret on the cluster. Updating a cluster’s pull secret without initiating the transfer in OpenShift Cluster Manager causes the cluster to stop reporting Telemetry metrics in OpenShift Cluster Manager.

For more information about transferring cluster ownership, see "Transferring cluster ownership" in the Red Hat OpenShift Cluster Manager documentation.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Optional: To append a new pull secret to the existing pull secret, complete the following steps:

    1. Enter the following command to download the pull secret:

      $ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1
      1
      Provide the path to the pull secret file.
    2. Enter the following command to add the new pull secret:

      $ oc registry login --registry="<registry>" \ 1
      --auth-basic="<username>:<password>" \ 2
      --to=<pull_secret_location> 3
      1
      Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>".
      2
      Provide the credentials of the new registry.
      3
      Provide the path to the pull secret file.

      Alternatively, you can perform a manual update to the pull secret file.

  2. Enter the following command to update the global pull secret for your cluster:

    $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1
    1
    Provide the path to the new pull secret file.

    This update is rolled out to all nodes, which can take some time depending on the size of your cluster.

    Note

    As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot.

Chapter 6. Managing image streams

Image streams provide a means of creating and updating container images in an on-going way. As improvements are made to an image, tags can be used to assign new version numbers and keep track of changes. This document describes how image streams are managed.

6.1. Why use imagestreams

An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes.

Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository.

You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively.

For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image.

However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image.

The source images can be stored in any of the following:

  • OpenShift Container Platform’s integrated registry.
  • An external registry, for example registry.redhat.io or quay.io.
  • Other image streams in the OpenShift Container Platform cluster.

When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image.

The image stream metadata is stored in the etcd instance along with other cluster information.

Using image streams has several significant benefits:

  • You can tag, rollback a tag, and quickly deal with images, without having to re-push using the command line.
  • You can trigger builds and deployments when a new image is pushed to the registry. Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects.
  • You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration.
  • You can share images using fine-grained access control and quickly distribute images across your teams.
  • If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application do not break unexpectedly.
  • You can configure security around who can view and use the images through permissions on the image stream objects.
  • Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams.

6.2. Configuring image streams

An ImageStream object file contains the following elements.

Imagestream object definition

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  labels:
    app: ruby-sample-build
    template: application-template-stibuild
  name: origin-ruby-sample 1
  namespace: test
spec: {}
status:
  dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2
  tags:
  - items:
    - created: 2017-09-02T10:15:09Z
      dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3
      generation: 2
      image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4
    - created: 2017-09-01T13:40:11Z
      dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5
      generation: 1
      image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
    tag: latest 5

1
The name of the image stream.
2
Docker repository path where new images can be pushed to add or update them in this image stream.
3
The SHA identifier that this image stream tag currently references. Resources that reference this image stream tag use this identifier.
4
The SHA identifier that this image stream tag previously referenced. Can be used to rollback to an older image.
5
The image stream tag name.

6.3. Image stream images

An image stream image points from within an image stream to a particular image ID.

Image stream images allow you to retrieve metadata about an image from a particular image stream where it is tagged.

Image stream image objects are automatically created in OpenShift Container Platform whenever you import or tag an image into the image stream. You should never have to explicitly define an image stream image object in any image stream definition that you use to create image streams.

The image stream image consists of the image stream name and image ID from the repository, delimited by an @ sign:

<image-stream-name>@<image-id>

To refer to the image in the ImageStream object example, the image stream image looks like:

origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d

6.4. Image stream tags

An image stream tag is a named pointer to an image in an image stream. It is abbreviated as istag. An image stream tag is used to reference or retrieve an image for a given image stream and tag.

Image stream tags can reference any local or externally managed image. It contains a history of images represented as a stack of all images the tag ever pointed to. Whenever a new or existing image is tagged under particular image stream tag, it is placed at the first position in the history stack. The image previously occupying the top position is available at the second position. This allows for easy rollbacks to make tags point to historical images again.

The following image stream tag is from an ImageStream object:

Image stream tag with two images in its history

  tags:
  - items:
    - created: 2017-09-02T10:15:09Z
      dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
      generation: 2
      image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5
    - created: 2017-09-01T13:40:11Z
      dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5
      generation: 1
      image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
    tag: latest

Image stream tags can be permanent tags or tracking tags.

  • Permanent tags are version-specific tags that point to a particular version of an image, such as Python 3.5.
  • Tracking tags are reference tags that follow another image stream tag and can be updated to change which image they follow, like a symlink. These new levels are not guaranteed to be backwards-compatible.

    For example, the latest image stream tags that ship with OpenShift Container Platform are tracking tags. This means consumers of the latest image stream tag are updated to the newest level of the framework provided by the image when a new level becomes available. A latest image stream tag to v3.10 can be changed to v3.11 at any time. It is important to be aware that these latest image stream tags behave differently than the Docker latest tag. The latest image stream tag, in this case, does not point to the latest image in the Docker repository. It points to another image stream tag, which might not be the latest version of an image. For example, if the latest image stream tag points to v3.10 of an image, when the 3.11 version is released, the latest tag is not automatically updated to v3.11, and remains at v3.10 until it is manually updated to point to a v3.11 image stream tag.

    Note

    Tracking tags are limited to a single image stream and cannot reference other image streams.

You can create your own image stream tags for your own needs.

The image stream tag is composed of the name of the image stream and a tag, separated by a colon:

<imagestream name>:<tag>

For example, to refer to the sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d image in the ImageStream object example earlier, the image stream tag would be:

origin-ruby-sample:latest

6.5. Image stream change triggers

Image stream triggers allow your builds and deployments to be automatically invoked when a new version of an upstream image is available.

For example, builds and deployments can be automatically started when an image stream tag is modified. This is achieved by monitoring that particular image stream tag and notifying the build or deployment when a change is detected.

6.6. Image stream mapping

When the integrated registry receives a new image, it creates and sends an image stream mapping to OpenShift Container Platform, providing the image’s project, name, tag, and image metadata.

Note

Configuring image stream mappings is an advanced feature.

This information is used to create a new image, if it does not already exist, and to tag the image into the image stream. OpenShift Container Platform stores complete metadata about each image, such as commands, entry point, and environment variables. Images in OpenShift Container Platform are immutable and the maximum name length is 63 characters.

The following image stream mapping example results in an image being tagged as test/origin-ruby-sample:latest:

Image stream mapping object definition

apiVersion: image.openshift.io/v1
kind: ImageStreamMapping
metadata:
  creationTimestamp: null
  name: origin-ruby-sample
  namespace: test
tag: latest
image:
  dockerImageLayers:
  - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
    size: 0
  - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29
    size: 196634330
  - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
    size: 0
  - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
    size: 0
  - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092
    size: 177723024
  - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e
    size: 55679776
  - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966
    size: 11939149
  dockerImageMetadata:
    Architecture: amd64
    Config:
      Cmd:
      - /usr/libexec/s2i/run
      Entrypoint:
      - container-entrypoint
      Env:
      - RACK_ENV=production
      - OPENSHIFT_BUILD_NAMESPACE=test
      - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git
      - EXAMPLE=sample-app
      - OPENSHIFT_BUILD_NAME=ruby-sample-build-1
      - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      - STI_SCRIPTS_URL=image:///usr/libexec/s2i
      - STI_SCRIPTS_PATH=/usr/libexec/s2i
      - HOME=/opt/app-root/src
      - BASH_ENV=/opt/app-root/etc/scl_enable
      - ENV=/opt/app-root/etc/scl_enable
      - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable
      - RUBY_VERSION=2.2
      ExposedPorts:
        8080/tcp: {}
      Labels:
        build-date: 2015-12-23
        io.k8s.description: Platform for building and running Ruby 2.2 applications
        io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest
        io.openshift.build.commit.author: Ben Parees <bparees@users.noreply.github.com>
        io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500
        io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442
        io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti'
        io.openshift.build.commit.ref: master
        io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e
        io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git
        io.openshift.builder-base-version: 8d95148
        io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df
        io.openshift.s2i.scripts-url: image:///usr/libexec/s2i
        io.openshift.tags: builder,ruby,ruby22
        io.s2i.scripts-url: image:///usr/libexec/s2i
        license: GPLv2
        name: CentOS Base Image
        vendor: CentOS
      User: "1001"
      WorkingDir: /opt/app-root/src
    Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76
    ContainerConfig:
      AttachStdout: true
      Cmd:
      - /bin/sh
      - -c
      - tar -C /tmp -xf - && /usr/libexec/s2i/assemble
      Entrypoint:
      - container-entrypoint
      Env:
      - RACK_ENV=production
      - OPENSHIFT_BUILD_NAME=ruby-sample-build-1
      - OPENSHIFT_BUILD_NAMESPACE=test
      - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git
      - EXAMPLE=sample-app
      - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      - STI_SCRIPTS_URL=image:///usr/libexec/s2i
      - STI_SCRIPTS_PATH=/usr/libexec/s2i
      - HOME=/opt/app-root/src
      - BASH_ENV=/opt/app-root/etc/scl_enable
      - ENV=/opt/app-root/etc/scl_enable
      - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable
      - RUBY_VERSION=2.2
      ExposedPorts:
        8080/tcp: {}
      Hostname: ruby-sample-build-1-build
      Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e
      OpenStdin: true
      StdinOnce: true
      User: "1001"
      WorkingDir: /opt/app-root/src
    Created: 2016-01-29T13:40:00Z
    DockerVersion: 1.8.2.fc21
    Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43
    Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd
    Size: 441976279
    apiVersion: "1.0"
    kind: DockerImage
  dockerImageMetadataVersion: "1.0"
  dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d

6.7. Working with image streams

The following sections describe how to use image streams and image stream tags.

6.7.1. Getting information about image streams

You can get general information about the image stream and detailed information about all the tags it is pointing to.

Procedure

  • Get general information about the image stream and detailed information about all the tags it is pointing to:

    $ oc describe is/<image-name>

    For example:

    $ oc describe is/python

    Example output

    Name:			python
    Namespace:		default
    Created:		About a minute ago
    Labels:			<none>
    Annotations:		openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z
    Docker Pull Spec:	docker-registry.default.svc:5000/default/python
    Image Lookup:		local=false
    Unique Images:		1
    Tags:			1
    
    3.5
      tagged from centos/python-35-centos7
    
      * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
          About a minute ago

  • Get all the information available about particular image stream tag:

    $ oc describe istag/<image-stream>:<tag-name>

    For example:

    $ oc describe istag/python:latest

    Example output

    Image Name:	sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
    Docker Image:	centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
    Name:		sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
    Created:	2 minutes ago
    Image Size:	251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB)
    Image Created:	2 weeks ago
    Author:		<none>
    Arch:		amd64
    Entrypoint:	container-entrypoint
    Command:	/bin/sh -c $STI_SCRIPTS_PATH/usage
    Working Dir:	/opt/app-root/src
    User:		1001
    Exposes Ports:	8080/tcp
    Docker Labels:	build-date=20170801

Note

More information is output than shown.

6.7.2. Adding tags to an image stream

You can add additional tags to image streams.

Procedure

  • Add a tag that points to one of the existing tags by using the `oc tag`command:

    $ oc tag <image-name:tag1> <image-name:tag2>

    For example:

    $ oc tag python:3.5 python:latest

    Example output

    Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.

  • Confirm the image stream has two tags, one, 3.5, pointing at the external container image and another tag, latest, pointing to the same image because it was created based on the first tag.

    $ oc describe is/python

    Example output

    Name:			python
    Namespace:		default
    Created:		5 minutes ago
    Labels:			<none>
    Annotations:		openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z
    Docker Pull Spec:	docker-registry.default.svc:5000/default/python
    Image Lookup:		local=false
    Unique Images:		1
    Tags:			2
    
    latest
      tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
    
      * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
          About a minute ago
    
    3.5
      tagged from centos/python-35-centos7
    
      * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
          5 minutes ago

6.7.3. Adding tags for an external image

You can add tags for external images.

Procedure

  • Add tags pointing to internal or external images, by using the oc tag command for all tag-related operations:

    $ oc tag <repository/image> <image-name:tag>

    For example, this command maps the docker.io/python:3.6.0 image to the 3.6 tag in the python image stream.

    $ oc tag docker.io/python:3.6.0 python:3.6

    Example output

    Tag python:3.6 set to docker.io/python:3.6.0.

    If the external image is secured, you must create a secret with credentials for accessing that registry.

6.7.4. Updating image stream tags

You can update a tag to reflect another tag in an image stream.

Procedure

  • Update a tag:

    $ oc tag <image-name:tag> <image-name:latest>

    For example, the following updates the latest tag to reflect the 3.6 tag in an image stream:

    $ oc tag python:3.6 python:latest

    Example output

    Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.

6.7.5. Removing image stream tags

You can remove old tags from an image stream.

Procedure

  • Remove old tags from an image stream:

    $ oc tag -d <image-name:tag>

    For example:

    $ oc tag -d python:3.5

    Example output

    Deleted tag default/python:3.5.

See Removing deprecated image stream tags from the Cluster Samples Operator for more information on how the Cluster Samples Operator handles deprecated image stream tags.

6.7.6. Configuring periodic importing of image stream tags

When working with an external container image registry, to periodically re-import an image, for example to get latest security updates, you can use the --scheduled flag.

Procedure

  1. Schedule importing images:

    $ oc tag <repository/image> <image-name:tag> --scheduled

    For example:

    $ oc tag docker.io/python:3.6.0 python:3.6 --scheduled

    Example output

    Tag python:3.6 set to import docker.io/python:3.6.0 periodically.

    This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default.

  2. Remove the periodic check, re-run above command but omit the --scheduled flag. This will reset its behavior to default.

    $ oc tag <repositiory/image> <image-name:tag>

6.8. Importing images and image streams from private registries

An image stream can be configured to import tag and image metadata from private image registries requiring authentication. This procedures applies if you change the registry that the Cluster Samples Operator uses to pull content from to something other than registry.redhat.io.

Note

When importing from insecure or secure registries, the registry URL defined in the secret must include the :80 port suffix or the secret is not used when attempting to import from the registry.

Procedure

  1. You must create a secret object that is used to store your credentials by entering the following command:

    $ oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson
  2. After the secret is configured, create the new image stream or enter the oc import-image command:

    $ oc import-image <imagestreamtag> --from=
          <privileged>false</privileged>
          <alwaysPullImage>true</alwaysPullImage>
          <workingDir>/tmp</workingDir>
          <command></command>
          <args>${computer.jnlpmac} ${computer.name}</args>
          <ttyEnabled>false</ttyEnabled>
          <resourceRequestCpu></resourceRequestCpu>
          <resourceRequestMemory></resourceRequestMemory>
          <resourceLimitCpu></resourceLimitCpu>
          <resourceLimitMemory></resourceLimitMemory>
          <envVars/>
        </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      </containers>
      <envVars/>
      <annotations/>
      <imagePullSecrets/>
      <nodeProperties/>
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>

Note

If you log in to the Jenkins console and make further changes to the pod template configuration after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the config map has changed, it will replace the pod template and overwrite those configuration changes. You cannot merge a new configuration with the existing configuration.

Do not log in to the Jenkins console and modify the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration.

Consider the config map approach if you have more complex configuration needs.

After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin.

The following rules apply:

  • Removing the label or annotation from the config map, image stream, or image stream tag results in the deletion of any existing PodTemplate from the configuration of the Kubernetes plugin.
  • If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin.
  • Either creating appropriately labeled or annotated ConfigMap, ImageStream, or ImageStreamTag objects, or the adding of labels after their initial creation, leads to creating of a PodTemplate in the Kubernetes-plugin configuration.
  • In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration and overrides any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map.

To use a container image as a Jenkins agent, the image must run the agent as an entrypoint. For more details about this, refer to the official Jenkins documentation.

12.2.7. Jenkins permissions

If in the config map the <serviceAccount> element of the pod template XML is the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod.

Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plugin that runs in the OpenShift Container Platform Jenkins image.

If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted.

The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master.

  • Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account.
  • For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template.
  • If you do not specify a value for the service account, the default service account is used.
  • Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from the within the pod.

12.2.8. Creating a Jenkins service from a template

Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup.

The two available templates both define deployment configuration and a service. The templates differ in their storage strategy, which affects whether or not the Jenkins content persists across a pod restart.

Note

A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment.

  • jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing.
  • jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart.

To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment.

After you select which template you want, you must instantiate the template to be able to use Jenkins.

Procedure

  1. Create a new Jenkins application using one of the following methods:

    • A PV:

      $ oc new-app jenkins-persistent
    • Or an emptyDir type volume where configuration does not persist across pod restarts:

      $ oc new-app jenkins-ephemeral

12.2.9. Using the Jenkins Kubernetes plugin

In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig, openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image.

Sample BuildConfig that uses the Jenkins Kubernetes plugin

kind: List
apiVersion: v1
items:
- kind: ImageStream
  apiVersion: image.openshift.io/v1
  metadata:
    name: openshift-jee-sample
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: openshift-jee-sample-docker
  spec:
    strategy:
      type: Docker
    source:
      type: Docker
      dockerfile: |-
        FROM openshift/wildfly-101-centos7:latest
        COPY ROOT.war /wildfly/standalone/deployments/ROOT.war
        CMD $STI_SCRIPTS_PATH/run
      binary:
        asFile: ROOT.war
    output:
      to:
        kind: ImageStreamTag
        name: openshift-jee-sample:latest
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: openshift-jee-sample
  spec:
    strategy:
      type: JenkinsPipeline
      jenkinsPipelineStrategy:
        jenkinsfile: |-
          node("maven") {
            sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
            sh "mvn -B -Popenshift package"
            sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
          }
    triggers:
    - type: ConfigChange

It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the previous example, which overrides the container memory and specifies an environment variable.

Sample BuildConfig that uses the Jenkins Kubernetes Plugin, specifying memory limit and environment variable

kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: openshift-jee-sample
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        podTemplate(label: "mypod", 1
                    cloud: "openshift", 2
                    inheritFrom: "maven", 3
                    containers: [
            containerTemplate(name: "jnlp", 4
                              image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5
                              resourceRequestMemory: "512Mi", 6
                              resourceLimitMemory: "512Mi", 7
                              envVars: [
              envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8
            ])
          ]) {
          node("mypod") { 9
            sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
            sh "mvn -B -Popenshift package"
            sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
          }
        }
  triggers:
  - type: ConfigChange

1
A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza.
2
The cloud value must be set to openshift.
3
The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Container Platform.
4
This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the Container name jnlp.
5
Specify the container image name again. This is a known issue.
6
A memory request of 512 Mi is specified.
7
A memory limit of 512 Mi is specified.
8
An environment variable CONTAINER_HEAP_PERCENT, with value 0.25, is specified.
9
The node stanza references the name of the defined pod template.

By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile.

12.2.10. Jenkins memory requirements

When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi.

By default, all other process that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible.

And if Project quotas allow for it, see recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations proscribe to allocate even more memory for the Jenkins master.

It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis.

You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template.

12.2.11. Additional resources

12.3. Jenkins agent

OpenShift Container Platform provides three images that are suitable for use as Jenkins agents: the Base, Maven, and Node.js images.

The first is a base image for Jenkins agents:

  • It pulls in both the required tools, headless Java, the Jenkins JNLP client, and the useful ones including git, tar, zip, and nss among others.
  • It establishes the JNLP agent as the entrypoint.
  • It includes the oc client tooling for invoking command line operations from within Jenkins jobs.
  • It provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images.

Two more images that extend the base image are also provided:

  • Maven v3.5 image
  • Node.js v10 image and Node.js v12 image

The Maven and Node.js Jenkins agent images provide Dockerfiles for the Universal Base Image (UBI) that you can reference when building new agent images. Also note the contrib and contrib/bin subdirectories. They allow for the insertion of configuration files and executable scripts for your image.

Important

Use and extend an appropriate agent image version for the your of OpenShift Container Platform. If the oc client version that is embedded in the agent image is not compatible with the OpenShift Container Platform version, unexpected behavior can result.

12.3.1. Jenkins agent images

The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io.

Jenkins images are available through the Red Hat Registry:

$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>

To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry.

12.3.2. Jenkins agent environment variables

Each Jenkins agent container can be configured with the following environment variables.

VariableDefinitionExample values and settings

JAVA_MAX_HEAP_PARAM, CONTAINER_HEAP_PERCENT, JENKINS_MAX_HEAP_UPPER_BOUND_MB

These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB.

By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap.

JAVA_MAX_HEAP_PARAM example setting: -Xmx512m

CONTAINER_HEAP_PERCENT default: 0.5, or 50%

JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB

JAVA_INITIAL_HEAP_PARAM, CONTAINER_INITIAL_PERCENT

These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size.

By default, the JVM sets the initial heap size.

JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m

CONTAINER_INITIAL_PERCENT example setting: 0.1, or 10%

CONTAINER_CORE_LIMIT

If set, specifies an integer number of cores used for sizing numbers of internal JVM threads.

Example setting: 2

JAVA_TOOL_OPTIONS

Specifies options to apply to all JVMs running in this container. It is not recommended to override this value.

Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true

JAVA_GC_OPTS

Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value.

Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90

JENKINS_JAVA_OVERRIDES

Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space and if any option contains space characters, escape them with a backslash.

Example settings: -Dfoo -Dbar; -Dfoo=first\ value -Dbar=second\ value

USE_JAVA_VERSION

Specifies the version of Java version to use to run the agent in its container. The container base image has two versions of java installed: java-11 and java-1.8.0. If you extend the container base image, you can specify any alternative version of java using its associated suffix.

The default value is java-11.

Example setting: java-1.8.0

12.3.3. Jenkins agent memory requirements

A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac, Maven, or Gradle.

By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely.

By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill.

By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads.

12.3.4. Jenkins agent Gradle builds

Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified.

The following settings are suggested as a starting point for running Gradle builds in a memory constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required.

  • Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file.
  • Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument.
  • To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file.
  • Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file.
  • Override the Gradle JVM memory parameters by the GRADLE_OPTS, JAVA_OPTS or JAVA_TOOL_OPTIONS environment variables.
  • Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle, or through the -Dorg.gradle.jvmargs command line argument.

12.3.5. Jenkins agent pod retention

Jenkins agent pods, are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plugin pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. The following behaviors are supported:

  • Always keeps the build pod regardless of build result.
  • Default uses the plugin value, which is the pod template only.
  • Never always deletes the pod.
  • On Failure keeps the pod if it fails during the build.

You can override pod retention in the pipeline Jenkinsfile:

podTemplate(label: "mypod",
  cloud: "openshift",
  inheritFrom: "maven",
  podRetention: onFailure(), 1
  containers: [
    ...
  ]) {
  node("mypod") {
    ...
  }
}
1
Allowed values for podRetention are never(), onFailure(), always(), and default().
Warning

Pods that are kept might continue to run and count against resource quotas.

12.4. Source-to-image

You can use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. You can use the Red Hat Java Source-to-Image for OpenShift documentation as a reference for runtime environments that use Java. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code.

S2I images include:

  • .NET
  • Java
  • Go
  • Node.js
  • Perl
  • PHP
  • Python
  • Ruby

S2I images are available for you to use directly from the OpenShift Container Platform web console by following procedure:

  1. Log in to the OpenShift Container Platform web console using your login credentials. The default view for the OpenShift Container Platform web console is the Administrator perspective.
  2. Use the perspective switcher to switch to the Developer perspective.
  3. In the +Add view, select an existing project from the list or use the Project drop-down list to create a new project.
  4. Choose All services under the tile Developer Catalog.
  5. Select the Type Builder Images then can see the available S2I images.

S2I images are also available though the Configuring the Cluster Samples Operator.

12.4.1. Source-to-image build process overview

Source-to-image (S2I) produces ready-to-run images by injecting source code into a container that prepares that source code to be run. It performs the following steps:

  1. Runs the FROM <builder image> command
  2. Copies the source code to a defined location in the builder image
  3. Runs the assemble script in the builder image
  4. Sets the run script in the builder image as the default command

Buildah then creates the container image.

12.4.2. Additional resources

12.5. Customizing source-to-image images

Source-to-image (S2I) builder images include assemble and run scripts, but the default behavior of those scripts is not suitable for all users. You can customize the behavior of an S2I builder that includes default scripts.

12.5.1. Invoking scripts embedded in an image

Builder images provide their own version of the source-to-image (S2I) scripts that cover the most common use-cases. If these scripts do not fulfill your needs, S2I provides a way of overriding them by adding custom ones in the .s2i/bin directory. However, by doing this, you are completely replacing the standard scripts. In some cases, replacing the scripts is acceptable, but, in other scenarios, you can run a few commands before or after the scripts while retaining the logic of the script provided in the image. To reuse the standard scripts, you can create a wrapper script that runs custom logic and delegates further work to the default scripts in the image.

Procedure

  1. Look at the value of the io.openshift.s2i.scripts-url label to determine the location of the scripts inside of the builder image:

    $ podman inspect --format='{{ index .Config.Labels "io.openshift.s2i.scripts-url" }}' wildfly/wildfly-centos7

    Example output

    image:///usr/libexec/s2i

    You inspected the wildfly/wildfly-centos7 builder image and found out that the scripts are in the /usr/libexec/s2i directory.

  2. Create a script that includes an invocation of one of the standard scripts wrapped in other commands:

    .s2i/bin/assemble script

    #!/bin/bash
    echo "Before assembling"
    
    /usr/libexec/s2i/assemble
    rc=$?
    
    if [ $rc -eq 0 ]; then
        echo "After successful assembling"
    else
        echo "After failed assembling"
    fi
    
    exit $rc

    This example shows a custom assemble script that prints the message, runs the standard assemble script from the image, and prints another message depending on the exit code of the assemble script.

    Important

    When wrapping the run script, you must use exec for invoking it to ensure signals are handled properly. The use of exec also precludes the ability to run additional commands after invoking the default image run script.

    .s2i/bin/run script

    #!/bin/bash
    echo "Before running application"
    exec /usr/libexec/s2i/run

Legal Notice

Copyright © 2023 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.