Chapter 3. Architectural Overview

3.1. Image Promotion Pipeline with Centralized Registry

Image tags and imagestreams are used to track builds and deployments across the lifecycle environments. The following high-level diagram follows a single image through the application lifecycle.

Figure 3.1. Pipeline Promotion

  1. The pipeline may be triggered by either:

    1. A base image change, e.g. registry.access.redhat.com/openshiftv3/nodejs-mongo-persistent.
    2. A source code commit.
  2. An OpenShift build is initiated.
  3. The resulting container image is pushed to the registry, tagged with the current version number plus a build increment, e.g. v1.1-3. The dev environment moves the app:latest imagestream tag to point to the new registry image app:v1.1-3.
  4. A test deployment is run to exercise the API, pulling the built image using the latest tag.
  5. If the tests pass, the stage environment moves the app:latest imagestream tag to point to the registry tag app:v1.1-3. Since the ‘latest’ imagestream tag has changed, a new deployment is initiated to run the user acceptance test (UAT) phase.
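The imagestream moves in steps 3 and 5 can be performed with oc tag, without re-pushing any image data. A minimal sketch, assuming projects named dev and stage and an imagestream named app (all illustrative names, not from this project):

```shell
# Point the dev environment's app:latest imagestream tag at the new build.
# No image data is copied; only the tag reference moves.
oc tag dev/app:v1.1-3 dev/app:latest

# After tests pass, move the stage environment's latest tag the same way.
oc tag dev/app:v1.1-3 stage/app:latest
```

Because the deployments trigger on the ‘latest’ imagestream tag, each move above initiates a new rollout in that environment.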

The dev/stage pipeline ends at this phase. Pass/Fail may be posted to internal systems. Many builds may run through to this stage until a release candidate is identified. When a specific build is planned for release, an authorized person initiates the release pipeline.

  1. An authorized user logs into OpenShift to initiate the release by providing the specific build number, e.g. v1.1-3.
  2. The release candidate image is tagged in the registry as v1.1 and latest. This convention ensures the registry always serves the latest release under the ‘latest’ tag, and that the released version can also be pulled using the <majorVersion.minorVersion> format. The previously released version remains available as ‘v1.0’.
  3. The production environment tags an imagestream to point to tag v1.1 in the registry, updates the deployment config and rolls out the application.
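The release tagging in steps 2 and 3 can also be expressed as oc tag operations. A hedged sketch, assuming the registry and production projects are named registry and prod and the imagestream is app (illustrative names):

```shell
# Promote the chosen build to the released version and to latest.
oc tag registry/app:v1.1-3 registry/app:v1.1
oc tag registry/app:v1.1-3 registry/app:latest

# Production tracks the released version tag, not latest.
oc tag registry/app:v1.1 prod/app:v1.1
```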

3.2. Using Image Tags Through the SDLC

Tags are the primary method for identifying images. Because tags are mutable, they must be handled carefully. An OpenShift-based registry allows automation to tag images remotely without re-pushing them.

There are several conventions used to help users and automation identify the correct image.

  1. The ‘latest’ tag is a convenience to reference the latest image that has passed dev environment testing.
  2. The application version tag, <majorVersion.minorVersion>, references the latest release of that particular version.
  3. A build number is appended to identify a specific image under test. These tags take the form <majorVersion.minorVersion-buildNumber>.

The following graphic depicts an example of how two images are tagged in a single registry throughout the SDLC. The boldface image tags identify which tag is referenced in the deployment.

Figure 3.2. Registry tagging

  1. Test image tag ‘v1.2-6’ is pushed to the registry and deployed to the development environment. The ‘6’ is the Jenkins build number.
  2. If tests pass in the development environment the image tag ‘v1.2-6’ is deployed to the stage environment.
  3. If tag ‘v1.2-6’ is chosen for release, the image is tagged ‘v1.2’, which identifies this is the released version. The image is also tagged ‘latest’. Tag ‘v1.1’ is still available but it is no longer ‘latest’.

3.3. Pipelines and Triggers

Both OpenShift and Jenkins provide methods to trigger builds and deployments. Centralizing most of the workflow triggers through Jenkins reduces the complexity of understanding deployments and why they have occurred.

The pipeline buildconfigs are created in OpenShift. The OpenShift sync plugin ensures that Jenkins has the same pipelines defined, which simplifies Jenkins pipeline bootstrapping. Pipeline builds may be initiated from either OpenShift or Jenkins.
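A pipeline buildconfig is an ordinary OpenShift BuildConfig that uses the JenkinsPipeline build strategy, which the sync plugin mirrors into Jenkins. A minimal illustrative sketch (the names, path, and repository URL are placeholders, not from this project):

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: app-pipeline
spec:
  source:
    type: Git
    git:
      uri: https://example.com/app.git   # placeholder repository
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      # The Jenkinsfile is read from this path in the source repository.
      jenkinsfilePath: jenkins/Jenkinsfile
  triggers:
  - type: ConfigChange
```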

The following table describes the pipelines in this project.

Table 3.1. Jenkins Pipelines

Pipeline                              Purpose                                                      Trigger

app-pipeline                          Manage app deployment across dev and stage clusters         Poll SCM

release-pipeline                      Tag image for release and rollout to production environment  Manual

jenkins-lifecycle                     Manage Jenkins Master                                        Poll SCM

app-base-image, jenkins-base-image    Notify that the application base image has changed           Base imagestream:latest change

3.3.1. app-pipeline

The app-pipeline is an example that manages the nodejs-ex application deployment across the projects or clusters. Several stages are required to complete the promotion process described in Section 3.1, “Image Promotion Pipeline with Centralized Registry”; the app-pipeline handles the dev and stage portion. The following is an overview of the steps the example app-pipeline performs.

First, the stage cluster authentication is synchronized into Jenkins for use in a later stage. As previously mentioned, nodejs-ex is the example application. It conveniently includes tests that can be reused, and these are executed next. If testing passes, the application’s OpenShift template is processed and applied. The next stage builds the image. Once complete, the tag is incremented as described in Section 3.2, “Using Image Tags Through the SDLC”. After the image becomes available it can be rolled out. If that completes successfully, the process is repeated for the stage cluster.
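These stages roughly correspond to the following oc client operations. This is an illustrative sketch of what the pipeline automates, not the pipeline’s actual code; the template filename, imagestream name, and tag are placeholders:

```shell
# Process and apply the application's OpenShift template.
oc process -f nodejs-ex-template.yaml | oc apply -f -

# Build the image and wait for the build to complete.
oc start-build app --follow

# Tag the result with the incremented build number, then roll out.
oc tag app:latest app:v1.1-3
oc rollout status dc/app
```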

3.3.2. release-pipeline

The release-pipeline is very similar to Section 3.3.1, “app-pipeline” and addresses the remaining steps in Section 3.1, “Image Promotion Pipeline with Centralized Registry”. Credentials for the production and registry clusters must be synchronized, and the OpenShift template must also be applied. The only major difference is prompting for the specific image tag to be promoted to production. This stage also performs the necessary tagging described in Section 3.2, “Using Image Tags Through the SDLC”. After the pipeline completes, an email is sent to the configured recipients.

3.3.3. jenkins-lifecycle

If there are changes within the source repository’s jenkins directory, Jenkins initiates a rebuild of the image with the appropriate modifications.

3.3.4. app-base-image and jenkins-base-image

The base-image-pipeline is embedded in an OpenShift BuildConfig and Template object. Using an OpenShift Template simplifies the deployment and configuration of multiple pipelines that monitor base image changes. In the example configuration, the two pipelines created monitor for updates to the Jenkins or nodejs source-to-image (S2I) images. When an update is detected, the pipeline prompts for approval to upgrade, which triggers a Jenkins job to rebuild the image.
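An imagestream-change trigger of this kind can be declared on the pipeline BuildConfig. A hedged excerpt, with an illustrative imagestream name:

```yaml
# Excerpt of a pipeline BuildConfig that fires when the tracked
# base imagestream's latest tag changes.
triggers:
- type: ImageChange
  imageChange:
    from:
      kind: ImageStreamTag
      name: nodejs:latest   # illustrative S2I base imagestream
```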

3.4. Topology

This graphic represents the major components of the system.

Figure 3.3. Topology


All artifacts are in source control. These artifacts are used by ansible-playbook and oc remote clients. The clients deploy these artifacts to the development cluster via API. Jenkins deploys to the other clusters via API.

The dev cluster performs S2I container builds, pushing them to the registry. All environments pull these images to the clusters.

3.5. Configuration via Inventory

To reference multiple clusters (or projects), a specific Ansible inventory pattern may be used. Since OpenShift projects may be configured remotely, no SSH connection is required; in Ansible this is a “local” connection. An inventory file may reference arbitrary hosts that all use the “local” connection. This means all commands execute on the local machine, but the hostvars cause the API calls to be made against the target cluster.

In this example inventory file, four groups and hosts are defined: dev/dev-1, stage/stage-1, prod/prod-1 and registry/registry-1. Each group and host corresponds to an OpenShift cluster or project. The playbook can be executed against each individually using a specific group (e.g. “stage”), or against all clusters using the built-in “all” group. The example group and host variable files provide group- and host-specific connection details.
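Such an inventory might look like the following. This is a reconstruction from the description above, not the project’s actual file:

```ini
# All hosts use a local connection; each host's variables determine
# which cluster the API calls target.
[dev]
dev-1

[stage]
stage-1

[prod]
prod-1

[registry]
registry-1

[all:vars]
ansible_connection=local
```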

3.5.1. Ansible Roles

Ansible is designed to be self-documenting. However, additional explanation may be beneficial.

3.5.1.1. auth

  • Configure service accounts.
  • Bind and unbind roles for listed users and service accounts.
  • Retrieve registry token for push and pull.
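The auth role’s tasks map onto standard oc operations. An illustrative sketch; the service account and role names are placeholders, not necessarily those the role uses:

```shell
# Configure a service account and bind a role to it (-z targets a
# service account rather than a user).
oc create serviceaccount jenkins
oc policy add-role-to-user edit -z jenkins

# Retrieve a token usable for registry push and pull.
oc sa get-token jenkins
```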

3.5.1.2. jenkins

  • Create secrets on Jenkins host with tokens from other clusters.
  • Start custom Jenkins build.
  • Create Jenkins service.
  • Create pipeline buildconfigs.

3.5.1.3. puller

  • Create dockercfg secret so appropriate clusters may pull from a central registry.
  • Link dockercfg secret with default service account.

3.5.1.4. pusher

  • Create dockercfg secret so appropriate clusters may push builds to the central registry.
  • Link dockercfg secret with builder service account.
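Both the puller and pusher roles reduce to creating a dockercfg secret and linking it to a service account. A hedged sketch with placeholder values:

```shell
# Create a dockercfg secret holding registry credentials
# (server, username, and token are placeholders).
oc create secret docker-registry registry-creds \
    --docker-server=registry.example.com \
    --docker-username=serviceaccount \
    --docker-password="$TOKEN"

# Pull: link the secret to the default service account.
oc secrets link default registry-creds --for=pull

# Push: link the secret to the builder service account.
oc secrets link builder registry-creds
```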

3.5.2. Playbook overview

The reference Ansible playbook performs the following tasks:

  1. Bootstrap the projects involved by creating the projects, if needed, and obtaining an authentication token for the admin users of each project, to be used by subsequent oc client operations.
  2. Configure authorization in all projects by invoking the Section 3.5.1.1, “auth” role.
  3. Prepare the dev environment by setting up Jenkins through the Section 3.5.1.2, “jenkins” role and preparing registry push credentials through the Section 3.5.1.4, “pusher” role.
  4. Prepare the stage and production environments by setting up image pull permissions using the Section 3.5.1.3, “puller” role.
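The playbook structure implied by these tasks can be sketched as follows. The play names, group targets, and role ordering are a reconstruction from the steps above, not the project’s actual playbook:

```yaml
# Illustrative top-level playbook.
- name: Bootstrap projects and configure authorization
  hosts: all
  roles:
    - auth

- name: Prepare the dev environment
  hosts: dev
  roles:
    - jenkins
    - pusher

- name: Prepare stage and production image pulls
  hosts: stage:prod
  roles:
    - puller
```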