Chapter 4. Deploying the Automation

The automation used in this reference implementation is available in the RHsyseng/jenkins-on-openshift repository on GitHub. The code should be checked out to the local workstation:

$ git clone https://github.com/RHsyseng/jenkins-on-openshift.git
$ cd jenkins-on-openshift/ansible

The repository contains a reference Ansible playbook, main.yml, which configures a set of application project namespaces across a set of OpenShift clusters. It covers an application’s life-cycle through separate projects for development, staging, and production, using a shared container image registry.

Tip

When using the oc client to work against multiple clusters, it is important to learn how to switch between contexts. See the command line documentation for information on cluster context.

The reference playbook deploys a Jenkins instance in the development cluster to drive the life-cycle of the application through the environments.

4.1. Initial Configuration

  1. Configure the environments: start from the group_vars/*.yml.example files and rename each file to remove the '.example' suffix, e.g.

    for i in group_vars/*.example; do mv "${i}" "${i/.example}"; done

    The directory should look like this:

    .
    └── group_vars
        ├── all.yml
        ├── dev.yml
        ├── prod.yml
        ├── registry.yml
        └── stage.yml

    Edit these files and adjust the variables accordingly. At the very least, the following settings must be customized (see the variables section below for more details, and the minimal sketch after this procedure):

    • all.yml: central_registry_hostname
    • [group].yml: clusterhost
  2. Configure authentication to each environment using the host_vars/*-1.yml.example files as a guide. Rename each file to remove '.example'. The directory should look like this:

    .
    └── host_vars
        ├── dev-1.yml
        ├── prod-1.yml
        ├── registry-1.yml
        └── stage-1.yml

    Then edit each of the files and set the respective authentication information. See the host variables section for more details.

  3. Run the playbook:

    ansible-playbook -i inventory.yml main.yml
Note

A number of Ansible actions may appear as failed while executing the playbook. The playbook is operating normally if the end result has no failed hosts. See ignoring failed commands for additional information.
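
For reference, the minimal customization from step 1 might look like the following sketch. The hostnames shown are placeholders and must be replaced with the values for the target registry and clusters.

    # group_vars/all.yml (excerpt) -- placeholder registry hostname
    central_registry_hostname: "registry.example.com:5000"

    # group_vars/dev.yml (excerpt) -- placeholder API hostname of the development cluster
    clusterhost: "dev.openshift.example.com:8443"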

4.1.1. Variables

Below is a description of the variables used by the playbooks. Adjust the values of these variables to suit the given environment, clusters, and application.

The variables are stored in different files depending on their scope, and are therefore meant to be configured through the group or host variable files in {group,host}_vars/*.yml.

However, Ansible’s variable precedence rules apply here, so it is possible to set or override some variable values in different places.

For example, to disable TLS certificate validation for the staging environment/cluster only, validate_certs: false may be set in group_vars/stage.yml while also keeping validate_certs: true in the group_vars/all.yml file.
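
Expressed in the group variable files, that override looks like this:

    # group_vars/all.yml -- certificate validation enabled by default
    validate_certs: true

    # group_vars/stage.yml -- overrides the default for the staging cluster only
    validate_certs: false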

It is also possible to override values in the inventory file and via the --extra-vars option of the ansible-playbook command. For example, in a single cluster environment it may be better to set the value of clusterhost just once as an inventory variable (i.e. in the vars section of inventory.yml) instead of using a group variable. Moreover, if the same user is the administrator of all the projects, instead of configuring authentication details in each environment’s file inside host_vars a single token can be passed directly to the playbook with:

ansible-playbook --extra-vars token=$(oc whoami -t) ...

See the Ansible documentation for more details.

4.1.1.1. group_vars/all.yml

This file specifies variables that are common to all the environments:

Table 4.1. Variables common to all environments

Variable Name                 Required Review   Description

central_registry_hostname    Yes               The hostname[:port] of the central registry where all images will be stored.
source_repo_url              No                Git repository URL of the pipelines to deploy.
source_repo_branch           No                Git branch to use for pipeline deployment.
app_template_path            No                Relative path within the git repository where the application template is stored.
app_name                     No                Name of the application.
app_base_tag                 No                Base ImageStreamTag that the application will use.
validate_certs               Yes               Whether to validate TLS certificates during cluster/registry communications.
notify_email_list            Yes               Email notifications from pipelines: destination address.
notify_email_from            Yes               Email notifications from pipelines: from address.
notify_email_replyto         Yes               Email notifications from pipelines: reply-to address.
oc_url                       No                URL location of the OpenShift Origin client.
oc_extract_dest              Yes               Disk location to which the client is downloaded and extracted.
oc_path                      Yes               Path to the OpenShift client (used if the workaround is disabled).
oc_no_log                    No                Disables logging of oc commands to hide the OpenShift token.
enable_dockercfg_workaround  Yes               Implements the workaround described in the Appendix, Known issues.
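
As an illustration, an excerpt of a group_vars/all.yml might look like the following. Every value shown is a placeholder; adapt them to the target registry, application, and notification addresses.

    # group_vars/all.yml (excerpt) -- illustrative values only
    central_registry_hostname: "registry.example.com:5000"
    validate_certs: true

    # Pipeline source; the defaults shipped in the repository may already be suitable
    source_repo_url: https://github.com/RHsyseng/jenkins-on-openshift.git
    source_repo_branch: master

    # Application settings (names and paths are hypothetical)
    app_name: nodejs-ex
    app_base_tag: "nodejs:6"
    app_template_path: app/template.yaml

    # Email notifications sent by the pipelines
    notify_email_list: team@example.com
    notify_email_from: jenkins@example.com
    notify_email_replyto: noreply@example.com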

4.1.1.2. group_vars/[group].yml

There is a .yml file for each of the environments: development (dev), staging (stage), production (prod), and the shared registry (registry). Each of these files contains variables describing their respective cluster and project details:

Table 4.2. Per-environment variables

Variable Name                                     Required Review   Description

clusterhost                                       Yes               The hostname[:port] used to contact the OpenShift cluster where the environment is hosted. Do not include the protocol (http[s]://).
project_name                                      No                Name (namespace) of the project where the respective environment is hosted.
project_display_name                              No                Display name of that project in the web console.
project_description                               No                Description of that project.
{admin,editor,viewer}_{users,groups}              No                Lists of users/groups that need permissions on the project; entries under a given role are granted that role’s permissions.
deprecated_{admin,editor,viewer}_{users,groups}   No                Users/groups that must have their permissions on the project revoked.
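
As an illustration, a group_vars/dev.yml for the development environment might contain the following; the hostname, users, and groups are placeholders.

    # group_vars/dev.yml (excerpt) -- illustrative values only
    clusterhost: "dev.openshift.example.com:8443"
    project_name: dev
    project_display_name: Development
    project_description: Development environment for the sample application
    admin_users:
      - developer
    viewer_groups:
      - qa-team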

4.1.1.3. host_vars/[environment]-1.yml

These files contain authentication credentials for each of the environments.

Depending on the authentication method of the OpenShift cluster, authentication credentials can be provided either as openshift_username and openshift_password or as an authentication token.

A token (for example obtained from oc whoami -t) always works. Moreover, it is the only option if the available authentication method requires an external login (for example GitHub), where a username/password combination cannot be used from the command line.
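
For instance, a host_vars/dev-1.yml could provide either of the two options (the values shown are placeholders):

    # host_vars/dev-1.yml -- option 1: username/password (placeholder values)
    openshift_username: developer
    openshift_password: changeme

    # host_vars/dev-1.yml -- option 2: an OAuth token, e.g. the output of `oc whoami -t`
    # token: <paste the token here>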

Tip

Since these files contain authentication information, ansible-vault can help protect them through encryption.

4.1.2. Example setups

Here are some sample values for the configuration variables to address specific needs.

4.1.2.1. TLS/SSL certificate validation

The validate_certs variable is a Boolean that enables or disables TLS certificate validation for the clusters and the registry.

It is important to keep in mind that the playbooks provided here only interact with the configured OpenShift clusters through an API, and do not interfere with the clusters’ own configuration.

Therefore, if for any reason TLS certificate validation is disabled for a cluster, the cluster administrator must also take measures to ensure the cluster operates accordingly.

In particular, image push/pull is performed by the container runtime in each of the nodes in the cluster. If validate_certs is disabled for the registry being used (central_registry_hostname), the nodes also require the registry to be configured as an insecure registry.
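
For example, to skip certificate validation for interactions with the registry environment only, the registry group could set the following; the cluster nodes must additionally be configured as described in the note below.

    # group_vars/registry.yml -- disable TLS validation for the registry environment only
    validate_certs: false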

Note

To configure a node to use an insecure registry, edit either /etc/containers/registries.conf or /etc/sysconfig/docker and restart the docker service.

Note

Disabling certificate validation is not ideal; properly managed TLS certificates are preferable. OpenShift documentation has sections on Securing the Container Platform itself as well as Securing the Registry.

4.1.2.2. Single cluster / shared clusters

The playbook is designed to operate on four separate OpenShift clusters (one project on each), each hosting one of the environments: development, staging, production, registry.

It is possible to share the same cluster among various environments (potentially all 4 running on the same cluster, on separate projects) by pointing them to the same clusterhost. This is particularly useful during local testing, where the whole stack can run on a single all-in-one cluster powered by minishift or oc cluster up.
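
For such a single-cluster setup, one option (as noted earlier) is to define clusterhost once at the inventory level instead of in each group file. Assuming inventory.yml follows Ansible’s YAML inventory format, the vars section might look like this, with a placeholder address:

    # inventory.yml (excerpt) -- hypothetical single all-in-one cluster for every environment
    all:
      vars:
        clusterhost: "192.168.42.10:8443"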

However, if the registry project shares a cluster with other projects, extra care must be taken to ensure the images in the registry’s namespace are accessible from the other projects sharing the cluster.

For the other projects to be able to use images belonging to the registry’s namespace, their service accounts must be granted view permissions on the registry’s project. This is achieved by adding their service account groups to viewer_groups in group_vars/registry.yml (see environment group_vars):

    viewer_groups:
      - system:serviceaccounts:dev
      - system:serviceaccounts:stage
      - system:serviceaccounts:prod
Note

These are groups of service accounts, so it is system:serviceaccounts (with an s). Adjust the names according to the project_name of the respective environment.

If the dev project shares a cluster with the registry project, one additional requirement exists. Images are built in this project, so its builder service account needs privileges to push to the registry’s namespace. One way to achieve this is by adding that service account to the list of users with an editor role in registry.yml:

    editor_users:
      - system:serviceaccount:dev:builder

4.2. Customizing the Automation

4.2.1. Access to the projects

As discussed in Section 2.2.4, “Manage appropriate privilege”, it is important to provide appropriate privilege to the team to ensure the deployment remains secure while enabling the team to perform their job effectively. In this reference implementation, the Ansible auth role provides tasks for managing authorization. Using the host_vars and group_vars files, project access can be managed across the clusters. The auth role is simple but easily extended using the provided pattern, as illustrated below.
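
As an illustration of that pattern (not the repository’s exact tasks), an extension that grants the admin role to every user listed in admin_users might look roughly like this:

    # Hypothetical task following the auth role pattern; the variable names
    # come from the group_vars files described earlier in this chapter.
    - name: Grant admin role to configured users
      command: >
        {{ oc_path }} adm policy add-role-to-user admin {{ item }}
        --namespace {{ project_name }}
      loop: "{{ admin_users | default([]) }}"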

4.2.2. Application

The application managed by this CI/CD workflow is defined by two main configuration items:

  • A template for the application’s build and deployment configurations
  • A set of parameters to control the template’s instantiation process

These are controlled by the app_* variables in Section 4.1.1.1, “group_vars/all.yml”:

  • app_template_path is the path (relative to the root of the source repo) where the application template is stored.
  • app_name specifies a name used for object instances resulting from the template, like the Build and Deployment configurations.
  • app_base_tag refers to the ImageStreamTag that contains the base image for the application’s build.

The example automation assumes the application template accepts at least the following parameters:

  • NAME: suggested name for the objects generated by the template. Obtained from the app_name variable.
  • TAG: generated by the pipeline as VERSION-BUILDNUMBER, where VERSION is the contents of the app/VERSION file and BUILDNUMBER is the sequential number of the build that generates the image.
  • REGISTRY: the URL for the registry. Obtained from central_registry_hostname.
  • REGISTRY_PROJECT: the namespace in the registry under which built images are kept. Obtained from the project_name of the registry project configuration.
  • IMAGESTREAM_TAG: the ImageStreamTag that tracks the built application image. In practice this means that the application images are expected to be at:

    ${REGISTRY}/${REGISTRY_PROJECT}/${NAME}:${TAG}

    and the ImageStreamTag ${IMAGESTREAM_TAG} points there.
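
A minimal sketch of the parameters section such a template might declare is shown below. The template name and descriptions are hypothetical, and the real template in the repository may define additional parameters and objects.

    # Hypothetical excerpt of an application template exposing the expected parameters
    apiVersion: v1
    kind: Template
    metadata:
      name: app-template               # placeholder name
    parameters:
      - name: NAME
        description: Name applied to the generated objects
        required: true
      - name: TAG
        description: Image tag in the form VERSION-BUILDNUMBER
        required: true
      - name: REGISTRY
        description: Hostname[:port] of the central registry
        required: true
      - name: REGISTRY_PROJECT
        description: Registry namespace holding the built images
        required: true
      - name: IMAGESTREAM_TAG
        description: ImageStreamTag that tracks the built application image
        required: true
    objects: []                        # BuildConfig, DeploymentConfig, Service, Route, etc. omitted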

Also, as the application used as an example in this reference implementation is a NodeJS-based application, the pipeline includes a stage for automated testing using npm test.

4.2.3. Jenkins

The Jenkins instance driving the automation is deployed from a custom image which is itself built using S2I. This enables customization of the official base image through the addition of a custom list of plugins. The process is described in the Jenkins image documentation.

The Build Configuration jenkins-custom defines this S2I build for Jenkins itself. This build is also driven by Jenkins through the Section 3.3.3, “jenkins-lifecycle” pipeline, which watches for changes in the repo and triggers the pipeline-based build when appropriate.

These are the various components of this process in more detail:

  • jenkins-custom-build.yaml contains the S2I build configuration.
  • plugins.txt contains a list of plugins to install into the Jenkins custom image during the S2I build.
  • jenkins-pipeline.yaml is a template to deploy the pipeline that manages the build of the custom Jenkins instance. The pipeline itself is defined in its own Jenkinsfile.
  • jenkins-master.yaml is a template from which the deployment-related objects are created: Deployment Configuration, Services, associated Route, etc. The deployment uses a Persistent Volume Claim associated with Jenkins' data storage volume (/var/lib/jenkins) so the data remains accessible across container restarts and migrations between nodes.
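
To illustrate the first item, an S2I build configuration along the lines of jenkins-custom-build.yaml might look roughly like the following sketch; the context directory and image stream names are assumptions, not the repository’s exact values.

    # Hypothetical sketch of an S2I BuildConfig for the custom Jenkins image
    apiVersion: v1
    kind: BuildConfig
    metadata:
      name: jenkins-custom
    spec:
      source:
        git:
          uri: https://github.com/RHsyseng/jenkins-on-openshift.git
        contextDir: jenkins            # assumed location of plugins.txt
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: "jenkins:latest"     # official Jenkins base image stream (assumed)
            namespace: openshift
      output:
        to:
          kind: ImageStreamTag
          name: "jenkins-custom:latest"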

4.3. Execute and Monitor Jenkins app-pipeline

After the initial Ansible automation has run, log in to OpenShift and continue the pipeline automation.

  1. Log in to OpenShift

    $ oc login

    Figure 4.1. OpenShift Login

  2. Jenkins is configured as an S2I build. Confirm that the build completes:

    $ oc get build -l 'buildconfig=jenkins-custom' --template '{{with index .items 0}}{{.status.phase}}{{end}}'
    
    Complete

    Figure 4.2. OpenShift Builds

  3. Once the S2I build of Jenkins is complete, a deployment of Jenkins starts automatically. Confirm the Jenkins pod is running and the Jenkins application has started successfully by reviewing the log for INFO: Jenkins is fully up and running.

    $ oc get pod -l 'name==jenkins'
    $ oc logs -f dc/jenkins
    
    ... [OUTPUT ABBREVIATED] ...
    INFO: Waiting for Jenkins to be started
    Nov 28, 2017 2:59:40 PM jenkins.InitReactorRunner$1 onAttained
    INFO: Loaded all jobs
    ... [OUTPUT ABBREVIATED] ...
    
    Nov 28, 2017 2:59:44 PM hudson.WebAppMain$3 run
    INFO: Jenkins is fully up and running
    
    ... [OUTPUT ABBREVIATED] ...

    Figure 4.3. OpenShift Pods

  4. Once Jenkins is up and running, the application pipeline can be started. Click [Builds][Pipeline] to navigate to the OpenShift Pipeline view.

    $ oc start-build app-pipeline
    
    build "app-pipeline-1" started

    Figure 4.4. OpenShift Pipeline View

  5. To get a detailed view of the pipeline progress, click [View Log], which launches the Jenkins console output.

    Figure 4.5. OpenShift Pipeline View Log

    1. Upon clicking [View Log], you may be prompted to log in with OpenShift.

      Figure 4.6. OpenShift OAuth

    2. If this is the first time accessing the Jenkins console, you will need to authorize access to Jenkins from your OpenShift account.

      Figure 4.7. OpenShift OAuth permissions

    3. Below are the pipeline console logs for the app-pipeline.

      Figure 4.8. Jenkins pipeline console

    4. Returning to the OpenShift WebUI, the [Builds][Pipeline] view displays the completed pipeline and executed stages.

      Figure 4.9. OpenShift Pipeline stage view


4.4. Execute and Monitor Jenkins release-pipeline

  1. Now that the app-pipeline has run and completed successfully, the image can be promoted to production. Return to the OpenShift WebUI pipeline view and press [Start Pipeline]. To get a detailed view of the pipeline progress, click [View Log], which launches the Jenkins console output. Upon clicking [View Log], you may be prompted to log in with OpenShift credentials.

    $ oc start-build release-pipeline
    
    build "release-pipeline-1" started

    Figure 4.10. OpenShift Pipeline

  2. The pipeline will request the tag to be promoted to production. To access the input field, click the [Input requested] link.

    Figure 4.11. OpenShift Pipeline View Log

  3. Enter the build tag of the image to be promoted to production. Once complete, press [Proceed] to continue.

    Figure 4.12. Jenkins input - Image tag

  4. Returning to the OpenShift WebUI, the [Builds][Pipeline] view displays the completed pipeline and executed stages.

    Figure 4.13. OpenShift Pipeline stage view

  5. Log in to the production cluster and project. Click the application link, which is available on the project’s home page.

    NodeJS Example Application