Chapter 4. Deploying the Automation
The automation used in this reference implementation can be found in the jenkins-on-openshift repository. The code should be checked out to the local workstation:
$ git clone https://github.com/RHsyseng/jenkins-on-openshift.git
$ cd jenkins-on-openshift/ansible
The repository contains a reference Ansible playbook in main.yml to configure a set of application project namespaces on a set of OpenShift clusters, covering an application’s life-cycle through separate projects for development, staging, and production, using a shared container image registry.
When using the oc client to work against multiple clusters, it is important to know how to switch between contexts. See the command line documentation for information on cluster context.
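For example, the contexts recorded in the local client configuration can be listed and switched with the standard oc config subcommands (the context name itself depends on the cluster, project, and user in use):

$ oc config get-contexts
$ oc config use-context <context-name>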
The reference playbook deploys a Jenkins instance in the development cluster to drive the life-cycle of the application through the environments.
4.1. Initial Configuration
Configure the environments: using the group_vars/*.yml.example files as a guide, rename each file to remove '.example', e.g.

for i in group_vars/*.example; do mv "${i}" "${i/.example}"; done

The directory should look like this:

.
└── group_vars
    ├── all.yml
    ├── dev.yml
    ├── prod.yml
    ├── registry.yml
    └── stage.yml

Edit these files and adjust the variables accordingly. At the very least, the following settings must be customized (see the variables section below for more details; example values are sketched after the list):
- all.yml: central_registry_hostname
- [group].yml: clusterhost
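A minimal sketch of those settings; the hostnames and ports below are illustrative assumptions and must be replaced with the real registry and cluster addresses:

# group_vars/all.yml
central_registry_hostname: registry.example.com:5000

# group_vars/dev.yml (repeat for stage.yml, prod.yml, and registry.yml)
clusterhost: dev.openshift.example.com:8443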
Configure authentication to each environment using the host_vars/*-1.yml.example files as a guide. Rename each file to remove '.example'. The directory should look like this:
.
└── host_vars
    ├── dev-1.yml
    ├── prod-1.yml
    ├── registry-1.yml
    └── stage-1.yml

Then edit each of the files and set the respective authentication information. See the host variables section for more details.
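A minimal sketch of one of these files, using the variable names described in the host variables section (the values shown are placeholders):

# host_vars/dev-1.yml
# Either provide a token...
token: "<output of 'oc whoami -t' against the development cluster>"
# ...or a username/password pair:
# openshift_username: <username>
# openshift_password: <password>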
Run the playbook:
ansible-playbook -i inventory.yml main.yml
A number of Ansible actions may appear as failed while executing the playbook. The playbook is operating normally if the end result has no failed hosts. See ignoring failed commands for additional information.
4.1.1. Variables
Below is a description of the variables used by the playbooks. Adjust the values of these variables to suit the given environment, clusters, and application.
The various variables are stored in different files depending on the scope they have, and therefore are meant to be configured through the group or host variable files in {group,host}_vars/*.yml.
However, Ansible’s variable precedence rules apply here, so it is possible to set or override some variable values in different places.
For example, to disable TLS certificate validation for the staging environment/cluster only, validate_certs: false may be set in group_vars/stage.yml while also keeping validate_certs: true in the group_vars/all.yml file.
It is also possible to override values in the inventory file and via the --extra-vars option of the ansible-playbook command. For example, in a single-cluster environment it may be better to set the value of clusterhost just once as an inventory variable (i.e. in the vars section of inventory.yml) instead of using a group variable. Moreover, if the same user administers all the projects, a single token can be passed directly to the playbook instead of configuring authentication details in each environment's file inside host_vars:
ansible-playbook --extra-vars token=$(oc whoami -t) ...
See the Ansible documentation for more details.
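As a sketch of the inventory-variable approach, assuming inventory.yml follows Ansible's standard YAML inventory layout and that a single cluster hosts every environment (the hostname shown is an assumption):

# inventory.yml (excerpt)
all:
  vars:
    clusterhost: openshift.example.com:8443
  children:
    dev:
      hosts:
        dev-1:
    stage:
      hosts:
        stage-1: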
4.1.1.1. group_vars/all.yml
This file specifies variables that are common to all the environments:
Table 4.1. Variables common to all environments
| Variable Name | Review Required | Description |
|---|---|---|
| central_registry_hostname | Yes | The hostname[:port] of the central registry where all images will be stored. |
| source_repo_url | No | git repository URL of the pipelines to deploy |
| source_repo_branch | No | git branch to use for pipeline deployment |
| app_template_path | No | Relative path within the git repo where the application template is stored |
| app_name | No | Name of the application |
| app_base_tag | No | Base ImageStreamTag that the application will use |
| validate_certs | Yes | Whether to validate the TLS certificates during cluster/registry communications |
| notify_email_list | Yes | Email notifications from pipelines: destination |
| notify_email_from | Yes | Email notifications from pipelines: from |
| notify_email_replyto | Yes | Email notifications from pipelines: reply-to |
| oc_url | No | URL location of OpenShift Origin client |
| oc_extract_dest | Yes | Disk location to which the client will be downloaded and extracted |
| oc_path | Yes | Path to OpenShift client (used if workaround is False) |
| oc_no_log | No | Disables logging of oc commands to hide OpenShift token. |
| enable_dockercfg_workaround | Yes | Implements workaround described in the Appendix Known issues |
4.1.1.2. group_vars/[group].yml
There is a .yml file for each of the environments: development (dev), staging (stage), production (prod), and shared registry. Each of these files contains variables describing their respective cluster and project details:
Table 4.2. Environment-specific variables
| Variable Name | Review Required | Description |
|---|---|---|
| clusterhost | Yes | Specifies the hostname[:port] used to contact the OpenShift cluster where the environment is hosted. Do not include the protocol (e.g. https://). |
| project_name | No | Describe the project where the respective environment is hosted. |
| project_display_name | No | Describe the project where the respective environment is hosted. |
| project_description | No | Describe the project where the respective environment is hosted. |
| {admin,editor,viewer}_{users,groups} | No | A set of lists of users/groups that need permissions on the project. Users and groups in each list are granted the corresponding role (admin, edit, or view) on the project. |
| deprecated_{admin,editor,viewer}_{users,groups} | No | Add users/groups that must have their permissions revoked. |
4.1.1.3. host_vars/[environment]-1.yml
These files contain authentication credentials for each of the environments.
Depending on the authentication method of the OpenShift cluster, authentication credentials can be provided either as openshift_username and openshift_password or as an authentication token.
A token (for example obtained from oc whoami -t) always works. Moreover, it is the only option if the available authentication method requires an external login (for example GitHub), where a username/password combination cannot be used from the command line.
Since we are dealing with authentication information, ansible-vault can help protect the information in these files through encryption.
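For example, the credential files can be encrypted in place and the vault password supplied when the playbook runs (standard ansible-vault usage; the file names match the layout above):

$ ansible-vault encrypt host_vars/dev-1.yml host_vars/stage-1.yml host_vars/prod-1.yml host_vars/registry-1.yml
$ ansible-playbook -i inventory.yml main.yml --ask-vault-pass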
4.1.2. Example setups
Here are some sample values for the configuration variables to address specific needs.
4.1.2.1. TLS/SSL certificate validation
The validate_certs variable is a Boolean that enables or disables TLS certificate validation for the clusters and the registry.
It is important to keep in mind the playbooks provided here only interact with the configured OpenShift clusters through an API, and do not interfere with the cluster’s own configuration.
Therefore, if for any reason TLS certificate validation is disabled for a cluster, the cluster administrator must also take measures to ensure the cluster operates accordingly.
In particular, image push/pull is performed by the container runtime in each of the nodes in the cluster. If validate_certs is disabled for the registry being used (central_registry_hostname), the nodes also require the registry to be configured as an insecure registry.
To configure a node to use an insecure registry, edit either /etc/containers/registries.conf or /etc/sysconfig/docker on that node and restart the docker service.
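As an illustration only (the registry hostname and port are assumptions), the corresponding entry in /etc/sysconfig/docker on a node would look like:

# /etc/sysconfig/docker
INSECURE_REGISTRY='--insecure-registry registry.example.com:5000'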
Disabling certificate validation is not ideal; properly managed TLS certificates are preferable. OpenShift documentation has sections on Securing the Container Platform itself as well as Securing the Registry.
4.2. Customizing the Automation
4.2.1. Access to the projects
As discussed in Section 2.2.4, “Manage appropriate privilege”, it is important to provide appropriate privilege to the team to ensure the deployment remains secure while enabling the team to perform their job effectively. In this reference implementation, the Ansible auth role provides tasks for managing authorization. Using host_vars and group_vars files we are able to manage project access across the clusters. The auth role may be simplistic but is easily extended using the provided pattern.
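A sketch of what such role assignments might look like in a group_vars file; the user and group names below are placeholders, and the variable names follow the {admin,editor,viewer}_{users,groups} pattern described in the variables section:

# group_vars/prod.yml (excerpt)
admin_users:
  - prod-admin
editor_groups:
  - app-developers
deprecated_editor_users:
  - former-team-member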
4.2.2. Application
The application managed by this CI/CD workflow is defined by two main configuration items:
- A template for the application’s build and deployment configurations
- A set of parameters to control the template’s instantiation process
These are controlled by the app_* variables in Section 4.1.1.1, “group_vars/all.yml” (a sample configuration is sketched after the list):
- app_template_path is the path (relative to the root of the source repo) where the application template is stored.
- app_name specifies a name used for object instances resulting from the template, like the Build and Deployment configurations.
- app_base_tag refers to the ImageStreamTag that contains the base image for the application’s build.
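A sketch of these settings in group_vars/all.yml; every value below is an illustrative assumption, not the repository's actual default:

# group_vars/all.yml (excerpt)
app_template_path: app/openshift/app-template.yaml   # hypothetical path inside the source repo
app_name: nodejs-app
app_base_tag: "nodejs:6"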
The example automation assumes the application template accepts at least the following parameters:
- NAME: suggested name for the objects generated by the template. Obtained from the app_name variable.
- TAG: generated by the pipeline as VERSION-BUILDNUMBER, where VERSION is the contents of the app/VERSION file and BUILDNUMBER is the sequential number of the build that generates the image.
- REGISTRY: the URL for the registry. Obtained from central_registry_hostname.
- REGISTRY_PROJECT: the namespace in the registry under which built images are kept. Obtained from the project_name of the registry project configuration.
- IMAGESTREAM_TAG: the ImageStreamTag that points at the application image in the registry.

In practice this means that the application images are expected to be at ${REGISTRY}/${REGISTRY_PROJECT}/${NAME}:${TAG} and the ImageStreamTag ${IMAGESTREAM_TAG} points there.
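As a sketch, the parameters section of such an application template could declare these names as follows (the descriptions and default value are illustrative, not taken from the repository):

# parameters section of the application template (excerpt)
parameters:
- name: NAME
  description: Name applied to the objects generated by the template
  value: nodejs-app
- name: TAG
  description: Image tag in the form VERSION-BUILDNUMBER
- name: REGISTRY
  description: Hostname[:port] of the central registry
- name: REGISTRY_PROJECT
  description: Registry namespace under which the built images are kept
- name: IMAGESTREAM_TAG
  description: ImageStreamTag pointing at the application image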
Also, as the application used as an example in this reference implementation is a NodeJS based application, the pipeline includes a stage for automated testing using npm test.
4.2.3. Jenkins
The Jenkins instance driving the automation is deployed from a custom image which is itself built using S2I. This enables customization of the official base image through the addition of a custom list of plugins. The process is described in the Jenkins image documentation.
The Build Configuration jenkins-custom defines this S2I build for Jenkins itself. This build is also driven by Jenkins through the Section 3.3.3, “jenkins-lifecycle” pipeline, which watches for changes in the repo and triggers the pipeline-based build when appropriate.
These are the various components of this process in more detail:
- jenkins-custom-build.yaml contains the S2I build configuration.
- plugins.txt contains a list of plugins to install into the Jenkins custom image during the S2I build (see the format sketch after this list).
- jenkins-pipeline.yaml is a template to deploy the pipeline that manages the build of the custom Jenkins instance. The pipeline itself is defined in its own Jenkinsfile.
- jenkins-master.yaml is a template from which the deployment-related objects are created: Deployment Configuration, Services, associated Route, etc. The deployment uses a Persistent Volume Claim associated with Jenkins' data storage volume (/var/lib/jenkins) so the data remains accessible across container restarts and migrations between nodes.
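For reference, plugins.txt uses the one-plugin-per-line name:version format expected by the OpenShift Jenkins S2I builder; the entries below are illustrative only, and the file in the repository is authoritative:

# plugins.txt (illustrative entries)
greenballs:1.15
credentials-binding:1.13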
4.3. Execute and Monitor Jenkins app-pipeline
After the initial Ansible automation has run, log in to OpenShift and continue the pipeline automation.
Log in to OpenShift:
$ oc login
Figure 4.1. OpenShift Login

Jenkins is configured as an S2I build. Confirm the build completes:
$ oc get build -l 'buildconfig=jenkins-custom' --template '{{with index .items 0}}{{.status.phase}}{{end}}'
Complete

Figure 4.2. OpenShift Builds
Once the S2I build of Jenkins is complete, a deployment of Jenkins starts automatically. Confirm the Jenkins pod is running and the Jenkins application has started successfully by reviewing the log for INFO: Jenkins is fully up and running.

$ oc get pod -l 'name==jenkins'
$ oc logs -f dc/jenkins
... [OUTPUT ABBREVIATED] ...
INFO: Waiting for Jenkins to be started
Nov 28, 2017 2:59:40 PM jenkins.InitReactorRunner$1 onAttained
INFO: Loaded all jobs
... [OUTPUT ABBREVIATED] ...
Nov 28, 2017 2:59:44 PM hudson.WebAppMain$3 run
INFO: Jenkins is fully up and running
... [OUTPUT ABBREVIATED] ...
Figure 4.3. OpenShift Pods

Once Jenkins is up and running, the application pipeline can be started. Click [Builds] → [Pipeline] to navigate to the OpenShift Pipeline view.

$ oc start-build app-pipeline
build "app-pipeline-1" started
Figure 4.4. OpenShift Pipeline View

To get a detailed view of the pipeline progress, click [View Log], which launches the Jenkins console output.
Figure 4.5. OpenShift Pipeline View Log

Upon clicking [View Log] you may be prompted to log in with OpenShift.
Figure 4.6. OpenShift OAuth

If this is the first time accessing the Jenkins console you will need to authorize access to Jenkins from your OpenShift account.
Figure 4.7. OpenShift OAuth permissions

Below are the pipeline console logs for the app-pipeline.

Figure 4.8. Jenkins pipeline console
Returning to the OpenShift WebUI, the [Builds] → [Pipeline] view displays the completed pipeline and executed stages.
Figure 4.9. OpenShift Pipeline stage view

4.4. Execute and Monitor Jenkins release-pipeline
Now that the app-pipeline has run and completed successfully, promotion of the production image is possible. Return to the OpenShift WebUI pipeline view and press [Start Pipeline]. To get a detailed view of the pipeline progress, click [View Log], which launches the Jenkins console output. Upon clicking [View Log] you may be prompted to log in with OpenShift credentials.

$ oc start-build release-pipeline
build "release-pipeline-1" started
Figure 4.10. OpenShift Pipeline

The pipeline will request the tag to be promoted to production. To access the input field click the [Input requested] link.
Figure 4.11. OpenShift Pipeline View Log

Enter the build tag of the image to be promoted to production. Once complete press [Proceed] to continue.
Figure 4.12. Jenkins input - Image tag

Returning to the OpenShift WebUI. The [Builds] → [Pipeline] view displays the completed pipeline and executed stages.
Figure 4.13. OpenShift Pipeline stage view

Log in to the production cluster and project. Click the application link which is available in the project’s home.
NodeJS Example Application
