Chapter 2. Components and Considerations
This chapter describes the components required to set up and configure Application CI/CD on OpenShift 3.7. It also provides guidance for developing automation, based on experience gained while developing the examples in this Reference Implementation.
2.1. Components
2.1.1. Software Version Details
The following table provides installed software versions for the different instances comprising the reference implementation.
Table 2.1. Application CI/CD software versions
| Software | Version |
|---|---|
| atomic-openshift-{master,clients,node,sdn-ovs,utils} | 3.7.9 |
| Jenkins (provided) | 2.73.3 |
| ansible | 2.4.1.0 |
2.1.2. OpenShift Clusters
Multiple deployment clusters give teams the flexibility to test application changes rapidly. In this reference implementation, two clusters and three OpenShift projects are used. The production cluster and project are separate from the non-production cluster, which hosts the remaining lifecycle projects: development and stage.
The Ansible inventory, playbooks, and roles were written to support any combination of clusters or projects.
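For example, an Ansible inventory might group the clusters so either can be targeted independently; the host names and project assignments below are hypothetical, not taken from this reference implementation:

```
# Hypothetical inventory: one non-production cluster, one production cluster
[nonprod]
master.nonprod.example.com

[prod]
master.prod.example.com

# Which lifecycle projects each cluster hosts
[nonprod:vars]
openshift_projects=development,stage

[prod:vars]
openshift_projects=production
```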
Both the Jenkins OpenShift Client plugin and Ansible use the oc client, so as long as network connectivity is available between the clusters, their location should not matter.
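Because both tools ultimately drive the oc client, pointing the automation at a different cluster is largely a matter of logging in to it; the URL below is a placeholder:

```
# Log in to whichever cluster the automation should target
oc login https://master.prod.example.com:8443 --token="$TOKEN"
```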
2.1.3. Registry
OpenShift deploys an integrated registry available to each cluster. In this reference implementation, however, an OpenShift-based registry external to both clusters is used. This enables a more flexible application promotion and cluster upgrade process. To promote an application, the target cluster pulls the application image from the external registry, simplifying multi-cluster coordination. When a cluster must be upgraded, the critical application image remains on an independent system.
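As a minimal sketch of such a promotion step, the target cluster can import the image from the external registry into one of its image streams; the registry host and image names here are placeholders:

```
# Pull the promoted image from the external registry into the target cluster
oc import-image myapp:v1.2 --from=registry.example.com/cicd/myapp:v1.2 --confirm
```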
Other external registries may also be used, including Sonatype Nexus, JFrog Artifactory, or an additional project within an existing OpenShift cluster.
2.1.4. Jenkins
Development teams need a service to drive automation but want to minimize the effort required to configure and maintain an internal-facing service. The integrated Jenkins service addresses these concerns in several ways:
- Authentication: by integrating authentication, teams are far less likely to rely on shared server credentials, a weak backdoor administrative password, or other insecure authentication practices.
- Authorization: team members inherit the same OpenShift project privileges in the Jenkins environment, allowing them to obtain necessary information while protecting the automation from unauthorized access.
- Deployment configuration: teams have a straightforward way to store Jenkins configuration in source control. For example, instead of adding plugins through the web user interface, a Jenkins administrator can change a simple text file managed by source control; that change can trigger a redeployment of the Jenkins server, making the deployment reproducible and allowing failed plugin updates to be rolled back quickly.
- Pipeline integration: users can view pipeline status directly from the OpenShift web UI for an integrated view of the deployment lifecycle.
- OpenShift plugin: the Jenkins OpenShift Client plugin lets team members automate calls to OpenShift more easily, simplifying pipeline code.

These integrations simplify Jenkins server operation so the team can focus on software development.
See the Jenkins image documentation for reference.
2.1.5. Declarative Pipelines
The Jenkins declarative pipeline syntax is a relatively recent approach to defining workflows in Jenkins. It uses a simpler syntax than the scripted pipeline and integrates with OpenShift. In this project, the declarative pipeline syntax is used exclusively.
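The following is a minimal declarative pipeline sketch using the OpenShift Client plugin DSL; the project and BuildConfig names are hypothetical placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // Use the OpenShift Client plugin DSL against the current cluster
                    openshift.withCluster() {
                        // Hypothetical project and BuildConfig names
                        openshift.withProject('myapp-dev') {
                            // Start the OpenShift build and stream its logs
                            openshift.startBuild('myapp', '--follow')
                        }
                    }
                }
            }
        }
    }
}
```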
2.1.6. Configuration
Maintaining Jenkins configuration in source control has several benefits but also exposes some special cases. A good practice is pinning plugin versions in a plugins.txt file (see the section about Jenkins customization for an example) so plugins are not arbitrarily updated whenever a new Jenkins configuration is deployed. In this case, however, more diligence must be exercised to keep plugins updated. Refer to the Jenkins master plugin management page ([Manage Jenkins] → [Manage plugins]) to see which plugins need updating.
Figure 2.1. Jenkins Update Center
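As a reminder of the format, a plugins.txt file lists one plugin per line as name:version; the plugin names and versions below are illustrative only:

```
workflow-aggregator:2.5
openshift-client:1.0.10
credentials:2.1.16
```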

2.1.7. Developing Automation
Developing Jenkins automation can be a frustrating experience. In many cases, the pipeline files are downloaded from source control each time the pipeline runs, leading to development cycles in which a source control commit must be made just to test a change. The code also cannot easily be run on a local workstation.
One development environment option is to use the pipeline sandbox to create pipeline code directly in the Jenkins web interface. Development cycle time is greatly reduced and an interactive step generator is available to assist with the specific plugin syntax. To use this environment:
From the Jenkins home dashboard, select [New item].
Figure 2.2. Jenkins Dashboard

Name the test pipeline, select the [Pipeline] project type, and click [OK].
Figure 2.3. Jenkins New Item

Edit the pipeline directly in the [Pipeline] text area. For snippet generation and inline plugin documentation, click [Pipeline Syntax].
Figure 2.4. Jenkins New Pipeline

Once a pipeline has been committed to source control, Jenkins allows a user to edit a previously run pipeline and [Replay] the task.
Figure 2.5. Jenkins Pipeline Replay

2.2. Guiding Principles
2.2.1. Automate everything
The OpenShift web interface and CLI are useful tools for developing, debugging, and monitoring applications. However, once the application is running and the basic project configuration is known, the configuration artifacts should be committed to source control with corresponding automation, such as Ansible. This ensures an application is reproducible across clusters and that project configuration changes can be managed, rolled back, and audited.
The important discipline is to keep the automation runnable at any time. In this project, the Ansible playbook may be run arbitrarily to keep the system in a known state, ensuring the environments and the source-controlled configuration do not drift apart. Running the playbook on a regular basis leaves the team a limited set of changes between runs, which makes debugging much simpler. Conversely, once the environment is suspected of having drifted from the source-controlled configuration, the automation becomes effectively useless.
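For example, a scheduled job might re-apply the configuration with a command like the following; the inventory and playbook paths are hypothetical:

```
ansible-playbook -i inventory/hosts playbooks/openshift-projects.yml
```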
2.2.2. Use appropriate tools
Many automation tools can solve similar problems, but each tool approaches a given stage of automation in its own way. The following questions can help guide tooling choices:
- What stage of automation is involved?
- Who will be maintaining the automation?
- What user or process will be running the automation?
- How will the automation be triggered?
For this paper, the following choices have been made:
- Ansible configures the OpenShift projects and authorization
- Jenkins provides centralized orchestration of the application lifecycle, including builds, tests, and deployments
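As a minimal sketch of the Ansible choice above, tasks might ensure a project exists and grant a role; the project, role, and user names here are placeholders, and the tasks simply shell out to the oc client:

```yaml
- name: Ensure the development project exists
  command: oc new-project myapp-dev
  register: new_project
  # Treat an existing project as success rather than failure
  failed_when:
    - new_project.rc != 0
    - "'already exists' not in new_project.stderr"
  changed_when: new_project.rc == 0

- name: Grant the edit role to a developer
  command: oc adm policy add-role-to-user edit developer1 -n myapp-dev
```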
2.2.3. Parameterize Everything
Parameterization enables a team to centralize the configuration of a project so changes involve less risk. Project values may need to be altered for testing, migrating to a new cluster, or due to a changing application requirement. Since Ansible provides the baseline configuration for the cluster, including Jenkins, playbook parameterization is a good place to define parameters shared across clusters.
Using this principle, most parameter values for this paper flow through the system in this way:
Ansible → OpenShift → Jenkins
This requires parameters to be mapped in several places. For example, if parameters are defined using Ansible group and host variable files, the OpenShift objects created must be templated to replace these values. Regardless, the benefits are considered worthwhile.
For this project, we have chosen to prefer parameterized OpenShift templates over Jinja2-style Ansible templates. This allows the OpenShift templates to be used without Ansible.
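For example, a parameterized OpenShift template declares its variables in a parameters list; the template, parameter, and object names below are illustrative:

```yaml
apiVersion: v1
kind: Template
metadata:
  name: myapp-template
parameters:
- name: APP_NAME
  description: Application name applied to all objects
  required: true
- name: IMAGE_TAG
  value: latest
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    name: ${APP_NAME}
```

Such a template can then be instantiated with or without Ansible, for example: `oc process -f myapp-template.yml -p APP_NAME=myapp | oc apply -f -`.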
Keeping a consistent parameterization approach simplifies maintenance and reduces confusion.
2.2.4. Manage appropriate privilege
Team members should have ready access to the debugging information required to perform their job functions effectively. State changes should be triggered through source control as much as possible. With this model, the team is encouraged to automate everything while retaining access to the information needed to resolve problems. The following table provides privilege suggestions for a team:
Table 2.2. Suggested Project Privileges
| | Registry | Dev cluster | Stage cluster | Prod cluster |
|---|---|---|---|---|
| Dev leads | pull only | project administrator | project editor | project viewer, run debug pods |
| Developers | pull only | project editor | project editor | project viewer |
| Operators | pull only | project viewer | project editor, varied cluster priv | project editor, varied cluster priv |
These are suggestions to stimulate conversation. Privilege granularity may vary widely depending on the size and culture of the team. Appropriate cluster privilege should be assigned to the operations team. What is important is to develop a privilege plan as a DevOps team.
Since team membership and roles change over time, it is important to be able to revoke privileges as well as grant them. This reference implementation includes a way to manage lists of deprecated users; see the group variables for an example.
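For instance, group variables might track both active and deprecated users so the playbook can revoke access on each run; the variable and user names below are hypothetical:

```yaml
# Hypothetical group_vars entries; user names are placeholders
project_editors:
  - alice
  - bob
deprecated_users:
  - mallory  # privileges revoked on the next playbook run
```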
An authentication token is used in this reference implementation to allow deployment orchestration in each cluster from the Jenkins master. The Jenkins token is mounted as a secret in the Jenkins pod, and Jenkins uses the Credentials plugin to manage cluster access securely and keep secrets from leaking into log files.
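A pipeline step might use such a token through the Credentials plugin to log in to a remote cluster; the credential ID and cluster URL here are placeholders:

```groovy
// The credential ID and cluster URL are placeholders
withCredentials([string(credentialsId: 'prod-cluster-token', variable: 'TOKEN')]) {
    // Single quotes keep the token out of Groovy string interpolation
    sh 'oc login https://master.prod.example.com:8443 --token="$TOKEN"'
}
```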
