Chapter 3. Deploying Red Hat OpenShift Container Platform

With the prerequisites met, the focus shifts to installing Red Hat OpenShift Container Platform. A series of Ansible playbooks and roles, provided by the atomic-openshift packages, satisfies the remaining OpenShift prerequisites and sets up the cluster.

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

The playbooks run through the complete process of installing Red Hat OpenShift Container Platform. Any errors during a playbook run produce messages that specify what failed and how to retry just that section of the install.
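
For example, if the run fails during the logging phase, the failure message points at the component playbook that can be re-run on its own instead of repeating the entire deployment. The path below is illustrative; use the playbook named in the actual failure message.

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml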

Note

Some errors are not recoverable and require a complete reinstall of the Red Hat Virtualization instances. See Appendix B, Reinstall Red Hat OpenShift Container Platform, for a process that streamlines re-installation.

3.1. Adding OpenShift Logging (Optional)

Red Hat OpenShift Container Platform provides the option to deploy aggregated logging for containers running in the OpenShift environment. OpenShift uses the Elasticsearch, Fluentd, and Kibana (EFK) stack to collect logs from applications and present them to OpenShift users. Cluster administrators can view all logs, but application developers can view only the logs of projects they have permission to view. The EFK stack consists of the following components:

  • Elasticsearch - Object store for logs with search capability
  • Fluentd - Unified logging layer to gather logs from OpenShift
  • Kibana - Web interface to visualize data in Elasticsearch
  • Curator - Coordinates and schedules Elasticsearch maintenance operations

The following illustrates some best practices to follow when deploying OpenShift logging.

Elasticsearch, Kibana, and Curator are deployed on nodes with the label "region=infra". Specifying the node label ensures that the Elasticsearch and Kibana components do not compete with applications for resources. An Elasticsearch cluster size (openshift_logging_es_cluster_size) of three is the minimum required to ensure data durability and redundancy.
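
The label placement can be confirmed before deploying logging by listing the nodes that carry it. The node names in the output below are illustrative.

$ oc get nodes -l region=infra
NAME                  STATUS    AGE       VERSION
infra0.example.com    Ready     1d        v1.9.1+a0ce1bc657
infra1.example.com    Ready     1d        v1.9.1+a0ce1bc657
infra2.example.com    Ready     1d        v1.9.1+a0ce1bc657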

Note

Fluentd runs on all OpenShift nodes regardless of the node label.

The logging project created during installation must be modified to bypass the default node selector. If this is not done, the Elasticsearch, Kibana, and Curator components cannot be started. There are two options for making the appropriate edit.

Option 1:

ssh into the first master instance (master1.example.com) or the bastion instance and modify the logging project.

Add the following line, openshift.io/node-selector: "", under annotations; do not remove any existing lines.

$ oc edit project logging
... [OUTPUT ABBREVIATED] ...
  annotations:
    openshift.io/node-selector: ""
... [OUTPUT ABBREVIATED] ...
project "logging" edited

Option 2:

Use the oc patch command:

$ oc patch namespace logging \
    -p "{\"metadata\":{\"annotations\":{\"openshift.io/node-selector\":\"\"}}}"

Log out of the master1 instance. On the bastion instance, add the following lines to the /etc/ansible/hosts file, below the registry entries and above the [masters] entry.

openshift_hosted_logging_deploy=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_cluster_size=3
openshift_logging_es_nodeselector={"region":"infra"}
openshift_logging_kibana_nodeselector={"region":"infra"}
openshift_logging_curator_nodeselector={"region":"infra"}

Note

A StorageClass must be defined when requesting dynamic storage.
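
The presence of a StorageClass can be verified with the command below; the class names and provisioners in the output depend on the environment.

$ oc get storageclass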

Run the deployment playbook to deploy logging using the parameters defined in the inventory file.

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

An example of a successful playbook run is shown below.

PLAY RECAP ********************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0
master1.example.com : ok=237  changed=94   unreachable=0    failed=0
master2.example.com : ok=16   changed=4    unreachable=0    failed=0
master3.example.com : ok=16   changed=4    unreachable=0    failed=0
Wednesday 28 March 2018  12:34:47 -0400 (0:00:00.415)       0:03:58.679 ******
===============================================================================
openshift_facts : Ensure various deps are installed -------------------- 46.27s
openshift_facts : Ensure various deps are installed -------------------- 38.13s
openshift_logging : Run JKS generation script --------------------------- 8.98s
openshift_logging : restart master api ---------------------------------- 3.98s
openshift_logging : Gather OpenShift Logging Facts ---------------------- 3.69s
openshift_facts : Gather Cluster facts and set is_containerized if needed --- 3.60s
openshift_facts : Gather Cluster facts and set is_containerized if needed --- 3.13s
openshift_logging : restart master api ---------------------------------- 2.90s
openshift_logging_elasticsearch : Set logging-es-cluster service -------- 1.74s
openshift_logging : include_role ---------------------------------------- 1.60s
openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role --- 1.51s
openshift_logging_kibana : Set logging-kibana service ------------------- 1.46s
openshift_logging_elasticsearch : Set logging-es-cluster service -------- 1.45s
openshift_logging_elasticsearch : Set logging-es service ---------------- 1.43s
openshift_logging_elasticsearch : Set logging-es service ---------------- 1.41s
openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role --- 1.40s
openshift_logging_elasticsearch : Create rolebinding-reader role -------- 1.40s
openshift_logging_elasticsearch : Set logging-es service ---------------- 1.39s
openshift_logging_elasticsearch : Set logging-es-cluster service -------- 1.37s
openshift_logging_elasticsearch : Create rolebinding-reader role -------- 1.36s

Once the playbook finishes, ssh into the first master instance (master1.example.com) or the bastion host and view the pods in the logging project.

$ oc get pods -n logging
NAME                                       READY     STATUS    RESTARTS   AGE
logging-curator-1-tzrsx                    1/1       Running   2          4m
logging-es-data-master-9uq6xi6z-1-dw6vq    1/1       Running   0          5m
logging-es-data-master-mcanh3m7-1-deploy   1/1       Running   0          5m
logging-es-data-master-mcanh3m7-1-vfkt2    1/1       Running   0          4m
logging-es-data-master-qbwcw4j6-1-227gj    1/1       Running   0          4m
logging-es-data-master-qbwcw4j6-1-deploy   1/1       Running   0          4m
logging-fluentd-49hfs                      1/1       Running   0          4m
logging-fluentd-4jwd0                      1/1       Running   0          4m
logging-fluentd-8wtph                      1/1       Running   0          4m
logging-fluentd-bnld6                      1/1       Running   0          4m
logging-fluentd-dp89n                      1/1       Running   0          4m
logging-fluentd-hsgl8                      1/1       Running   0          4m
logging-fluentd-htgx0                      1/1       Running   0          4m
logging-fluentd-s9jp0                      1/1       Running   0          4m
logging-kibana-1-76wxp                     2/2       Running   0          4m

Note

The Curator pod restarts are normal; the pod attempts to connect to the Elasticsearch cluster before the cluster is available.
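
Kibana is exposed through a route in the logging project; browsing to the route hostname presents the aggregated logs. The hostname below depends on the configured wildcard application subdomain and is illustrative.

$ oc get route -n logging
NAME             HOST/PORT                 PATH      SERVICES         PORT      TERMINATION          WILDCARD
logging-kibana   kibana.apps.example.com             logging-kibana   <all>     reencrypt/Redirect   None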

3.2. Adding OpenShift Metrics (Optional)

Red Hat OpenShift Container Platform has the ability to gather metrics from running pods by querying each node's kubelet; Heapster collects these values, and Hawkular Metrics stores them in Cassandra. Cluster metrics add CPU, memory, and network-based metrics to the OpenShift administrative interface. Additionally, having cluster metrics configured in the cluster enables OpenShift users to create Horizontal Pod Autoscalers, as shown in the example below. It is important to understand capacity planning when deploying metrics into an OpenShift environment.
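
For example, once metrics are available, an OpenShift user can create a Horizontal Pod Autoscaler against a deployment configuration. The application name and thresholds below are hypothetical.

$ oc autoscale dc/frontend --min=1 --max=4 --cpu-percent=75
deploymentconfig "frontend" autoscaled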

Persistent storage should be employed to prevent the loss of metrics in the event of a pod restart. Node selectors should be used to specify where the metrics components should run. In the reference architecture environment, the components are deployed on nodes with the label of "region=infra".

Add the following lines to the /etc/ansible/hosts file in the [OSEv3:vars] section.

openshift_hosted_metrics_deploy=true
openshift_hosted_metrics_storage_kind=dynamic
openshift_hosted_metrics_storage_volume_size=10Gi
openshift_metrics_hawkular_nodeselector={"region":"infra"}
openshift_metrics_cassandra_nodeselector={"region":"infra"}
openshift_metrics_heapster_nodeselector={"region":"infra"}

Note

A StorageClass must be defined when requesting dynamic storage.

Run the deployment playbook to deploy metrics using the parameters defined in the inventory file.

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml

An example of a successful playbook run is shown below.

PLAY RECAP *********************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0
master1.example.com : ok=177  changed=47   unreachable=0    failed=0
master2.example.com : ok=16   changed=3    unreachable=0    failed=0
master3.example.com : ok=16   changed=3    unreachable=0    failed=0

Wednesday 30 August 2017  12:49:12 -0400 (0:00:00.296)       0:01:54.693 ******
===============================================================================
openshift_facts : Gather Cluster facts and set is_containerized if needed --- 3.87s
openshift_metrics : restart master api ---------------------------------- 3.33s
openshift_facts : Gather Cluster facts and set is_containerized if needed --- 3.19s
openshift_facts : Ensure various deps are installed --------------------- 3.09s
openshift_facts : Ensure various deps are installed --------------------- 2.74s
openshift_metrics : slurp ----------------------------------------------- 2.56s
openshift_metrics : Set serviceaccounts for hawkular metrics/cassandra --- 2.38s
openshift_metrics : restart master api ---------------------------------- 2.25s
openshift_metrics : Stop Heapster --------------------------------------- 1.89s
openshift_metrics : Stop Hawkular Metrics ------------------------------- 1.60s
openshift_metrics : Start Hawkular Metrics ------------------------------ 1.57s
openshift_metrics : Start Hawkular Cassandra ---------------------------- 1.56s
openshift_metrics : command --------------------------------------------- 1.53s
openshift_metrics : Start Heapster -------------------------------------- 1.51s
openshift_metrics : read files for the hawkular-metrics secret ---------- 1.46s
openshift_metrics : Generate services for cassandra --------------------- 1.24s
openshift_metrics : Generating serviceaccounts for hawkular metrics/cassandra --- 1.19s
openshift_metrics : Set hawkular cluster roles -------------------------- 1.14s
openshift_metrics : generate hawkular-cassandra keys -------------------- 1.02s
Gathering Facts --------------------------------------------------------- 0.83s

Once the playbook finishes, ssh into the first master instance (master1.example.com) or the bastion host and view the pods in the openshift-infra project.

$ oc get pods -n openshift-infra
NAME                         READY     STATUS    RESTARTS   AGE
hawkular-cassandra-1-4q46f   1/1       Running   0          3m
hawkular-metrics-0w46z       1/1       Running   0          3m
heapster-gnhsh               1/1       Running   0          3m
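
The metrics stack can be further validated by querying the Hawkular Metrics status endpoint through its route. The hostname below is environment-specific and illustrative.

$ curl -k https://hawkular-metrics.apps.example.com/hawkular/metrics/status
{"MetricsService":"STARTED",...}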

3.3. CloudForms Integration (Optional)

The steps defined below assume that Red Hat CloudForms has been deployed and is accessible from the OpenShift environment.

Note

To receive the most information about the deployed environment, ensure that the OpenShift metrics components are deployed.

3.3.1. Requesting the Red Hat OpenShift Container Platform Management Token

The management token allows CloudForms to retrieve information from the newly deployed OpenShift environment.

To request this token, run the following command from a system with the oc client installed, using an account that has privileges to request the token from the management-infra namespace.

$ oc sa get-token -n management-infra management-admin
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtYW5hZ2VtZW50LWluZnJhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im1hbmFnZW1lbnQtYWRtaW4tdG9rZW4tdHM0cTIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibWFuYWdlbWVudC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY0ZDlmMGMxLTEyY2YtMTFlOC1iNTgzLWZhMTYzZTEwNjNlYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptYW5hZ2VtZW50LWluZnJhOm1hbmFnZW1lbnQtYWRtaW4ifQ.LwNm0652paGcJu7m63PxBhs4mjXwYcqMS5KD-0aWkEMCPo64WwNEawyyYH31SvuEPaE6qFxZwDdJHwdNsfq1CjUL4BtZHv1I2QZxpVl6gMBQowNf6fWSeGe1FDZ4lkLjzAoMOCFUWA0Z7lZM1FAlyjfz2LkPNKaFW0ffelSJ2SteuXB_4FNup-T5bKEPQf2pyrwvs2DadClyEEKpIrdZxuekJ9ZfIubcSc3pp1dZRu8wgmSQSLJ1N75raaUU5obu9cHjcbB9jpDhTW347oJOoL_Bj4bf0yyuxjuUCp3f4fs1qhyjHb5N5LKKBPgIKzoQJrS7j9Sqzo9TDMF9YQ5JLQ
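
Before adding the provider, the token can be sanity-checked against the cluster API with curl. The master hostname and port below are environment-specific, and the call assumes the service account is permitted to list nodes.

$ TOKEN=$(oc sa get-token -n management-infra management-admin)
$ curl -k -H "Authorization: Bearer $TOKEN" https://master.example.com:8443/api/v1/nodes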

3.3.2. Adding OpenShift as a Container Provider

Now that the token has been acquired, follow the steps in the link below to add Red Hat OpenShift Container Platform to Red Hat CloudForms.

https://access.redhat.com/documentation/en-us/red_hat_cloudforms/4.6/html/integration_with_openshift_container_platform/integration

3.3.3. Adding Red Hat Virtualization to CloudForms

Red Hat CloudForms can manage not only Red Hat OpenShift Container Platform but also Red Hat Virtualization. The link below contains the steps for adding Red Hat Virtualization to CloudForms.

https://access.redhat.com/documentation/en-us/red_hat_cloudforms/4.6/html/managing_providers/infrastructure_providers#red_hat_virtualization_providers