Chapter 6. Extending the Cluster

By default, the reference architecture playbooks are configured to deploy 3 master, 3 application, and 2 infrastructure nodes. This cluster size provides enough resources to get started with deploying a few test applications or a Continuous Integration workflow example. However, as the cluster begins to be utilized by more teams and projects, it will become necessary to provision more application or infrastructure nodes to support the expanding environment. To facilitate growing the cluster, the add-node.py Python script (similar to ose-on-aws.py) is provided in the openshift-ansible-contrib repository. It provisions either an application or infrastructure node per run and can be run as many times as needed. The add-node.py script launches a new AWS CloudFormation stack to provision the new resource.

6.1. Prerequisites for Adding a Node

Verify the quantity and type of the nodes in the cluster by using the oc get nodes command. The output below is an example of a complete OpenShift environment after the deployment of the reference architecture environment.

$ oc get nodes
NAME                          STATUS                     AGE
ip-10-20-4-198.ec2.internal   Ready,SchedulingDisabled   14m
ip-10-20-4-209.ec2.internal   Ready                      14m
ip-10-20-4-232.ec2.internal   Ready                      14m
ip-10-20-5-187.ec2.internal   Ready                      14m
ip-10-20-5-22.ec2.internal    Ready                      14m
ip-10-20-5-94.ec2.internal    Ready,SchedulingDisabled   14m
ip-10-20-6-42.ec2.internal    Ready                      14m
ip-10-20-6-20.ec2.internal    Ready,SchedulingDisabled   14m
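
As a quick sanity check, the listing above can be tallied with standard shell tools. The sketch below runs against a copied sample of the output rather than querying a live cluster, and relies on the convention that master nodes report SchedulingDisabled:

```shell
# Sample `oc get nodes` output copied from the listing above.
oc_output='ip-10-20-4-198.ec2.internal   Ready,SchedulingDisabled   14m
ip-10-20-4-209.ec2.internal   Ready                      14m
ip-10-20-4-232.ec2.internal   Ready                      14m
ip-10-20-5-187.ec2.internal   Ready                      14m
ip-10-20-5-22.ec2.internal    Ready                      14m
ip-10-20-5-94.ec2.internal    Ready,SchedulingDisabled   14m
ip-10-20-6-42.ec2.internal    Ready                      14m
ip-10-20-6-20.ec2.internal    Ready,SchedulingDisabled   14m'

# Masters are unschedulable in this reference architecture.
printf '%s\n' "$oc_output" | grep -c 'SchedulingDisabled'   # 3 masters
printf '%s\n' "$oc_output" | grep -cv 'SchedulingDisabled'  # 5 app/infra nodes
```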

6.2. Introduction to add-node.py

The Python script add-node.py is operationally similar to the ose-on-aws.py script. Parameters can optionally be passed in when calling the script. The --existing-stack option allows the Ansible playbooks to associate the new node with the existing AWS instances; its value is the --stack-name used when running ose-on-aws.py. Any required parameters not already set are automatically prompted for at run time. To see all allowed parameters, use the --help option.

Note

On deployments of the Reference Architecture environment at version 3.5 or later, the --use-cloudformation-facts option is available to auto-populate values. If the deployment occurred before 3.5, the values must be filled in manually. To view the possible configuration options, run add-node.py -h

6.3. Adding an Application Node

To add an application node, run the add-node.py script following the example below. Once the instance is launched, the installation of OpenShift will automatically begin.

Note

If --use-cloudformation-facts is not used, the value for --iam-role (the name of the existing IAM Instance Profile) can be found by logging in to the IAM Dashboard and selecting the Roles sub-menu. Select the node role and record the value from the Instance Profile ARN(s) line. An example Instance Profile name is OpenShift-Infra-NodeInstanceProfile-TNAGMYGY9W8K.

If the Reference Architecture deployment is version 3.5 or later:

$ ./add-node.py --existing-stack=dev --rhsm-user=rhsm-user --rhsm-password=password --public-hosted-zone=sysdeseng.com --keypair=OSE-key --rhsm-pool="Red Hat OpenShift Container Platform, Premium, 2-Core" --use-cloudformation-facts --shortname=ose-app-node03 --subnet-id=subnet-0a962f4

If the Reference Architecture deployment was performed before version 3.5:

$ ./add-node.py --existing-stack=dev --rhsm-user=rhsm-user --rhsm-password=password --public-hosted-zone=sysdeseng.com --keypair=OSE-key --rhsm-pool="Red Hat OpenShift Container Platform, Premium, 2-Core" --node-sg=sg-309f0a4a --shortname=ose-app-node03 --iam-role=OpenShift-Infra-NodeInstanceProfile-TNAGMYGY9W8K --subnet-id=subnet-0a962f4

6.4. Adding an Infrastructure Node

The process for adding an infrastructure node is nearly identical to adding an application node. The only differences are the requirement of the infrastructure security group (ose_infra_node_sg) and the name of the ELB used by the router (ose_router_elb). Follow the example steps below to add a new infrastructure node.

Note

If --use-cloudformation-facts is not used, the value for --iam-role (the name of the existing IAM Instance Profile) can be found by visiting the IAM Dashboard and selecting the Roles sub-menu. Select the node role and record the value from the Instance Profile ARN(s) line. An example Instance Profile name is OpenShift-Infra-NodeInstanceProfile-TNAGMYGY9W8K.

If the Reference Architecture deployment is version 3.5 or later:

$ ./add-node.py --existing-stack=dev --rhsm-user=rhsm-user --rhsm-password=password --public-hosted-zone=sysdeseng.com --keypair=OSE-key --rhsm-pool="Red Hat OpenShift Container Platform, Premium, 2-Core" --use-cloudformation-facts --shortname=ose-infra-node04 --node-type=infra --subnet-id=subnet-0a962f4

If the Reference Architecture deployment was performed before version 3.5:

$ ./add-node.py --rhsm-user=user --rhsm-password=password --public-hosted-zone=sysdeseng.com --keypair=OSE-key --rhsm-pool="Red Hat OpenShift Container Platform, Premium, 2-Core" --node-type=infra --iam-role=OpenShift-Infra-NodeInstanceProfile-TNAGMYGY9W8K --node-sg=sg-309f9a4a --infra-sg=sg-289f9a52 --shortname=ose-infra-node04 --subnet-id=subnet-0a962f4 --infra-elb-name=OpenShift-InfraElb-1N0DZ3CFCAHLV

6.5. Validating a Newly Provisioned Node

To verify that a newly provisioned node has been added to the existing environment, use the oc get nodes command. In this example, node ip-10-20-6-198.ec2.internal is an application node newly deployed by the add-node.py playbooks.

$ oc get nodes
NAME                          STATUS                     AGE
ip-10-20-4-198.ec2.internal   Ready,SchedulingDisabled   34m
ip-10-20-4-209.ec2.internal   Ready                      34m
ip-10-20-4-232.ec2.internal   Ready                      34m
ip-10-20-5-187.ec2.internal   Ready                      34m
ip-10-20-5-22.ec2.internal    Ready                      34m
ip-10-20-5-94.ec2.internal    Ready,SchedulingDisabled   34m
ip-10-20-6-198.ec2.internal   Ready                      1m
ip-10-20-6-42.ec2.internal    Ready                      34m
ip-10-20-6-20.ec2.internal    Ready,SchedulingDisabled   34m

$ oc get nodes --show-labels | grep app | wc -l
4
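
The same filter generalizes to infrastructure nodes. The sketch below runs against sample --show-labels lines rather than a live cluster; the role=app and role=infra label values are assumptions for illustration, based on the grep pattern above:

```shell
# Sample `oc get nodes --show-labels` lines (role=app / role=infra label
# values are assumed for illustration; check your cluster's actual labels).
labels='ip-10-20-4-209.ec2.internal  Ready  34m  role=app
ip-10-20-4-232.ec2.internal  Ready  34m  role=app
ip-10-20-5-187.ec2.internal  Ready  34m  role=app
ip-10-20-6-198.ec2.internal  Ready  1m   role=app
ip-10-20-5-22.ec2.internal   Ready  34m  role=infra
ip-10-20-6-42.ec2.internal   Ready  34m  role=infra'

printf '%s\n' "$labels" | grep -c 'role=app'    # 4 application nodes after the add
printf '%s\n' "$labels" | grep -c 'role=infra'  # 2 infrastructure nodes
```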