Chapter 6. Adding Hosts to an Existing Cluster

6.1. Overview

Depending on how your OpenShift Enterprise cluster was installed, you can add new hosts (either nodes or masters) to your installation by using the quick installer tool for quick installations, or by using the scaleup.yml playbook for advanced installations.

6.2. Adding Hosts Using the Quick Installer Tool

If you used the quick installer tool to install your OpenShift Enterprise cluster, you can re-run it to add a new node host to your existing cluster, or to reinstall the cluster entirely.

Note

Currently, you cannot use the quick installer tool to add new master hosts. You must use the advanced installation method to do so.

If you used the installer in either interactive or unattended mode, you can re-run the installation as long as you have an installation configuration file at ~/.config/openshift/installer.cfg.yml (or specify a different location with the -c option).

Important

The recommended maximum number of nodes is 300.

To add nodes to your installation:

  1. Re-run the installer with the install subcommand in interactive or unattended mode (a sample unattended re-run follows this procedure):

    $ atomic-openshift-installer [-u] [-c </path/to/file>] install
  2. The installer detects your current environment and allows you to either add an additional node or re-perform a clean install:

    Gathering information from hosts...
    Installed environment detected.
    By default the installer only adds new nodes to an installed environment.
    Do you want to (1) only add additional nodes or (2) perform a clean install?:

    Choose (1) and follow the on-screen instructions to complete your desired task.
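
For example, assuming a configuration file at the default ~/.config/openshift/installer.cfg.yml location (so the -c option can be omitted), a sketch of an unattended re-run:

    $ atomic-openshift-installer -u install

In unattended mode, any new node must already be described in the configuration file before you re-run the installer; in interactive mode, the installer prompts you for the new host's details instead.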

6.3. Adding Hosts Using the Advanced Install

If you installed using the advanced install, you can add new hosts to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on only the new hosts. Before running the scaleup.yml playbook, complete all prerequisite host preparation steps.

This process is similar to re-running the installer in the quick installation method to add nodes; however, you have more configuration options available when using the advanced method and when running the playbooks directly.

You must have an existing inventory file (for example, /etc/ansible/hosts) that is representative of your current cluster configuration in order to run the scaleup.yml playbook. If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/.ansible/hosts for the last inventory file that the installer generated, and use or modify it as needed for your inventory file. Because that file is not at the default /etc/ansible/hosts location, you must specify its location with the -i option when calling ansible-playbook later.
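
For example, a sketch of pointing the node scale-up playbook (shown in full in the procedure below) at the installer-generated inventory with -i:

    # ansible-playbook -i ~/.config/openshift/.ansible/hosts \
        /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml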

Important

The recommended maximum number of nodes is 300.

To add a host to an existing cluster:

  1. Ensure you have the latest playbooks by updating the atomic-openshift-utils package:

    # yum update atomic-openshift-utils
  2. Edit your /etc/ansible/hosts file and add new_<host_type> to the [OSEv3:children] section:

    For example, to add a new node host, add new_nodes:

    [OSEv3:children]
    masters
    nodes
    new_nodes

    To add new master hosts, add new_masters.

  3. Create a [new_<host_type>] section much like an existing section, specifying host information for any new hosts you want to add. For example, when adding a new node:

    [nodes]
    master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
    node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
    node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
    
    [new_nodes]
    node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

    See Configuring Host Variables for more options.

    When adding new masters, hosts added to the [new_masters] section must also be added to the [new_nodes] section with the openshift_schedulable=false variable. This ensures that the new master hosts are part of the OpenShift SDN and that pods are not scheduled on them. For example:

    [masters]
    master[1:2].example.com
    
    [new_masters]
    master3.example.com
    
    [nodes]
    node[1:3].example.com openshift_node_labels="{'region': 'infra'}"
    master[1:2].example.com openshift_schedulable=false
    
    [new_nodes]
    master3.example.com openshift_schedulable=false
  4. Run the scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.

    For additional nodes:

    # ansible-playbook [-i /path/to/file] \
        /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml

    For additional masters:

    # ansible-playbook [-i /path/to/file] \
        /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/scaleup.yml
  5. After the playbook completes successfully, verify the installation (a quick verification check is shown after this procedure).
  6. Finally, move any hosts you had defined in the [new_<host_type>] section into their appropriate section, but leave the [new_<host_type>] section definition itself in place. This ensures that subsequent runs using this inventory file are aware of the hosts without treating them as new. For example, when adding new nodes:

    [nodes]
    master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
    node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
    node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
    node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
    
    [new_nodes]
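
As mentioned in step 5, you can verify the installation from a master host once the playbook completes. A minimal check, assuming you are logged in as a user with cluster-admin privileges, is to list the cluster's nodes and confirm that the new host reports a Ready status:

    # oc get nodes

In the node example above, node3.example.com should appear in the output as Ready. A new master added with openshift_schedulable=false also appears in the list, but is marked as unschedulable so that no pods are placed on it.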