Chapter 2. Red Hat OpenShift Container Platform Prerequisites

A successful deployment of Red Hat OpenShift Container Platform requires many prerequisites. These include the deployment of components in Google Cloud Platform and the configuration steps required prior to the actual installation of Red Hat OpenShift Container Platform using Ansible. The subsequent sections discuss the prerequisites and configuration changes required for a Red Hat OpenShift Container Platform on Google Cloud Platform environment.

2.1. Google Cloud Platform Setup

Documentation for the Google Cloud Platform setup can be found at the following link: https://cloud.google.com/compute/docs. To access the Google Cloud Platform Console, a Google account is needed.

2.1.1. Google Cloud Platform Project

This reference architecture assumes that a "Greenfield" deployment is being done in a newly created Google Cloud Platform project. To set that up, log into the Google Cloud Platform console home and select the Project button:

Create New Project

The name refarch is used for this reference architecture:

Name New Project
Note

The project ID is required to be unique across all of Google Cloud Platform. In this implementation refarch-204310 is used.
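
As an alternative to the console, the project can also be created from the command line once the Google Cloud SDK is installed (see Section 2.2). A minimal sketch using the project ID and name from this reference architecture:

# Create the project used in this reference architecture
$ gcloud projects create refarch-204310 --name="refarch"

# Confirm the project exists
$ gcloud projects describe refarch-204310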

2.1.2. Google Cloud Platform Billing

The next step is to set up billing for Google Cloud Platform so new resources can be created. Select "Enable Billing" in the "Billing" tab of the Google Cloud Platform console. The new project can then be linked to an existing billing account, or new billing information can be entered:

Enable Billing
Link to existing account
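
Billing can also be linked from the command line. The following sketch assumes the Google Cloud SDK beta components are installed and a billing account already exists; the billing account ID shown is a placeholder:

# List the available billing accounts
$ gcloud beta billing accounts list

# Link the project to a billing account (placeholder account ID)
$ gcloud beta billing projects link refarch-204310 \
    --billing-account=XXXXXX-XXXXXX-XXXXXX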

2.1.3. Google Cloud Platform Cloud DNS Zone

In this reference implementation guide, the domain example.com was purchased through an external provider and the subdomain gce.example.com is configured to be managed by Cloud DNS. In the example below, the domain gce.example.com represents the Publicly Hosted Domain Name used for the installation of Red Hat OpenShift Container Platform. The following instructions show how to add the Publicly Hosted Domain Name to Cloud DNS.

  • Select the "Network services" → "Cloud DNS" section within the Google Cloud Platform console.
  • Select "Create Zone".
  • Enter a name that logically represents the Publicly Hosted Domain Name as the Cloud DNS Zone in the "Zone Name" box.

    • The zone name must use dashes (-) as separators
    • It is recommended to replace any dots (.) with dashes (-) for easier recognition of the zone (gce-example-com)
  • Enter the Publicly Hosted Domain Name (gce.example.com) in the "DNS name" box.
  • Enter a Description (Public Zone for OpenShift Container Platform 3 Reference Architecture)
  • Select "Create"
Cloud DNS addition
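
The same zone can be created from the command line. A sketch using the zone name, DNS name, and description from the steps above:

$ gcloud dns managed-zones create gce-example-com \
    --dns-name="gce.example.com." \
    --description="Public Zone for OpenShift Container Platform 3 Reference Architecture"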

Once the Public Zone is created, delegation for gce.example.com should be set up to use the Google name servers.

Note

Refer to the domain registrar provider documentation for instructions on how to make the name server change.

Cloud DNS nameservers
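
The name servers assigned to the zone can be retrieved with gcloud and, once the registrar change has propagated, the delegation can be verified with dig (provided by the bind-utils package). A quick sketch:

# Name servers assigned by Cloud DNS to the zone
$ gcloud dns managed-zones describe gce-example-com \
    --format="value(nameServers)"

# Verify the delegation once the registrar has been updated
$ dig +short NS gce.example.com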

2.1.4. Google Cloud Platform service account

A service account is created to avoid using personal accounts when deploying Google Cloud Platform objects. This is an optional step, and the infrastructure objects can instead be deployed using a personal account.

Note

For simplicity, the service account has project editor privileges, meaning it can create and delete any object within the project. Use it with caution.

The following instructions show how to create a service account in the Google Cloud Platform project:

  • Select the "IAM" → "Service accounts" section within the Google Cloud Platform console.
  • Select "Create Service account".
  • Select "Project" → "Editor" as service account Role.
  • Select "Furnish a new private key".
  • Select "Save"
Warning

This process generates a json file with a private key that is used to access Google Cloud Platform. It should be stored securely to prevent unauthorized access to Google Cloud Platform.

Service Account
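
The equivalent can be done from the command line. A sketch, assuming the service account name ocp-sa (which appears later in this guide) and the key file path used in the following sections:

# Create the service account
$ gcloud iam service-accounts create ocp-sa \
    --display-name="OCP service account"

# Grant project editor privileges
$ gcloud projects add-iam-policy-binding refarch-204310 \
    --member="serviceAccount:ocp-sa@refarch-204310.iam.gserviceaccount.com" \
    --role="roles/editor"

# Generate and download a private key in json format
$ gcloud iam service-accounts keys create ~/Downloads/my-key-file.json \
    --iam-account=ocp-sa@refarch-204310.iam.gserviceaccount.com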

2.1.5. Google Cloud Platform SSH Keys (optional)

Google Cloud Platform injects ssh public keys as authorized keys so the created instances can be accessed using ssh. The list of ssh keys injected into an instance can be configured on a per-instance or per-project basis.

The following instructions show how to create and add an ssh key to the Google Cloud Platform project. The creation step is optional if an ssh key already exists, but the key must still be imported.

Note

This procedure adds the ssh key to the whole project, but it can be overridden on a per-instance basis. See https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-project-keys for more information.

  • From the local workstation, create an ssh keypair:
$ ssh-keygen -t rsa -N '' -f ~/.ssh/gcp_key

This process generates both the private and the public key:

Your identification has been saved in ~/.ssh/gcp_key.
Your public key has been saved in ~/.ssh/gcp_key.pub.
The key fingerprint is:
SHA256:5gmNKPww1KkUhX8NwPM2OfO+R5ipS1vLJgZacm0FIys myuser@example.com
The key's randomart image is:
+---[RSA 2048]----+
|  .+o.           |
|  .o+.+          |
|  o.o= *         |
| +E.o.Ooo        |
|  *..+o*S+       |
|  .=+ o+=..      |
|   =.o.o+.       |
|  .  .++o..      |
|     .o++o       |
+----[SHA256]-----+
  • View and copy the public key:
ssh-rsa <KEY_VALUE> <user>@<hostname>
Note

While <user>@<hostname> is usually just a comment, when Google Cloud Platform tools inject the ssh key a user with that name is created, so it needs to fit the environment.

As an example, the following public key has been tweaked to remove the <hostname>:

$ cat ~/.ssh/gcp_key.pub
ssh-rsa AAAAB3NzaC1y...[OUTPUT OMITTED]...vzwgvAtkYXAbnP myuser
  • Select the "Compute Engine" → "Metadata" section within the Google Cloud Platform console.
  • Select "SSH Keys" tab and select "+ Add item"
SSH Keys
  • Add the public key content to the box and "Save" it.
SSH Key added
  • Select the "Compute Engine" → "Metadata" → "Metadata" section within the Google Cloud Platform console.
  • Edit the 'sshKeys' key and append the ssh public key with the following format:
<user>:ssh-rsa <KEY_VALUE> <user>

As an example:

myuser:ssh-rsa AAAAB3NzaC1y...[OUTPUT OMITTED]...vzwgvAtkYXAbnP myuser
SSH Key added in the metadata/sshKeys
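
For reference, the project-wide key can also be added with gcloud. This is a sketch only; note that add-metadata replaces the value of the given metadata key, so any existing project-wide keys should be included in the file as well (the ssh-keys.txt file name is an arbitrary choice):

# Build the <user>:<public key> entry and add it as project metadata
$ echo "myuser:$(cat ~/.ssh/gcp_key.pub)" > ssh-keys.txt

$ gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=ssh-keys.txt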

2.2. Google Cloud Platform tooling preparation

To manage the Google Cloud Platform resources using the command line, it is required to download and install the Google Cloud SDK with the gcloud command-line tool.

Google provides a repository that can be used to install the Google Cloud SDK. The following instructions show how to install the Google Cloud SDK on Red Hat Enterprise Linux 7 or a Fedora-based OS:

  • Configure a new repository with the Google Cloud SDK information:
$ sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOF
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  • Install the google-cloud-sdk package and its dependencies:
$ sudo yum install google-cloud-sdk

2.2.1. Google Cloud Platform command line configuration

The gcloud command requires some configuration to access the Google Cloud Platform project and resources. As a regular user, execute gcloud init and configure it with the user/password used to access the Google Cloud Platform console:

$ gcloud init

Optionally disable usage reporting:

$ gcloud config set disable_usage_reporting true

To verify it is working:

$ gcloud info
Google Cloud SDK [200.0.0]
...[OUTPUT OMITTED]...
Account: [XXX]
Project: [refarch-204310]

Current Properties:
  [core]
    project: [refarch-204310]
    account: [XXX]
    disable_usage_reporting: [True]
  [compute]
    region: [us-west1]
    zone: [us-west1-a]
...[OUTPUT OMITTED]...

2.2.2. Google Cloud Platform command line service account configuration

The service account previously created should be used to deploy the objects. The json file downloaded as part of the service account creation is used.

  • Verify the content:
$ cat ~/Downloads/my-key-file.json
{
  "type": "service_account",
  "project_id": "refarch-204310",
  "private_key_id": "xxxx",
  "private_key": "....",
  ...[OUTPUT OMITTED]...
  • Use the key file to activate the service account:
$ gcloud auth activate-service-account --key-file=~/Downloads/my-key-file.json
  • Verify it is used with gcloud cli:
$ gcloud auth list
                 Credentialed Accounts
ACTIVE  ACCOUNT
        my-regular-user@example.com
*       ocp-sa@refarch-204310.iam.gserviceaccount.com

2.2.2.1. Google Cloud Platform gsutil command line service account configuration

The gsutil command is used to handle object storage. By default, gsutil uses the service account access configured for gcloud.

If the command fails due to bad permissions, verify the service account has the proper permissions to handle storage (storage.buckets.*).

As a test, a sample bucket can be created/deleted:

$ BUCKETNAME=$(cat /dev/urandom | tr -dc 'a-z' | fold -w 32 | head -n 1)
$ gsutil mb gs://${BUCKETNAME}
$ gsutil rb gs://${BUCKETNAME}

If gsutil still complains about permissions after verifying the service account has the proper ones, configure gsutil directly with the access key and secret from the service account key file.

  • Verify the content of the key file:
$ cat ~/Downloads/my-key-file.json
{
  "type": "service_account",
  "project_id": "refarch-204310",
  "private_key_id": "xxxx",
  "private_key": "....",
  ...[OUTPUT OMITTED]...
  • Configure gsutil to use the private_key_id and private_key:
$ gsutil config -a
  ...[OUTPUT OMITTED]...
  What is your google access key ID? xxxx
  What is your google secret access key? "-----BEGIN PRIVATE KEY-----\n...[OUTPUT OMITTED]...\n-----END PRIVATE KEY-----\n"

2.3. Environment configuration

To simplify the creation of the Google Cloud Platform components, the following values are exported as environment variables and reused during command execution. These variables are examples that need to be modified to fit the particular environment.

Environment variables

# Google Project ID
export PROJECTID="refarch-204310"
# Google Region
export REGION="us-west1"
export DEFAULTZONE="us-west1-a"
# For multizone deployments
ZONES=("us-west1-a" "us-west1-b" "us-west1-c")
# For single zone deployments
# ZONES=("us-west1-a")
export MYZONES_LIST="$(declare -p ZONES)"
# OpenShift Cluster ID
export CLUSTERID="refarch"
# Network and subnet configuration
export CLUSTERID_NETWORK="${CLUSTERID}-net"
export CLUSTERID_SUBNET="${CLUSTERID}-subnet"
# Subnet CIDR, modify if needed
export CLUSTERID_SUBNET_CIDR="10.240.0.0/24"
# DNS
export DNSZONE="example-com"
export DOMAIN="gce.example.com."
export TTL=3600
# RHEL image to be used
export RHELIMAGE="${CLUSTERID}-rhel-image"
export IMAGEPROJECT="${PROJECTID}"
# Bastion settings
export BASTIONDISKSIZE="20GB"
export BASTIONSIZE="g1-small"
# Master nodes settings
export MASTER_NODE_COUNT=3
export MASTERDISKSIZE="40GB"
export MASTERSIZE="n1-standard-8"
export ETCDSIZE="50GB"
export MASTERCONTAINERSSIZE="20GB"
export MASTERLOCALSIZE="30GB"
# Infra nodes settings
export INFRA_NODE_COUNT=3
export INFRADISKSIZE="40GB"
# By default, 8Gi RAM is required to run elasticsearch pods
# as part of the aggregated logging component
export INFRASIZE="n1-standard-8"
export INFRACONTAINERSSIZE="20GB"
export INFRALOCALSIZE="30GB"
# App nodes settings
export APP_NODE_COUNT=3
export APPDISKSIZE="40GB"
export APPSIZE="n1-standard-2"
export APPCONTAINERSSIZE="20GB"
export APPLOCALSIZE="30GB"
# CNS nodes settings
export CNS_NODE_COUNT=3
export CNSDISKSIZE="40GB"
# By default, 8Gi RAM is required to run CNS nodes
export CNSSIZE="n1-standard-8"
export CNSDISKSIZE="40GB"
export CNSCONTAINERSSIZE="20GB"
export CNSGLUSTERSIZE="100GB"

Instead of exporting every variable individually, and to simplify the deployment, a file can be created with the previous content and then sourced to export them all in a single command:

Environment variables source script

$ cat ~/myvars
# Google Project ID
export PROJECTID="refarch-204310"
# Google Region
export REGION="us-west1"
export DEFAULTZONE="us-west1-a"
# For multizone deployments
ZONES=("us-west1-a" "us-west1-b" "us-west1-c")
# For single zone deployments
# ZONES=("us-west1-a")
export MYZONES_LIST="$(declare -p ZONES)"
# OpenShift Cluster ID
export CLUSTERID="refarch"
# Network and subnet configuration
export CLUSTERID_NETWORK="${CLUSTERID}-net"
export CLUSTERID_SUBNET="${CLUSTERID}-subnet"
# Subnet CIDR, modify if needed
export CLUSTERID_SUBNET_CIDR="10.240.0.0/24"
# DNS
export DNSZONE="example-com"
export DOMAIN="gce.example.com."
export TTL=3600
# RHEL image to be used
export RHELIMAGE="${CLUSTERID}-rhel-image"
export IMAGEPROJECT="${PROJECTID}"
# Bastion settings
export BASTIONDISKSIZE="20GB"
export BASTIONSIZE="g1-small"
# Master nodes settings
export MASTER_NODE_COUNT=3
export MASTERDISKSIZE="40GB"
export MASTERSIZE="n1-standard-8"
export ETCDSIZE="50GB"
export MASTERCONTAINERSSIZE="20GB"
export MASTERLOCALSIZE="30GB"
# Infra nodes settings
export INFRA_NODE_COUNT=3
export INFRADISKSIZE="40GB"
# By default, 8Gi RAM is required to run elasticsearch pods
# as part of the aggregated logging component
export INFRASIZE="n1-standard-8"
export INFRACONTAINERSSIZE="20GB"
export INFRALOCALSIZE="30GB"
# App nodes settings
export APP_NODE_COUNT=3
export APPDISKSIZE="40GB"
export APPSIZE="n1-standard-2"
export APPCONTAINERSSIZE="20GB"
export APPLOCALSIZE="30GB"
# CNS nodes settings
export CNS_NODE_COUNT=3
export CNSDISKSIZE="40GB"
# By default, 8Gi RAM is required to run CNS nodes
export CNSSIZE="n1-standard-8"
export CNSDISKSIZE="40GB"
export CNSCONTAINERSSIZE="20GB"
export CNSGLUSTERSIZE="100GB"
$ source ~/myvars

Configure the default project, region and zone.

Default project, region and zone

$ gcloud config set project ${PROJECTID}
$ gcloud config set compute/region ${REGION}
$ gcloud config set compute/zone ${DEFAULTZONE}

2.4. Creating a Red Hat Enterprise Linux Base Image

The Red Hat Enterprise Linux image provided by Google Cloud Platform can be used for the deployment of Red Hat OpenShift Container Platform, but instances deployed using that image are charged more as the image carries its own Red Hat Enterprise Linux subscription. To avoid paying twice, the following process can be used to upload a custom Red Hat Enterprise Linux image. For this particular reference environment the image used is Red Hat Enterprise Linux 7.5.

Note

For more information on how to create custom Google Cloud Platform images, see https://cloud.google.com/compute/docs/images#custom_images

The following instructions show how to create and add a Red Hat Enterprise Linux image, with the proper Google Cloud Platform tools included, to a Google Cloud Platform project:

  • Cloud Access: download the Red Hat Enterprise Linux 7.5 KVM guest image (rhel-server-7.5-x86_64-kvm.qcow2) from the Red Hat Customer Portal.
  • Convert the qcow2 image to raw format:
$ sudo yum install qemu-img

$ qemu-img convert -p -S 4096 -f qcow2 -O raw rhel-server-7.5-x86_64-kvm.qcow2 disk.raw
Note

The raw disk name should be disk.raw and the raw file size can be up to 10GB.

  • Create a tar.gz file with the raw disk
$ tar -Szcf rhel-7.5.tar.gz disk.raw
  • Create a bucket to host the temporary image with the proper labels
$ gsutil mb -l ${REGION} gs://${CLUSTERID}-rhel-image-temp

$ cat <<EOF > labels.json
{
  "ocp-cluster": "${CLUSTERID}"
}
EOF

$ gsutil label set labels.json gs://${CLUSTERID}-rhel-image-temp

$ rm -f labels.json
Note

Google Cloud Platform buckets are global objects. This means that every bucket name must be unique. If the bucket creation process fails because the name is used, select a different bucket name.

  • Upload the file
$ gsutil -o GSUtil:parallel_composite_upload_threshold=150M \
    cp rhel-7.5.tar.gz gs://${CLUSTERID}-rhel-image-temp
  • Create a Google Cloud Platform image with the temporary image
$ gcloud compute images create ${CLUSTERID}-rhel-temp-image \
  --family=rhel \
  --source-uri=https://storage.googleapis.com/${CLUSTERID}-rhel-image-temp/rhel-7.5.tar.gz
  • Create a temporary firewall rule to allow ssh access to the temporary instance:
$ gcloud compute firewall-rules create ${CLUSTERID}-ssh-temp \
  --direction=INGRESS --priority=1000 --network=default \
  --action=ALLOW --rules=tcp:22,icmp \
  --source-ranges=0.0.0.0/0 --target-tags=${CLUSTERID}-temp
  • Create an instance with the temporary image to be customized for Google Cloud Platform integration:
$ gcloud compute instances create ${CLUSTERID}-temp \
  --zone=${DEFAULTZONE} \
  --machine-type=g1-small \
  --network=default --async \
  --maintenance-policy=MIGRATE \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
  --min-cpu-platform=Automatic \
  --image=${CLUSTERID}-rhel-temp-image \
  --tags=${CLUSTERID}-temp \
  --boot-disk-size=10GB --boot-disk-type=pd-standard \
  --boot-disk-device-name=${CLUSTERID}-temp
  • Verify the instance is working properly
$ gcloud compute instances get-serial-port-output ${CLUSTERID}-temp

$ gcloud compute instances list
  • Connect to the instance using ssh and the key previously created. The IP address is shown in the previous gcloud compute instances list command.
$ ssh -i ~/.ssh/gcp_key cloud-user@instance-ip
Note

Because the Google Cloud Platform tools are not installed yet (the ssh public key is injected using cloud-init), the custom user is not created and the default cloud-user is used, even if the key was created with a different username.

  • Temporarily subscribe the instance, attach it to the proper pool, and configure the basic repositories:
$ sudo subscription-manager register --username=myuser --password=mypass

$ sudo subscription-manager list --available

$ sudo subscription-manager attach --pool=mypool

$ sudo subscription-manager repos --disable="*" \
    --enable=rhel-7-server-rpms
  • Add the Google Cloud Platform repo:
$ sudo tee /etc/yum.repos.d/google-cloud.repo << EOF
[google-cloud-compute]
name=Google Cloud Compute
baseurl=https://packages.cloud.google.com/yum/repos/google-cloud-compute-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  • Remove unneeded packages and install the required ones
$ sudo yum remove irqbalance cloud-init rhevm-guest-agent-common \
    kexec-tools microcode_ctl rpcbind

$ sudo yum install google-compute-engine python-google-compute-engine \
    rng-tools acpid firewalld
Note

The google-compute-engine packages include tools that provide functionality similar to cloud-init and are recommended for Google Cloud Platform instances. For more information see https://github.com/GoogleCloudPlatform/compute-image-packages

  • Optionally, update all the packages to the latest versions
$ sudo yum update
  • Enable the required services:
$ sudo systemctl enable rngd \
    google-accounts-daemon \
    google-clock-skew-daemon \
    google-shutdown-scripts \
    google-network-daemon
  • Clean up the image:
$ sudo tee /etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
MTU=1460
EOF

$ sudo yum clean all

$ sudo rm -rf /var/cache/yum

$ sudo logrotate -f /etc/logrotate.conf

$ sudo rm -f /var/log/*-???????? /var/log/*.gz

$ sudo rm -f /var/log/dmesg.old /var/log/anaconda

$ cat /dev/null | sudo tee /var/log/audit/audit.log

$ cat /dev/null | sudo tee /var/log/wtmp

$ cat /dev/null | sudo tee /var/log/lastlog

$ cat /dev/null | sudo tee /var/log/grubby

$ sudo rm -f /etc/udev/rules.d/70-persistent-net.rules \
    /etc/udev/rules.d/75-persistent-net-generator.rules \
    /etc/ssh/ssh_host_* /home/cloud-user/.ssh/*

$ sudo subscription-manager remove --all

$ sudo subscription-manager unregister

$ sudo subscription-manager clean

$ export HISTSIZE=0

$ sudo poweroff
Note

For more information on the steps to make a clean VM for use as a template or clone, see https://access.redhat.com/solutions/198693

  • Once the instance has been powered off, the proper image can be created:
$ gcloud compute images create ${CLUSTERID}-rhel-image \
  --family=rhel --source-disk=${CLUSTERID}-temp \
  --source-disk-zone=${DEFAULTZONE}

After this last step, the image is ready to be used for Red Hat OpenShift Container Platform instances.
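
Optionally, the new image can be inspected before the temporary resources are removed:

$ gcloud compute images describe ${CLUSTERID}-rhel-image

# List custom images only
$ gcloud compute images list --no-standard-images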

The next steps clean up the temporary image, buckets and firewall rules:

$ gcloud compute instances delete ${CLUSTERID}-temp \
  --delete-disks all --zone=${DEFAULTZONE}

$ gcloud compute images delete ${CLUSTERID}-rhel-temp-image

$ gsutil -m rm -r gs://${CLUSTERID}-rhel-image-temp

$ gcloud compute firewall-rules delete ${CLUSTERID}-ssh-temp

The last step is to clean up the workstation:

$ rm -f disk.raw \
      rhel-server-7.5-x86_64-kvm.qcow2 \
      rhel-7.5.tar.gz

2.5. Creating Red Hat OpenShift Container Platform Networks

A VPC network and subnet are created to allow virtual machines to be launched.

Network and subnet creation

# Network
$ gcloud compute networks create ${CLUSTERID_NETWORK} --subnet-mode custom

# Subnet
$ gcloud compute networks subnets create ${CLUSTERID_SUBNET} \
  --network ${CLUSTERID_NETWORK} \
  --range ${CLUSTERID_SUBNET_CIDR}
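
Optionally, the network and subnet definitions can be reviewed before creating the firewall rules:

$ gcloud compute networks describe ${CLUSTERID_NETWORK}

$ gcloud compute networks subnets describe ${CLUSTERID_SUBNET} \
    --region ${REGION}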

2.6. Creating Firewall Rules

Google Cloud Platform firewall rules allow the user to define inbound and outbound traffic filters that can be applied to tags on a network. This allows the user to limit network traffic to each instance based on the function of the instance services rather than depending on host-based filtering.

This section describes the ports and services required for each type of host and how to create the security groups on Google Cloud Platform.

The following table shows the tags associated with each instance type:

Table 2.1. Firewall Rules association

Instance type    Tags associated
Bastion          ${CLUSTERID}-bastion
App nodes        ${CLUSTERID}-node
Masters          ${CLUSTERID}-master and ${CLUSTERID}-node
Infra nodes      ${CLUSTERID}-infra and ${CLUSTERID}-node
CNS nodes        ${CLUSTERID}-cns and ${CLUSTERID}-node

Note

There is an extra tag, ${CLUSTERID}ocp, used by default when creating "LoadBalancer" services, which needs to be applied to the application nodes. A Bugzilla is being investigated to allow customization of that tag.

2.6.1. Bastion Firewall rules

The bastion instance only needs to allow inbound ssh from the internet, and it should be allowed to reach any host in the network as the installation is performed from the bastion instance. This instance also exists to serve as the jump host between the private subnet and the public internet.

Table 2.2. Bastion Allowed firewall rules

Port/Protocol    Service    Remote source    Purpose
22/TCP           SSH        Any              Secure shell login

Creation of the above firewall rules is as follows:

Bastion Firewall rules

# External to bastion
$ gcloud compute firewall-rules create ${CLUSTERID}-external-to-bastion \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=tcp:22,icmp \
  --source-ranges=0.0.0.0/0 --target-tags=${CLUSTERID}-bastion

# Bastion to all hosts
$ gcloud compute firewall-rules create ${CLUSTERID}-bastion-to-any \
    --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
    --action=ALLOW --rules=all \
    --source-tags=${CLUSTERID}-bastion --target-tags=${CLUSTERID}-node

2.6.2. Master Firewall Rules

The Red Hat OpenShift Container Platform master firewall rules require the most complex network access controls. In addition to the ports used by the API and master console, these nodes contain the etcd servers that form the cluster.

Table 2.3. Master Host Allowed firewall rules

Port/Protocol    Service    Remote source        Purpose
2379/TCP         etcd       Masters              Client → Server connections
2380/TCP         etcd       Masters              Server → Server cluster communications
8053/TCP         DNS        Masters and nodes    Internal name services (3.2+)
8053/UDP         DNS        Masters and nodes    Internal name services (3.2+)
443/TCP          HTTPS      Any                  Master WebUI and API
10250/TCP        kubelet    Nodes                Kubelet communications

Note

As master nodes are also nodes, the same rules used for Red Hat OpenShift Container Platform nodes apply to them.

Creation of the above firewall rules is as follows:

Master Firewall rules

# Nodes to master
$ gcloud compute firewall-rules create ${CLUSTERID}-node-to-master \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=udp:8053,tcp:8053 \
  --source-tags=${CLUSTERID}-node --target-tags=${CLUSTERID}-master

# Master to node
$ gcloud compute firewall-rules create ${CLUSTERID}-master-to-node \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=tcp:10250 \
  --source-tags=${CLUSTERID}-master --target-tags=${CLUSTERID}-node

# Master to master
$ gcloud compute firewall-rules create ${CLUSTERID}-master-to-master \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=tcp:2379,tcp:2380 \
  --source-tags=${CLUSTERID}-master --target-tags=${CLUSTERID}-master

# Any to master
$ gcloud compute firewall-rules create ${CLUSTERID}-any-to-masters \
  --direction=INGRESS --priority=1000  --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=tcp:443 \
  --source-ranges=${CLUSTERID_SUBNET_CIDR} --target-tags=${CLUSTERID}-master

2.6.3. Infrastructure Node Firewall Rules

The infrastructure nodes run the Red Hat OpenShift Container Platform router and the registry. The firewall rules must accept inbound connections on the web ports so traffic can be forwarded to its destination.

Table 2.4. Infrastructure Node Allowed firewall rules

Port/Protocol    Service          Remote source    Purpose
80/TCP           HTTP             Any              Cleartext application web traffic
443/TCP          HTTPS            Any              Encrypted application web traffic
9200/TCP         ElasticSearch    Any              ElasticSearch API
9300/TCP         ElasticSearch    Any              Internal cluster use

Creation of the above firewall rules is as follows:

Infrastructure node Firewall rules

# Infra node to infra node
$ gcloud compute firewall-rules create ${CLUSTERID}-infra-to-infra \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=tcp:9200,tcp:9300 \
  --source-tags=${CLUSTERID}-infra --target-tags=${CLUSTERID}-infra

# Routers
$ gcloud compute firewall-rules create ${CLUSTERID}-any-to-routers \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --source-ranges 0.0.0.0/0 \
  --target-tags ${CLUSTERID}-infra \
  --allow tcp:443,tcp:80

2.6.4. Node Firewall rules

The node firewall rules allow pod-to-pod communication via SDN traffic.

Table 2.5. Node Allowed firewall rules

Port/Protocol    Service    Remote source    Purpose
4789/UDP         SDN        Nodes            Pod to pod communications
10250/TCP        Kubelet    Infra nodes      Heapster to gather metrics from nodes

Creation of the above firewall rules is as follows:

Node Firewall rules

# Node to node SDN
$ gcloud compute firewall-rules create ${CLUSTERID}-node-to-node \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=udp:4789 \
  --source-tags=${CLUSTERID}-node --target-tags=${CLUSTERID}-node

# Infra to node kubelet
$ gcloud compute firewall-rules create ${CLUSTERID}-infra-to-node \
    --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
    --action=ALLOW --rules=tcp:10250 \
    --source-tags=${CLUSTERID}-infra --target-tags=${CLUSTERID}-node

Note

As masters and infrastructure nodes are tagged as nodes, the same rules apply to them.

2.6.5. CNS Node Firewall Rules (Optional)

The CNS nodes require specific ports for the glusterfs pods. The rules defined only allow glusterfs communication.

Table 2.6. CNS Allowed firewall rules

Port/Protocol      Service    Remote source    Purpose
111/TCP            Gluster    Gluster Nodes    Portmap
111/UDP            Gluster    Gluster Nodes    Portmap
2222/TCP           Gluster    Gluster Nodes    CNS communication
3260/TCP           Gluster    Gluster Nodes    Gluster Block
24007/TCP          Gluster    Gluster Nodes    Gluster Daemon
24008/TCP          Gluster    Gluster Nodes    Gluster Management
24010/TCP          Gluster    Gluster Nodes    Gluster Block
49152-49664/TCP    Gluster    Gluster Nodes    Gluster Client Ports

Creation of the above firewall rules is as follows:

CNS Node Firewall rules

# CNS to CNS node
$ gcloud compute firewall-rules create ${CLUSTERID}-cns-to-cns \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW --rules=tcp:2222 \
  --source-tags=${CLUSTERID}-cns --target-tags=${CLUSTERID}-cns

# Node to CNS node (client)
$ gcloud compute firewall-rules create ${CLUSTERID}-node-to-cns \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --action=ALLOW \
  --rules=tcp:111,udp:111,tcp:3260,tcp:24007-24010,tcp:49152-49664 \
  --source-tags=${CLUSTERID}-node --target-tags=${CLUSTERID}-cns
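
Once all the rules are in place, they can be reviewed for the cluster network. A sketch only; the filter expression matches rules by network name:

$ gcloud compute firewall-rules list \
    --filter="network:${CLUSTERID_NETWORK}"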

2.7. Creating External IP Addresses

Even though all the instances have an external IP for internet access, those IPs are ephemeral. A set of named static external IPs is required so they can be registered in DNS:

  • Masters load balancer IP
  • Applications load balancer IP
  • Bastion host

Creation of the above external IPs is as follows:

External IPs

# Masters load balancer
$ gcloud compute addresses create ${CLUSTERID}-master-lb \
    --ip-version=IPV4 \
    --global

# Applications load balancer
$ gcloud compute addresses create ${CLUSTERID}-apps-lb \
    --region ${REGION}

# Bastion host
$ gcloud compute addresses create ${CLUSTERID}-bastion \
  --region ${REGION}

2.8. Creating External DNS Records

The Cloud DNS service is used to register three external DNS entries:

Table 2.7. DNS Records

DNS Record                        IP                               Example
${CLUSTERID}-ocp.${DOMAIN}        Masters load balancer IP         refarch-ocp.gce.example.com
*.${CLUSTERID}-apps.${DOMAIN}     Applications load balancer IP    *.refarch-apps.gce.example.com
${CLUSTERID}-bastion.${DOMAIN}    Bastion host                     refarch-bastion.gce.example.com

DNS Records

# Masters load balancer entry
$ export LBIP=$(gcloud compute addresses list \
  --filter="name:${CLUSTERID}-master-lb" --format="value(address)")

$ gcloud dns record-sets transaction start --zone=${DNSZONE}

$ gcloud dns record-sets transaction add \
  ${LBIP} --name=${CLUSTERID}-ocp.${DOMAIN} --ttl=${TTL} --type=A \
  --zone=${DNSZONE}
$ gcloud dns record-sets transaction execute --zone=${DNSZONE}

# Applications load balancer entry
$ export APPSLBIP=$(gcloud compute addresses list \
  --filter="name:${CLUSTERID}-apps-lb" --format="value(address)")

$ gcloud dns record-sets transaction start --zone=${DNSZONE}

$ gcloud dns record-sets transaction add \
  ${APPSLBIP} --name=\*.${CLUSTERID}-apps.${DOMAIN} --ttl=${TTL} --type=A \
  --zone=${DNSZONE}

$ gcloud dns record-sets transaction execute --zone=${DNSZONE}

# Bastion host
$ export BASTIONIP=$(gcloud compute addresses list \
  --filter="name:${CLUSTERID}-bastion" --format="value(address)")

$ gcloud dns record-sets transaction start --zone=${DNSZONE}

$ gcloud dns record-sets transaction add \
  ${BASTIONIP} --name=${CLUSTERID}-bastion.${DOMAIN} --ttl=${TTL} --type=A \
  --zone=${DNSZONE}

$ gcloud dns record-sets transaction execute --zone=${DNSZONE}

2.9. Creating Google Cloud Platform Instances

The reference environment in this document consists of the following Google Cloud Platform instances:

  • a single bastion instance
  • three master instances
  • six node instances (3 infrastructure nodes and 3 application nodes)

The bastion host serves as the installer of the Ansible playbooks that deploy Red Hat OpenShift Container Platform as well as an entry point for management tasks.

The master nodes contain the master components: the API server, controller manager server, and etcd. The master manages nodes in its Kubernetes cluster and schedules pods to run on nodes.

The nodes provide the runtime environments for the containers. Each node in a cluster has the required services to be managed by the master nodes. Nodes have the required services to run pods.

For more information about the differences between master and nodes see OpenShift Documentation - Kubernetes Infrastructure.

Note

Always check resource quotas before the deployment of resources. Support requests can be submitted to request the ability to deploy more instances of a given type or to increase the total number of CPUs.
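
Current usage and limits can be reviewed from the command line before creating the instances; for example, using the variables defined earlier:

# Region-level quotas (CPUs, addresses, persistent disks, ...)
$ gcloud compute regions describe ${REGION} --format="yaml(quotas)"

# Project-level quotas
$ gcloud compute project-info describe --format="yaml(quotas)"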

2.9.1. Instance Storage

Dedicated volumes are attached to instances to provide additional disk space. Red Hat OpenShift Container Platform requires at least 40 GB of space available on the master nodes and at least 15GB on the infrastructure and application nodes for the /var partition. This reference architecture breaks out /var/lib/etcd, container storage, and ephemeral pod storage to provide individual storage for those specific directories.

Each master node mounts 3 dedicated disks using the steps provided in this reference architecture:

  • A dedicated disk to host etcd storage.
  • A dedicated disk to store the container images.
  • A dedicated disk to store ephemeral pod storage.

Each infrastructure and application node Google Cloud Platform instance creates and mounts 2 dedicated disks using the steps provided in this reference architecture:

  • A dedicated disk to store the container images.
  • A dedicated disk to store ephemeral pod storage.

Table 2.8. Instances storage

Disk                                   Role                     Mountpoint                                 Instance type
${CLUSTERID}-master-${i}-etcd          etcd storage             /var/lib/etcd                              Master only
${CLUSTERID}-master-${i}-containers    Container storage        /var/lib/docker                            Master and nodes
${CLUSTERID}-master-${i}-local         Ephemeral pod storage    /var/lib/origin/openshift.local.volumes    Master and nodes

2.9.2. Bastion instance

The bastion host serves as the installer of the Ansible playbooks that deploy Red Hat OpenShift Container Platform as well as an entry point for management tasks. The bastion instance is created and attached to the external IP previously created:

Bastion instance

$ export BASTIONIP=$(gcloud compute addresses list \
  --filter="name:${CLUSTERID}-bastion" --format="value(address)")

$ gcloud compute instances create ${CLUSTERID}-bastion \
  --async --machine-type=${BASTIONSIZE} \
  --subnet=${CLUSTERID_SUBNET} \
  --address=${BASTIONIP} \
  --maintenance-policy=MIGRATE \
  --scopes=https://www.googleapis.com/auth/cloud.useraccounts.readonly,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol \
  --tags=${CLUSTERID}-bastion \
  --metadata "ocp-cluster=${CLUSTERID},${CLUSTERID}-type=bastion" \
  --image=${RHELIMAGE} --image-project=${IMAGEPROJECT} \
  --boot-disk-size=${BASTIONDISKSIZE} --boot-disk-type=pd-ssd \
  --boot-disk-device-name=${CLUSTERID}-bastion \
  --zone=${DEFAULTZONE}

2.9.3. Master Instances and Components

The following script needs to be created on the local workstation; it is used to customize the master instances at boot time:

Master customization

$ vi master.sh
#!/bin/bash
LOCALVOLDEVICE=$(readlink -f /dev/disk/by-id/google-*local*)
ETCDDEVICE=$(readlink -f /dev/disk/by-id/google-*etcd*)
CONTAINERSDEVICE=$(readlink -f /dev/disk/by-id/google-*containers*)
LOCALDIR="/var/lib/origin/openshift.local.volumes"
ETCDDIR="/var/lib/etcd"
CONTAINERSDIR="/var/lib/docker"

for device in ${LOCALVOLDEVICE} ${ETCDDEVICE} ${CONTAINERSDEVICE}
do
  mkfs.xfs ${device}
done

for dir in ${LOCALDIR} ${ETCDDIR} ${CONTAINERSDIR}
do
  mkdir -p ${dir}
  restorecon -R ${dir}
done

echo UUID=$(blkid -s UUID -o value ${LOCALVOLDEVICE}) ${LOCALDIR} xfs defaults,discard,gquota 0 2 >> /etc/fstab
echo UUID=$(blkid -s UUID -o value ${ETCDDEVICE}) ${ETCDDIR} xfs defaults,discard 0 2 >> /etc/fstab
echo UUID=$(blkid -s UUID -o value ${CONTAINERSDEVICE}) ${CONTAINERSDIR} xfs defaults,discard 0 2 >> /etc/fstab

mount -a

Note

The previous customization script should be named master.sh

The master instances are created using the following bash loop:

Master instances

# Disks multizone and single zone support
$ eval "$MYZONES_LIST"

$ for i in $(seq 0 $((${MASTER_NODE_COUNT}-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute disks create ${CLUSTERID}-master-${i}-etcd \
    --type=pd-ssd --size=${ETCDSIZE} --zone=${zone[$i]}
  gcloud compute disks create ${CLUSTERID}-master-${i}-containers \
    --type=pd-ssd --size=${MASTERCONTAINERSSIZE} --zone=${zone[$i]}
  gcloud compute disks create ${CLUSTERID}-master-${i}-local \
    --type=pd-ssd --size=${MASTERLOCALSIZE} --zone=${zone[$i]}
done

# Master instances multizone and single zone support
$ for i in $(seq 0 $((${MASTER_NODE_COUNT}-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances create ${CLUSTERID}-master-${i} \
    --async --machine-type=${MASTERSIZE} \
    --subnet=${CLUSTERID_SUBNET} \
    --address="" --no-public-ptr \
    --maintenance-policy=MIGRATE \
    --scopes=https://www.googleapis.com/auth/cloud.useraccounts.readonly,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol \
    --tags=${CLUSTERID}-master,${CLUSTERID}-node \
    --metadata "ocp-cluster=${CLUSTERID},${CLUSTERID}-type=master" \
    --image=${RHELIMAGE}  --image-project=${IMAGEPROJECT} \
    --boot-disk-size=${MASTERDISKSIZE} --boot-disk-type=pd-ssd \
    --boot-disk-device-name=${CLUSTERID}-master-${i} \
    --disk=name=${CLUSTERID}-master-${i}-etcd,device-name=${CLUSTERID}-master-${i}-etcd,mode=rw,boot=no \
    --disk=name=${CLUSTERID}-master-${i}-containers,device-name=${CLUSTERID}-master-${i}-containers,mode=rw,boot=no \
    --disk=name=${CLUSTERID}-master-${i}-local,device-name=${CLUSTERID}-master-${i}-local,mode=rw,boot=no \
    --metadata-from-file startup-script=./master.sh \
    --zone=${zone[$i]}
done

Note

The customization script is executed at boot time, as specified with the --metadata-from-file startup-script=./master.sh flag.

2.9.4. Infrastructure and application nodes

The following script needs to be created on the local workstation; it is used to customize the instances at boot time and is valid for both infrastructure and application nodes:

Node customization

$ vi node.sh
#!/bin/bash
LOCALVOLDEVICE=$(readlink -f /dev/disk/by-id/google-*local*)
CONTAINERSDEVICE=$(readlink -f /dev/disk/by-id/google-*containers*)
LOCALDIR="/var/lib/origin/openshift.local.volumes"
CONTAINERSDIR="/var/lib/docker"

for device in ${LOCALVOLDEVICE} ${CONTAINERSDEVICE}
do
  mkfs.xfs ${device}
done

for dir in ${LOCALDIR} ${CONTAINERSDIR}
do
  mkdir -p ${dir}
  restorecon -R ${dir}
done

echo UUID=$(blkid -s UUID -o value ${LOCALVOLDEVICE}) ${LOCALDIR} xfs defaults,discard,gquota 0 2 >> /etc/fstab
echo UUID=$(blkid -s UUID -o value ${CONTAINERSDEVICE}) ${CONTAINERSDIR} xfs defaults,discard 0 2 >> /etc/fstab

mount -a

Note

The previous customization script should be named node.sh

The infrastructure node instances are created using the following bash loop:

Infrastructure instances

# Disks multizone and single zone support
$ eval "$MYZONES_LIST"

$ for i in $(seq 0 $(($INFRA_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute disks create ${CLUSTERID}-infra-${i}-containers \
  --type=pd-ssd --size=${INFRACONTAINERSSIZE} --zone=${zone[$i]}
  gcloud compute disks create ${CLUSTERID}-infra-${i}-local \
  --type=pd-ssd --size=${INFRALOCALSIZE} --zone=${zone[$i]}
done

# Infrastructure instances multizone and single zone support
$ for i in $(seq 0 $(($INFRA_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances create ${CLUSTERID}-infra-${i} \
    --async --machine-type=${INFRASIZE} \
    --subnet=${CLUSTERID_SUBNET} \
    --address="" --no-public-ptr \
    --maintenance-policy=MIGRATE \
    --scopes=https://www.googleapis.com/auth/cloud.useraccounts.readonly,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol \
    --tags=${CLUSTERID}-infra,${CLUSTERID}-node,${CLUSTERID}ocp \
    --metadata "ocp-cluster=${CLUSTERID},${CLUSTERID}-type=infra" \
    --image=${RHELIMAGE}  --image-project=${IMAGEPROJECT} \
    --boot-disk-size=${INFRADISKSIZE} --boot-disk-type=pd-ssd \
    --boot-disk-device-name=${CLUSTERID}-infra-${i} \
    --disk=name=${CLUSTERID}-infra-${i}-containers,device-name=${CLUSTERID}-infra-${i}-containers,mode=rw,boot=no \
    --disk=name=${CLUSTERID}-infra-${i}-local,device-name=${CLUSTERID}-infra-${i}-local,mode=rw,boot=no \
    --metadata-from-file startup-script=./node.sh \
    --zone=${zone[$i]}
done

Note

The customization script is executed at boot time, as specified with the --metadata-from-file startup-script=./node.sh flag.

The application node instances are created using the following bash loop:

Application instances

# Disks multizone and single zone support
$ eval "$MYZONES_LIST"

$ for i in $(seq 0 $(($APP_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute disks create ${CLUSTERID}-app-${i}-containers \
  --type=pd-ssd --size=${APPCONTAINERSSIZE} --zone=${zone[$i]}
  gcloud compute disks create ${CLUSTERID}-app-${i}-local \
  --type=pd-ssd --size=${APPLOCALSIZE} --zone=${zone[$i]}
done

# Application instances multizone and single zone support
$ for i in $(seq 0 $(($APP_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances create ${CLUSTERID}-app-${i} \
    --async --machine-type=${APPSIZE} \
    --subnet=${CLUSTERID_SUBNET} \
    --address="" --no-public-ptr \
    --maintenance-policy=MIGRATE \
    --scopes=https://www.googleapis.com/auth/cloud.useraccounts.readonly,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol \
    --tags=${CLUSTERID}-node,${CLUSTERID}ocp \
    --metadata "ocp-cluster=${CLUSTERID},${CLUSTERID}-type=app" \
    --image=${RHELIMAGE}  --image-project=${IMAGEPROJECT} \
    --boot-disk-size=${APPDISKSIZE} --boot-disk-type=pd-ssd \
    --boot-disk-device-name=${CLUSTERID}-app-${i} \
    --disk=name=${CLUSTERID}-app-${i}-containers,device-name=${CLUSTERID}-app-${i}-containers,mode=rw,boot=no \
    --disk=name=${CLUSTERID}-app-${i}-local,device-name=${CLUSTERID}-app-${i}-local,mode=rw,boot=no \
    --metadata-from-file startup-script=./node.sh \
    --zone=${zone[$i]}
done

Note

The customization script is executed at boot time, as specified with the --metadata-from-file startup-script=./node.sh flag.

Note

Notice the differences in the tags, as different firewall rules need to be applied to different instances.

2.10. Creating Load Balancers

Load balancers are used to provide high availability for the Red Hat OpenShift Container Platform master and router services.

2.10.1. Master Load Balancer

The master load balancer requires a static public IP address which is used when specifying the OpenShift public hostname (refarch-ocp.gce.example.com). The load balancer uses probes to validate that instances in the backend pools are available.

As explained in the Section 1.1.4.3, “Load balancing” section, there are different kinds of Google Cloud Platform load balancers. TCP global load balancing is used for masters load balancing.

The load balancer uses probes to validate that instances in the backend pools are available. The probe request the "/healthz" endpoint of the Red Hat OpenShift Container Platform API at 443/TCP port.

Master load balancer health check

# Health check
$ gcloud compute health-checks create https ${CLUSTERID}-master-lb-healthcheck \
    --port 443 --request-path "/healthz" --check-interval=10s --timeout=10s \
    --healthy-threshold=3 --unhealthy-threshold=3

A backend service is created using the health check and a client IP affinity rule to ensure client connections are redirected to the same backend server as the initial connection, which avoids websocket communication timeouts.

Master load balancer backend

# Create backend and set client ip affinity to avoid websocket timeout
$ gcloud compute backend-services create ${CLUSTERID}-master-lb-backend \
    --global \
    --protocol TCP \
    --session-affinity CLIENT_IP \
    --health-checks ${CLUSTERID}-master-lb-healthcheck \
    --port-name ocp-api

An unmanaged instance group is created in every zone to host the master instances, and the groups are then added to the backend service.

Master load balancer backend addition

$ eval "$MYZONES_LIST"

# Multizone and single zone support for instance groups
$ for i in $(seq 0 $((${#ZONES[@]}-1))); do
  ZONE=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instance-groups unmanaged create ${CLUSTERID}-masters-${ZONE} \
    --zone=${ZONE}
  gcloud compute instance-groups unmanaged set-named-ports \
    ${CLUSTERID}-masters-${ZONE} --named-ports=ocp-api:443 --zone=${ZONE}
  gcloud compute instance-groups unmanaged add-instances \
    ${CLUSTERID}-masters-${ZONE} --instances=${CLUSTERID}-master-${i} \
    --zone=${ZONE}
  # Instances are added to the backend service
  gcloud compute backend-services add-backend ${CLUSTERID}-master-lb-backend \
    --global \
    --instance-group ${CLUSTERID}-masters-${ZONE} \
    --instance-group-zone ${ZONE}
done

The TCP proxy rule is created with no proxy header, so it is transparent, using the backend service previously created.

Master load balancer tcp proxy

# Do not set any proxy header to be transparent
$ gcloud compute target-tcp-proxies create ${CLUSTERID}-master-lb-target-proxy \
    --backend-service ${CLUSTERID}-master-lb-backend \
    --proxy-header NONE

Finally, create the forwarding rules and allow health checks to be performed from Google’s health check IPs.

Master load balancer forwarding rules

$ export LBIP=$(gcloud compute addresses list \
  --filter="name:${CLUSTERID}-master-lb" --format="value(address)")

# Forward only 443/tcp port
$ gcloud compute forwarding-rules create \
    ${CLUSTERID}-master-lb-forwarding-rule \
    --global \
    --target-tcp-proxy ${CLUSTERID}-master-lb-target-proxy \
    --address ${LBIP} \
    --ports 443

# Allow health checks from Google health check IPs
$ gcloud compute firewall-rules create ${CLUSTERID}-healthcheck-to-lb \
  --direction=INGRESS --priority=1000 --network=${CLUSTERID_NETWORK} \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --target-tags ${CLUSTERID}-master \
  --allow tcp:443

Note

See https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules for more information on health checks and Google IP addresses performing the health checks.
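
As an optional verification, once Red Hat OpenShift Container Platform is installed and the API is serving "/healthz" on 443/TCP, the backend health can be inspected (until then the masters report as unhealthy):

$ gcloud compute backend-services get-health \
    ${CLUSTERID}-master-lb-backend --global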

2.10.2. Applications Load Balancer

The applications load balancer requires a static public IP address which is used when specifying the applications wildcard public hostname (*.refarch-apps.gce.example.com). The load balancer uses probes to validate that instances in the backend pools are available.

As explained in the Section 1.1.4.3, “Load balancing” section, there are different kinds of Google Cloud Platform load balancers. Network load balancing is used for applications load balancing.

The load balancer uses probes to validate that instances in the backend pools are available. The probe request the "/healthz" endpoint of the Red Hat OpenShift Container Platform routers at 1936/TCP port (https probe)

Applications load balancer health check

# Health check
$ gcloud compute http-health-checks create ${CLUSTERID}-infra-lb-healthcheck \
    --port 1936 --request-path "/healthz" --check-interval=10s --timeout=10s \
    --healthy-threshold=3 --unhealthy-threshold=3

A target pool is created using the health check and infrastructure nodes are added.

Applications load balancer target pool

# Target Pool
$ gcloud compute target-pools create ${CLUSTERID}-infra \
    --http-health-check ${CLUSTERID}-infra-lb-healthcheck

$ for i in $(seq 0 $(($INFRA_NODE_COUNT-1))); do
  gcloud compute target-pools add-instances ${CLUSTERID}-infra \
  --instances=${CLUSTERID}-infra-${i}
done

Finally, create the forwarding rules to 80/TCP and 443/TCP.

Applications load balancer forwarding rules

# Forwarding rules and firewall rules
$ export APPSLBIP=$(gcloud compute addresses list \
  --filter="name:${CLUSTERID}-apps-lb" --format="value(address)")

$ gcloud compute forwarding-rules create ${CLUSTERID}-infra-http \
    --ports 80 \
    --address ${APPSLBIP} \
    --region ${REGION} \
    --target-pool ${CLUSTERID}-infra

$ gcloud compute forwarding-rules create ${CLUSTERID}-infra-https \
    --ports 443 \
    --address ${APPSLBIP} \
    --region ${REGION} \
    --target-pool ${CLUSTERID}-infra
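
Similarly, as an optional verification once the Red Hat OpenShift Container Platform routers are deployed on the infrastructure nodes, the target pool health can be checked:

$ gcloud compute target-pools get-health ${CLUSTERID}-infra \
    --region ${REGION}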

2.11. Creating Red Hat OpenShift Container Platform Registry Storage

To leverage Google Cloud Platform cloud storage, Red Hat OpenShift Container Platform uses a dedicated bucket to store images. Using Google Cloud Platform Cloud Storage allows the registry to grow dynamically in size without the need for intervention from an Administrator.

Registry Storage

# Bucket to host registry
$ gsutil mb -l ${REGION} gs://${CLUSTERID}-registry

$ cat <<EOF > labels.json
{
  "ocp-cluster": "${CLUSTERID}"
}
EOF

$ gsutil label set labels.json gs://${CLUSTERID}-registry

$ rm -f labels.json

Note

Google Cloud Platform buckets are global objects. This means that every bucket name must be unique. If the bucket creation process fails because the name is used, select a different bucket name.

2.12. Creating CNS Instances (Optional)

The following script needs to be created on the local workstation; it is used to customize the CNS instances at boot time:

CNS customization

$ vi cns.sh
#!/bin/bash
CONTAINERSDEVICE=$(readlink -f /dev/disk/by-id/google-*containers*)
CONTAINERSDIR="/var/lib/docker"

mkfs.xfs ${CONTAINERSDEVICE}
mkdir -p ${CONTAINERSDIR}
restorecon -R ${CONTAINERSDIR}

echo UUID=$(blkid -s UUID -o value ${CONTAINERSDEVICE}) ${CONTAINERSDIR} xfs defaults,discard 0 2 >> /etc/fstab

mount -a

Note

The previous customization script should be named cns.sh

The CNS node instances are created using the following bash loop:

CNS instances

# Disks multizone and single zone support
$ eval "$MYZONES_LIST"

$ for i in $(seq 0 $(($CNS_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute disks create ${CLUSTERID}-cns-${i}-containers \
  --type=pd-ssd --size=${CNSCONTAINERSSIZE} --zone=${zone[$i]}
  gcloud compute disks create ${CLUSTERID}-cns-${i}-gluster \
  --type=pd-ssd --size=${CNSGLUSTERSIZE} --zone=${zone[$i]}
done

# CNS instances multizone and single zone support
$ for i in $(seq 0 $(($CNS_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances create ${CLUSTERID}-cns-${i} \
    --async --machine-type=${CNSSIZE} \
    --subnet=${CLUSTERID_SUBNET} \
    --address="" --no-public-ptr \
    --maintenance-policy=MIGRATE \
    --scopes=https://www.googleapis.com/auth/cloud.useraccounts.readonly,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol \
    --tags=${CLUSTERID}-cns,${CLUSTERID}-node,${CLUSTERID}ocp \
    --metadata "ocp-cluster=${CLUSTERID},${CLUSTERID}-type=cns" \
    --image=${RHELIMAGE} --image-project=${IMAGEPROJECT} \
    --boot-disk-size=${CNSDISKSIZE} --boot-disk-type=pd-ssd \
    --boot-disk-device-name=${CLUSTERID}-cns-${i} \
    --disk=name=${CLUSTERID}-cns-${i}-containers,device-name=${CLUSTERID}-cns-${i}-containers,mode=rw,boot=no \
    --disk=name=${CLUSTERID}-cns-${i}-gluster,device-name=${CLUSTERID}-cns-${i}-gluster,mode=rw,boot=no \
    --metadata-from-file startup-script=./cns.sh \
    --zone=${zone[$i]}
done

CNS instances should have a minimum of 4 vCPUs and 32GB of RAM. By default, these instances only schedule the glusterfs pods, so the extra disks are used to host container images and to provide the glusterfs storage.

2.13. Removing startup scripts

To avoid rerunning the startup scripts and wiping the contents of the attached disks, the startup scripts are disabled after the first boot:

Disable startup scripts

$ eval "$MYZONES_LIST"

# Masters
$ for i in $(seq 0 $((${MASTER_NODE_COUNT}-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances remove-metadata \
    --keys startup-script ${CLUSTERID}-master-${i} --zone=${zone[$i]}
done

# Application nodes
$ for i in $(seq 0 $(($APP_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances remove-metadata \
    --keys startup-script ${CLUSTERID}-app-${i} --zone=${zone[$i]}
done

# Infrastructure nodes
$ for i in $(seq 0 $(($INFRA_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances remove-metadata \
    --keys startup-script ${CLUSTERID}-infra-${i} --zone=${zone[$i]}
done

# CNS nodes
$ for i in $(seq 0 $(($CNS_NODE_COUNT-1))); do
  zone[$i]=${ZONES[$i % ${#ZONES[@]}]}
  gcloud compute instances remove-metadata \
    --keys startup-script ${CLUSTERID}-cns-${i} --zone=${zone[$i]}
done

2.14. Configuring Bastion for Red Hat OpenShift Container Platform

The following subsections describe all the steps needed to use the bastion instance as a jump host to access the instances in the private subnet.

2.14.1. Configuring ~/.ssh/config to use Bastion as Jumphost

To easily connect to the Red Hat OpenShift Container Platform environment, follow the steps below.

On the local workstation, add the private key previously created to the ssh agent:

$ eval $(ssh-agent -s)
$ ssh-add /path/to/<keypair-name>
Identity added: /path/to/<keypair-name> (/path/to/<keypair-name>)

ssh into the bastion host with the -A option, which enables forwarding of the authentication agent connection.

$ ssh -A <myuser>@bastion
Note

As the instances use the Linux Guest Environment for Google Compute Engine instead of cloud-init, the user is created at boot time and the ssh key injected. This means that in order to access the instance, the keypair user should be used instead of the default 'cloud-user' that comes with Red Hat Enterprise Linux.

Once logged into the bastion host, verify the ssh agent forwarding is working by checking for SSH_AUTH_SOCK:

$ echo "$SSH_AUTH_SOCK"
/tmp/ssh-NDFDQD02qB/agent.1387

Attempt to ssh into one of the Red Hat OpenShift Container Platform instances using the ssh agent forwarding.

Note

No password should be prompted if working properly.

$ ssh ${CLUSTERID}-master-2

After verifying the previous steps work, and in order to simplify the process, the following snippet can be added to the ~/.ssh/config file:

Host bastion
    HostName        ${CLUSTERID}-bastion.${DOMAIN}
    User            <myuser>
    IdentityFile    /path/to/<keypair-name>
    ForwardAgent     yes

Host ${CLUSTERID}-*
    ProxyCommand    ssh <myuser>@bastion -W %h:%p
    IdentityFile    /path/to/<keypair-name>
    User            <myuser>
Note

keypair-name needs to be replaced with the proper keypair (~/.ssh/gcp_key in the previous examples) as well as the proper user.

Then, from the local workstation, ssh can 'jump' to any instance, as ssh uses the bastion host as a jump host:

$ ssh ${CLUSTERID}-master-2

2.15. Red Hat OpenShift Container Platform Prerequisites

Once the instances have been deployed and the ~/.ssh/config file reflects the deployment, the following steps should be performed to prepare for the installation of Red Hat OpenShift Container Platform.

2.15.1. OpenShift Authentication

Red Hat OpenShift Container Platform provides the ability to use many different authentication platforms. For this reference architecture, Google’s OpenID Connect Integration is used. A listing of other authentication options is available at Configuring Authentication and User Agent.

When configuring the authentication, the following parameters must be added to the ansible inventory. An example is shown below.

openshift_master_identity_providers=[{'name': 'google', 'challenge': 'false', 'login': 'true', 'kind': 'GoogleIdentityProvider', 'mapping_method': 'claim', 'clientID': '246358064255-5ic2e4b1b9ipfa7hddfkhuf8s6eq2rfj.apps.googleusercontent.com', 'clientSecret': 'Za3PWZg7gQxM26HBljgBMBBF', 'hostedDomain': 'redhat.com'}]
Note

Multiple authentication providers can be supported depending on the needs of the end users.

Note

The following tasks should be performed on the bastion host.

2.15.2. Ansible Setup

Register the bastion host and install the required packages.

Bastion preparation

$ sudo subscription-manager register --username <rhuser> --password <pass>
# or if using activation key
# sudo subscription-manager register --activationkey=<ak> --org=<orgid>
$ sudo subscription-manager attach --pool=<poolid>
$ sudo subscription-manager repos --disable="*" \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms"

$ sudo yum install atomic-openshift-utils
# Optionally, update to the latest packages and reboot the host
$ sudo yum update
$ sudo reboot

It is recommended to configure ansible.cfg as follows:

[defaults]
forks = 20
host_key_checking = False
remote_user = myuser
roles_path = roles/
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 600
log_path = $HOME/ansible.log
nocows = 1
callback_whitelist = profile_tasks

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=false -o ForwardAgent=yes
control_path = %(directory)s/%%h-%%r
pipelining = True
timeout = 10

[persistent_connection]
connect_timeout = 30
connect_retries = 30
connect_interval = 1
Note

See Ansible Install Optimization for more information.
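
To confirm that Ansible picks up the intended configuration, the version output can be inspected; this quick check assumes ansible.cfg was created in the directory from which the playbooks will be run:

$ ansible --version
# the "config file =" line should point to the ansible.cfg created above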

2.15.3. Inventory File

This section provides an example inventory file required for an advanced installation of Red Hat OpenShift Container Platform.

The inventory file contains both variables and instances used for the configuration and deployment of Red Hat OpenShift Container Platform.

Warning

The following inventory needs to be adjusted to match the environment about to be deployed, and the bash variables must be substituted with real values.

$ vi inventory
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_become=true
openshift_release=v3.9
os_firewall_use_firewalld=True
openshift_clock_enabled=true

openshift_cloudprovider_kind=gce
openshift_gcp_project=${PROJECTID}
openshift_gcp_prefix=${CLUSTERID}
# If deploying single zone cluster set to "False"
openshift_gcp_multizone="True"
openshift_gcp_network_name=${CLUSTERID}-net

openshift_master_api_port=443
openshift_master_console_port=443

openshift_node_local_quota_per_fsgroup=512Mi

openshift_hosted_registry_replicas=1
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=${CLUSTERID}-registry

openshift_master_cluster_method=native
openshift_master_cluster_hostname=${CLUSTERID}-ocp.${DOMAIN}
openshift_master_cluster_public_hostname=${CLUSTERID}-ocp.${DOMAIN}
openshift_master_default_subdomain=${CLUSTERID}-apps.${DOMAIN}

os_sdn_network_plugin_name=redhat/openshift-ovs-networkpolicy

deployment_type=openshift-enterprise

# Required per https://access.redhat.com/solutions/3480921
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_s3_image=registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7

# Service catalog
openshift_hosted_etcd_storage_kind=dynamic
openshift_hosted_etcd_storage_volume_name=etcd-vol
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

# Metrics
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_storage_volume_size=20Gi
openshift_metrics_cassandra_nodeselector={"region":"infra"}
openshift_metrics_hawkular_nodeselector={"region":"infra"}
openshift_metrics_heapster_nodeselector={"region":"infra"}

# Aggregated logging
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=100Gi
openshift_logging_es_cluster_size=3
openshift_logging_es_nodeselector={"region":"infra"}
openshift_logging_kibana_nodeselector={"region":"infra"}
openshift_logging_curator_nodeselector={"region":"infra"}
openshift_logging_es_number_of_replicas=1

openshift_master_identity_providers=[{'name': 'google', 'challenge': 'false', 'login': 'true', 'kind': 'GoogleIdentityProvider', 'mapping_method': 'claim', 'clientID': '246358064255-5ic2e4b1b9ipfa7hddfkhuf8s6eq2rfj.apps.googleusercontent.com', 'clientSecret': 'Za3PWZg7gQxM26HBljgBMBBF', 'hostedDomain': 'redhat.com'}]

[masters]
${CLUSTERID}-master-0
${CLUSTERID}-master-1
${CLUSTERID}-master-2

[etcd]
${CLUSTERID}-master-0
${CLUSTERID}-master-1
${CLUSTERID}-master-2

[nodes]
${CLUSTERID}-master-0 openshift_node_labels="{'region': 'master'}"
${CLUSTERID}-master-1 openshift_node_labels="{'region': 'master'}"
${CLUSTERID}-master-2 openshift_node_labels="{'region': 'master'}"
${CLUSTERID}-infra-0 openshift_node_labels="{'region': 'infra', 'node-role.kubernetes.io/infra': 'true'}"
${CLUSTERID}-infra-1 openshift_node_labels="{'region': 'infra', 'node-role.kubernetes.io/infra': 'true'}"
${CLUSTERID}-infra-2 openshift_node_labels="{'region': 'infra', 'node-role.kubernetes.io/infra': 'true'}"
${CLUSTERID}-app-0 openshift_node_labels="{'region': 'apps'}"
${CLUSTERID}-app-1 openshift_node_labels="{'region': 'apps'}"
${CLUSTERID}-app-2 openshift_node_labels="{'region': 'apps'}"
Note

The openshift_gcp_* values are required for Red Hat OpenShift Container Platform to be able to create Google Cloud Platform resources such as disks for persistent volumes or LoadBalancer type services.
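
Before moving on, it can be useful to confirm that the inventory parses and that each group contains the expected hosts. A minimal check, assuming the file is named inventory in the current directory:

$ ansible -i inventory masters --list-hosts
$ ansible -i inventory nodes --list-hosts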

2.15.3.1. CNS Inventory (Optional)

If CNS is used in the Red Hat OpenShift Container Platform installation, specific variables must be set in the inventory.

$ vi inventory
[OSEv3:children]
masters
etcd
nodes
glusterfs

....omitted...

[nodes]
....omitted...
${CLUSTERID}-cns-0 openshift_node_labels="{'region': 'cns', 'node-role.kubernetes.io/cns': 'true'}"
${CLUSTERID}-cns-1 openshift_node_labels="{'region': 'cns', 'node-role.kubernetes.io/cns': 'true'}"
${CLUSTERID}-cns-2 openshift_node_labels="{'region': 'cns', 'node-role.kubernetes.io/cns': 'true'}"

[glusterfs]
${CLUSTERID}-cns-0 glusterfs_devices='[ "/dev/disk/by-id/google-${CLUSTERID}-cns-0-gluster" ]' openshift_node_local_quota_per_fsgroup=""
${CLUSTERID}-cns-1 glusterfs_devices='[ "/dev/disk/by-id/google-${CLUSTERID}-cns-1-gluster" ]' openshift_node_local_quota_per_fsgroup=""
${CLUSTERID}-cns-2 glusterfs_devices='[ "/dev/disk/by-id/google-${CLUSTERID}-cns-2-gluster" ]' openshift_node_local_quota_per_fsgroup=""
Note

As the CNS nodes are not intended to run pods with local storage and they do not have a dedicated disk for /var/lib/origin/openshift.local.volumes, openshift_node_local_quota_per_fsgroup must be disabled on those nodes; otherwise the installation complains about the filesystem not being mounted with the gquota option.
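
To verify this requirement on the nodes that do use local quota, an ad-hoc check such as the following can be run; it assumes the dedicated disk was mounted by the instance startup scripts as in the previous sections, and the gquota option should appear in the mount output for /var/lib/origin/openshift.local.volumes:

$ ansible '*-app-*:*-infra-*' -i inventory -b -m shell -a \
    "mount | grep openshift.local.volumes"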

The following is an example of a full inventory including CNS nodes:

[OSEv3:children]
masters
etcd
nodes
glusterfs

[OSEv3:vars]
ansible_become=true
openshift_release=v3.9
os_firewall_use_firewalld=True
openshift_clock_enabled=true

openshift_cloudprovider_kind=gce
openshift_gcp_project=refarch-204310
openshift_gcp_prefix=refarch
# If deploying single zone cluster set to "False"
openshift_gcp_multizone="True"
openshift_gcp_network_name=refarch-net

openshift_master_api_port=443
openshift_master_console_port=443

openshift_node_local_quota_per_fsgroup=512Mi

openshift_hosted_registry_replicas=1
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=refarch-registry

openshift_master_cluster_method=native
openshift_master_cluster_hostname=refarch.gce.example.com
openshift_master_cluster_public_hostname=refarch.gce.example.com
openshift_master_default_subdomain=refarch-apps.gce.example.com

os_sdn_network_plugin_name=redhat/openshift-ovs-networkpolicy

deployment_type=openshift-enterprise

# Required per https://access.redhat.com/solutions/3480921
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_s3_image=registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7

# Service catalog
openshift_hosted_etcd_storage_kind=dynamic
openshift_hosted_etcd_storage_volume_name=etcd-vol
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

# Metrics
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_storage_volume_size=20Gi
openshift_metrics_cassandra_nodeselector={"region":"infra"}
openshift_metrics_hawkular_nodeselector={"region":"infra"}
openshift_metrics_heapster_nodeselector={"region":"infra"}

# Aggregated logging
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=100Gi
openshift_logging_es_cluster_size=3
openshift_logging_es_nodeselector={"region":"infra"}
openshift_logging_kibana_nodeselector={"region":"infra"}
openshift_logging_curator_nodeselector={"region":"infra"}
openshift_logging_es_number_of_replicas=1

openshift_master_identity_providers=[{'name': 'google', 'challenge': 'false', 'login': 'true', 'kind': 'GoogleIdentityProvider', 'mapping_method': 'claim', 'clientID': '246358064255-5ic2e4b1b9ipfa7hddfkhuf8s6eq2rfj.apps.googleusercontent.com', 'clientSecret': 'Za3PWZg7gQxM26HBljgBMBBF', 'hostedDomain': 'redhat.com'}]

[masters]
refarch-master-0
refarch-master-1
refarch-master-2

[etcd]
refarch-master-0
refarch-master-1
refarch-master-2

[nodes]
refarch-master-0 openshift_node_labels="{'region': 'master'}"
refarch-master-1 openshift_node_labels="{'region': 'master'}"
refarch-master-2 openshift_node_labels="{'region': 'master'}"
refarch-infra-0 openshift_node_labels="{'region': 'infra', 'node-role.kubernetes.io/infra': 'true'}"
refarch-infra-1 openshift_node_labels="{'region': 'infra', 'node-role.kubernetes.io/infra': 'true'}"
refarch-infra-2 openshift_node_labels="{'region': 'infra', 'node-role.kubernetes.io/infra': 'true'}"
refarch-app-0 openshift_node_labels="{'region': 'apps'}"
refarch-app-1 openshift_node_labels="{'region': 'apps'}"
refarch-app-2 openshift_node_labels="{'region': 'apps'}"
refarch-cns-0 openshift_node_labels="{'region': 'cns', 'node-role.kubernetes.io/cns': 'true'}"
refarch-cns-1 openshift_node_labels="{'region': 'cns', 'node-role.kubernetes.io/cns': 'true'}"
refarch-cns-2 openshift_node_labels="{'region': 'cns', 'node-role.kubernetes.io/cns': 'true'}"

[glusterfs]
refarch-cns-0 glusterfs_devices='[ "/dev/disk/by-id/google-refarch-cns-0-gluster" ]' openshift_node_local_quota_per_fsgroup=""
refarch-cns-1 glusterfs_devices='[ "/dev/disk/by-id/google-refarch-cns-1-gluster" ]' openshift_node_local_quota_per_fsgroup=""
refarch-cns-2 glusterfs_devices='[ "/dev/disk/by-id/google-refarch-cns-2-gluster" ]' openshift_node_local_quota_per_fsgroup=""

2.15.4. Node Registration

Now that the inventory has been created, the nodes must be subscribed using subscription-manager.

The ad-hoc commands below use the redhat_subscription module to register the instances. The first example uses the numeric pool value for the Red Hat OpenShift Container Platform subscription. The second uses an activation key and organization.

$ ansible nodes -i inventory -b -m redhat_subscription -a \
  "state=present username=USER password=PASSWORD pool_ids=NUMERIC_POOLID"

OR

$ ansible nodes -i inventory -b -m redhat_subscription -a \
  "state=present activationkey=KEY org_id=ORGANIZATION pool_ids=NUMERIC_POOLID"

2.15.5. Repository Setup

Once the instances are registered, the proper repositories must be enabled on the instances so that the packages required by Red Hat OpenShift Container Platform can be installed.

$ ansible nodes -i inventory -b -m shell -a \
    'subscription-manager repos --disable="*" \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms"'

2.15.6. Preflight checks and other configurations

By default, the health check port (1936/tcp) on the infrastructure nodes is blocked by iptables rules. The following command opens it using the firewalld module:

$ ansible *-infra-* -i inventory -b -m firewalld -a \
    "port=1936/tcp permanent=true state=enabled"

If using LoadBalancer type services, port 10256/tcp should be opened for health checks:

$ ansible nodes -i inventory -b -m firewalld -a \
    "port=10256/tcp permanent=true state=enabled"
Note

Red Hat is investigating this requirement in this Bugzilla, and it is expected to be addressed in future Red Hat OpenShift Container Platform releases.
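
The firewalld commands above only make the rules permanent, so they take effect once the firewall is reloaded or the node is rebooted (which happens in the next step). If the rules should also be applied to the running firewall right away, the module's immediate parameter can be added, for example:

$ ansible nodes -i inventory -b -m firewalld -a \
    "port=10256/tcp permanent=true immediate=true state=enabled"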

It is recommended to update all packages to the latest version and reboot the nodes before the installation to ensure all services are using the updated bits:

$ ansible all -i inventory -b -m yum -a "name=* state=latest"
$ ansible all -i inventory -b -m command -a "reboot"
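
The reboot drops the ssh connections, so the ad-hoc command may report the hosts as unreachable; that is expected. Before continuing, wait for the instances to come back, for example with the wait_for_connection module:

$ ansible all -i inventory -m wait_for_connection -a "delay=30 timeout=300"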

It can be useful to check for potential issues or misconfigurations in the instances before continuing the installation process. Connect to every instance using the bastion host, verify the disks are properly created and mounted, and check the log files for potential errors to ensure everything is ready for the Red Hat OpenShift Container Platform installation:

$ ssh <instance>
$ lsblk
$ mount
$ free -m
$ cat /proc/cpuinfo
$ sudo journalctl
$ sudo yum repolist

2.15.7. Red Hat OpenShift Container Platform Prerequisites Playbook

The Red Hat OpenShift Container Platform Ansible installation provides a playbook to ensure all prerequisites are met prior to the installation of Red Hat OpenShift Container Platform. This includes steps such as registering all the nodes with Red Hat Subscription Manager and setting up the container storage on the container image volumes.

Using the ansible-playbook command on the bastion instance, ensure all the prerequisites are met by running the prerequisites.yml playbook:

$ ansible-playbook -i inventory \
    /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
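
After the playbook completes, a quick check can confirm that the container runtime and its storage were configured on every node; the storage driver reported depends on the container storage setup used for the environment:

$ ansible nodes -i inventory -b -m command -a "docker info"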