Chapter 2. Red Hat OpenShift Container Platform Instance Prerequisites

A successful deployment of Red Hat OpenShift Container Platform requires many prerequisites. These consist of the components deployed in Microsoft Azure and the configuration steps that must be completed before the actual installation of Red Hat OpenShift Container Platform using Ansible. The subsequent sections discuss the prerequisites and configuration changes required for a Red Hat OpenShift Container Platform on Microsoft Azure environment in detail.

2.1. Azure CLI Setup

The Microsoft Azure CLI can be used to deploy all of the components associated with this reference environment. It is one of many options for deploying instances and load balancers, creating network security groups, and creating any accounts required to successfully deploy a fully functional Red Hat OpenShift Container Platform. To install the Azure CLI, follow the steps at https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-yum?view=azure-cli-latest.

Once the Azure CLI has been installed, follow the directions at https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest to authenticate to Microsoft Azure.
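
For example, an interactive login followed by selecting the subscription used in this reference environment looks similar to the following:

# az login

# az account set \
    --subscription 8227d1d9-c10c-4366-86cc-e3ddbbcbba1d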

2.2. SSH key

If the user performing the deployment does not currently have a public and private SSH key pair, generate one as follows.

$ ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa

2.3. Creating a Resource Group

Microsoft Azure resource groups contain all of the components deployed for an environment. The resource group also defines where these items are deployed geographically.

For this reference environment deployment, the resource group is named openshift and is deployed in the "East US" location.

Warning

Do not attempt to deploy instances in a different geographic location than the resource group resides.

# az group create \
    --name openshift \
    --location "East US"

2.4. Creating a Red Hat Enterprise Linux Base Image

The Red Hat Enterprise Linux image provided by Microsoft Azure can be used for the deployment of OpenShift, but instances deployed from this image incur a higher cost because the image carries its own Red Hat Enterprise Linux subscription. To save cost, the link below can be used to upload an image of Red Hat Enterprise Linux. For this particular reference environment, the image used is Red Hat Enterprise Linux 7.5.

To create an image, follow the steps at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/redhat-create-upload-vhd.
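
As a minimal sketch, assuming the Red Hat Enterprise Linux 7.5 VHD has already been uploaded to a storage account following the steps above (the storage account, container, and image names below are placeholders), a managed image can then be created from the uploaded VHD:

# az image create \
    --resource-group openshift \
    --name rhel75 \
    --os-type Linux \
    --source https://<storage-account>.blob.core.windows.net/vhds/rhel75.vhd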

2.5. Creation of Red Hat OpenShift Container Platform Networks

A virtual network and subnet are created to allow virtual machines to be launched. The addresses below can be modified to suit the requirements of an organization.

# az network vnet create \
    --name openshiftvnet \
    --resource-group openshift \
    --subnet-name ocp \
    --address-prefix 10.0.0.0/16 \
    --subnet-prefix 10.0.0.0/24

2.6. Creating Network Security Groups

Microsoft Azure network security groups (NSGs) allow the user to define inbound and outbound traffic filters that can be applied to each instance on a network. This allows the user to limit network traffic to each instance based on the function of the instance services rather than depending on host-based filtering.

This section describes the ports and services required for each type of host and how to create the security groups on Microsoft Azure.

The following table shows the security group association to every instance type:

Table 2.1. Security Group association

Instance type           Security groups associated
Bastion                 bastion-nsg
Masters                 master-nsg
Infra nodes             infra-node-nsg
App nodes               node-nsg
CNS nodes (Optional)    cns-nsg

2.6.1. Bastion Security Group

The bastion instance only needs to allow inbound SSH. This instance exists to serve as the jump host between the private subnet and the public internet.

Table 2.2. Bastion Security Group TCP ports

Port/Protocol    Service    Remote source    Purpose
22/TCP           SSH        Any              Secure shell login

Creation of the above security group is as follows:

# az network nsg create \
    --resource-group openshift \
    --name bastion-nsg \
    --tags bastion_nsg

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name bastion-nsg \
    --name bastion-nsg-ssh  \
    --priority 500 \
    --destination-port-ranges 22 \
    --access Allow --protocol Tcp \
    --description "SSH access from Internet"

2.6.2. Master Security Group

The Red Hat OpenShift Container Platform master security group requires the most complex network access controls. In addition to the ports used by the API and master console, these nodes contain the etcd servers that form the cluster.

Table 2.3. Master Host Security Group Ports

Port/Protocol    Service    Remote source        Purpose
2379/TCP         etcd       Masters              Client → Server connections
2380/TCP         etcd       Masters              Server → Server cluster communications
8053/TCP         DNS        Masters and nodes    Internal name services (3.2+)
8053/UDP         DNS        Masters and nodes    Internal name services (3.2+)
443/TCP          HTTPS      Any                  Master WebUI and API

Note

As master nodes are in fact nodes, the same rules used for OpenShift nodes are applied to them as well.

Creation of the above security group is as follows:

# az network nsg create \
    --resource-group openshift \
    --name master-nsg \
    --tags master_security_group

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name master-ssh  \
    --priority 500 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 22 \
    --access Allow --protocol Tcp \
    --description "SSH from the bastion"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name master-etcd  \
    --priority 525 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 2379 2380 \
    --access Allow --protocol Tcp \
    --description "ETCD service ports"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name master-api \
    --priority 550 \
    --destination-port-ranges 443 \
    --access Allow --protocol Tcp \
    --description "API port"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name master-api-lb \
    --source-address-prefixes VirtualNetwork \
    --priority 575 \
    --destination-port-ranges 443 \
    --access Allow --protocol Tcp \
    --description "API port"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name master-ocp-tcp  \
    --priority 600 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 8053 \
    --access Allow --protocol Tcp \
    --description "TCP DNS and fluentd"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name master-ocp-udp  \
    --priority 625 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 8053 \
    --access Allow --protocol Udp \
    --description "UDP DNS and fluentd"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name node-kubelet  \
    --priority 650 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 10250 \
    --access Allow --protocol Tcp \
    --description "kubelet"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name master-nsg \
    --name node-sdn  \
    --priority 675 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 4789 \
    --access Allow --protocol Udp \
    --description "OpenShift sdn"

2.6.3. Infrastructure Node Security Group

The infrastructure nodes run the Red Hat OpenShift Container Platform router and the registry. The security group must accept inbound connections on the web ports to be forwarded to their destinations.

Table 2.4. Infrastructure Node Security Group Ports

Port/Protocol    Services         Remote source    Purpose
80/TCP           HTTP             Any              Cleartext application web traffic
443/TCP          HTTPS            Any              Encrypted application web traffic
9200/TCP         ElasticSearch    Any              ElasticSearch API
9300/TCP         ElasticSearch    Any              Internal cluster use

Creation of the above security group is as follows:

# az network nsg create \
    --resource-group openshift \
    --name infra-node-nsg \
    --tags infra_security_group

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name infra-node-nsg \
    --name infra-ssh  \
    --priority 500 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 22  \
    --access Allow --protocol Tcp \
    --description "SSH from the bastion"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name infra-node-nsg \
    --name router-ports  \
    --priority 525 \
    --source-address-prefixes AzureLoadBalancer \
    --destination-port-ranges 80 443  \
    --access Allow --protocol Tcp \
    --description "OpenShift router"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name infra-node-nsg \
    --name infra-ports  \
    --priority 550 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 9200 9300 \
    --access Allow --protocol Tcp \
    --description "ElasticSearch"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name infra-node-nsg \
    --name node-kubelet  \
    --priority 575 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 10250 \
    --access Allow --protocol Tcp \
    --description "kubelet"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name infra-node-nsg \
    --name node-sdn  \
    --priority 600 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 4789 \
    --access Allow --protocol Udp \
    --description "OpenShift sdn"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name infra-node-nsg \
    --name router-ports-internet \
    --priority 625 \
    --destination-port-ranges 80 443 \
    --access Allow --protocol Tcp \
    --description "OpenShift router"

2.6.4. Node Security Group

The node security group is assigned to application instances. The rules defined only allow for SSH traffic from the bastion host or other nodes, pod-to-pod communication via the SDN, and kubelet communication via Kubernetes.

Table 2.5. Node Security Group Ports

Port/Protocol    Services        Remote source    Purpose
22/TCP           SSH             Bastion          Secure shell login
4789/UDP         SDN             Nodes            Pod to pod communications
10250/TCP        kubernetes      Nodes            Kubelet communications
10256/TCP        Health Check    Nodes            External LB health check

Creation of the above security group is as follows:

# az network nsg create \
    --resource-group openshift \
    --name node-nsg \
    --tags node_security_group

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name node-nsg \
    --name node-ssh  \
    --priority 500 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 22 \
    --access Allow --protocol Tcp \
    --description "SSH from the bastion"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name node-nsg \
    --name node-kubelet \
    --priority 525 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 10250 \
    --access Allow --protocol Tcp \
    --description "kubelet"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name node-nsg \
    --name node-sdn  \
    --priority 550 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 4789 --access Allow \
    --protocol Udp \
    --description "ElasticSearch and ocp apps"
# az network nsg rule create \
    --resource-group openshift \
    --nsg-name node-nsg \
    --name node-sdn  \
    --priority 575 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 10256 --access Allow \
    --protocol Tcp \
    --description "Load Balancer health check"

2.6.5. CNS Node Security Group (Optional)

The CNS nodes require the same ports as the nodes but also require specific ports for the glusterfs pods. The rules defined only allow for SSH traffic from the bastion host or other nodes, glusterfs communication, pod-to-pod communication via the SDN, and kubelet communication via Kubernetes.

Table 2.6. CNS Security Group Ports

Port/Protocol      Services      Remote source    Purpose
22/TCP             SSH           Bastion          Secure shell login
111/TCP            Gluster       Gluster Nodes    Portmap
111/UDP            Gluster       Gluster Nodes    Portmap
2222/TCP           Gluster       Gluster Nodes    CNS communication
3260/TCP           Gluster       Gluster Nodes    Gluster Block
4789/UDP           SDN           Nodes            Pod to pod communications
10250/TCP          kubernetes    Nodes            Kubelet communications
24007/TCP          Gluster       Gluster Nodes    Gluster Daemon
24008/TCP          Gluster       Gluster Nodes    Gluster Management
24010/TCP          Gluster       Gluster Nodes    Gluster Block
49152-49664/TCP    Gluster       Gluster Nodes    Gluster Client Ports

Creation of the above security group is as follows:

# az network nsg create \
    --resource-group openshift \
    --name cns-nsg \
    --tags node_security_group

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name cns-nsg \
    --name node-ssh  \
    --priority 500 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 22 \
    --access Allow --protocol Tcp \
    --description "SSH from the bastion"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name cns-nsg \
    --name node-kubelet \
    --priority 525 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 10250 \
    --access Allow --protocol Tcp \
    --description "kubelet"

# az network nsg rule create \
    --resource-group openshift \
    --nsg-name cns-nsg \
    --name node-sdn  \
    --priority 550 \
    --source-address-prefixes VirtualNetwork \
    --destination-port-ranges 4789 --access Allow \
    --protocol Udp \
    --description "ElasticSearch and ocp apps"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name gluster-ssh  \
      --priority 575 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges 2222 \
      --access Allow --protocol Tcp \
      --description "Gluster SSH"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name gluster-daemon  \
      --priority 600 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges 24007 \
      --access Allow --protocol Tcp \
      --description "Gluster Daemon"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name gluster-mgmt  \
      --priority 625 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges 24008 \
      --access Allow --protocol Tcp \
      --description "Gluster Management"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name gluster-client  \
      --priority 650 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges  49152-49664 \
      --access Allow --protocol Tcp \
      --description "Gluster Clients"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name portmap-tcp  \
      --priority 675 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges  111 \
      --access Allow --protocol Tcp \
      --description "Portmap tcp"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name portmap-udp  \
      --priority 700 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges  111 \
      --access Allow --protocol Udp \
      --description "Portmap udp"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name gluster-iscsi  \
      --priority 725 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges  3260 \
      --access Allow --protocol Tcp \
      --description "Gluster Blocks"

# az network nsg rule create \
      --resource-group openshift \
      --nsg-name cns-nsg \
      --name gluster-block  \
      --priority 750 \
      --source-address-prefixes VirtualNetwork \
      --destination-port-ranges  24010 \
      --access Allow --protocol Tcp \
      --description "Gluster Block"

2.7. OpenShift Load Balancers

Load balancers are used to provide high availability for the Red Hat OpenShift Container Platform master and router services.

2.7.1. Master Load Balancer

The master load balancer requires a static public IP address, which is used when specifying the OpenShift public hostname (openshift-master.example.com). The load balancer uses probes to validate that instances in the backend pools are available.

The first step when working with load balancers on Microsoft Azure is to request a static IP address to be used for the load balancer.

# az network public-ip create \
    --resource-group openshift \
    --name masterExternalLB \
    --allocation-method Static

Using the static IP address, create a load balancer to be used by the master instances.

# az network lb create \
    --resource-group openshift \
    --name OcpMasterLB \
    --public-ip-address masterExternalLB \
    --frontend-ip-name masterApiFrontend \
    --backend-pool-name masterAPIBackend

The load balancer uses probes to validate that instances in the backend pools are available. The probe attempts to connect to TCP port 443, which is used by the OpenShift API.

# az network lb probe create \
    --resource-group openshift \
    --lb-name OcpMasterLB \
    --name masterHealthProbe \
    --protocol tcp \
    --port 443

Lastly, create the load balancing rule. This rule allows the load balancer to accept traffic on port 443 and forward the requests to the Red Hat OpenShift Container Platform master API and web console containers. The load-distribution variable ensures that client connections are redirected to the same backend server as the initial connection.

# az network lb rule create \
    --resource-group openshift \
    --lb-name OcpMasterLB \
    --name ocpApiHealth \
    --protocol tcp --frontend-port 443 \
    --backend-port 443 \
    --frontend-ip-name masterApiFrontend \
    --backend-pool-name masterAPIBackend \
    --probe-name masterHealthProbe \
    --load-distribution SourceIPProtocol

2.7.2. Router Load Balancer

The router load balancer is used by the infrastructure instances hosting the router pods. This load balancer requires the use of a static public IP address. The static IP address is used for the OpenShift subdomain (*.apps.example.com) for routing to application containers.

The first step when working with load balancers on Microsoft Azure is to request a static IP address to be used for the load balancer.

# az network public-ip create \
    --resource-group openshift \
    --name routerExternalLB \
    --allocation-method Static

Using the static IP address, create a load balancer to be used by the infrastructure instances.

# az network lb create \
    --resource-group openshift \
    --name OcpRouterLB \
    --public-ip-address routerExternalLB \
    --frontend-ip-name routerFrontEnd \
    --backend-pool-name routerBackEnd

The load balancer uses probes to validate that instances in the backend pools are available. The probe will attempt to connect to TCP port 80.

# az network lb probe create \
    --resource-group openshift \
    --lb-name OcpRouterLB \
    --name routerHealthProbe \
    --protocol tcp \
    --port 80

The final step in configuring the load balancer is to create the routing rules. The rules created allow the load balancer to listen on ports 80 and 443 and pass traffic to the infrastructure instances hosting the HAProxy containers for routing. For simplicity, the probe routerHealthProbe is used for both rule definitions because the HAProxy container serves both ports 80 and 443, so if the service is unavailable on port 80 it is unavailable on port 443 as well.

# az network lb rule create \
    --resource-group openshift \
    --lb-name OcpRouterLB \
    --name routerRule \
    --protocol tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --frontend-ip-name routerFrontEnd \
    --backend-pool-name routerBackEnd \
    --probe-name routerHealthProbe \
    --load-distribution SourceIPProtocol

# az network lb rule create \
    --resource-group openshift \
    --lb-name OcpRouterLB \
    --name httpsRouterRule \
    --protocol tcp \
    --frontend-port 443 \
    --backend-port 443 \
    --frontend-ip-name routerFrontEnd \
    --backend-pool-name routerBackEnd \
    --probe-name routerHealthProbe \
    --load-distribution SourceIPProtocol

2.8. Creating Instances for Red Hat OpenShift Container Platform

This reference environment consists of the following instances:

  • one bastion instance
  • three master instances
  • three infrastructure instances
  • three application instances

etcd requires that an odd number of cluster members exist. Three masters were chosen to support high availability and etcd clustering. Three infrastructure instances allow for minimal to zero downtime for applications running in the OpenShift environment. Application instances can range from one to many depending on the requirements of the organization.

Note

Always check resource quotas before the deployment of resources. Support requests can be submitted to request the ability to deploy additional instance types or to increase the total number of CPUs.
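
For example, the current vCPU usage and limits for the region used by the resource group can be reviewed with the Azure CLI:

# az vm list-usage \
    --location "East US" \
    --output table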

2.8.1. Availability Sets

Availability sets are created for the master, infrastructure, and application instances. The availability sets are used in Red Hat OpenShift Container Platform to define which nodes should be part of the load balanced OpenShift services.

# az vm availability-set create \
    --resource-group openshift \
    --name ocp-app-instances

# az vm availability-set create \
    --resource-group openshift \
    --name ocp-infra-instances

# az vm availability-set create \
    --resource-group openshift \
    --name ocp-master-instances

2.8.2. Master Instance Creation

The bash command line below runs a for loop that allows all of the master instances to be created with their respective mounts, instance size, and network security groups configured at launch time.

Important

Master instances should have a minimum of 4 vCPUs and 16 GB of RAM. This reference environment uses Standard Disks and specific sizes of S4 managed disks; for more information, visit https://docs.microsoft.com/en-us/azure/virtual-machines/windows/about-disks-and-vhds.

# for i in 1 2 3; do \
    az network nic create \
    --resource-group openshift \
    --name ocp-master-${i}VMNic \
    --vnet-name openshiftvnet \
    --subnet ocp \
    --network-security-group master-nsg \
    --lb-name OcpMasterLB \
    --lb-address-pools masterAPIBackend \
    --internal-dns-name ocp-master-${i} \
    --public-ip-address ""; \
  done

# for i in 1 2 3; do \
    az vm create \
    --resource-group openshift \
    --name ocp-master-$i \
    --availability-set ocp-master-instances \
    --size Standard_D4s_v3 \
    --image RedHat:RHEL:7-RAW:latest \
    --admin-user cloud-user \
    --ssh-key /root/.ssh/id_rsa.pub \
    --data-disk-sizes-gb 32 \
    --nics ocp-master-${i}VMNic; \
  done

# for i in 1 2 3; do \
    az vm disk \
    attach --resource-group openshift \
    --vm-name ocp-master-$i \
    --disk ocp-master-container-$i \
    --new --size-gb 32; \
  done

# for i in 1 2 3; do \
    az vm disk \
    attach --resource-group openshift \
    --vm-name ocp-master-$i \
    --disk ocp-master-etcd-$i \
    --new --size-gb 32; \
  done

2.8.3. Infrastructure Instance Creation

The bash command line below runs a for loop that allows all of the infrastructure instances to be created with their respective mounts, instance size, and security groups configured at launch time.

Infrastructure instances should have a minimum of 1 vCPU and 8 GB of RAM, but if logging and metrics are deployed, larger instances should be created. Below, an instance size of 4 vCPUs and 16 GB of RAM is used to ensure the resource requirements are met to host the EFK pods. This reference environment uses Standard Disks and specific sizes of S4 and S6 managed disks; for more information, visit https://docs.microsoft.com/en-us/azure/virtual-machines/windows/about-disks-and-vhds.

# for i in 1 2 3; do \
    az network nic create \
    --resource-group openshift \
    --name ocp-infra-${i}VMNic \
    --vnet-name openshiftvnet \
    --subnet ocp \
    --network-security-group infra-node-nsg \
    --lb-name OcpRouterLB \
    --lb-address-pools routerBackEnd \
    --internal-dns-name ocp-infra-$i \
    --public-ip-address ""; \
  done

# for i in 1 2 3; do \
    az vm create \
    --resource-group openshift \
    --name ocp-infra-$i \
    --availability-set ocp-infra-instances \
    --size Standard_D4s_v3 \
    --image RedHat:RHEL:7-RAW:latest \
    --admin-user cloud-user \
    --ssh-key /root/.ssh/id_rsa.pub \
    --data-disk-sizes-gb 32 \
    --nics ocp-infra-${i}VMNic; \
  done

# for i in 1 2 3; do \
    az vm disk attach \
    --resource-group openshift \
    --vm-name ocp-infra-$i \
    --disk ocp-infra-container-$i \
    --new --size-gb 64; \
  done

2.8.4. Application Instance Creation

The bash command line below runs a for loop that allows all of the application instances to be created with their respective mounts, instance size, and security groups configured at launch time.

Application instances should have a minimum of 1 vCPU and 8 GB of RAM. Instances can be resized, but resizing requires a reboot. Application instances can also be added after the installation. The instances deployed below have 2 vCPUs and 8 GB of RAM. This reference environment uses Standard Disks and specific sizes of S4 and S6 managed disks; for more information, visit https://docs.microsoft.com/en-us/azure/virtual-machines/windows/about-disks-and-vhds.

Note

If there is a desire to deploy more than three application nodes, modify the for loop below to reflect the desired number of instances.

# for i in 1 2 3; do \
    az network nic create \
    --resource-group openshift \
    --name ocp-app-${i}VMNic \
    --vnet-name openshiftvnet \
    --subnet ocp \
    --network-security-group node-nsg \
    --internal-dns-name ocp-app-$i \
    --public-ip-address ""; \
  done

# for i in 1 2 3; do \
    az vm create \
    --resource-group openshift \
    --name ocp-app-$i \
    --availability-set ocp-app-instances \
    --size Standard_D2s_v3 \
    --image RedHat:RHEL:7-RAW:latest \
    --admin-user cloud-user \
    --ssh-key /root/.ssh/id_rsa.pub \
    --data-disk-sizes-gb 64 \
    --nics ocp-app-${i}VMNic; \
  done

# for i in 1 2 3; do \
    az vm disk attach \
    --resource-group openshift \
    --vm-name ocp-app-$i \
    --disk ocp-app-container-$i \
    --new --size-gb 64; \
  done

2.8.5. CNS Instance Creation (Optional)

The bash command line below runs a for loop that allows all of the CNS instances to be created with their respective mounts, instance size, and network security groups configured at launch time.

CNS instances should have a minimum of 4 vCPUs and 32 GB of RAM. By default, these instances only schedule the glusterfs pods. This reference environment uses Standard Disks and specific sizes of S4, S6, and S20 managed disks; for more information, visit https://docs.microsoft.com/en-us/azure/virtual-machines/windows/about-disks-and-vhds.

First, create the availability set to be used for the CNS instances.

# az vm availability-set create \
    --resource-group openshift \
    --name ocp-cns-instances

Create the instances using the cns-nsg and the ocp-cns-instances availability set.

# for i in 1 2 3; do \
    az network nic create \
    --resource-group openshift \
    --name ocp-cns-${i}VMNic \
    --vnet-name openshiftvnet \
    --subnet ocp \
    --network-security-group cns-nsg \
    --internal-dns-name ocp-cns-$i \
    --public-ip-address ""; \
  done

# for i in 1 2 3; do \
  az vm create \
    --resource-group openshift \
    --name ocp-cns-$i \
    --availability-set ocp-cns-instances \
    --size Standard_D8s_v3 \
    --image RedHat:RHEL:7-RAW:latest \
    --admin-user cloud-user \
    --ssh-key /root/.ssh/id_rsa.pub \
    --data-disk-sizes-gb 32 \
    --nics ocp-cns-${i}VMNic; \
  done

# for i in 1 2 3; do \
    az vm disk attach \
    --resource-group openshift \
    --vm-name ocp-cns-$i \
    --disk ocp-cns-container-$i \
    --new --size-gb 64; \
  done

# for i in 1 2 3; do \
    az vm disk attach \
    --resource-group openshift \
    --vm-name ocp-cns-$i \
    --disk ocp-cns-volume-$i \
    --sku Premium_LRS \
    --new --size-gb 512; \
  done

2.8.6. Deploying the Bastion Instance

The bastion instance allows for limited SSH access from one network into another. This requires the bastion instance to have a static IP address.

The bastion instance is not resource intensive. An instance size with a small amount of vCPUs and RAM is sufficient and recommended.

# az network public-ip create \
    --name bastion-static \
    --resource-group openshift \
    --allocation-method Static

# az network nic create \
    --name bastion-VMNic \
    --resource-group openshift \
    --subnet ocp \
    --vnet-name openshiftvnet \
    --network-security-group bastion-nsg \
    --public-ip-address bastion-static

Finally, launch the instance using the newly provisioned network interface.

# az vm create \
    --resource-group openshift \
    --name bastion --size Standard_D1 \
    --image RedHat:RHEL:7-RAW:latest \
    --admin-user cloud-user \
    --ssh-key /root/.ssh/id_rsa.pub \
    --nics bastion-VMNic

2.8.7. DNS

Three DNS records are set for the environment described in this document. The document assumes that a DNS zone has already been created in Azure and that the name servers are already configured to use this zone.
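
If a DNS zone does not already exist, one can be created and its delegated name servers listed as follows (example.com is the example domain used throughout this document):

# az network dns zone create \
    --resource-group openshift \
    --name example.com

# az network dns zone show \
    --resource-group openshift \
    --name example.com \
    --query nameServers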

2.8.8. Master Load Balancer Record

The static IP address that was created for the master load balancer requires an A record set to allow for mapping to the OpenShift master console.

# az network public-ip show \
    --resource-group openshift \
    --name masterExternalLB \
    --query "{address: ipAddress}" \
    --output tsv

40.70.56.154

# az network dns record-set \
    a add-record \
    --resource-group openshift \
    --zone-name example.com \
    --record-set-name openshift-master \
    --ipv4-address 40.70.56.154

2.8.9. Router Load Balancer Record

The static IP address that was created for the router load balancer requires an A record. This DNS entry is used to route to application containers in the OpenShift environment through the HAProxy containers running on the infrastructure instances.

# az network public-ip show \
    --resource-group openshift \
    --name routerExternalLB \
    --query "{address: ipAddress}" \
    --output tsv

52.167.228.197

# az network dns record-set \
    a add-record \
    --resource-group openshift \
    --zone-name example.com \
    --record-set-name *.apps \
    --ipv4-address 52.167.228.197

2.8.10. Bastion Record

The final DNS record that needs to be created is for the bastion instance. This DNS record is used to map the static IP address to bastion.example.com.

# az network public-ip show \
    --resource-group openshift \
    --name bastion-static \
  --query "{address: ipAddress}" \
  --output tsv

52.179.166.233


# az network dns record-set \
  a add-record \
  --resource-group openshift \
  --zone-name example.com \
  --record-set-name bastion \
  --ipv4-address 52.179.166.233

2.9. Bastion Configuration for Red Hat OpenShift Container Platform

The following subsections describe all the steps needed to use the bastion instance as a jump host to access the instances in the private subnet.

2.9.1. Configure ~/.ssh/config to use Bastion as Jumphost

To easily connect to the Red Hat OpenShift Container Platform environment, the following snippet can be added to the ~/.ssh/config file on the host used to administer the Red Hat OpenShift Container Platform environment:

Host bastion
    HostName        bastion.example.com
    User            <cloud-user>
    IdentityFile    /path/to/<keypair-name>.pem

Host ocp*
    ProxyCommand    ssh cloud-user@bastion -W %h:%p
    IdentityFile    /path/to/<keypair-name>.pem
    User            <cloud-user>

As an example, to access one of the master instances, the following can be used:

# ssh ocp-master-1

2.10. Azure Service Principal

A service principal is used to allow for the creation and management of Kubernetes service load balancers and disks for persistent storage. The service principal values are then added to /etc/origin/cloudprovider/azure.conf to be used for the OpenShift masters and nodes.

The first step is to obtain the subscription ID.

# az account list
[
{
  "cloudName": "AzureCloud",
  "id": "8227d1d9-c10c-4366-86cc-e3ddbbcbba1d",
  "isDefault": false,
  "name": "Pay-As-You-Go",
  "state": "Enabled",
  "tenantId": "82d3ef91-24fd-4deb-ade5-e96dfd00535e",
  "user": {
    "name": "admin@example.com",
    "type": "user"
  }
}
]

Once the subscription ID is obtained, create the service principal with the role of contributor in the openshift Microsoft Azure resource group. Record the output values, as they are used in future steps when defining the cloud provider values.

# az ad sp create-for-rbac --name openshiftcloudprovider \
     --password Cl0udpr0vid2rs3cr3t --role contributor \
     --scopes /subscriptions/8227d1d9-c10c-4366-86cc-e3ddbbcbba1d/resourceGroups/openshift

Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
{
  "appId": "17b0c26b-41ff-4649-befd-a38f3eec2768",
  "displayName": "ocpcloudprovider",
  "name": "http://ocpcloudprovider",
  "password": "$3r3tR3gistry",
  "tenant": "82d3ef91-24fd-4deb-ade5-e96dfd00535e"
}

2.11. Blob Registry Storage

Microsoft Azure blob storage is used to store container images. The Red Hat OpenShift Container Platform registry uses blob storage to allow the registry to grow dynamically in size without the need for intervention from an administrator. A storage account must be created to allow the storage to be used.

# az storage account create \
    --name openshiftregistry \
    --resource-group openshift \
    --location eastus \
    --sku Standard_LRS

2.12. OpenShift Prerequisites

Once the instances have been deployed and the ~/.ssh/config file reflects the deployment, the following steps should be performed to prepare for the installation of OpenShift.

Note

The following tasks should be performed on the workstation that was used to provision the infrastructure. The workstation can be a virtual machine, workstation, vagrant VM, or instance running in the cloud. The most important requirement is that the system is running RHEL 7.

2.12.1. Ansible Setup

Install the following packages on the system performing the installation of OpenShift.

$ subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms"

$ yum -y install ansible atomic-openshift-utils git

2.12.2. OpenShift Authentication

Red Hat OpenShift Container Platform provides the ability to use many different authentication platforms. For this reference architecture, Google’s OpenID Connect Integration is used. A listing of other authentication options is available at Configuring Authentication and User Agent.

When configuring the authentication, the following parameters must be added to the ansible inventory. An example is shown below.

openshift_master_identity_providers=[{'name': 'google', 'challenge': 'false', 'login': 'true', 'kind': 'GoogleIdentityProvider', 'mapping_method': 'claim', 'clientID': '246358064255-5ic2e4b1b9ipfa7hddfkhuf8s6eq2rfj.apps.googleusercontent.com', 'clientSecret': 'Za3PWZg7gQxM26HBljgBMBBF', 'hostedDomain': 'redhat.com'}]

Note

Multiple authentication providers can be supported depending on the needs of the end users.

2.12.3. Azure Account Key

Using the blob storage account created in the earlier steps, query Microsoft Azure to get the key for the openshiftregistry account. This key is used in the inventory to set up the image registry.

# az storage account keys list \
    --account-name openshiftregistry \
    --resource-group openshift \
    --output table

KeyName    Permissions    Value
key1       Full           QgmccEiitO7F1ZSYGDdpwe4HtzCrGCKvMi1vyCqFjxBcJnOliKYiVsez1jol1qUh74P75KInnXx78gFDuz6obQ==
key2       Full           liApA26Y3GVRllvrVyFx51xF3MEuFKlsSGDxBXk4JERNsxu6juvcPWO/fNNiG2O2Z++ATlxhJ+nSPXtU4Zs5xA==

Record the value of the first key to be used in the inventory file created in the next section for the variable openshift_hosted_registry_storage_azure_blob_accountkey.
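
If desired, the first key can be extracted directly with a JMESPath query rather than copied from the table output:

# az storage account keys list \
    --account-name openshiftregistry \
    --resource-group openshift \
    --query "[0].value" \
    --output tsv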

2.12.4. Preparing the Inventory File

This section provides an example inventory file required for an advanced installation of Red Hat OpenShift Container Platform.

The inventory file contains both variables and instances used for the configuration and deployment of Red Hat OpenShift Container Platform.

2.12.4.1. Cloud Provider Configuration

Based on the values defined in the previous steps, the installation requires certain values to be defined to support the Azure cloud provider.

Table 2.7. Cloud Provider Values

Key                                               Value
openshift_cloudprovider_azure_tenant_id          The value of tenant from the output of the service principal creation
openshift_cloudprovider_azure_subscription_id    The Microsoft Azure subscription ID
openshift_cloudprovider_azure_client_id          The alphanumeric value from the output of the service principal creation (appId)
openshift_cloudprovider_azure_client_secret      Password provided at service principal creation (password)
openshift_cloudprovider_azure_resource_group     The resource group containing the Microsoft Azure components
cloud                                            AzurePublicCloud, AzureUSGovernmentCloud, AzureChinaCloud, or AzureGermanCloud
openshift_cloudprovider_azure_location           Microsoft Azure geographic location

These entries allow for the creation of load balancers, but these values are optional to the installation.

Table 2.8. Optional Cloud Provider values

Key                                                     Value
openshift_cloudprovider_azure_security_group_name      Security group name used by nodes
openshift_cloudprovider_azure_vnet_name                Virtual network where the VMs run
openshift_cloudprovider_azure_availability_set_name    The availability set in which the nodes reside

2.12.4.2. Creating the Installation Inventory

The file /etc/ansible/hosts is used to define all of the configuration items that the OpenShift installer uses to configure the OpenShift services. Below is an example inventory based on the values defined in the previous steps.

# vi /etc/ansible/hosts
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_ssh_user=cloud-user
ansible_become=true
openshift_cloudprovider_kind=azure
osm_controller_args={'cloud-provider': ['azure'], 'cloud-config': ['/etc/origin/cloudprovider/azure.conf']}
osm_api_server_args={'cloud-provider': ['azure'], 'cloud-config': ['/etc/origin/cloudprovider/azure.conf']}
openshift_node_kubelet_args={'cloud-provider': ['azure'], 'cloud-config': ['/etc/origin/cloudprovider/azure.conf'], 'enable-controller-attach-detach': ['true']}
openshift_master_api_port=443
openshift_master_console_port=443
openshift_hosted_router_replicas=3
openshift_hosted_registry_replicas=1
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-master.example.com
openshift_master_cluster_public_hostname=openshift-master.example.com
openshift_master_default_subdomain=apps.example.com
deployment_type=openshift-enterprise
#cloudprovider
openshift_cloudprovider_kind=azure
openshift_cloudprovider_azure_client_id=17b0c26b-41ff-4649-befd-a38f3eec2768
openshift_cloudprovider_azure_client_secret=Cl0udpr0vid2rs3cr3t
openshift_cloudprovider_azure_tenant_id=422r3f91-21fe-4esb-vad5-d96dfeooee5d
openshift_cloudprovider_azure_subscription_id=8227d1d9-c10c-4366-86cc-e3ddbbcbba1d
openshift_cloudprovider_azure_resource_group=openshift
openshift_cloudprovider_azure_location=eastus
#endcloudprovider
openshift_master_identity_providers=[{'name': 'google', 'challenge': 'false', 'login': 'true', 'kind': 'GoogleIdentityProvider', 'mapping_method': 'claim', 'clientID': '246358064255-5ic2e4b1b9ipfa7hddfkhuf8s6eq2rfj.apps.googleusercontent.com', 'clientSecret': 'Za3PWZg7gQxM26HBljgBMBBF', 'hostedDomain': 'redhat.com'}]
openshift_node_local_quota_per_fsgroup=512Mi
networkPluginName=redhat/ovs-networkpolicy
oreg_url_master=registry.access.redhat.com/openshift3/ose-${component}:${version}
oreg_url_node=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7

# Do not install the service catalog until post installation. Needs the storage class object
openshift_enable_service_catalog=false

# Setup azure blob registry storage
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=azure_blob
openshift_hosted_registry_storage_azure_blob_accountname=openshiftregistry
openshift_hosted_registry_storage_azure_blob_accountkey=QgmccEiitO7F1ZSYGDdpwe4HtzCrGCKvMi1vyCqFjxBcJnOliKYiVsez1jol1qUh74P75KInnXx78gFDuz6obQ==
openshift_hosted_registry_storage_azure_blob_container=registry
openshift_hosted_registry_storage_azure_blob_realm=core.windows.net


[masters]
ocp-master-1
ocp-master-2
ocp-master-3

[etcd]
ocp-master-1
ocp-master-2
ocp-master-3

[nodes]
ocp-master-1 openshift_node_labels="{'region': 'master'}" openshift_hostname=ocp-master-1
ocp-master-2 openshift_node_labels="{'region': 'master'}" openshift_hostname=ocp-master-2
ocp-master-3 openshift_node_labels="{'region': 'master'}" openshift_hostname=ocp-master-3
ocp-infra-1 openshift_node_labels="{'region': 'infra'}" openshift_hostname=ocp-infra-1
ocp-infra-2 openshift_node_labels="{'region': 'infra'}" openshift_hostname=ocp-infra-2
ocp-infra-3 openshift_node_labels="{'region': 'infra'}" openshift_hostname=ocp-infra-3
ocp-app-1 openshift_node_labels="{'region': 'apps'}" openshift_hostname=ocp-app-1
ocp-app-2 openshift_node_labels="{'region': 'apps'}" openshift_hostname=ocp-app-2
ocp-app-3 openshift_node_labels="{'region': 'apps'}" openshift_hostname=ocp-app-3
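
Before continuing, connectivity from the workstation to the instances defined in the inventory can optionally be verified with an ad-hoc Ansible ping, assuming the ~/.ssh/config entries from the bastion configuration are in place:

# ansible nodes -m ping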

2.12.4.3. CNS Inventory (Optional)

If CNS is used in the OpenShift installation, specific variables must be set in the inventory.

# vi /etc/ansible/hosts
[OSEv3:children]
masters
etcd
nodes
glusterfs

....omitted...

[nodes]
....omitted...
ocp-cns-1 openshift_schedulable=True openshift_hostname=ocp-cns-1
ocp-cns-2 openshift_schedulable=True openshift_hostname=ocp-cns-2
ocp-cns-3 openshift_schedulable=True openshift_hostname=ocp-cns-3

[glusterfs]
ocp-cns-1 glusterfs_devices='[ "/dev/sde" ]'
ocp-cns-2 glusterfs_devices='[ "/dev/sde" ]'
ocp-cns-3 glusterfs_devices='[ "/dev/sde" ]'

2.12.5. Node Registration

Now that the inventory has been created, the nodes must be subscribed using subscription-manager.

The ad-hoc commands below use the redhat_subscription module to register the instances. The first example uses the numeric pool value for the OpenShift subscription. The second uses an activation key and organization.

# ansible nodes -b -m redhat_subscription -a \
    "state=present username=USER password=PASSWORD pool_ids=NUMBERIC_POOLID"

OR

# ansible nodes -b -m redhat_subscription -a \
    "state=present activationkey=KEY org_id=ORGANIZATION"

2.12.6. Repository Setup

Once the instances are registered, the proper repositories must be assigned to the instances to allow for packages for Red Hat OpenShift Container Platform to be installed.

# ansible nodes -b -m shell -a \
    'subscription-manager repos --disable="*" \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms"'

2.12.7. EmptyDir Storage

During the deployment of the instances, an extra volume was added to the instances for EmptyDir storage. EmptyDir storage is storage used by containers that are not persistent. The volume helps ensure that the /var volume does not get filled by containers using this storage.

The ad-hoc plays below create a filesystem on the disk, add an entry to fstab, and then mount the volume.

# ansible nodes -b -m filesystem -a "fstype=xfs dev=/dev/sdc"

# ansible nodes -b -m file -a \
    "path=/var/lib/origin/openshift.local.volumes \
    state=directory mode=0755"

# ansible nodes -b -m mount -a \
    "path=/var/lib/origin/openshift.local.volumes \
    src=/dev/sdc state=present \
    fstype=xfs opts=gquota"

# ansible nodes -b -m shell -a \
    "restorecon -R \
    /var/lib/origin/openshift.local.volumes"

# ansible nodes -b -m mount -a \
    "path=/var/lib/origin/openshift.local.volumes \
    src=/dev/sdc state=mounted fstype=xfs opts=gquota"

2.12.8. etcd Storage

During the deployment of the master instances, an extra volume was added to the instances for etcd storage. Having separate disks specifically for etcd ensures that all of the resources, such as I/O and total disk space, are available to the etcd service.

The ad-hoc plays below create a filesystem on the disk, add an entry to fstab, and then mount the volume.

# ansible masters -b -m filesystem -a "fstype=xfs dev=/dev/sde"

# ansible masters -b -m file -a \
    "path=/var/lib/etcd \
    state=directory mode=0755"

# ansible masters -b -m mount -a \
    "path=/var/lib/etcd \
    src=/dev/sde state=present \
    fstype=xfs"

# ansible masters -b -m shell -a \
    "restorecon -R \
    /var/lib/etcd"

# ansible masters -b -m mount -a \
    "path=/var/lib/etcd \
    src=/dev/sde state=mounted fstype=xfs"

2.12.9. Container Storage

The prerequisite playbook provided by the OpenShift Ansible RPMs configures container storage and installs any remaining packages for the installation.

# ansible-playbook -e \
    'container_runtime_docker_storage_setup_device=/dev/sdd' \
    -e 'container_runtime_docker_storage_type=overlay2' \
    /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

Note

/dev/sdd reflects the disk created for container storage.