Chapter 2. Red Hat OpenShift Container Platform Prerequisites

A successful deployment of Red Hat OpenShift Container Platform requires many prerequisites. These consist of a set of infrastructure and host configuration steps that must be completed prior to the actual installation of Red Hat OpenShift Container Platform using Ansible. The following sections discuss the prerequisites and configuration changes required for a Red Hat OpenShift Container Platform on VMware SDDC environment.

For simplicity’s sake, assume the vCenter environment is pre-existing and is configured with best practices for the infrastructure.

Technologies such as SIOC and VMware HA should already be configured where applicable. After the environment is provisioned, anti-affinity rules are established to ensure maximum uptime and optimal performance.

2.1. Networking

An existing port group and virtual LAN (VLAN) are required for deployment. The environment can use either a vSphere Distributed Switch (vDS) or a standard vSwitch; either works for a basic deployment. However, a vDS is required to take advantage of Network I/O Control and the other quality of service (QoS) technologies that VMware provides.

2.2. Shared Storage

The vSphere hosts should have shared storage for the VMware virtual machine disk files (VMDKs). A best practice recommendation is to enable Storage I/O Control (SIOC) to address any performance issues caused by latency. The VMware documentation discusses in depth how to do this.

Note

Some storage providers, such as Dell EqualLogic, advise disabling Storage I/O Control (SIOC) because the array handles the optimization itself. Check with the storage provider for details.

2.3. Resource Pool, Cluster Name and Folder Location

  • Create a resource pool for the deployment.
  • Create a folder for the Red Hat OpenShift VMs for use with the vSphere Cloud Provider (a govc sketch follows this list).

    • Ensure this folder exists under the datacenter and then under the cluster used for deployment.
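
Both objects can also be created from the command line with govc (the tool is downloaded in Section 2.4.1). The following is a minimal sketch, assuming placeholder datacenter and cluster names and the ocp311 resource pool and folder names used later in the inventory; adjust them to the deployed environment:

$ export GOVC_URL='vcenter.example.com' GOVC_USERNAME='administrator@vsphere.local' GOVC_PASSWORD='password' GOVC_INSECURE=1

# Create the resource pool under the target cluster
$ govc pool.create /<datacenter>/host/<cluster>/Resources/ocp311

# Create the VM folder used by the vSphere Cloud Provider
$ govc folder.create /<datacenter>/vm/ocp311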

2.4. VMware vSphere Cloud Provider (VCP)

OpenShift Container Platform can be configured to access VMware vSphere VMDK Volumes, including using VMware vSphere VMDK Volumes as persistent storage for application data.

Note

The vSphere Cloud Provider steps below are for manual configuration. The OpenShift Ansible installer configures the cloud provider automatically when the proper variables are assigned at runtime. For more information on configuring masters and nodes, see Appendix C, Configuring Masters.

The vSphere Cloud Provider allows using vSphere managed storage within OpenShift Container Platform and supports:

  • Volumes
  • Persistent Volumes
  • Storage Classes and provisioning of volumes.

2.4.1. Enabling VCP

To enable VMware vSphere cloud provider for OpenShift Container Platform:

  1. Create a VM folder and move OpenShift Container Platform Node VMs to this folder.
  2. Verify that the VM node names comply with the regex:

    [a-z](([-0-9a-z]+)?[0-9a-z])?([a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*
    Important

    VM names must not:

    • Begin with numbers
    • Contain any capital letters
    • Contain any special characters except ‘-’
    • Be shorter than three characters or longer than 63 characters
  3. Set the disk.EnableUUID parameter to TRUE for each Node VM. This ensures that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly. For every virtual machine node in the cluster, follow the steps below using the govc tool:

    1. Download and install govc:
$ curl -LO https://github.com/vmware/govmomi/releases/download/v0.15.0/govc_linux_amd64.gz
$ gunzip govc_linux_amd64.gz
$ chmod +x govc_linux_amd64
$ cp govc_linux_amd64 /usr/bin/govc
    2. Set up the GOVC environment:
$ export GOVC_URL='vCenter IP OR FQDN'
$ export GOVC_USERNAME='vCenter User'
$ export GOVC_PASSWORD='vCenter Password'
$ export GOVC_INSECURE=1
    3. Find the Node VM paths:
$ govc ls /<datacenter>/vm/<vm-folder-name>
    4. Set disk.EnableUUID to true for all VMs:
for VM in $(govc ls /<datacenter>/vm/<vm-folder-name>);do govc vm.change -e="disk.enableUUID=1" -vm="$VM";done
Note

If Red Hat OpenShift Container Platform node VMs are created from a template VM, then disk.EnableUUID=1 can be set on the template VM. VMs cloned from this template inherit this property.

  4. Create and assign roles to the vSphere Cloud Provider user and vSphere entities. vSphere Cloud Provider requires the following privileges to interact with vCenter. See the vSphere Documentation Center for steps to create a custom role, user, and role assignment.

    Role: manage-k8s-node-vms
    Privileges: Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete
    Entities: Cluster, Hosts, VM Folder
    Propagate to Children: Yes

    Role: manage-k8s-volumes
    Privileges: Datastore.AllocateSpace, Datastore.FileManagement (Low level file operations)
    Entities: Datastore
    Propagate to Children: No

    Role: k8s-system-read-and-spbm-profile-view
    Privileges: StorageProfile.View (Profile-driven storage view)
    Entities: vCenter
    Propagate to Children: No

    Role: ReadOnly
    Privileges: System.Anonymous, System.Read, System.View
    Entities: Datacenter, Datastore Cluster, Datastore Storage Folder
    Propagate to Children: No
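
    The built-in ReadOnly role already exists in vCenter; the three custom roles can be created through the vSphere web client as described in the vSphere Documentation Center. Alternatively, the following is a hedged sketch using govc; the ocp-svc@vsphere.local principal and the entity path are assumptions, and the role assignment must be repeated for each entity listed in the table above:

    $ govc role.create manage-k8s-node-vms Resource.AssignVMToPool System.Anonymous System.Read System.View VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.RemoveDisk VirtualMachine.Inventory.Create VirtualMachine.Inventory.Delete
    $ govc role.create manage-k8s-volumes Datastore.AllocateSpace Datastore.FileManagement
    $ govc role.create k8s-system-read-and-spbm-profile-view StorageProfile.View

    # Example assignment: manage-k8s-node-vms on the VM folder, propagated to children
    $ govc permissions.set -principal ocp-svc@vsphere.local -role manage-k8s-node-vms -propagate=true /<datacenter>/vm/<vm-folder-name>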

2.4.2. The VCP Configuration File

Configuring Red Hat OpenShift Container Platform for VMware vSphere requires the /etc/origin/cloudprovider/vsphere.conf file on each node.

If the file does not exist, create it, and add the following:

[Global]
        user = "username" 1
        password = "password" 2
        server = "10.10.0.2" 3
        port = "443" 4
        insecure-flag = "1" 5
        datacenter = "datacenter-name" 6
        datastore = "datastore-name" 7
        working-dir = "vm-folder-path" 8

[Disk]
        scsicontrollertype = pvscsi

[Network]
        network = "VM Network" 9
1. vCenter username for the vSphere cloud provider.
2. vCenter password for the specified user.
3. IP address or FQDN for the vCenter server.
4. (Optional) Port number for the vCenter server. Defaults to port 443.
5. Set to 1 if the vCenter uses a self-signed certificate.
6. Name of the datacenter on which the Node VMs are deployed.
7. Name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. If the datastore is located in a storage folder or is a member of a datastore cluster, specify the full datastore path. Verify that the vSphere Cloud Provider user has the read privilege set on the datastore cluster or storage folder so that it is able to find the datastore.
8. (Optional) The vCenter VM folder path in which the node VMs are located. It can be set to an empty path (working-dir = "") if the Node VMs are located in the root VM folder. The syntax resembles: /<datacenter>/vm/<folder-name>/
9. Specify the VM network portgroup to mark for the Internal Address of the node.
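
Rather than editing the file by hand on every node, it can be distributed from the deployment host with ad hoc Ansible commands once the inventory described later in this chapter is in place. This is a hedged sketch that assumes a prepared copy of the file at /root/vsphere.conf on the deployment host:

$ ansible nodes -m file -a "path=/etc/origin/cloudprovider state=directory"
$ ansible nodes -m copy -a "src=/root/vsphere.conf dest=/etc/origin/cloudprovider/vsphere.conf"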

2.5. Docker Volume

During the installation of Red Hat OpenShift Container Platform, the VMware VM instances created should include dedicated VMDK volumes to ensure that the various OpenShift directories do not fill up the disk or cause disk contention in the /var partition.

Container images are stored locally on the nodes running Red Hat OpenShift Container Platform pods. The container-storage-setup script uses the /etc/sysconfig/docker-storage-setup file to specify the storage configuration.

The /etc/sysconfig/docker-storage-setup file must be created before starting the docker service, otherwise the storage is configured using a loopback device. Container storage setup is performed on all hosts running containers: masters, infrastructure, and application nodes.

Note

The optional VM deployment in Note takes care of Docker and other volume creation in addition to other machine preparation tasks like installing chrony, open-vm-tools, etc.

# cat /etc/sysconfig/docker-storage-setup
DEVS="/dev/sdb"
VG="docker-vol"
DATA_SIZE="95%VG"
STORAGE_DRIVER=overlay2
CONTAINER_ROOT_LV_NAME="dockerlv"
CONTAINER_ROOT_LV_MOUNT_PATH="/var/lib/docker"
Note

The value of the docker volume size should be at least 15 GB.
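
After the docker service has been started on a node, the resulting storage layout can be spot-checked. This is an optional, hedged verification, run as root on any container host:

# lvs docker-vol
# df -h /var/lib/docker
# docker info | grep -i "storage driver"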

2.6. etcd Volume

A VMDK volume should be created on the master instances for the storage of /var/lib/etcd. Storing etcd on a dedicated volume provides a benefit similar to protecting /var but, more importantly, provides the ability to perform snapshots of the volume when performing etcd maintenance.

Note

The value of the etcd volume size should be at least 25 GB.
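
A minimal sketch for preparing the etcd volume on a master is shown below, assuming the additional disk is presented as /dev/sdd (as in the example lsblk output in Section 2.15); the device name may differ in the deployed environment:

# mkfs.xfs /dev/sdd
# mkdir -p /var/lib/etcd
# echo "/dev/sdd /var/lib/etcd xfs defaults 0 0" >> /etc/fstab
# mount /var/lib/etcd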

2.7. OpenShift Local Volume

A VMDK volume should be created for the /var/lib/origin/openshift.local.volumes directory, which is used with the perFSGroup setting at installation and mounted with the gquota option. These settings and volumes set a quota to ensure that containers cannot grow to an unreasonable size.

Note

The value of OpenShift local volume size should be at least 30 GB.

# mkfs -t xfs -n ftype=1 /dev/sdc
# vi /etc/fstab
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=8665acc0-22ee-4e45-970c-ae20c70656ef /boot                   xfs     defaults        0 0
/dev/sdc /var/lib/origin/openshift.local.volumes xfs gquota 0 0

2.8. Execution Environment

Red Hat Enterprise Linux 7 is the only operating system supported by the Red Hat OpenShift Container Platform installer; therefore, the provider infrastructure deployment and the installer must be run from one of the following locations:

  • Local workstation/server/virtual machine
  • Bastion instance
  • Jenkins CI/CD build environment

This reference architecture focuses on deploying and installing Red Hat OpenShift Container Platform from a local workstation/server/virtual machine. Jenkins CI/CD and bastion instances are out of scope.

2.9. Preparations

2.9.1. Deployment host

2.9.1.1. Creating an SSH Keypair for Ansible

The VMware infrastructure requires an SSH key on the VMs for Ansible’s use.

Note

The following task should be performed on the workstation/server/virtual machine where the Ansible playbooks are launched.

$ ssh-keygen -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:aaQHUf2rKHWvwwl4RmYcmCHswoouu3rdZiSH/BYgzBg root@ansible-test
The key's randomart image is:
+---[RSA 2048]----+
|   .. o=..       |
|E   ..o.. .      |
| * .  .... .     |
|. * o  +=.  .    |
|.. + o.=S    .   |
|o   + =o= . .    |
|.  . * = = +     |
|... . B . = .    |
+----[SHA256]-----+
Note

Add the ssh keys to the deployed virtual machines via ssh-copy-id or to the template prior to deployment.
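
If the key was not added to the template, it can be pushed to each instance from the deployment host once the instance hostnames resolve (see Section 2.11). A hedged sketch, assuming the host names used in the inventory and that root password authentication is still enabled:

$ for host in master-{0..2} infra-{0..2} app-{0..2} haproxy-0; do
      ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
  done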

2.9.1.2. Enable Required Repositories and Install Required Playbooks

Register with Red Hat Subscription Manager and activate the required yum repositories:

$ subscription-manager register

$ subscription-manager attach \
    --pool {{ pool_id }}

$ subscription-manager repos \
    --disable="*" \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms \
    --enable=rhel-7-server-ansible-2.7-rpms \
    --enable=rhel-7-server-ose-3.11-rpms

$ yum install -y \
    openshift-ansible git

2.9.1.3. Configure Ansible

Ansible is installed on the deployment instance to perform the registration, the installation of packages, and the deployment of the Red Hat OpenShift Container Platform environment on the master and node instances.

Before running playbooks, it is important to create an ansible.cfg that reflects the deployed environment:

$ cat ~/ansible.cfg

[defaults]
forks = 20
host_key_checking = False
roles_path = roles/
gathering = smart
remote_user = root
private_key_file = ~/.ssh/id_rsa
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 600
log_path = $HOME/ansible.log
nocows = 1
callback_whitelist = profile_tasks

[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=900s -o GSSAPIAuthentication=no -o PreferredAuthentications=publickey
control_path = %(directory)s/%%h-%%r
pipelining = True
timeout = 10

[persistent_connection]
connect_timeout = 30
connect_retries = 30
connect_interval = 1

2.9.1.4. Prepare the Inventory File

This section provides an example inventory file required for an advanced installation of Red Hat OpenShift Container Platform.

The inventory file contains both variables and instances used for the configuration and deployment of Red Hat OpenShift Container Platform. In the example below, several values must be modified to reflect the environment deployed in the previous chapter.

The openshift_cloudprovider_vsphere_* values are required for Red Hat OpenShift Container Platform to be able to create vSphere resources such as virtual machine disks (VMDKs) on datastores for persistent volumes.

$ cat /etc/ansible/hosts

    [OSEv3:children]
    ansible
    masters
    infras
    apps
    etcd
    nodes
    lb

    [OSEv3:vars]

    deployment_type=openshift-enterprise
    openshift_release="v3.11"

    # Authentication for registry images and RHN network
    oreg_auth_user="registry_user"
    oreg_auth_password="registry_password"
    rhsub_user=username
    rhsub_pass=password
    rhsub_pool=8a85f9815e9b371b015e9b501d081d4b

    # Authentication settings for OCP
    openshift_master_ldap_ca_file=/home/cloud-user/mycert.crt
    openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]

    # Registry
    openshift_hosted_registry_storage_kind=vsphere
    openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
    openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume']
    openshift_hosted_registry_replicas=1

    # vSphere Cloud provider
    openshift_cloudprovider_kind=vsphere
    openshift_cloudprovider_vsphere_username="administrator@vsphere.local"
    openshift_cloudprovider_vsphere_password="password"
    openshift_cloudprovider_vsphere_host="vcenter.example.com"
    openshift_cloudprovider_vsphere_datacenter=datacenter
    openshift_cloudprovider_vsphere_cluster=cluster
    openshift_cloudprovider_vsphere_resource_pool=ocp311
    openshift_cloudprovider_vsphere_datastore="datastore"
    openshift_cloudprovider_vsphere_folder=ocp311

    #VM deployment
    openshift_cloudprovider_vsphere_template="ocp-server-template"
    openshift_cloudprovider_vsphere_vm_network="VM Network"
    openshift_cloudprovider_vsphere_vm_netmask="255.255.255.0"
    openshift_cloudprovider_vsphere_vm_gateway="192.168.1.1"
    openshift_cloudprovider_vsphere_vm_dns="192.168.2.250"
    openshift_required_repos=['rhel-7-server-rpms', 'rhel-7-server-extras-rpms', 'rhel-7-server-ose-3.11-rpms']

    # OCP vars
    openshift_master_cluster_method=native
    openshift_node_local_quota_per_fsgroup=512Mi
    default_subdomain=example.com
    openshift_master_cluster_hostname=openshift.example.com
    openshift_master_cluster_public_hostname=openshift.example.com
    openshift_master_default_subdomain=apps.example.com
    os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'

    [ansible]
    localhost

    [masters]
    master-0 vm_name=master-0 ipv4addr=10.x.y.103
    master-1 vm_name=master-1 ipv4addr=10.x.y.104
    master-2 vm_name=master-2 ipv4addr=10.x.y.105

    [infras]
    infra-0  vm_name=infra-0 ipv4addr=10.x.y.100
    infra-1  vm_name=infra-1 ipv4addr=10.x.y.101
    infra-2  vm_name=infra-2 ipv4addr=10.x.y.102

    [apps]
    app-0  vm_name=app-0 ipv4addr=10.x.y.106
    app-1  vm_name=app-1 ipv4addr=10.x.y.107
    app-2  vm_name=app-2 ipv4addr=10.x.y.108

    [etcd]
    master-0
    master-1
    master-2

    [lb]
    haproxy-0 vm_name=haproxy-0 ipv4addr=10.x.y.200

    [nodes]
    master-0 openshift_node_group_name="node-config-master"
    master-1 openshift_node_group_name="node-config-master"
    master-2 openshift_node_group_name="node-config-master"
    infra-0  openshift_node_group_name="node-config-infra"
    infra-1  openshift_node_group_name="node-config-infra"
    infra-2  openshift_node_group_name="node-config-infra"
    app-0  openshift_node_group_name="node-config-compute"
    app-1  openshift_node_group_name="node-config-compute"
    app-2  openshift_node_group_name="node-config-compute"
Note

For a downloadable copy of this inventory file please see the following repo

2.10. vSphere VM Instance Requirements for RHOCP

This reference environment should consist of the following instances:

  • three master instances
  • three infrastructure instances
  • three application instances
  • one load balancer instance

2.10.1. Virtual Machine Hardware Requirements

Table 2.1. Virtual Machine Node Requirements

Node Type           Hardware

Master              2 vCPU
                    16GB RAM
                    1 x 60GB - OS RHEL 7.6
                    1 x 40GB - Docker volume
                    1 x 40GB - EmptyDir volume
                    1 x 40GB - etcd volume

App or Infra Node   2 vCPU
                    8GB RAM
                    1 x 60GB - OS RHEL 7.6
                    1 x 40GB - Docker volume
                    1 x 40GB - EmptyDir volume

Note

If using Red Hat OpenShift Container Storage (OCS) for persistent storage, additional space is required for its data volumes. This space can be located on the infra or compute nodes.

The master instances should contain three extra disks, used for Docker storage, etcd, and OpenShift volumes. The application node instances use their additional disks for Docker storage and OpenShift volumes.

etcd requires that an odd number of cluster members exist. Three masters were chosen to support high availability and etcd clustering. Three infrastructure instances allow for minimal to zero downtime for applications running in the OpenShift environment. Application instances can range from one to many depending on the requirements of the organization.

See Note for steps on deploying the vSphere environment.

Note

Infra and app node instances can easily be added after the initial installation.

2.11. Set up DNS for Red Hat OpenShift Container Platform

The installation process for Red Hat OpenShift Container Platform depends on a reliable name service that contains an address record for each of the target instances.

An example DNS configuration is listed below:

Note

Using /etc/hosts is not valid; a proper DNS service must exist.

$ORIGIN apps.example.com.
*           A       10.x.y.200
$ORIGIN example.com.
haproxy-0       A   10.x.y.200
infra-0         A   10.x.y.100
infra-1         A   10.x.y.101
infra-2         A   10.x.y.102
master-0        A   10.x.y.103
master-1        A   10.x.y.104
master-2        A   10.x.y.105
app-0           A   10.x.y.106
app-1           A   10.x.y.107
app-2           A   10.x.y.108
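
Once the zone is loaded, resolution can be verified from the deployment host with dig; any hostname under the wildcard apps subdomain should return the load balancer address. For example:

$ dig +short master-0.example.com
10.x.y.103
$ dig +short test.apps.example.com
10.x.y.200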

Table 2.2. Subdomain for RHOCP Network

Domain Name     Description

example.com     All interfaces on the internal only network

Table 2.3. Sample FQDNs

Fully Qualified Name       Description

master-0.example.com       Name of the network interface on the master-0 instance
infra-0.example.com        Name of the network interface on the infra-0 instance
app-0.example.com          Name of the network interface on the app-0 instance
openshift.example.com      Name of the Red Hat OpenShift Container Platform console, using the address of the haproxy-0 instance on the network

2.11.1. Confirm Instance Deployment

After the three master, three infra, and three app instances have been created in vCenter, verify the creation of the VM instances via:

$ govc ls /<datacenter>/vm/<folder>/

Using the values provided by the command, update the DNS master zone.db file with the appropriate IP addresses. Do not proceed to the next section until DNS resolution is configured.

Attempt to ssh into one of the Red Hat OpenShift Container Platform instances now that the ssh identity is set up.

Note

No password prompt should appear if key-based authentication is working properly.

$ ssh master-1

2.12. Create and Configure an HAProxy VM Instance

If an organization currently does not have a load balancer in place then HAProxy can be deployed. A load balancer such as HAProxy provides a single view of the Red Hat OpenShift Container Platform master services for the applications. The master services and the applications use different TCP ports so a single TCP load balancer can handle all of the inbound connections.

The load balanced DNS name that developers use must be in a DNS A record pointing to the HAProxy server before installation. For applications, a wildcard DNS entry must point to the HAProxy host.

The configuration of the HAProxy instance is completed in the subsequent steps: the deployment host configures the Red Hat subscriptions for all the instances, and the Red Hat OpenShift Container Platform installer automatically configures the HAProxy instance based on the information found within the Red Hat OpenShift Container Platform inventory file.

2.13. Enable Required Repositories and Packages to OpenShift Infrastructure

Note

The optional VM deployment in Note takes care of this, along with all volume creation and other machine preparation such as installing chrony and open-vm-tools. Additionally, the haproxy instance is deployed from it.

Ensure connectivity to all instances from the deployment instance via:

$ ansible all -m ping

Once connectivity to all instances has been established, register the instances via Red Hat Subscription Manager. This is accomplished using credentials or an activation key.

Via credentials the ansible command is as follows:

$ ansible all -m command -a "subscription-manager register --username <user> --password '<password>'"

Via activation key, the ansible command is as follows:

$ ansible all -m command -a "subscription-manager register --org=<org_id> --activationkey=<keyname>"

where:

  • -m is the module to use
  • -a is the module argument

Once all the instances have been successfully registered, enable all the required RHOCP repositories on all the instances via:

$ ansible all -m command -a "subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.11-rpms" \
    --enable="rhel-7-server-ansible-2.7-rpms""

2.14. OpenShift Authentication

Red Hat OpenShift Container Platform provides the ability to use many different authentication platforms. For this reference architecture, LDAP is the preferred authentication mechanism. A listing of other authentication options is available at Configuring Authentication and User Agent.

When configuring LDAP as the authentication provider, the following parameters can be added to the Ansible inventory. An example is shown below.

openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=openshift,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=openshift,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]
Note

If using LDAPS, all the masters must have the relevant ca.crt file for LDAP in place prior to the installation, otherwise the installation fails. The file should be placed locally on the deployment instance and referenced in the inventory file via the openshift_master_ldap_ca_file variable.
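
Before launching the installation, it can be useful to confirm that the bind DN, bind password, and user filter in the identity provider URL are valid. The following is a hedged sketch using ldapsearch (from the openldap-clients package) against the example directory above; testuser is a placeholder account:

$ ldapsearch -x -H ldap://ldap.example.com \
    -D 'uid=admin,cn=users,cn=accounts,dc=openshift,dc=com' \
    -w 'ldapadmin' \
    -b 'cn=users,cn=accounts,dc=openshift,dc=com' \
    '(&(uid=testuser)(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com))'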

2.15. Node Verification

It can be useful to check for potential issues or misconfigurations in the nodes before continuing the installation process. Connect to every node using the deployment host and verify the disks are properly created and mounted.

$ ssh deployment.example.com
$ ssh <instance>
$ lsblk
$ sudo journalctl
$ free -m
$ sudo yum repolist

where <instance> is, for example, master-0.example.com

For reference, below is example output of lsblk for the master nodes.

$ lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0   60G  0 disk
├─sda1                     8:1    0  500M  0 part /boot
├─sda2                     8:2    0 39.5G  0 part
│ ├─rhel-root            253:0    0   55G  0 lvm  /
│ └─rhel-swap            253:1    0  3.9G  0 lvm
└─sda3                     8:3    0   20G  0 part
  └─rhel-root            253:0    0   55G  0 lvm  /
sdb                        8:16   0   40G  0 disk
└─sdb1                     8:17   0   40G  0 part
  └─docker--vol-dockerlv 253:2    0   40G  0 lvm  /var/lib/docker
sdc                        8:32   0   40G  0 disk /var/lib/origin/openshift.local.volumes
sdd                        8:48   0   40G  0 disk /var/lib/etcd

For reference, below is an example of output of lsblk for the infra and app nodes.

$ lsblk
sda                        8:0    0   60G  0 disk
├─sda1                     8:1    0  500M  0 part /boot
├─sda2                     8:2    0 39.5G  0 part
│ ├─rhel-root            253:0    0   55G  0 lvm  /
│ └─rhel-swap            253:1    0  3.9G  0 lvm
└─sda3                     8:3    0   20G  0 part
  └─rhel-root            253:0    0   55G  0 lvm  /
sdb                        8:16   0   40G  0 disk
└─sdb1                     8:17   0   40G  0 part
  └─docker--vol-dockerlv 253:2    0   40G  0 lvm  /var/lib/docker
sdc                        8:32   0   40G  0 disk /var/lib/origin/openshift.local.volumes
sdd                        8:48   0  300G  0 disk
Note

The docker-vol LVM volume group may not be configured on sdb on all nodes at this stage, as this step is completed via the prerequisites playbook in the following section.

2.16. Prior to Ansible Installation

Prior to Chapter 4, Operational Management, create DRS anti-affinity rules to ensure maximum availability for the cluster.

  1. Open the VMware vCenter web client, select the cluster, and choose Configure.
  2. Under Configuration, select VM/Host Rules.
  3. Click Add, and create a rule to keep the masters separate.

The VMware documentation covers creating and configuring anti-affinity rules in depth.
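
The rules can also be created from the deployment host with govc. The following is a hedged sketch, assuming a recent govc release that provides cluster.rule.create; the cluster path and rule name are placeholders:

$ govc cluster.rule.create -cluster /<datacenter>/host/<cluster> \
    -name ocp-master-anti-affinity -enable -anti-affinity \
    master-0 master-1 master-2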

Lastly, all of the VMs created should have their latency sensitivity set to High.

This enables additional tuning recommended by VMware for latency-sensitive workloads, as described in the VMware documentation.

  1. Open the VMware vCenter web client and, under the virtual machine's Summary tab, in the 'VM Hardware' box, select 'Edit Settings'.
  2. Under 'VM Options', expand 'Advanced'.
  3. Select the 'Latency Sensitivity' drop-down and select 'High'.

Figure 2.1. VMware High Latency

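As with the anti-affinity rules, this setting can be scripted rather than applied through the UI. A hedged sketch using govc and the sched.cpu.latencySensitivity advanced setting, assuming the VM folder used earlier:

$ for VM in $(govc ls /<datacenter>/vm/<vm-folder-name>); do
      govc vm.change -e="sched.cpu.latencySensitivity=high" -vm="$VM"
  done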

The next section discusses VMware NSX integration with OpenShift Container Platform.

To skip this section, proceed to Section 2.18, “Red Hat OpenShift Container Platform Prerequisites Playbook”, which discusses the required prerequisites for the installation of OCP.

2.17. Installing Red Hat OpenShift Container Platform with VMware NSX-T (Optional)

NSX-T (NSX Transformer) provides network virtualization for hypervisors. NSX Container Plug-in (NCP) provides integration between NSX-T Data Center and container orchestrators such as Kubernetes, as well as integration between NSX-T Data Center and container based PaaS (platform as a service) software products such as OpenShift. This guide describes setting up NCP with OpenShift.

Table 2.4. Compatibility Requirements

Software Product                        Version

NSX-T Data Center                       2.3, 2.4
vSphere Server                          6.5 U2, 6.7 U1, 6.7 U2
Container Host Operating System         RHEL 7.4, 7.5, 7.6
Platform as a Service                   OpenShift 3.10, 3.11
Container Host Open vSwitch             2.10.2 (packaged with NSX-T Data Center 2.4)

The setup and configuration of VMware NSX is outside of the scope of this reference architecture. For more information on how to prepare NSX for use with the VMware NSX Container Plug-in, refer to the VMware documentation, which describes in depth how to prepare the environment.

2.17.1. Installing the VMware NSX Container Plug-in (NCP)

OpenShift node VMs must have two vNICs:

  • A management vNIC connected to the logical switch that has an uplink to the management tier-1 router.
  • The second vNIC on all VMs must have the following tags in NSX-T so that NCP knows which port is used as a parent VIF for all PODs running on the particular OpenShift node.
{'ncp/node_name': '<node_name>'}
{'ncp/cluster': '<cluster_name>'}

NSX-T Requirements:

  • A tier-0 router.
  • An overlay transport zone.
  • An IP block for POD networking.
  • (Optional) An IP Block for routed (no NAT) POD networking.
  • An IP Pool for SNAT. By default the IP Block for POD networking is routable only inside NSX-T. NCP uses this IP Pool to provide connectivity to the outside.
  • (Optional) Top and bottom firewall sections. NCP will place Kubernetes network policy rules between these two sections.
  • Open vSwitch and CNI plugin RPMs must be hosted on an HTTP server reachable from the OpenShift node VMs.

2.17.1.1. NCP Docker Image

Currently, the NCP docker image is not publicly available. You must have the nsx-ncp container image downloaded in advance.

The prerequisites playbook (Section 2.18) installs and configures a container runtime. Once a container runtime is present on the cluster nodes, load and tag the image locally:

$ docker load -i nsx-ncp-rhel-xxx.yyyyyyyy.tar
$ docker image tag registry.local/xxx.yyyyyyyy/nsx-ncp-rhel nsx-ncp

Now, the normal installation process of OpenShift can take place after the proper ansible inventory options for NSX and vSphere are added.

Preparing the Ansible Hosts File for NSX-T

You must specify NCP parameters in the Ansible hosts file for NCP to be integrated with OpenShift. After you specify the following parameters in the Ansible hosts file, installing OpenShift will install NCP automatically.

openshift_use_nsx=True
openshift_use_openshift_sdn=False
os_sdn_network_plugin_name='cni'
nsx_openshift_cluster_name='ocp-cluster1'
# (Required) This is required because multiple OpenShift/Kubernetes clusters can connect to the
# same NSX Manager.

nsx_api_managers='10.10.10.10'
# (Required) IP address of NSX Manager. For an NSX Manager cluster, specify comma-separated IP
# addresses.

nsx_tier0_router='MyT0Router'
# (Required) Name or UUID of the tier-0 router that the project's tier-1 routers will connect to.

nsx_overlay_transport_zone='my_overlay_tz'
# (Required) Name or UUID of the overlay transport zone that will be used to create logical switches.

nsx_container_ip_block='ip_block_for_my_ocp_cluster'
# (Required) Name or UUID of an IP block configured on NSX-T. There will be a subnet per project out
# of this IP block. These networks will be behind SNAT and not routable.

nsx_ovs_uplink_port='ens224'
# (Required) If in HOSTVM mode. NSX-T needs a second vNIC for POD networking on the OCP nodes,
# different from the management vNIC. It is highly recommended that both vNICs be connected to
# NSX-T logical switches. The second (non-management) vNIC must be supplied here. For bare metal,
# this parameter is not needed.

nsx_cni_url='http://myserver/nsx-cni.rpm'
# (Required) Temporary requirement until NCP can bootstrap the nodes. Nsx-cni needs to be on
# an http server.

nsx_ovs_url='http://myserver/openvswitch.rpm'
nsx_kmod_ovs_url='http://myserver/kmod-openvswitch.rpm'
# (Required) Temporary parameters until NCP can bootstrap the nodes. Can be ignored in bare metal
# setups.

nsx_node_type='HOSTVM'
# (Optional) Defaults to HOSTVM. Set to BAREMETAL if OpenShift is not running in VMs.

nsx_k8s_api_ip=192.168.10.10
# (Optional) If set, NCP will talk to this IP address, otherwise to the Kubernetes service IP.

nsx_k8s_api_port=8443
# (Optional) Defaults to 443 for the Kubernetes service. Set to 8443 if you use it in combination with
# nsx_k8s_api_ip to specify a master node IP.

nsx_insecure_ssl=true
# (Optional) Default is true as NSX Manager comes with an untrusted certificate. If you have changed
# the certificate to a trusted one you can set it to false.

nsx_api_user='admin'
nsx_api_password='super_secret_password'
nsx_subnet_prefix=24
# (Optional) Defaults to 24. This is the subnet size that will be dedicated per OpenShift project. If the
# number of PODs exceeds the subnet size, a new logical switch with the same subnet size will be
# added to the project.

nsx_use_loadbalancer=true
# (Optional) Defaults to true. Set to false if you do not want to use NSX-T load balancers for
# OpenShift routes and services of type LoadBalancer.

nsx_lb_service_size='SMALL'
# (Optional) Defaults to SMALL. Depending on the NSX Edge size, MEDIUM or LARGE is also possible.

nsx_no_snat_ip_block='router_ip_block_for_my_ocp_cluster'
# (Optional) If the ncp/no_snat=true annotation is applied on a project or namespace, the subnet will
# be taken from this IP block and there will be no SNAT for it. It is expected to be routable.

nsx_external_ip_pool='external_pool_for_snat'
# (Required) IP pool for SNAT and load balancer if nsx_external_ip_pool_lb is not defined.

nsx_external_ip_pool_lb='my_ip_pool_for_lb'
# (Optional) Set this if you want a distinct IP pool for Router and SvcTypeLB.

nsx_top_fw_section='top_section'
# (Optional) Kubernetes network policy rules will be translated to NSX-T firewall rules and placed
# below this section.

nsx_bottom_fw_section='bottom_section'
# (Optional) Kubernetes network policy rules will be translated to NSX-T firewall rules and placed
# above this section.

nsx_api_cert='/path/to/cert/nsx.crt'
nsx_api_private_key='/path/to/key/nsx.key'
# (Optional) If set, nsx_api_user and nsx_api_password will be ignored. The certificate must be
# uploaded to NSX-T and a Principal Identity user authenticating with this certificate must be
# manually created.

nsx_lb_default_cert='/path/to/cert/nsx.crt'
nsx_lb_default_key='/path/to/key/nsx.key'
# (Optional) The NSX-T load balancer requires a default certificate in order to be able to create SNIs
# for TLS based Routes. This certificate will be presented only if there is no Route configured. If not
# provided, a self-signed certificate will be generated.

Sample Ansible Hosts File

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'yasen' : 'password'}
openshift_master_default_subdomain=example.com
openshift_use_nsx=true
os_sdn_network_plugin_name=cni
openshift_use_openshift_sdn=false
openshift_node_sdn_mtu=1500
# NSX specific configuration
nsx_openshift_cluster_name='ocp-cluster1'
nsx_api_managers='192.168.110.201'
nsx_api_user='admin'
nsx_api_password='VMware1!'
nsx_tier0_router='DefaultT0Router'
nsx_overlay_transport_zone='overlay-tz'
nsx_container_ip_block='ocp-pod-networking'
nsx_no_snat_ip_block='ocp-nonat-pod-networking'
nsx_external_ip_pool='ocp-external'
nsx_top_fw_section='openshift-top'
nsx_bottom_fw_section='openshift-bottom'
nsx_ovs_uplink_port='ens224'
nsx_cni_url='http://1.1.1.1/nsx-cni-2.3.2.x86_64.rpm'
nsx_ovs_url='http://1.1.1.1/openvswitch-2.9.1.rhel75-1.x86_64.rpm'
nsx_kmod_ovs_url='http://1.1.1.1/kmod-openvswitch-2.9.1.rhel75-1.el7.x86_64.rpm'

[masters]
master-0.example.com

[etcd]
master-0.example.com

[nodes]
master-0.example.com ansible_ssh_host=10.x.y.103 openshift_node_group_name='node-config-master'
infra-0.example.com ansible_ssh_host=10.x.y.100 openshift_node_group_name='node-config-infra'
infra-1.example.com ansible_ssh_host=10.x.y.101 openshift_node_group_name='node-config-infra'
app-0.example.com ansible_ssh_host=10.x.y.106 openshift_node_group_name='node-config-compute'
app-1.example.com ansible_ssh_host=10.x.y.107 openshift_node_group_name='node-config-compute'
Note

For a downloadable copy of this inventory file please see the following repo

2.18. Red Hat OpenShift Container Platform Prerequisites Playbook

The Red Hat OpenShift Container Platform Ansible installation provides a playbook to ensure all prerequisites are met prior to the installation of Red Hat OpenShift Container Platform. This includes steps such as registering all the nodes with Red Hat Subscription Manager and setting up Docker storage on the dedicated Docker volumes.

Via the ansible-playbook command on the deployment instance, ensure all the prerequisites are met using the prerequisites.yml playbook:

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

In the event that OpenShift fails to install or the prerequisites playbook fails, follow the steps in Appendix F, Troubleshooting Ansible by Red Hat to troubleshoot Ansible.