Chapter 2. Red Hat OpenShift Container Platform Prerequisites
A successful deployment of Red Hat OpenShift Container Platform requires many prerequisites. These consist of a set of infrastructure and host configuration steps prior to the actual installation of Red Hat OpenShift Container Platform using Ansible. The following sections discuss in detail the prerequisites and configuration changes required for a Red Hat OpenShift Container Platform deployment on a VMware vSphere environment.
For simplicity’s sake, assume the vCenter environment is pre-existing and is configured with best practices for the infrastructure.
Technologies such as SIOC and VMware HA should already be configured where applicable. After the environment is provisioned, anti-affinity rules are established to ensure maximum uptime and optimal performance.
2.1. Networking
An existing port group and virtual LAN (VLAN) are required for deployment. The environment can utilize a vSphere Distributed Switch (vDS) or a standard vSwitch; either works for the deployment itself. However, to utilize Network I/O Control and some of the quality of service (QoS) technologies that VMware employs, a vDS is required.
2.3. Resource Pool, Cluster Name and Folder Location
- Create a resource pool for the deployment.
- Create a folder for the Red Hat OpenShift VMs for use with the vSphere Cloud Provider.
- Ensure this folder exists under the datacenter, then under the cluster used for deployment; a govc sketch of both steps follows below.
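Both objects can also be created from the command line with govc (installed in Section 2.4.1). A minimal sketch, where the ocp39 names are examples matching the inventory later in this chapter:

# Create a resource pool under the cluster used for deployment
$ govc pool.create /<datacenter>/host/<cluster>/Resources/ocp39

# Create the VM folder for the vSphere Cloud Provider under the datacenter
$ govc folder.create /<datacenter>/vm/ocp39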
2.4. VMware vSphere Cloud Provider (VCP)
OpenShift Container Platform can be configured to access VMware vSphere VMDK Volumes, including using VMware vSphere VMDK Volumes as persistent storage for application data.
The vSphere Cloud Provider steps below are for manual configuration. The OpenShift Ansible installer configures the cloud provider automatically when the proper variables are assigned at runtime. For more information on configuring masters and nodes, see Appendix C, Configuring Masters.
The vSphere Cloud Provider allows using vSphere managed storage within OpenShift Container Platform and supports:
- Volumes
- Persistent Volumes
- Storage Classes and provisioning of volumes.
2.4.1. Enabling VCP
To enable VMware vSphere cloud provider for OpenShift Container Platform:
- Create a VM folder and move OpenShift Container Platform Node VMs to this folder.
Verify that the VM node names comply with the regex:
[a-z](([-0-9a-z]+)?[0-9a-z])?([a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*

Important: VM names cannot:
- Begin with numbers
- Have any capital letters
- Have any special characters except ‘-’
- Be shorter than three characters or longer than 63 characters (a validation sketch follows below)
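As a quick check, candidate names can be validated locally before provisioning. A small bash sketch; the validate_name helper and the sample names are illustrative only:

$ validate_name() {
    # Enforce the 3-63 character length limits
    [ ${#1} -ge 3 ] && [ ${#1} -le 63 ] || { echo "$1: INVALID (length)"; return; }
    # Enforce the pattern: lowercase alphanumerics and '-', starting with a letter
    echo "$1" | grep -qE '^[a-z](([-0-9a-z]+)?[0-9a-z])?([a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*$' \
      && echo "$1: valid" || echo "$1: INVALID (pattern)"
  }
$ for name in master-0 Infra_1 ok; do validate_name "$name"; done
master-0: valid
Infra_1: INVALID (pattern)
ok: INVALID (length)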
Set the disk.EnableUUID parameter to TRUE for each Node VM. This ensures that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly. For every virtual machine node in the cluster, follow the steps below using the govc tool:
- Download and install govc:
$ curl -LO https://github.com/vmware/govmomi/releases/download/v0.15.0/govc_linux_amd64.gz
$ gunzip govc_linux_amd64.gz
$ chmod +x govc_linux_amd64
$ cp govc_linux_amd64 /usr/bin/govc
- Set up the GOVC environment:
$ export GOVC_URL='vCenter IP OR FQDN'
$ export GOVC_USERNAME='vCenter User'
$ export GOVC_PASSWORD='vCenter Password'
$ export GOVC_INSECURE=1
- Find the Node VM paths:
$ govc ls /<datacenter>/vm/<vm-folder-name>
- Set disk.EnableUUID to true for all VMs:
$ for VM in $(govc ls /<datacenter>/vm/<vm-folder-name>); do
    govc vm.change -e="disk.enableUUID=1" -vm="$VM"
done
If Red Hat OpenShift Container Platform node VMs are created from a template VM, then disk.EnableUUID=1 can be set on the template VM. VMs cloned from this template inherit this property.
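For example, to set it once on the template named in the inventory later in this chapter (the folder path is a placeholder):

$ govc vm.change -e="disk.enableUUID=1" -vm="/<datacenter>/vm/ocp-server-template"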
Create and assign roles to the vSphere Cloud Provider user and vSphere entities. vSphere Cloud Provider requires the following privileges to interact with vCenter. See the vSphere Documentation Center for steps to create a custom role, user, and role assignment.
| Roles | Privileges | Entities | Propagate to Children |
|---|---|---|---|
| manage-k8s-node-vms | Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete | Cluster, Hosts, VM Folder | Yes |
| manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View | Datastore | No |
| k8s-system-read-and-spbm-profile-view | StorageProfile.View, System.Anonymous, System.Read, System.View | vCenter | No |
| ReadOnly | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No |
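The same roles can be created and assigned with govc instead of the web client. A sketch for the manage-k8s-volumes row above; the principal and datastore path are placeholders:

# Create the role with its privileges (repeat for the other roles in the table)
$ govc role.create manage-k8s-volumes \
    Datastore.AllocateSpace Datastore.FileManagement \
    System.Anonymous System.Read System.View

# Assign the role to the vSphere Cloud Provider user on the datastore
$ govc permissions.set -principal <vcp-user>@vsphere.local \
    -role manage-k8s-volumes /<datacenter>/datastore/<datastore-name>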
2.4.2. The VCP Configuration File
Configuring Red Hat OpenShift Container Platform for VMware vSphere requires the /etc/origin/cloudprovider/vsphere.conf file on each node.
If the file does not exist, create it, and add the following:
[Global]
user = "username" 1
password = "password" 2
server = "10.10.0.2" 3
port = "443" 4
insecure-flag = "1" 5
datacenter = "datacenter-name" 6
datastore = "datastore-name" 7
working-dir = "vm-folder-path" 8
vm-uuid = "vm-uuid" 9
[Disk]
scsicontrollertype = pvscsi
network = "VM Network" 10

1. vCenter username for the vSphere cloud provider.
2. vCenter password for the specified user.
3. IP address or FQDN of the vCenter server.
4. (Optional) Port number for the vCenter server. Defaults to port 443.
5. Set to 1 if the vCenter uses a self-signed certificate.
6. Name of the data center on which the Node VMs are deployed.
7. Name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. If the datastore is located in a storage folder or is a member of a datastore cluster, specify the full datastore path. Verify that the vSphere Cloud Provider user has the read privilege set on the datastore cluster or storage folder in order to find the datastore.
8. (Optional) The vCenter VM folder path in which the node VMs are located. It can be set to an empty path (working-dir = "") if the Node VMs are located in the root VM folder. The syntax resembles: /<datacenter>/vm/<folder-name>/
9. (Optional) VM instance UUID of the Node VM. It can be set to empty (vm-uuid = ""). If left empty, it is retrieved from the /sys/class/dmi/id/product_serial file on the virtual machine (requires root access).
10. The VM network portgroup to mark for the internal address of the node.
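If the instance UUID is needed for callout 9, it can be read from the node itself, as the callout describes; for example:

$ ssh master-0 'sudo cat /sys/class/dmi/id/product_serial'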
2.5. Docker Volume
During the installation of Red Hat OpenShift Container Platform, the VMware instances created for RHOCP should include several VMDK volumes to ensure that the various OpenShift directories do not fill up the disk or cause disk contention in the /var partition.
Container images are stored locally on the nodes running Red Hat OpenShift Container Platform pods. The container-storage-setup script uses the /etc/sysconfig/docker-storage-setup file to specify the storage configuration.
The /etc/sysconfig/docker-storage-setup file must be created before starting the docker service, otherwise the storage is configured using a loopback device. The container storage setup is performed on all hosts running containers: masters, infrastructure nodes, and application nodes.
The optional VM deployment in Appendix B, Deploying a working vSphere Environment (Optional) takes care of Docker and other volume creation in addition to other machine preparation tasks like installing chrony, open-vm-tools, etc.
# cat /etc/sysconfig/docker-storage-setup
DEVS="/dev/sdb"
VG="docker-vol"
DATA_SIZE="95%VG"
STORAGE_DRIVER=overlay2
CONTAINER_ROOT_LV_NAME="dockerlv"
CONTAINER_ROOT_LV_MOUNT_PATH="/var/lib/docker"

The value of the docker volume size should be at least 15 GB.
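Once the file is in place, the storage can be configured and verified before the docker service is started. A minimal sketch, run as root on each node; the container-storage-setup command is provided by the docker/container-storage-setup packages on RHEL 7:

# container-storage-setup
# vgs docker-vol
# lvs
# systemctl enable docker
# systemctl start docker

The vgs and lvs output should show the docker-vol volume group and the dockerlv logical volume defined in the file above.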
2.6. etcd Volume
A VMDK volume should be created on the master instances for the storage of /var/lib/etcd. Storing etcd on its own volume provides a benefit similar to protecting /var, and more importantly provides the ability to snapshot the volume when performing etcd maintenance.
The value of the etcd volume size should be at least 25 GB.
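A minimal sketch of formatting and mounting the etcd volume, assuming the extra VMDK appears as /dev/sdd as in the master node lsblk output in Section 2.15:

# mkfs -t xfs /dev/sdd
# mkdir -p /var/lib/etcd
# echo "/dev/sdd /var/lib/etcd xfs defaults 0 0" >> /etc/fstab
# mount /var/lib/etcd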
2.7. OpenShift Local Volume
A VMDK volume should be created for the directory of /var/lib/origin/openshift.local.volumes that is used with the perFSGroup setting at installation and with the mount option of gquota. These settings and volumes set a quota to ensure that containers cannot grow to an unreasonable size.
The value of OpenShift local volume size should be at least 30 GB.
# mkfs -t xfs /dev/sdc
# vi /etc/fstab

/dev/mapper/rhel-root                     /                                        xfs  defaults 0 0
UUID=8665acc0-22ee-4e45-970c-ae20c70656ef /boot                                    xfs  defaults 0 0
/dev/sdc                                  /var/lib/origin/openshift.local.volumes  xfs  gquota   0 0
2.8. Execution Environment
Red Hat Enterprise Linux 7 is the only operating system supported by the Red Hat OpenShift Container Platform installer; therefore, the provider infrastructure deployment and the installer must be run from one of the following locations:
- Local workstation/server/virtual machine
- Bastion instance
- Jenkins CI/CD build environment
This reference architecture focuses on deploying and installing Red Hat OpenShift Container Platform from a local workstation/server/virtual machine. Jenkins CI/CD and bastion instances are out of scope.
2.9. Preparations
2.9.1. Deployment host
2.9.1.1. Creating an SSH Keypair for Ansible
The VMware infrastructure requires an SSH key on the VMs for Ansible’s use.
The following task should be performed on the workstation/server/virtual machine where the Ansible playbooks are launched.
$ ssh-keygen -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:aaQHUf2rKHWvwwl4RmYcmCHswoouu3rdZiSH/BYgzBg root@ansible-test
The key's randomart image is:
+---[RSA 2048]----+
| .. o=.. |
|E ..o.. . |
| * . .... . |
|. * o +=. . |
|.. + o.=S . |
|o + =o= . . |
|. . * = = + |
|... . B . = . |
+----[SHA256]-----+
Add the SSH public key to the deployed virtual machines via ssh-copy-id, or to the template prior to deployment.
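A short sketch of distributing the key to every instance with ssh-copy-id; the host names match the inventory later in this chapter, and cloud-user is the ansible_ssh_user value from that inventory:

$ for host in master-0 master-1 master-2 \
              infra-0 infra-1 infra-2 \
              app-0 app-1 app-2 haproxy-0; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub cloud-user@$host
done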
2.9.1.2. Enable Required Repositories and Install Required Playbooks
Register with Red Hat Subscription Manager and activate the required yum repositories:
$ subscription-manager register
$ subscription-manager attach \
--pool {{ pool_id }}
$ subscription-manager repos \
--disable="*" \
--enable=rhel-7-server-rpms \
--enable=rhel-7-server-extras-rpms \
--enable=rhel-7-server-ansible-2.4-rpms \
--enable=rhel-7-server-ose-3.9-rpms
$ yum install -y \
atomic-openshift-utils

2.9.1.3. Configure Ansible
Ansible is installed on the deployment instance to perform the registration, the installation of packages, and the deployment of the Red Hat OpenShift Container Platform environment on the master and node instances.
Before running playbooks, it is important to create an ansible.cfg file to reflect the deployed environment:
$ cat ~/ansible.cfg
[defaults]
forks = 20
host_key_checking = False
roles_path = roles/
gathering = smart
remote_user = root
private_key = ~/.ssh/id_rsa
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 600
log_path = $HOME/ansible.log
nocows = 1
callback_whitelist = profile_tasks
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=900s -o GSSAPIAuthentication=no -o PreferredAuthentications=publickey
control_path = %(directory)s/%%h-%%r
pipelining = True
timeout = 10
[persistent_connection]
connect_timeout = 30
connect_retries = 30
connect_interval = 1

2.9.1.4. Prepare the Inventory File
This section provides an example inventory file required for an advanced installation of Red Hat OpenShift Container Platform.
The inventory file contains both variables and instances used for the configuration and deployment of Red Hat OpenShift Container Platform. In the example below, several values must be changed to reflect the deployed environment from the previous chapter.
The openshift_cloudprovider_vsphere_* values are required for Red Hat OpenShift Container Platform to be able to create vSphere resources such as VMDKs on datastores for persistent volumes.
$ cat /etc/ansible/hosts
[OSEv3:children]
ansible
masters
infras
apps
etcd
nodes
lb

[OSEv3:vars]
ansible_ssh_user=cloud-user
deployment_type=openshift-enterprise
debug_level=2
openshift_vers=v3_9
openshift_enable_service_catalog=false
ansible_become=true

# See https://access.redhat.com/solutions/3480921
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

console_port=8443
openshift_debug_level="{{ debug_level }}"
openshift_node_debug_level="{{ node_debug_level | default(debug_level, true) }}"
openshift_master_debug_level="{{ master_debug_level | default(debug_level, true) }}"
openshift_master_ldap_ca_file=/home/cloud-user/mycert.crt
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]
openshift_hosted_router_replicas=3
openshift_hosted_registry_replicas=1
openshift_master_cluster_method=native
openshift_node_local_quota_per_fsgroup=512Mi
openshift_cloudprovider_kind=vsphere
openshift_cloudprovider_vsphere_username="administrator@vsphere.local"
openshift_cloudprovider_vsphere_password="password"
openshift_cloudprovider_vsphere_host="vcenter.example.com"
openshift_cloudprovider_vsphere_datacenter=datacenter
openshift_cloudprovider_vsphere_cluster=cluster
openshift_cloudprovider_vsphere_resource_pool=ocp39
openshift_cloudprovider_vsphere_datastore="datastore"
openshift_cloudprovider_vsphere_folder=stretch-ocp39
openshift_cloudprovider_vsphere_template="ocp-server-template"
openshift_cloudprovider_vsphere_vm_network="VM Network"
openshift_cloudprovider_vsphere_vm_netmask="255.255.255.0"
openshift_cloudprovider_vsphere_vm_gateway="192.168.1.1"
openshift_cloudprovider_vsphere_vm_dns="192.168.2.250"
default_subdomain=example.com
openshift_master_cluster_hostname=openshift.example.com
openshift_master_cluster_public_hostname=openshift.example.com
openshift_master_default_subdomain=apps.example.com
os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
osm_use_cockpit=true

# red hat subscription name and password
rhsub_user=username
rhsub_pass=password
rhsub_pool=8a85f9815e9b371b015e9b501d081d4b

#registry
openshift_public_hostname=openshift.example.com

[ansible]
localhost

[masters]
master-0 openshift_node_labels="{'region': 'master'}" ipv4addr=10.x.y.103
master-1 openshift_node_labels="{'region': 'master'}" ipv4addr=10.x.y.104
master-2 openshift_node_labels="{'region': 'master'}" ipv4addr=10.x.y.105

[infras]
infra-0 openshift_node_labels="{'region': 'infra'}" ipv4addr=10.x.y.100
infra-1 openshift_node_labels="{'region': 'infra'}" ipv4addr=10.x.y.101
infra-2 openshift_node_labels="{'region': 'infra'}" ipv4addr=10.x.y.102

[apps]
app-0 openshift_node_labels="{'region': 'app'}" ipv4addr=10.x.y.106
app-1 openshift_node_labels="{'region': 'app'}" ipv4addr=10.x.y.107
app-2 openshift_node_labels="{'region': 'app'}" ipv4addr=10.x.y.108

[etcd]
master-0
master-1
master-2

[lb]
haproxy-0 openshift_node_labels="{'region': 'haproxy'}" ipv4addr=10.x.y.200

[nodes]
master-0 openshift_node_labels="{'region': 'master'}" openshift_schedulable=true openshift_hostname=master-0
master-1 openshift_node_labels="{'region': 'master'}" openshift_schedulable=true openshift_hostname=master-1
master-2 openshift_node_labels="{'region': 'master'}" openshift_schedulable=true openshift_hostname=master-2
infra-0 openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra-0
infra-1 openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra-1
infra-2 openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra-2
app-0 openshift_node_labels="{'region': 'app'}" openshift_hostname=app-0
app-1 openshift_node_labels="{'region': 'app'}" openshift_hostname=app-1
app-2 openshift_node_labels="{'region': 'app'}" openshift_hostname=app-2
For a downloadable copy of this inventory file, see the following repository.
2.10. vSphere VM Instance Requirements for RHOCP
This reference environment should consist of the following instances:
- three master instances
- three infrastructure instances
- three application instances
- one load balancer instance
2.10.1. Virtual Machine Hardware Requirements
Table 2.1. Virtual Machine Node Requirements
| Node Type | Hardware |
|---|---|
| Master | 2 vCPU |
| 16GB RAM | |
| 1 x 60GB - OS RHEL 7.4 | |
| 1 x 40GB - Docker volume | |
| 1 x 40GB - EmptyDir volume | |
| 1 x 40GB - etcd volume | |
| App or Infra Node | 2 vCPU |
| 8GB RAM | |
| 1 x 60GB - OS RHEL 7.4 | |
| 1 x 40GB - Docker volume | |
| 1 x 40GB - EmptyDir volume |
The master instances should contain three extra disks, used for Docker storage, etcd, and OpenShift local volumes. The application and infrastructure node instances use their additional disks for Docker storage and OpenShift local volumes.
etcd requires that an odd number of cluster members exist. Three masters were chosen to support high availability and etcd clustering. Three infrastructure instances allow for minimal to zero downtime for applications running in the OpenShift environment. Application instances can range from one to many depending on the requirements of the organization.
See Appendix B, Deploying a working vSphere Environment (Optional) for steps on deploying the vSphere environment.
Infra and app node instances can easily be added after the initial installation.
2.11. Set up DNS for Red Hat OpenShift Container Platform
The installation process for Red Hat OpenShift Container Platform depends on a reliable name service that contains an address record for each of the target instances.
An example DNS configuration is listed below:
Using /etc/hosts is not valid; a proper DNS service must exist.
$ORIGIN apps.example.com.
*           A 10.x.y.200
$ORIGIN example.com.
haproxy-0   A 10.x.y.200
infra-0     A 10.x.y.100
infra-1     A 10.x.y.101
infra-2     A 10.x.y.102
master-0    A 10.x.y.103
master-1    A 10.x.y.104
master-2    A 10.x.y.105
app-0       A 10.x.y.106
app-1       A 10.x.y.107
app-2       A 10.x.y.108
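Once the zone is loaded, resolution can be spot-checked with dig; for example:

$ dig +short master-0.example.com
10.x.y.103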
Table 2.2. Subdomain for RHOCP Network

| Domain Name | Description |
|---|---|
| example.com | All interfaces on the internal only network |
Table 2.3. Sample FQDNs

| Fully Qualified Name | Description |
|---|---|
| master-0.example.com | Name of the network interface on the master-0 instance |
| infra-0.example.com | Name of the network interface on the infra-0 instance |
| app-0.example.com | Name of the network interface on the app-0 instance |
| openshift.example.com | Name of the Red Hat OpenShift Container Platform console, using the address of the haproxy-0 instance |
2.11.1. Confirm Instance Deployment
After the three master, three infrastructure, and three application instances have been created in vCenter, verify their creation via:
$ govc ls /<datacenter>/vm/<folder>/
Using the values provided by the command, update the DNS master zone.db file with the appropriate IP addresses, as shown above. Do not proceed to the next section until DNS resolution is configured.
Attempt to ssh into one of the Red Hat OpenShift Container Platform instances now that the ssh identity is set up.
If everything is working properly, no password prompt appears.
$ ssh master-1
2.12. Create and Configure an HAProxy VMware vSphere Instance
If an organization currently does not have a load balancer in place, then HAProxy can be deployed. A load balancer such as HAProxy provides a single view of the Red Hat OpenShift Container Platform master services for the applications. The master services and the applications use different TCP ports, so a single TCP load balancer can handle all of the inbound connections.
The load balanced DNS name that developers use must be in a DNS A record pointing to the haproxy server before installation. For applications, a wildcard DNS entry must point to the haproxy host.
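Both records can be verified from the deployment host before installation; a quick sketch using the example addresses from this chapter, where any name under apps.example.com should resolve to the HAProxy address:

$ dig +short openshift.example.com
10.x.y.200
$ dig +short test.apps.example.com
10.x.y.200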
The configuration of the HAProxy instance is completed within the subsequent steps: the deployment host configures the Red Hat subscriptions for all the instances, and the Red Hat OpenShift Container Platform installer automatically configures the HAProxy instance based upon the information found within the Red Hat OpenShift Container Platform inventory file.
2.13. Enable Required Repositories and Packages for the OpenShift Infrastructure
The optional VM deployment in Appendix B, Deploying a working vSphere Environment (Optional) takes care of this, along with all volume creation and other machine preparation tasks such as installing chrony, open-vm-tools, etc.
Ensure connectivity to all instances from the deployment instance via:
$ ansible all -m ping

Once connectivity to all instances has been established, register the instances via Red Hat Subscription Manager. This is accomplished using either credentials or an activation key.
Using credentials, the ansible command is as follows:
$ ansible all -m command -a "subscription-manager register --username <user> --password '<password>'"
Using an activation key, the ansible command is as follows:

$ ansible all -m command -a "subscription-manager register --org=<org_id> --activationkey=<keyname>"

where the following options are used:

- -m: the module to use
- -a: the module arguments
Once all the instances have been successfully registered, enable all the required RHOCP repositories on all the instances via:
$ ansible all -m command -a "subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-ose-3.9-rpms" \
--enable="rhel-7-fast-datapath-rpms" \
--enable="rhel-7-server-ansible-2.4-rpms""

2.14. OpenShift Authentication
Red Hat OpenShift Container Platform provides the ability to use many different authentication platforms. For this reference architecture, LDAP is the preferred authentication mechanism. A listing of other authentication options is available in Configuring Authentication and User Agent.
When configuring LDAP as the authentication provider the following parameters can be added to the ansible inventory. An example is shown below.
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=openshift,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=openshift,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]

If using LDAPS, all the masters must have the relevant ca.crt file for LDAP in place prior to the installation; otherwise, the installation fails. The file should be placed locally on the deployment instance and referenced within the inventory file via the variable openshift_master_ldap_ca_file.
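Before installation, it can be worth confirming that the bind DN and filter actually work against the directory. A sketch using ldapsearch from the openldap-clients package, with the host, bind DN, password, and filter taken from the example provider definition above:

$ ldapsearch -x -H ldap://ldap.example.com \
    -D 'uid=admin,cn=users,cn=accounts,dc=openshift,dc=com' \
    -w ldapadmin \
    -b 'cn=users,cn=accounts,dc=openshift,dc=com' \
    '(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)' uid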
2.15. Instance Verification
It can be useful to check for potential issues or misconfigurations in the instances before continuing the installation process. Connect to every instance using the deployment host and verify the disks are properly created and mounted.
$ ssh deployment.example.com
$ ssh <instance>
$ lsblk
$ sudo journalctl
$ free -m
$ sudo yum repolist
where <instance> is, for example, master-0.example.com.
For reference, below is example output of lsblk for the master nodes.
$ lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0   60G  0 disk
├─sda1                     8:1    0  500M  0 part /boot
├─sda2                     8:2    0 39.5G  0 part
│ ├─rhel-root            253:0    0   55G  0 lvm  /
│ └─rhel-swap            253:1    0  3.9G  0 lvm
└─sda3                     8:3    0   20G  0 part
  └─rhel-root            253:0    0   55G  0 lvm  /
sdb                        8:16   0   40G  0 disk
└─sdb1                     8:17   0   40G  0 part
  └─docker--vol-dockerlv 253:2    0   40G  0 lvm  /var/lib/docker
sdc                        8:32   0   40G  0 disk /var/lib/origin/openshift.local.volumes
sdd                        8:48   0   40G  0 disk /var/lib/etcd
For reference, below is example output of lsblk for the infra and app nodes.
$ lsblk
NAME                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0   60G  0 disk
├─sda1                     8:1    0  500M  0 part /boot
├─sda2                     8:2    0 39.5G  0 part
│ ├─rhel-root            253:0    0   55G  0 lvm  /
│ └─rhel-swap            253:1    0  3.9G  0 lvm
└─sda3                     8:3    0   20G  0 part
  └─rhel-root            253:0    0   55G  0 lvm  /
sdb                        8:16   0   40G  0 disk
└─sdb1                     8:17   0   40G  0 part
  └─docker--vol-dockerlv 253:2    0   40G  0 lvm  /var/lib/docker
sdc                        8:32   0   40G  0 disk /var/lib/origin/openshift.local.volumes
sdd                        8:48   0  300G  0 disk
The docker-vol LVM volume group may not be configured on sdb on all nodes at this stage, as this step is completed via the prerequisites playbook in the following section.
2.16. Prior to Ansible Installation
Prior to Chapter 4, Operational Management, create DRS anti-affinity rules to ensure maximum availability for the cluster.
- Open the VMware vCenter web client, select the cluster, and choose Configure.
- Under Configuration, select VM/Host Rules.
- Click Add, and create a rule to keep the masters separate.
The following VMware documentation goes over creating and configuring anti-affinity rules in depth.
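If a command-line approach is preferred, govc can create an equivalent rule; a sketch, noting that cluster.rule.create requires a newer govc release than the v0.15.0 installed earlier, so treat the command and flags as assumptions to verify against govc cluster.rule.create -h:

# Create a VM anti-affinity rule keeping the three masters on separate hosts
$ govc cluster.rule.create -cluster /<datacenter>/host/<cluster> \
    -name master-anti-affinity -enable -anti-affinity \
    master-0 master-1 master-2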
Lastly, set the Latency Sensitivity of all of the VMs created to High, to apply additional tuning recommended by VMware for latency-sensitive workloads as described here.
- Open the VMware vCenter web client and under the virtual machines summary tab, in the 'VM Hardware' box select 'Edit Settings'.
- Under 'VM Options', expand 'Advanced'.
- Select the 'Latency Sensitivity' drop-down and select 'High' (a scripted alternative is sketched below).
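This can also be scripted. A sketch assuming a recent govc release in which vm.change supports a -latency flag (verify with govc vm.change -h before relying on it):

$ for VM in $(govc ls /<datacenter>/vm/<vm-folder-name>); do
    govc vm.change -latency high -vm "$VM"
done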
Figure 2.1. VMware High Latency
2.17. Red Hat OpenShift Container Platform Prerequisites Playbook
The Red Hat OpenShift Container Platform Ansible installation provides a playbook to ensure all prerequisites are met prior to the installation of Red Hat OpenShift Container Platform. This includes steps such as registering all the nodes with Red Hat Subscription Manager and setting up Docker storage on the Docker volumes.
Via the ansible-playbook command on the deployment instance, ensure all the prerequisites are met using prerequisites.yml playbook:
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

In the event that OpenShift fails to install or the prerequisites playbook fails, follow the steps in Appendix F, Troubleshooting Ansible by Red Hat to troubleshoot Ansible.