Chapter 2. Preparing Instances for Red Hat OpenShift Container Platform
A successful deployment of Red Hat OpenShift Container Platform requires a number of prerequisites. These include deploying instances in Red Hat Virtualization and performing the required configuration steps prior to the actual installation of Red Hat OpenShift Container Platform using Ansible. The subsequent sections detail the prerequisites and configuration changes required for a Red Hat OpenShift Container Platform on Red Hat Virtualization environment.
2.1. Bastion Instance
The bastion instance itself may have a relatively minimal installation of Red Hat Enterprise Linux 7. It should meet recommended requirements for the operating system itself with enough spare disk space to hold downloaded QCOW2 images (around 700 MiB).
The Ansible commands to set up the RHV instances and OpenShift may be run as a normal user. Several of the following prerequisite tasks, however, require root access, preferably through sudo.
2.1.1. Generate the SSH Key Pair
Ansible works by communicating with target servers via the Secure Shell (SSH) protocol. SSH keys are used in place of passwords in the Red Hat OpenShift Container Platform installation process. The public key will be installed on Red Hat Virtualization instances by Cloud-Init.
Employ an ssh-agent on the bastion instance to avoid repeatedly being asked for the passphrase.
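If the key is protected by a passphrase, a minimal sketch of starting an agent for the current shell session and loading the key (assuming the default key path used below) is:
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa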
SSH Key Generation
If SSH keys do not already exist, create them. Generate an RSA key pair with a blank passphrase by typing the following at a shell prompt:
$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
A message similar to the following prints to indicate the keys are created successfully.
Your identification has been saved in ~/.ssh/id_rsa.
Your public key has been saved in ~/.ssh/id_rsa.pub.
The key fingerprint is:
e7:97:c7:e2:0e:f9:0e:fc:c4:d7:cb:e5:31:11:92:14 USER@bastion.example.com
The key's randomart image is:
+--[ RSA 2048]----+
|             E.  |
|            . .  |
|             o . |
|              . .|
|        S . .    |
|         + o o ..|
|          * * +oo|
|           O +..=|
|            o* o.|
+-----------------+
2.1.2. Subscribe the Bastion
If the bastion instance is not already registered, do so, then list available subscription pools:
$ sudo subscription-manager register
$ sudo subscription-manager list --available
Note the Pool IDs of available subscriptions for both Red Hat Virtualization and Red Hat OpenShift Container Platform, then attach those pools:
$ sudo subscription-manager attach --pool <RHV Pool ID>
$ sudo subscription-manager attach --pool <RHOCP Pool ID>
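As an optional verification step, confirm that both pools are now attached by listing the consumed subscriptions:
$ sudo subscription-manager list --consumed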
Ensure the following repositories are enabled, and no others.
$ sudo subscription-manager repos --disable=*
$ sudo subscription-manager repos \
--enable=rhel-7-server-rpms \
--enable=rhel-7-server-extras-rpms \
--enable=rhel-7-server-ose-3.9-rpms \
  --enable=rhel-7-server-rhv-4-mgmt-agent-rpms
2.1.3. Install Packages
Install the following packages to enable the installation and deployment of Red Hat OpenShift Container Platform:
$ sudo yum install -y \
ansible \
atomic-openshift-utils \
git \
python-ovirt-engine-sdk4 \
  ovirt-ansible-roles
2.1.4. (Optional) Clone OpenShift Ansible Contrib GitHub Repository
Example variable and inventory files relevant to this reference architecture may be found in the OpenShift Ansible Contrib GitHub Repository at https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/rhv-ansible
Code and examples in the OpenShift Ansible Contrib GitHub Repository are unsupported by Red Hat.
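To use the examples locally, the repository can be cloned to the bastion; the files for this reference architecture then reside under reference-architecture/rhv-ansible inside the clone:
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/rhv-ansible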
2.2. Configure Ansible
Before running playbooks, modify ansible.cfg to reflect the deployed environment:
$ sudo vi /etc/ansible/ansible.cfg
The code block below implements the following changes:
- forks is increased to twenty from the default of five. This allows tasks on the ten instances in this reference architecture to run simultaneously.
- host_key_checking is set to False to allow Ansible commands to run on instances without first manually logging in to add them to the ~/.ssh/known_hosts file. An alternative is to use ssh-keyscan to generate the known-hosts entries, as shown in the example after the configuration block below.
- remote_user is changed to root from the default of whichever user calls Ansible.
- gathering is changed from the default implicit to smart to enable caching of host facts.
- Logging is activated and retry files are disabled to provide more helpful debugging of issues during Ansible runs.
- A vault_password_file is specified for protected values like passwords. See the following section for more information on setting up an Ansible Vault file.
[defaults]
forks = 20
host_key_checking = False
remote_user = root
gathering = smart
log_path=./ansible.log
retry_files_enabled=False
vault_password_file=~/.test_vault_pw
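As the alternative mentioned above, ssh-keyscan can pre-populate the known_hosts file instead of disabling host key checking. A minimal sketch, assuming the instance host names already resolve in DNS:
$ ssh-keyscan -t rsa master0.example.com master1.example.com master2.example.com >> ~/.ssh/known_hosts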
2.3. Variables and Static Inventory
According to the Red Hat OpenShift Container Platform Advanced Installation Guide, cluster variables should be configured in the system inventory (/etc/ansible/hosts) within the [OSEv3:vars] section.
The oVirt Ansible roles may be configured in a separate variable file and included during ansible-playbook runs using the -e flag; however, it is also possible to include these variables in the same inventory file under a group created just for localhost.
The following snippets of the inventory for this reference architecture define localhost as part of a group called workstation:
[workstation]
localhost ansible_connection=local
Variables assigned to localhost include:
Credentials for the RHV Engine
In the following snippet, the credentials for the RHV engine are assigned to special variables which are then encrypted by Ansible Vault because they contain sensitive information like administrative passwords. Creation of this Ansible Vault file is explained in the next section.
[workstation:vars]
# RHV Engine
engine_url="{{ vault_engine_url }}"
engine_user="{{ vault_engine_user }}"
engine_password="{{ vault_engine_password }}"
# CA file copied from engine:/etc/pki/ovirt-engine/ca.pem
# path is relative to playbook directory
engine_cafile=../ca.pem
Guest Image URL and Path
# QCOW2 KVM Guest Image
#qcow_url=https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2c
qcow_url=https://access.cdn.redhat.com//content/origin/files/XXXX/rhel-server-7.5-x86_64-kvm.qcow2?_auth_=XXXX
template_name=rhel75
image_path="{{ lookup('env', 'HOME') }}/Downloads/{{ template_name }}.qcow2"
RHV Cluster Information and SSH Key
# RHV VM Cluster Info
rhv_cluster=Default
rhv_data_storage=vmstore
root_ssh_key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
Some variables pertain to all hosts, including localhost and the OpenShift instances. They belong in a special group called all:
[all:vars]
app_dns_prefix=apps
public_hosted_zone=example.com
load_balancer_hostname=lb.{{public_hosted_zone}}
openshift_master_cluster_hostname="{{ load_balancer_hostname }}"
openshift_master_cluster_public_hostname=openshift-master.{{ public_hosted_zone }}
openshift_master_default_subdomain="{{ app_dns_prefix }}.{{ public_hosted_zone }}"
openshift_public_hostname="{{openshift_master_cluster_public_hostname}}"
OpenShift specific variables belong in the previously mentioned [OSEv3:vars] section:
[OSEv3:vars]
# General variables
ansible_ssh_user=root
debug_level=2
deployment_type=openshift-enterprise
openshift_debug_level="{{ debug_level }}"
openshift_deployment_type="{{ deployment_type }}"
openshift_master_cluster_method=native
openshift_node_debug_level="{{ node_debug_level | default(debug_level, true) }}"
openshift_release=3.9
As Red Hat Virtualization does not yet provide dynamic storage allocation capabilities to Red Hat OpenShift Container Platform, the Service Catalog must be disabled at install time:
openshift_enable_service_catalog=False
For easier readability, the inventory is broken down further into logical sections:
# Docker
container_runtime_docker_storage_setup_device=/dev/vdb
container_runtime_docker_storage_type=overlay2
openshift_docker_use_system_container=False
openshift_node_local_quota_per_fsgroup=512Mi
openshift_use_system_containers=False
# Pod Networking
os_sdn_network_plugin_name=redhat/openshift-ovs-networkpolicy
Here, an external NFS server is used as the backing store for the OpenShift Registry.
A value is required for openshift_hosted_registry_storage_host.
# Registry
openshift_hosted_registry_replicas=1
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_selector='region=infra'
openshift_hosted_registry_storage_host=
openshift_hosted_registry_storage_nfs_directory=/var/lib/exports
openshift_hosted_registry_storage_volume_name=registryvol
openshift_hosted_registry_storage_volume_size=20Gi
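If the export does not already exist on the external NFS server, it can be prepared along these lines. This is a minimal sketch run on the NFS host itself; the directory matches the nfs_directory and volume_name values above, while the export options are assumptions to be adjusted for the actual server:
$ sudo mkdir -p /var/lib/exports/registryvol
$ echo '/var/lib/exports/registryvol *(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
$ sudo exportfs -ra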
# Red Hat Subscription Management
rhsub_pool=Red Hat OpenShift Container Platform*
rhsub_user="{{ vault_rhsub_user }}"
rhsub_password="{{ vault_rhsub_password }}"
The embedded JSON below was converted from YAML. The YAML version of the inventory is also available in the OpenShift Ansible Contrib GitHub repository.
# Load Balancer Config
openshift_loadbalancer_additional_frontends=[{"name":"apps-http","option":"tcplog","binds":["*:80"],"default_backend":"apps-http"},{"name":"apps-https","option":"tcplog","binds":["*:443"],"default_backend":"apps-http"}]
openshift_loadbalancer_additional_backends=[{"name":"apps-http","balance":"source","servers":[{"name":"infra0","address":"{{ groups['infras'].0 }}:80","opts":"check"},{"name":"infra1","address":"{{ groups['infras'].1 }}:80","opts":"check"},{"name":"infra2","address":"{{ groups['infras'].2 }}:80","opts":"check"}]},{"name":"apps-https","balance":"source","servers":[{"name":"infra0","address":"{{ groups['infras'].0 }}:443","opts":"check"},{"name":"infra1","address":"{{ groups['infras'].1 }}:443","opts":"check"},{"name":"infra2","address":"{{ groups['infras'].2 }}:443","opts":"check"}]}]
Finally, the host names for the OpenShift instances are grouped and provided below:
[OSEv3:children]
nodes
masters
etcd
lb
[masters]
master0.example.com
master1.example.com
master2.example.com
[etcd]
master0.example.com
master1.example.com
master2.example.com
[infras]
infra0.example.com
infra1.example.com
infra2.example.com
[lb]
lb.example.com
[nodes]
master0.example.com openshift_node_labels="{'region': 'master'}" openshift_hostname=master0.example.com
master1.example.com openshift_node_labels="{'region': 'master'}" openshift_hostname=master1.example.com
master2.example.com openshift_node_labels="{'region': 'master'}" openshift_hostname=master2.example.com
infra0.example.com openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra0.example.com
infra1.example.com openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra1.example.com
infra2.example.com openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra2.example.com
app0.example.com openshift_node_labels="{'region': 'primary'}" openshift_hostname=app0.example.com
app1.example.com openshift_node_labels="{'region': 'primary'}" openshift_hostname=app1.example.com
app2.example.com openshift_node_labels="{'region': 'primary'}" openshift_hostname=app2.example.com
For the complete inventory, see https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/rhv-ansible/example/inventory
2.4. Ansible Vault
To avoid leaving secret passwords for critical infrastructure in clear-text on the file system, create an encrypted Ansible Vault variable file for inclusion in the Ansible commands in the next chapter.
To create the encrypted file, for example in ~/vault.yaml, run the following command:
$ ansible-vault create ~/vault.yaml
To edit the newly created vault file, run the following command:
$ ansible-vault edit ~/vault.yaml
This reference architecture employs the following process:
- Identify a variable which requires protection, e.g. engine_password.
- In the vault file, create an entry prefixed by vault_ for the variable, e.g. vault_engine_password.
- In the variables file, assign the vault variable to the regular variable:
engine_password: '{{ vault_engine_password }}'
If a vault file is used to protect some values, it must be included alongside the variables file that references it, for example with -e @~/vault.yaml
2.5. Red Hat Virtualization Engine Credentials and CA
The oVirt Ansible roles responsible for setting up instances in Red Hat Virtualization require credentials in order to connect to the engine and perform operations.
engine_url: https://engine.example.com/ovirt-engine/api
engine_user: admin@internal
engine_password: '{{ vault_engine_password }}'
In order to establish a trusted SSL connection with the engine, the oVirt API requires a copy of the engine's self-signed certificate authority (CA) certificate in PEM format.
To download the certificate from the engine itself, run:
$ curl --output ca.pem 'http://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
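As an optional verification step, inspect the downloaded file to confirm that it is a valid CA certificate:
$ openssl x509 -in ca.pem -noout -subject -dates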
Provide the CA file to oVirt Ansible with the following variable:
engine_cafile: ../ca.pem
The path of the CA file is relative to the playbook where the oVirt roles are run. Typically this is in a subdirectory of the path where the CA file is downloaded, thus the ../. If everything is in the same directory, just use ca.pem instead.
2.6. OpenShift Authentication
Red Hat OpenShift Container Platform provides the ability to use a number of different authentication platforms. For this reference architecture, LDAP is the preferred authentication mechanism. A listing of other authentication options is available at Configuring Authentication and User Agent.
When configuring LDAP as the authentication provider, the following parameters can be added to the Ansible inventory:
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=ocp3,dc=openshift,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.ocp3.example.com/cn=users,cn=accounts,dc=ocp3,dc=openshift,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=ocp3,dc=openshift,dc=com)'}]
If LDAPS is used, all master nodes must have the relevant ca.crt file in place prior to the installation; otherwise, the installation fails.
For more information about the required LDAP parameters, see LDAP Authentication.
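Before the installation, it can be helpful to confirm that the bind credentials, search base, and filter from the snippet above actually return users. The following sketch uses ldapsearch (provided by the openldap-clients package) with the example values from this inventory; substitute the real host and bind password when prompted:
$ ldapsearch -x -H ldap://ldap.ocp3.example.com \
    -D 'uid=admin,cn=users,cn=accounts,dc=ocp3,dc=openshift,dc=com' -W \
    -b 'cn=users,cn=accounts,dc=ocp3,dc=openshift,dc=com' \
    '(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=ocp3,dc=openshift,dc=com)' uid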
2.7. Deploy Red Hat Virtualization Instances
Creation of OpenShift instances in Red Hat Virtualization takes place in two oVirt Ansible roles: oVirt.image-template and oVirt.vm-infra. Documentation and examples of these roles can be found on their respective GitHub project pages.
The following is a playbook which calls the Image Template and VM Infrastructure roles from oVirt Ansible:
ovirt-vm-infra.yaml
---
- name: oVirt 39 all-in-one playbook
hosts: localhost
connection: local
gather_facts: false
roles:
- oVirt.image-template
- oVirt.vm-infra
vars:
# Common
compatibility_version: 4.2
data_center_name: Default
debug_vm_create: true
wait_for_ip: true
vm_infra_wait_for_ip_retries: 10
vm_infra_wait_for_ip_delay: 40
# oVirt Image Template vars
template_cluster: "{{ rhv_cluster }}"
template_memory: 8GiB
template_cpu: 1
template_disk_storage: "{{ rhv_data_storage }}"
template_disk_size: 60GiB
template_nics:
- name: nic1
profile_name: ovirtmgmt
interface: virtio
# Profiles
master_vm:
cluster: "{{ rhv_cluster }}"
template: "{{ template_name }}"
memory: 16GiB
cores: 2
high_availability: true
disks:
- size: 100GiB
storage_domain: "{{ rhv_data_storage }}"
name: docker_disk
interface: virtio
- size: 50GiB
storage_domain: "{{ rhv_data_storage }}"
name: localvol_disk
interface: virtio
state: running
node_vm:
cluster: "{{ rhv_cluster }}"
template: "{{ template_name }}"
memory: 8GiB
cores: 2
disks:
- size: 100GiB
storage_domain: "{{ rhv_data_storage }}"
name: docker_disk
interface: virtio
- size: 50GiB
storage_domain: "{{ rhv_data_storage }}"
name: localvol_disk
interface: virtio
state: running
# Cloud Init Script
cloud_init_script: |
runcmd:
- mkdir -p '/var/lib/origin/openshift.local.volumes'
- /usr/sbin/mkfs.xfs -L localvol /dev/vdc
- sleep "$(($RANDOM % 60))"
- sync
- reboot
mounts:
- [ '/dev/vdc', '/var/lib/origin/openshift.local.volumes', 'xfs', 'defaults,gquota' ]
rh_subscription:
username: {{vault_rhsub_user}}
password: {{vault_rhsub_password}}
add-pool: [{{vault_rhsub_pool}}]
server-hostname: {{vault_rhsub_server}}
enable-repo: ['rhel-7-server-rpms', 'rhel-7-server-extras-rpms', 'rhel-7-fast-datapath-rpms', 'rhel-7-server-ose-3.9-rpms']
disable-repo: []
vms:
# Master VMs
- name: "master0.{{ public_hosted_zone }}"
profile: "{{ master_vm }}"
tag: openshift_master
cloud_init:
host_name: "master0.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
- name: "master1.{{ public_hosted_zone }}"
tag: openshift_master
profile: "{{ master_vm }}"
cloud_init:
host_name: "master1.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
- name: "master2.{{ public_hosted_zone }}"
tag: openshift_master
profile: "{{ master_vm }}"
cloud_init:
host_name: "master2.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
# Infra VMs
- name: "infra0.{{ public_hosted_zone }}"
tag: openshift_infra
profile: "{{ node_vm }}"
cloud_init:
host_name: "infra0.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
- name: "infra1.{{ public_hosted_zone }}"
tag: openshift_infra
profile: "{{ node_vm }}"
cloud_init:
host_name: "infra1.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
- name: "infra2.{{ public_hosted_zone }}"
tag: openshift_infra
profile: "{{ node_vm }}"
cloud_init:
host_name: "infra2.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
# Node VMs
- name: "app0.{{ public_hosted_zone }}"
tag: openshift_node
profile: "{{ node_vm }}"
cloud_init:
host_name: "app0.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
- name: "app1.{{ public_hosted_zone }}"
tag: openshift_node
profile: "{{ node_vm }}"
cloud_init:
host_name: "app1.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
- name: "app2.{{ public_hosted_zone }}"
tag: openshift_node
profile: "{{ node_vm }}"
cloud_init:
host_name: "app2.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
# Load balancer
- name: "lb.{{ public_hosted_zone }}"
tag: openshift_lb
profile: "{{ node_vm }}"
cloud_init:
host_name: "lb.{{ public_hosted_zone }}"
authorized_ssh_keys: "{{ root_ssh_key }}"
custom_script: "{{ cloud_init_script }}"
affinity_groups:
- name: masters_ag
cluster: "{{ rhv_cluster }}"
vm_enforcing: false
vm_rule: negative
vms:
- "master0.{{ public_hosted_zone }}"
- "master1.{{ public_hosted_zone }}"
- "master2.{{ public_hosted_zone }}"
wait: true
- name: infra_ag
cluster: "{{ rhv_cluster }}"
vm_enforcing: false
vm_rule: negative
vms:
- "infra0.{{ public_hosted_zone }}"
- "infra1.{{ public_hosted_zone }}"
- "infra2.{{ public_hosted_zone }}"
wait: true
- name: app_ag
cluster: "{{ rhv_cluster }}"
vm_enforcing: false
vm_rule: negative
vms:
- "app0.{{ public_hosted_zone }}"
- "app1.{{ public_hosted_zone }}"
- "app2.{{ public_hosted_zone }}"
pre_tasks:
- name: Log in to oVirt
ovirt_auth:
url: "{{ engine_url }}"
username: "{{ engine_user }}"
password: "{{ engine_password }}"
ca_file: "{{ engine_cafile | default(omit) }}"
insecure: "{{ engine_insecure | default(true) }}"
tags:
- always
post_tasks:
- name: Logout from oVirt
ovirt_auth:
state: absent
ovirt_auth: "{{ ovirt_auth }}"
tags:
- always
...
Run this playbook from the bastion host as follows:
$ ansible-playbook -e @~/vault.yaml ovirt-vm-infra.yaml
Upon successful completion of this playbook, ten new virtual machines will be running in Red Hat Virtualization, ready for deployment.
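As a quick, optional check that the new instances are reachable over SSH with the key installed by cloud-init, run an ad hoc Ansible ping against the OSEv3 group from the static inventory:
$ ansible OSEv3 -m ping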
2.8. Add Instances to DNS
After deploying the instances, ensure that DNS records for them are created or updated so that the host names used in the inventory resolve correctly.
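For example, forward resolution for an instance and the wildcard application subdomain can be spot-checked from the bastion. The host names follow the example.com values used in this inventory, and "test" is an arbitrary name used only to exercise the wildcard record:
$ dig +short master0.example.com
$ dig +short test.apps.example.com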