Chapter 2. Red Hat OpenShift Container Platform Prerequisites
A successful deployment of Red Hat OpenShift Container Platform requires many prerequisites. These consist of a set of infrastructure and host configuration steps prior to the actual installation of Red Hat OpenShift Container Platform using Ansible. The following sections discuss in detail the prerequisites and configuration changes required for a Red Hat OpenShift Container Platform on Red Hat OpenStack Platform environment.
All the Red Hat OpenStack Platform CLI commands in this reference environment are executed using the openstack CLI commands on the Red Hat OpenStack Platform director node. If using a workstation or laptop to execute these commands instead of the Red Hat OpenStack Platform director node, ensure the python-openstackclient RPM, found within the rhel-7-server-openstack-10-rpms repository, is installed.
Example:
Install python-openstackclient package.
$ sudo subscription-manager repos --enable rhel-7-server-openstack-10-rpms
$ sudo yum install -y python-openstackclient
Verify the package version for python-openstackclient is at least version 3.2.1-1
$ rpm -q python-openstackclient
python-openstackclient-3.2.1-1.el7ost.noarch
To use the bash completion feature, where openstack commands are completed by pressing the <tab> key, run the following command on the host where python-openstackclient has been installed:
$ openstack complete | sudo tee /etc/bash_completion.d/osc.bash_completion > /dev/null
Log out and log back in, and openstack commands can then be autocompleted by pressing the <tab> key, for example for openstack network list:
$ openstack netw<tab> li<tab>
2.1. Creating OpenStack User Accounts, Projects and Roles
Before installing Red Hat OpenShift Container Platform, the Red Hat OpenStack Platform environment requires a project (tenant) to store the Red Hat OpenStack Platform instances on which Red Hat OpenShift Container Platform is installed. This project requires a user as its owner, with that user's role set to member.
The following steps show how to accomplish the above.
As the Red Hat OpenStack Platform overcloud administrator,
Create a project (tenant) that is to store the RHOSP instances
$ openstack project create <project>
Create a Red Hat OpenStack Platform user that has ownership of the previously created project
$ openstack user create --password <password> <username>
Set the role of the user
$ openstack role add --user <username> --project <project> member
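To confirm that the role assignment took effect, the assignments for the new user can be listed. This is a quick check using the same placeholder names as above; the --names option is available in recent python-openstackclient releases:
$ openstack role assignment list --user <username> --project <project> --names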
Once the above is complete, an OpenStack administrator can create an RC file with all the required information for the user(s) implementing the Red Hat OpenShift Container Platform environment.
An example RC file:
$ cat path/to/examplerc
export OS_USERNAME=<username>
export OS_TENANT_NAME=<project>
export OS_PASSWORD=<password>
export OS_CLOUDNAME=<overcloud>
export OS_AUTH_URL=http://<ip>:5000/v2.0
As the user(s) implementing the Red Hat OpenShift Container Platform environment, within the Red Hat OpenStack Platform director node or workstation, ensure to source the credentials as follows:
$ source path/to/examplerc
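A simple way to verify that the sourced credentials are valid is to request a token from the identity service:
$ openstack token issue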
2.2. Creating a Red Hat Enterprise Linux Base Image
After the user and project are created, a cloud image is required for all the Red Hat OpenStack Platform instances that are to install Red Hat OpenShift Container Platform. For this particular reference environment the image used is Red Hat Enterprise Linux 7.5.
Red Hat Enterprise Linux 7.5 includes support for overlay2 file system used in this reference architecture for container storage purposes. If using prior Red Hat Enterprise Linux versions, do not use overlay2 file system.
The Red Hat Enterprise Linux 7.5 cloud image is located at Red Hat Enterprise Linux 7.5 KVM Guest Image. Copy the image to the director or workstation node and store the image within glance. The glance image service provides the ability to discover, register and retrieve virtual machine (VM) images.
Depending on the glance storage backend, it may be required to convert the image format from qcow2 to a different format. For Ceph storage backends, the image must be converted to the raw image format.
The following steps show the process of incorporating the Red Hat Enterprise Linux 7.5 image.
$ source /path/to/examplerc
$ mkdir ~/images
$ sudo cp /path/to/downloaded/rhel-server-7.5-x86_64-kvm.qcow2 ~/images
$ cd ~/images
$ qemu-img convert \
-f qcow2 -O raw \
rhel-server-7.5-x86_64-kvm.qcow2 ./<img-name>.raw
$ openstack image create <img-name> \
--disk-format=raw \
--container-format=bare < <img-name>.raw
A Red Hat OpenStack Platform administrator may make this glance Red Hat Enterprise Linux image readily available to all projects within the Red Hat OpenStack Platform environment. This may be done by sourcing an RC file with OpenStack administrator privileges and including the --public option when creating the image.
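As a sketch, the same image creation performed with administrator credentials and the --public flag would look as follows (the adminrc path is a placeholder):
$ source /path/to/adminrc
$ openstack image create <img-name> \
  --disk-format=raw \
  --container-format=bare \
  --public < <img-name>.raw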
Confirm the creation of the image via openstack image list. An example output of the reference environment is shown.
$ openstack image list
+--------------------------------------+-------+--------+
| ID                                   | Name  | Status |
+--------------------------------------+-------+--------+
| 7ce292d7-0e7f-495d-b2a6-aea6c7455e8c | rhel7 | active |
+--------------------------------------+-------+--------+
2.3. Create an OpenStack Flavor
Within OpenStack, flavors define the size of a virtual server by defining the compute, memory, and storage capacity of nova computing instances. Since the base image within this reference architecture is Red Hat Enterprise Linux 7.5, the m1.master and m1.node flavors are created with the specifications shown in Table 2.1, “Minimum System Requirements for OpenShift”.
Table 2.1. Minimum System Requirements for OpenShift
| Node Type | CPU | RAM | Root Disk | Flavor |
|---|---|---|---|---|
| Masters | 2 | 16 GB | 45 GB | m1.master |
| Nodes | 1 | 8 GB | 20 GB | m1.node |
As an OpenStack administrator,
$ openstack flavor create <flavor_name> \
  --id auto \
  --ram <ram_in_MB> \
  --disk <disk_in_GB> \
  --vcpus <num_vcpus>
The example below shows the creation of the flavors used within this reference environment.
$ openstack flavor create m1.master \
--id auto \
--ram 16384 \
--disk 45 \
--vcpus 2
$ openstack flavor create m1.node \
--id auto \
--ram 8192 \
--disk 20 \
--vcpus 1
If access to OpenStack administrator privileges to create new flavors is unavailable, use existing flavors within the OpenStack environment that meet the requirements in Table 2.1, “Minimum System Requirements for OpenShift”.
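Either way, the available flavors and their sizes can be listed and inspected to confirm they meet the minimum requirements:
$ openstack flavor list
$ openstack flavor show m1.master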
2.4. Creating an OpenStack Keypair
Red Hat OpenStack Platform uses cloud-init to place an ssh public key on each instance as it is created to allow ssh access to the instance. Red Hat OpenStack Platform expects the user to hold the private key.
In order to generate a keypair use the following command
$ openstack keypair create <keypair-name> > /path/to/<keypair-name>.pem
Once the keypair is created, set the permissions to 600 thus only allowing the owner of the file to read and write to that file.
$ chmod 600 /path/to/<keypair-name>.pem
2.5. Creation of Red Hat OpenShift Container Platform Networks via OpenStack
When deploying Red Hat OpenShift Container Platform on Red Hat OpenStack Platform as described in this reference environment, the requirements are two networks — public and internal network.
2.5.1. Public Network
The public network is a network that provides external access and can be reached by the outside world. The public network can only be created by an OpenStack administrator.
The following commands provide an example of creating an OpenStack provider network for public network access.
As an OpenStack administrator,
$ source /path/to/examplerc
$ neutron net-create public_network \
--router:external \
--provider:network_type flat \
--provider:physical_network datacentre
$ neutron subnet-create \
--name public_network \
--gateway <ip> \
--allocation-pool start=<float_start>,end=<float_end> \
public_network <CIDR>
<float_start> and <float_end> are the associated floating IP pool provided to the network labeled public network. The Classless Inter-Domain Routing (CIDR) uses the format <ip>/<routing_prefix>, for example 10.5.2.1/24
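For environments where the unified openstack client is preferred over the legacy neutron client, a roughly equivalent public network creation is sketched below, assuming the same names and a sufficiently recent python-openstackclient:
$ openstack network create public_network \
  --external \
  --provider-network-type flat \
  --provider-physical-network datacentre
$ openstack subnet create \
  --network public_network \
  --gateway <ip> \
  --allocation-pool start=<float_start>,end=<float_end> \
  --subnet-range <CIDR> \
  public_network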
2.5.2. Internal Network
The internal network is connected to the public network via a router during the network setup. This allows each Red Hat OpenStack Platform instance attached to the internal network the ability to request a floating IP from the public network for public access.
The following commands create the internal network.
If a DNS service does not currently exist within the organization, proceed to Appendix A, Creating the DNS Red Hat OpenStack Platform Instance before continuing, as it is required. Because the internal network is required by the DNS instance, the internal network can be created without specifying the dns-nameserver setting; once the DNS server is properly installed, add it with openstack subnet set --dns-nameserver <dns-ip> <subnet>.
$ source /path/to/examplerc
$ openstack router create <router-name>
$ openstack network create <private-net-name>
$ openstack subnet create --network <private-net-name> \
  --dns-nameserver <dns-ip> \
  --subnet-range <cidr> \
  <private-net-name>
$ openstack subnet list
$ openstack router add subnet <router-name> <private-subnet-uuid>
$ neutron router-gateway-set <router-name> <public-network-uuid>
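The final neutron router-gateway-set command can also be expressed with the unified client on recent releases; a sketch assuming the public network created earlier:
$ openstack router set --external-gateway <public-network-name> <router-name>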
2.6. Setting up DNS for Red Hat OpenShift Container Platform
The installation process for Red Hat OpenShift Container Platform depends on a reliable name service that contains an address record for each of the target instances. If a DNS service is not currently set up, please refer to Appendix A, Creating the DNS Red Hat OpenStack Platform Instance.
Using /etc/hosts is not valid; a proper DNS service must exist.
2.7. Creating Red Hat OpenStack Platform Security Groups
Red Hat OpenStack Platform networking allows the user to define inbound and outbound traffic filters that can be applied to each instance on a network. This allows the user to limit network traffic to each instance based on the function of the instance services and not depend on host based filtering.
This section describes the ports and services required for each type of host and how to create the security groups in Red Hat OpenStack Platform.
The following table shows the security group association to every instance type:
Table 2.2. Security Group association
| Instance type | Security groups associated |
|---|---|
| Bastion | <bastion-sg-name> |
| Masters | <master-sg-name>, <node-sg-name> |
| Infra nodes | <infra-sg-name>, <node-sg-name> |
| App nodes | <node-sg-name> |
As the <node-sg-name> security group is applied to all the instance types, the common rules are applied to it.
$ source /path/to/examplerc
$ for SG in <bastion-sg-name> <master-sg-name> <infra-sg-name> <node-sg-name>
do
  openstack security group create $SG
done
The following tables and commands describe the security group network access controls for the bastion host, master, infrastructure and app instances.
2.7.1. Bastion Security Group
The bastion instance only needs to allow inbound ssh and ICMP. This instance exists to give operators a stable base to deploy, monitor and manage the Red Hat OpenShift Container Platform environment.
Table 2.3. Bastion Security Group TCP ports
| Port/Protocol | Service | Remote source | Purpose |
|---|---|---|---|
| ICMP | ICMP | Any | Allow ping, traceroute, etc. |
| 22/TCP | SSH | Any | Secure shell login |
Creation of the above security group is as follows:
$ source /path/to/examplerc
$ openstack security group rule create \
--ingress \
--protocol icmp \
<bastion-sg-name>
$ openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port 22 \
<bastion-sg-name>
Verification of the security group is as follows:
$ openstack security group show <bastion-sg-name>
2.7.2. Master Security Group
The Red Hat OpenShift Container Platform master security group requires the most complex network access controls. In addition to the ports used by the API and master console, these nodes contain the etcd servers that form the cluster.
Table 2.4. Master Host Security Group Ports
| Port/Protocol | Service | Remote source | Purpose |
|---|---|---|---|
| 2379/TCP | etcd | Masters | Client → Server connections |
| 2380/TCP | etcd | Masters | Server → Server cluster communications |
| 8053/TCP | DNS | Masters and nodes | Internal name services (3.2+) |
| 8053/UDP | DNS | Masters and nodes | Internal name services (3.2+) |
| 8443/TCP | HTTPS | Any | Master WebUI and API |
As master nodes are in fact nodes, the node security group is applied as well as the master security group.
Creation of the above security group is as follows:
$ source /path/to/examplerc
$ openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port 2379:2380 \
--src-group <master-sg-name> \
<master-sg-name>
for PROTO in tcp udp;
do
openstack security group rule create \
--ingress \
--protocol $PROTO \
--dst-port 8053 \
--src-group <node-sg-name> \
<master-sg-name>;
done
$ openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port 8443 \
<master-sg-name>
Verification of the security group is as follows:
$ openstack security group show <master-sg-name>
2.7.3. Infrastructure Node Security Group
The infrastructure nodes run the Red Hat OpenShift Container Platform router and the local registry. They must accept inbound connections on the web ports, which are forwarded to their destinations.
Table 2.5. Infrastructure Node Security Group Ports
| Port/Protocol | Services | Remote source | Purpose |
|---|---|---|---|
| 80/TCP | HTTP | Any | Cleartext application web traffic |
| 443/TCP | HTTPS | Any | Encrypted application web traffic |
| 9200/TCP | ElasticSearch | Any | ElasticSearch API |
| 9300/TCP | ElasticSearch | Any | Internal cluster use |
Creation of the above security group is as follows:
$ source /path/to/examplerc
$ for PORT in 80 443 9200 9300;
do
openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port $PORT \
<infra-sg-name>;
done
Verification of the security group is as follows:
$ openstack security group show <infra-sg-name>
2.7.4. Node Security Group
All instances run the OpenShift node service. All instances should only accept ICMP from any source, ssh traffic from the bastion host or other nodes, pod-to-pod communication via SDN traffic, and kubelet communication via Kubernetes.
Table 2.6. Node Security Group Ports
| Port/Protocol | Services | Remote source | Purpose |
|---|---|---|---|
| ICMP | Ping et al. | Any | Debug connectivity issues |
| 22/TCP | SSH | Bastion | Secure shell login |
| 4789/UDP | SDN | Nodes | Pod to pod communications |
| 10250/TCP | kubernetes | Nodes | Kubelet communications |
Creation of the above security group is as follows:
$ source /path/to/examplerc
$ openstack security group rule create \
--ingress \
--protocol icmp \
<node-sg-name>
$ for SRC in <bastion-sg-name> <node-sg-name>;
do
openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port 22 \
--src-group $SRC \
<node-sg-name>;
done
$ openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port 10250 \
--src-group <node-sg-name> \
<node-sg-name>
$ openstack security group rule create \
--ingress \
--protocol udp \
--dst-port 4789 \
--src-group <node-sg-name> \
<node-sg-name>
Verification of the security group is as follows:
$ openstack security group show <node-sg-name>
2.8. Creating Red Hat OpenStack Platform Cinder Volumes
The master instances contain volumes to store docker images, OpenShift local volumes, and etcd storage. Each volume has a role in keeping the /var filesystem partition from corruption and provides the ability to use cinder volume snapshots.
The node instances contain two volumes - a docker and OpenShift local storage volume. These volumes do not require snapshot capability, but serve the purpose of ensuring that a large image or container does not compromise node performance or abilities.
Table 2.7. Cinder Volumes
| Instance type | Volumes | Role | Minimum recommended size |
|---|---|---|---|
| Masters only | etcd volume | etcd storage | 25 GB |
| Masters, infra and app nodes | docker volume | Store docker images | 15 GB |
| Masters, infra and app nodes | OpenShift local volume | Pod local storage | 30 GB |
These volumes are created at boot time, so there is no need to create them before creating the instances.
2.8.1. Docker Volume
During the installation of Red Hat OpenShift Container Platform, the Red Hat OpenStack Platform instances created for RHOCP should include various cinder volumes to ensure that various OpenShift directories do not fill up the disk or cause disk contention in the /var partition.
The value of the docker volume size should be at least 15 GB.
2.8.2. etcd Volume
A cinder volume is created on the master instances for the storage of /var/lib/etcd. Storing etcd on its own volume provides a similar benefit of protecting /var, but more importantly provides the ability to perform snapshots of the volume when performing etcd maintenance. The cinder etcd volume is created only on the master instances.
The value of the etcd volume size should be at least 25 GB.
2.8.3. OpenShift Local Volume
A cinder volume is created for the directory of /var/lib/origin/openshift.local.volumes that is used with the perFSGroup setting at installation and with the mount option of gquota. These settings and volumes set a quota to ensure that containers cannot grow to an unreasonable size.
The value of OpenShift local volume size should be at least 30 GB.
2.8.4. Registry volume
The OpenShift image registry requires a cinder volume to ensure that images are saved in the event that the registry needs to migrate to another node.
$ source /path/to/examplerc
$ openstack volume list
$ openstack volume create --size <volume-size-in-GB> openshift-registry
The registry volume size should be at least 30 GB.
2.9. Creating RHOSP Instances for RHOCP
This reference environment consists of the following instances:
- one bastion instance
- three master instances
- three infrastructure instances
- three application instances
etcd requires that an odd number of cluster members exist. Three masters were chosen to support high availability and etcd clustering. Three infrastructure instances allow for minimal to zero downtime for applications running in the OpenShift environment. The number of application instances can range from one to many, depending on the requirements of the organization.
infra and app node instances can easily be added after the initial install.
The creation of each instance within Red Hat OpenStack Platform varies based upon the following:
- Hostname
- Attached cinder storage volumes
- Assigned security group based upon instance type
Red Hat OpenShift Container Platform uses the hostnames of all the nodes to identify, control and monitor the services on them. It is critical that the instance hostnames, the DNS hostnames and IP addresses are mapped correctly and consistently.
Without any inputs, Red Hat OpenStack Platform uses the nova instance name as the hostname and the domain as novalocal. The bastion host’s FQDN would result in bastion.novalocal. This would suffice if Red Hat OpenStack Platform populated a DNS service with these names thus allowing each instance to find the IP addresses by name.
However, using the novalocal domain requires creating a zone in the external DNS service named novalocal. Since the RHOSP instance names are unique only within a project, this risks name collisions with other projects. To remedy this issue, creation of a subdomain for the internal network is implemented under the project domain, i.e. example.com
Table 2.8. Subdomain for RHOCP Internal Network
| Domain Name | Description |
|---|---|
| <internal-subdomain> | All interfaces on the internal only network |
The nova instance name and the instance hostname are the same. Both are in the internal subdomain. The floating IPs are assigned to the top level domain example.com.
Table 2.9. Sample FQDNs
| Fully Qualified Name | Description |
|---|---|
| bastion.<internal-subdomain> | Name of the internal network interface on the bastion instance |
| master0.<internal-subdomain> | Name of the internal network interface on the master0 instance |
| app0.<internal-subdomain> | Name of the internal network interface on the app0 instance |
| openshift.example.com | Name of the Red Hat OpenShift Container Platform console using the address of the haproxy instance |
2.9.1. Cloud-Init
Red Hat OpenStack Platform provides a way for users to pass in information to be applied when an instance boots. The --user-data switch to the nova boot command makes the contents of the provided file available to the instance through cloud-init. cloud-init is a set of init scripts for cloud instances. It is available via the rhel-7-server-rh-common-rpms repository, queries a standard URL for the user-data file, and processes the contents to initialize the OS of the deployed Red Hat OpenStack Platform instance.
The user-data is placed in files named <hostname>.yaml where <hostname> is the name of the instance.
An example of the <hostname>.yaml file is shown below. This is required for every Red Hat OpenStack Platform instance that is to be used for the Red Hat OpenShift Container Platform deployment.
Create a user-data directory to store the <hostname>.yaml files.
$ mkdir /path/to/user-data
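One possible way to generate the per-instance files is to stamp them out from a template with sed; a sketch assuming a hypothetical master-template.yaml containing the cloud-config shown in the next subsection:
$ domain=<domain>
$ for node in master{0..2};
do
  sed -e "s/<hostname>/$node/" -e "s/<domain>/$domain/" \
    /path/to/user-data/master-template.yaml > /path/to/user-data/$node.yaml
done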
The nova boot command is used in conjunction with the --block-device option to ensure that at boot time the proper volume is attached to the device that is specified. Currently, this cannot be done via openstack server create as it may attach a different volume (e.g. attach vdb as vdc) than specified. More information: https://bugzilla.redhat.com/show_bug.cgi?id=1489001
2.9.1.1. Master Cloud Init
The master instances require docker storage, partitions, and certain mounts be available for a successful deployment using the cinder volumes created in the previous steps.
#cloud-config
cloud_config_modules:
  - disk_setup
  - mounts
hostname: <hostname>
fqdn: <hostname>.<domain>
write_files:
  - path: "/etc/sysconfig/docker-storage-setup"
    permissions: "0644"
    owner: "root"
    content: |
      DEVS='/dev/vdb'
      VG=docker_vol
      DATA_SIZE=95%VG
      STORAGE_DRIVER=overlay2
      CONTAINER_ROOT_LV_NAME=dockerlv
      CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
      CONTAINER_ROOT_LV_SIZE=100%FREE
fs_setup:
  - label: emptydir
    filesystem: xfs
    device: /dev/vdc
    partition: auto
  - label: etcd_storage
    filesystem: xfs
    device: /dev/vdd
    partition: auto
runcmd:
  - mkdir -p /var/lib/origin/openshift.local.volumes
  - mkdir -p /var/lib/etcd
mounts:
  - [ /dev/vdc, /var/lib/origin/openshift.local.volumes, xfs, "defaults,gquota" ]
  - [ /dev/vdd, /var/lib/etcd, xfs, "defaults" ]
2.9.1.2. Node Cloud Init
The node instances, regardless if the instance is an application or infrastructure node require docker storage, partitions, and certain mounts be available for a successful deployment using the cinder volumes created in the previous steps.
#cloud-config
cloud_config_modules:
  - disk_setup
  - mounts
hostname: <hostname>
fqdn: <hostname>.<domain>
write_files:
  - path: "/etc/sysconfig/docker-storage-setup"
    permissions: "0644"
    owner: "root"
    content: |
      DEVS='/dev/vdb'
      VG=docker_vol
      DATA_SIZE=95%VG
      STORAGE_DRIVER=overlay2
      CONTAINER_ROOT_LV_NAME=dockerlv
      CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
      CONTAINER_ROOT_LV_SIZE=100%FREE
fs_setup:
  - label: emptydir
    filesystem: xfs
    device: /dev/vdc
    partition: auto
runcmd:
  - mkdir -p /var/lib/origin/openshift.local.volumes
mounts:
  - [ /dev/vdc, /var/lib/origin/openshift.local.volumes, xfs, "defaults,gquota" ]
Once the yaml files are created for each Red Hat OpenStack Platform instance, create the instances using the openstack command.
2.9.2. Master Instance Creation
The bash loop below creates all of the master instances, with the specific mounts, instance size, and security groups configured at launch time. Create the master instances by filling in the bold items with the values relevant to the current OpenStack environment.
$ domain=<domain>
$ netid1=$(openstack network show <internal-network-name> -f value -c id)
$ for node in master{0..2};
do
  nova boot \
    --nic net-id=$netid1 \
    --flavor <flavor> \
    --image <image> \
    --key-name <keypair> \
    --security-groups <master-sg-name>,<node-sg-name> \
    --user-data=/path/to/user-data/$node.yaml \
    --block-device source=blank,dest=volume,device=vdb,size=<docker-volume-size>,shutdown=preserve \
    --block-device source=blank,dest=volume,device=vdc,size=<openshift-local-volume-size>,shutdown=preserve \
    --block-device source=blank,dest=volume,device=vdd,size=<etcd-volume-size>,shutdown=preserve \
    $node.$domain;
done
Assuming the user-data yaml files are labeled master<num>.yaml
The volume device order is important as cloud-init creates filesystems based on the volume order.
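Optionally, confirm that the master instances reach the ACTIVE state before moving on; a quick check assuming the same shell variables as above:
$ for node in master{0..2};
do
  openstack server show $node.$domain -f value -c name -c status
done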
2.9.3. Infrastructure Instance Creation
The bash loop below creates all of the infrastructure instances, with the specific mounts, instance size, and security groups configured at launch time. Create the infrastructure instances by filling in the bold items with the values relevant to the current OpenStack environment.
$ domain=<domain>
$ netid1=$(openstack network show <internal-network-name> -f value -c id)
$ for node in infra{0..2};
do
  nova boot \
    --nic net-id=$netid1 \
    --flavor <flavor> \
    --image <image> \
    --key-name <keypair> \
    --security-groups <infra-sg-name>,<node-sg-name> \
    --user-data=/path/to/user-data/$node.yaml \
    --block-device source=blank,dest=volume,device=vdb,size=<docker-volume-size>,shutdown=preserve \
    --block-device source=blank,dest=volume,device=vdc,size=<openshift-local-volume-size>,shutdown=preserve \
    $node.$domain;
done
Assuming the user-data yaml files are labeled infra<num>.yaml
The volume device order is important as cloud-init creates filesystems based on the volume order.
2.9.4. Application Instance Creation
The bash loop below creates all of the application instances, with the specific mounts, instance size, and security groups configured at launch time. Create the application instances by filling in the bold items with the values relevant to the current OpenStack environment.
$ domain=<domain>
$ netid1=$(openstack network show <internal-network-name> -f value -c id)
$ for node in app{0..2};
do
  nova boot \
    --nic net-id=$netid1 \
    --flavor <flavor> \
    --image <image> \
    --key-name <keypair> \
    --security-groups <node-sg-name> \
    --user-data=/path/to/user-data/$node.yaml \
    --block-device source=blank,dest=volume,device=vdb,size=<docker-volume-size>,shutdown=preserve \
    --block-device source=blank,dest=volume,device=vdc,size=<openshift-local-volume-size>,shutdown=preserve \
    $node.$domain;
done
Assuming the user-data yaml files are labeled app<num>.yaml
The volume device order is important as cloud-init creates filesystems based on the volume order.
2.9.5. Confirming Instance Deployment
The above steps create three master, three infrastructure, and three application instances.
Verify the creation of the Red Hat OpenStack Platform instances via:
$ openstack server list
Using the values provided by the openstack server list command, update the DNS master zone.db file as shown in Section A.1, “Setting up the DNS Red Hat OpenStack Platform Instance” with the appropriate IP addresses. Do not proceed to the next section until the DNS resolution is configured.
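To gather the hostname-to-IP mapping for the zone file, it can help to print only the relevant columns, for example:
$ openstack server list -f value -c Name -c Networks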
2.9.6. Rename volumes (optional)
cinder volumes are created using the instance ID, making them confusing to identify.
To rename the cinder volumes and make them human readable, use the following snippet:
$ for node in master{0..2} app{0..2} infra{0..2};
do
dockervol=$(openstack volume list -f value -c ID -c "Attached to" | awk "/$node/ && /vdb/ {print \$1}")
ocplocalvol=$(openstack volume list -f value -c ID -c "Attached to" | awk "/$node/ && /vdc/ {print \$1}")
openstack volume set --name $node-docker $dockervol
openstack volume set --name $node-ocplocal $ocplocalvol
done
for master in master{0..2};
do
etcdvol=$(openstack volume list -f value -c ID -c "Attached to" | awk "/$master/ && /vdd/ {print \$1}")
openstack volume set --name $master-etcd $etcdvol
done
2.10. Creating and Configuring an HAProxy Red Hat OpenStack Platform Instance
If an organization currently does not have a load balancer in place then HAProxy can be deployed. A load balancer such as HAProxy provides a single view of the Red Hat OpenShift Container Platform master services for the applications. The master services and the applications use different TCP ports so a single TCP load balancer can handle all of the inbound connections.
The load balanced DNS name that developers use must be in a DNS A record pointing to the haproxy server before installation. For applications, a wildcard DNS entry must point to the haproxy host.
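For illustration only, the corresponding records in a BIND zone file for example.com might look like the following sketch, assuming the haproxy floating IP and the apps.example.com wildcard subdomain used later in the inventory:
; example.com zone file excerpt (assumed layout)
openshift    IN  A  <haproxy-floating-ip>
*.apps       IN  A  <haproxy-floating-ip>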
The configuration of the load balancer is created after all of the Red Hat OpenShift Container Platform instances have been created and floating IP addresses assigned.
The following steps show how to create the security group and add specific rules to the security group.
$ source /path/to/examplerc
$ openstack security group create <haproxy-sg-name>
$ for PORT in 22 80 443 8443;
do
  openstack security group rule create \
    --ingress \
    --protocol tcp \
    --dst-port $PORT \
    <haproxy-sg-name>;
done
If the m1.small flavor does not exist by default then create a flavor with 1 vCPU and 2 GB of RAM.
$ domain=<domain>
$ netid1=$(openstack network show <internal-network-name> -f value -c id)
$ openstack server create \
  --nic net-id=$netid1 \
  --flavor <flavor> \
  --image <image> \
  --key-name <keypair> \
  --security-group <haproxy-sg-name> \
  haproxy.$domain
Assign a floating ip via:
$ openstack floating ip create public_network
$ openstack server add floating ip haproxy.$domain <ip>
Using the floating IP, verify logging into the HAProxy server.
$ ssh -i /path/to/<keypair-name>.pem cloud-user@<IP>
The configuration of the HAProxy instance is completed within the subsequent steps: the bastion host configures the Red Hat subscriptions for all the instances, and the Red Hat OpenShift Container Platform installer automatically configures the HAProxy instance based upon the information found within the Red Hat OpenShift Container Platform inventory file.
In order to have a single entry point for applications, the load balancer requires either changes to the existing DNS, allowing wildcard entries to use the round-robin algorithm across the different infra nodes, or the additional HAProxy configuration shown below. An example can be seen in Appendix A, Creating the DNS Red Hat OpenStack Platform Instance.
If changes within the DNS server are not possible, the following entries are required within the HAProxy instance, specifically in the /etc/haproxy/haproxy.cfg file.
Below is an example of the haproxy.cfg file with the appropriate requirements if adding the DNS entries is not possible.
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
defaults
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen stats :9000
stats enable
stats realm Haproxy\ Statistics
stats uri /haproxy_stats
stats auth admin:password
stats refresh 30
mode http
frontend main80 *:80
default_backend router80
backend router80
balance source
mode tcp
# INFRA_80
server infra0.example.com <IP>:80 check
server infra1.example.com <IP>:80 check
server infra2.example.com <IP>:80 check
frontend main443 *:443
default_backend router443
backend router443
balance source
mode tcp
# INFRA_443
server infra0.example.com <IP>:443 check
server infra1.example.com <IP>:443 check
server infra2.example.com <IP>:443 check
frontend main8443 *:8443
default_backend mgmt8443
backend mgmt8443
balance source
mode tcp
# MASTERS 8443
server master0.example.com <IP>:8443 check
server master1.example.com <IP>:8443 check
server master2.example.com <IP>:8443 check
2.11. Creating and Configuring the Bastion Instance
The bastion serves two functions in this deployment of Red Hat OpenShift Container Platform. The first function is as an ssh jump box, allowing administrators to access instances within the private network. The other role of the bastion instance is to serve as a utility host for the deployment and management of Red Hat OpenShift Container Platform.
2.11.1. Deploying the Bastion Instance
Create the bastion instance via:
If the m1.small flavor does not exist by default then create a flavor with 1 vCPU and 2GB of RAM.
$ domain=<domain>
$ netid1=$(openstack network show <internal-network-name> -f value -c id)
$ openstack server create \
  --nic net-id=$netid1 \
  --flavor <flavor> \
  --image <image> \
  --key-name <keypair> \
  --security-group <bastion-sg-name> \
  bastion.$domain
2.11.2. Creating and Adding Floating IPs to Instances
Once the instances are created, floating IPs must be created and then allocated to the instances. The following shows an example:
$ source /path/to/examplerc
$ openstack floating ip create <public-network-name>
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-08-24T22:44:03Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 10.20.120.150 |
| floating_network_id | 084884f9-d9d2-477a-bae7-26dbb4ff1873 |
| headers | |
| id | 2bc06e39-1efb-453e-8642-39f910ac8fd1 |
| port_id | None |
| project_id | ca304dfee9a04597b16d253efd0e2332 |
| project_id | ca304dfee9a04597b16d253efd0e2332 |
| revision_number | 1 |
| router_id | None |
| status | DOWN |
| updated_at | 2017-08-24T22:44:03Z |
+---------------------+--------------------------------------+
Within the above output, the floating_ip_address field shows that the floating IP 10.20.120.150 is created. In order to assign this IP to one of the Red Hat OpenShift Container Platform instances, run the following command:
$ source /path/to/examplerc
$ openstack server add floating ip <instance-name> <ip>
For example, if instance bastion.example.com is to be assigned IP 10.20.120.150 the command would be:
$ source /path/to/examplerc
$ openstack server add floating ip bastion.example.com 10.20.120.150
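To verify which floating IPs exist and where they are attached, the following commands can be used:
$ openstack floating ip list
$ openstack server show bastion.example.com -f value -c addresses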
Within this reference architecture, the only Red Hat OpenShift Container Platform instances requiring a floating IP are the bastion host and the haproxy host. In case the load balancer is located in a different Red Hat OpenStack Platform tenant or somewhere else, the masters and infra nodes need to have a floating IP each as the load balancer needs to reach the masters and infra nodes from outside.
2.11.3. Bastion Configuration for Red Hat OpenShift Container Platform
The following subsections describe all the steps needed to properly configure the bastion instance.
2.11.3.1. Configure ~/.ssh/config to use Bastion as Jumphost
To easily connect to the Red Hat OpenShift Container Platform environment, follow the steps below.
On the Red Hat OpenStack Platform director node or local workstation with private key, <keypair-name>.pem:
$ exec ssh-agent bash
$ ssh-add /path/to/<keypair-name>.pem
Identity added: /path/to/<keypair-name>.pem (/path/to/<keypair-name>.pem)
Add to the ~/.ssh/config file:
Host bastion
HostName <bastion_fqdn_hostname OR IP address>
User cloud-user
IdentityFile /path/to/<keypair-name>.pem
ForwardAgent yes
ssh into the bastion host with the -A option that enables forwarding of the authentication agent connection.
$ ssh -A cloud-user@bastion
Once logged into the bastion host, verify that ssh agent forwarding is working by checking for the SSH_AUTH_SOCK environment variable:
$ echo "$SSH_AUTH_SOCK"
/tmp/ssh-NDFDQD02qB/agent.1387
Attempt to ssh into one of the Red Hat OpenShift Container Platform instances using the ssh agent forwarding.
No password prompt should appear if agent forwarding is working properly.
$ ssh master1
2.11.3.2. Subscription Manager and Enabling Red Hat OpenShift Container Platform Repositories
Register the bastion instance via Red Hat Subscription Manager. This is accomplished using either credentials or an activation key.
Via credentials:
$ sudo subscription-manager register --username <user> --password '<password>'
OR
Via activation key:
$ sudo subscription-manager register --org="<org_id>" --activationkey=<keyname>
Once registered, enable the following repositories as follows.
$ sudo subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-ose-3.9-rpms" \
--enable="rhel-7-fast-datapath-rpms" \
--enable="rhel-7-server-ansible-2.4-rpms"
Install the atomic-openshift-utils package via:
$ sudo yum -y install atomic-openshift-utils
2.11.3.3. Configure Ansible
ansible is installed on the bastion instance to perform the registration, installation of packages, and the deployment of the Red Hat OpenShift Container Platform environment on the master and node instances.
Before running playbooks, it is important to create an ansible.cfg file that reflects the deployed environment:
$ cat ~/ansible.cfg
[defaults]
forks = 20
host_key_checking = False
remote_user = cloud-user
roles_path = roles/
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 600
log_path = $HOME/ansible.log
nocows = 1
callback_whitelist = profile_tasks
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=false
control_path = %(directory)s/%%h-%%r
pipelining = True
timeout = 10
[persistent_connection]
connect_timeout = 30
connect_retries = 30
connect_interval = 1
The code block above overrides the default values in the file. Ensure <keypair-name> is replaced with the keypair that was copied to the bastion instance.
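Because Ansible reads ./ansible.cfg from the current working directory, run the subsequent ansible commands from the cloud-user home directory so that ~/ansible.cfg is picked up; the config file line in the version output confirms which configuration is in effect:
$ cd ~
$ ansible --version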
2.11.3.4. Simple Ansible Inventory
Create a simple Ansible inventory file in order to facilitate the enablement of the required Red Hat OpenShift Container Platform repositories for all the instances. For this reference environment, a file labeled hosts contains the following instances:
$ cat ~/hosts
haproxy.example.com
master0.example.com
master1.example.com
master2.example.com
infra0.example.com
infra1.example.com
infra2.example.com
app0.example.com
app1.example.com
app2.example.com
Ensure connectivity to all instances from the bastion instance via:
$ ansible all -i ~/hosts -m ping
Once connectivity to all instances has been established, register the instances via Red Hat Subscription Manager. This is accomplished using either credentials or an activation key.
Via credentials the ansible command is as follows:
$ ansible all -b -i ~/hosts -m command -a "subscription-manager register --username <user> --password '<password>'"
Via activation key, the ansible command is as follows:
$ ansible all -b -i ~/hosts -m command -a "subscription-manager register --org=<org_id> --activationkey=<keyname>"
where the options are as follows:
- -b - refers to privilege escalation
- -i - location of inventory file
- -m - module to use
- -a - module argument
Once all the instances have been successfully registered, enable all the required RHOCP repositories on all the instances via:
$ ansible all -b -i ~/hosts -m command -a "subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-ose-3.9-rpms" \
--enable="rhel-7-fast-datapath-rpms" \
--enable="rhel-7-server-ansible-2.4-rpms""
2.11.3.5. OpenShift Authentication
Red Hat OpenShift Container Platform provides the ability to use many different authentication platforms. For this reference architecture, LDAP is the preferred authentication mechanism. A listing of other authentication options are available at Configuring Authentication and User Agent.
When configuring LDAP as the authentication provider the following parameters can be added to the ansible inventory. An example is shown below.
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=openshift,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=openshift,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]
If using LDAPS, all the masters must have the relevant ca.crt file for LDAP in place prior to the installation, otherwise the installation fails. The file should be placed locally on the bastion instance and referenced within the inventory file via the variable openshift_master_ldap_ca_file.
2.11.3.6. Preparing the Inventory File
This section provides an example inventory file required for an advanced installation of Red Hat OpenShift Container Platform.
The inventory file contains both variables and instances used for the configuration and deployment of Red Hat OpenShift Container Platform. In the example below, some values are bold and must reflect the deployed environment from the previous chapter.
To gather the openshift_hosted_registry_storage_openstack_volumeID value use:
$ openstack volume show openshift-registry -f value -c id
The above command runs on the Red Hat OpenStack Platform director node or the workstation that can access the Red Hat OpenStack Platform environment.
The openshift_cloudprovider_openstack_* values are required for Red Hat OpenShift Container Platform to be able to create Red Hat OpenStack Platform resources such as cinder volumes for persistent volumes. It is recommended to create a dedicated OpenStack user within the tenant as shown in Section 2.1, “Creating OpenStack User Accounts, Projects and Roles”.
$ cat ~/inventory
[OSEv3:children]
masters
etcd
nodes
lb

[OSEv3:vars]
ansible_ssh_user=cloud-user
deployment_type=openshift-enterprise
debug_level=2
openshift_vers=v3_9
openshift_enable_service_catalog=false
ansible_become=true
openshift_master_api_port=8443
openshift_master_console_port=8443
openshift_debug_level="{{ debug_level }}"
openshift_node_debug_level="{{ node_debug_level | default(debug_level, true) }}"
openshift_master_debug_level="{{ master_debug_level | default(debug_level, true) }}"
openshift_master_ldap_ca_file=/home/cloud-user/mycert.crt
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]
openshift_hosted_router_replicas=3
openshift_hosted_registry_replicas=1
openshift_master_cluster_method=native
openshift_node_local_quota_per_fsgroup=512Mi
openshift_cloudprovider_kind=openstack
openshift_cloudprovider_openstack_auth_url=http://10.19.114.177:5000/v2.0
openshift_cloudprovider_openstack_username=<project-user>
openshift_cloudprovider_openstack_password=<password>
openshift_cloudprovider_openstack_tenant_name=openshift-tenant
openshift_master_cluster_hostname=openshift.example.com
openshift_master_cluster_public_hostname=openshift.example.com
openshift_master_default_subdomain=apps.example.com
os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
osm_use_cockpit=true
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

#registry
openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=d5a1dfd6-3561-4784-89d8-61217a018787
openshift_hosted_registry_storage_volume_size=15Gi
openshift_public_hostname=openshift.example.com

[masters]
master0.example.com
master1.example.com
master2.example.com

[etcd]
master0.example.com
master1.example.com
master2.example.com

[lb]
haproxy.example.com

[nodes]
master0.example.com openshift_node_labels="{'region': 'master'}" openshift_hostname=master0.example.com
master1.example.com openshift_node_labels="{'region': 'master'}" openshift_hostname=master1.example.com
master2.example.com openshift_node_labels="{'region': 'master'}" openshift_hostname=master2.example.com
infra0.example.com openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra0.example.com
infra1.example.com openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra1.example.com
infra2.example.com openshift_node_labels="{'region': 'infra'}" openshift_hostname=infra2.example.com
app0.example.com openshift_node_labels="{'region': 'app'}" openshift_hostname=app0.example.com
app1.example.com openshift_node_labels="{'region': 'app'}" openshift_hostname=app1.example.com
app2.example.com openshift_node_labels="{'region': 'app'}" openshift_hostname=app2.example.com
If openshift_cloudprovider_openstack_password contains a hash symbol (#), please ensure to double escape the password. For example, openshift_cloudprovider_openstack_password='"password##"'. For more information, visit: https://bugzilla.redhat.com/show_bug.cgi?id=1583523
The following two parameters are added to the inventory file due to the following Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1588768
oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
2.11.3.7. Instance Verification
It can be useful to check for potential issues or misconfigurations in the instances before continuing the installation process. Connect to every instance using the bastion host and verify that the disks are properly created and mounted, that the cloud-init process finished successfully, and that the log files contain no errors, to ensure everything is ready for the Red Hat OpenShift Container Platform installation:
$ ssh bastion.example.com
$ ssh <instance>
$ lsblk
$ sudo journalctl
$ sudo less /var/log/cloud-init.log
$ free -m
$ sudo yum repolist
where <instance> is, for example, master0.example.com
For reference, below is example output of lsblk for the master nodes.
$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk
└─vda1 253:1    0  40G  0 part /
vdb    253:16   0  15G  0 disk
vdc    253:32   0  30G  0 disk /var/lib/origin/openshift.local.volumes
vdd    253:48   0  25G  0 disk /var/lib/etcd
For reference, below is an example of output of lsblk for the infra and app nodes.
$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk
└─vda1 253:1    0  40G  0 part /
vdb    253:16   0  15G  0 disk
vdc    253:32   0  30G  0 disk /var/lib/origin/openshift.local.volumes
vdb, which stores the docker images on all nodes, is not yet set up; this is expected, as this step is completed via the prerequisites playbook in the following section.
2.11.3.8. Red Hat OpenShift Container Platform Prerequisites Playbook
The Red Hat OpenShift Container Platform Ansible installation provides a playbook to ensure all prerequisites are met prior to the installation of Red Hat OpenShift Container Platform. This includes steps such as registering all the nodes with Red Hat Subscription Manager and setting up docker storage on the docker volumes.
Via the ansible-playbook command on the bastion instance, ensure all the prerequisites are met using prerequisites.yml playbook:
$ ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
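After the playbook completes, an optional spot check across the instances can confirm that the docker volume (vdb) has been configured; a sketch using the flat inventory created earlier:
$ ansible all -b -i ~/hosts -m command -a "lsblk"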