Chapter 5. Manual Deployment

The following section provides a step-by-step walkthrough of the installation of RHOCP on RHOSP.

Note

It is intended as a learning tool to deploy RHOCP and to better understand the requirements of real-world installations. However, the manual installation process is not the recommended deployment method.

In a manual deployment of RHOCP, the workflow is as follows:

[Diagram: manual deployment workflow]

It is important to note that a manual deployment has limitations. These limitations include:

  • It cannot easily respond to load by auto-scaling.
  • There is no infrastructure dashboard or view that shows details of the instance network components within the RHOCP service.
  • Any change to the environment requires manual intervention, inviting human error during maintenance and monitoring.

5.1. OCP CLI clients

The manual deployment process starts with a series of steps that are executed using the RHOSP CLI tools. These CLI tools are available in the rhel-7-server-openstack-10-rpms repository via the python-openstackclient package. This package installs all of the underlying clients, including nova, neutron, cinder and glance. The recommended minimum version of python-openstackclient is 3.2.1-1. Verify the installed version with:

# rpm -q python-openstackclient
python-openstackclient-3.2.1-1.el7ost.noarch

Using the openstack command, one can create and configure the infrastructure that hosts the RHOCP service.
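
For example, after sourcing the RC file for the target project (the file name below is only an example; download the actual file from the RHOSP dashboard), the credentials and API endpoints can be verified before proceeding:

source ~/keystonerc_ocp3
openstack catalog list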

5.2. Script Fragments

The procedures in this section consist of a set of RHOSP CLI script fragments. These fragments are executed on an operator’s workstation that has the ability to install the python-openstackclient package. The fragments allow the operator to create the infrastructure and the instances within the RHOSP environment as required for a successful RHOCP installation. Once the RHOSP instances exist, the remaining script fragments are run on the bastion host to complete the RHOCP deployment using Ansible.

5.2.1. Default Values and Environment Variables

Several of the script fragments set default values for the purpose of providing an example. In real deployments, replace those values with specific values for the target workload and environment.

For example, the control network script defines the OCP3_DNS_NAMESERVER variable with a default value. To override the default, set the OCP3_DNS_NAMESERVER variable in the workstation environment before executing the script fragment.
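
For example, the following is one way to override the defaults before running a fragment; the values shown are placeholders for the target environment:

# Placeholder values; substitute the DNS server and public network of the target environment.
export OCP3_DNS_NAMESERVER=192.0.2.53
export PUBLIC_NETWORK=public_network
export CONTROL_SUBNET_CIDR=172.18.10.0/24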

Several fragments execute on one or more of the instances. To keep the code size manageable, these scripts use a set of variables that define the set of instance names to operate on, as shown in the example below.
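
For example, the instance-count variables used by the later fragments can be exported once in the workstation environment; the values below match this reference environment:

export MASTER_COUNT=3
export INFRA_NODE_COUNT=2
export APP_NODE_COUNT=3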

5.2.2. Bastion Host Scripts and the Root User

The commands executed on the bastion host often must run as the root user, but the scripts themselves do not have this requirement. The instance's login user is cloud-user, a non-privileged user. When required, the scripts run privileged commands with sudo.

5.3. Create Networks

This section describes the RHOSP network components required by RHOCP and how to create them.

RHOCP depends on three networks.

  • public network - provides external access controlled by the RHOSP operators
  • control network - communication between RHOCP instances and service components
  • tenant network - communications between RHOCP client containers

The public network is a prerequisite for the RHOCP installation. Creating a public network requires administrative access to the RHOSP environment. Consult the local RHOSP administrator if a public network is not currently available.

Once the public network is accessible, a pool of floating IP addresses is required for RHOSP instances that are to be created, specifically the bastion host, master nodes and infrastructure nodes. Consult the local RHOSP administrator if a pool of floating IP addresses is not currently available.
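
As an optional quick check, confirm that the public network is visible and that floating IPs can be listed from the current project:

openstack network show public_network
openstack floating ip list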

The control network and the tenant network can be created by an RHOSP user within their own RHOSP project, so no additional RHOSP administrator privileges are required.

This reference environment refers to the public network as public_network.

The following two tables describe the internal RHOCP networks.

Table 5.1. Control Network Specification

Name              Value

Network Name      control-network
Subnet Name       control-subnet
Network Address   172.18.10.0
CIDR Mask         /24
DNS Nameserver    X.X.X.X
Router Name       control-router

Table 5.2. Tenant Network Specification

Name              Value

Network Name      tenant-network
Subnet Name       tenant-subnet
Network Address   172.18.20.0
CIDR Mask         /24
Flags             --no-gateway

Create the control network as indicated below. Update the default values as required.

Note

The control network is tied to the public network through a router labeled control-router.

Create Control Network

#!/bin/sh
OCP3_DNS_NAMESERVER=${OCP3_DNS_NAMESERVER:-8.8.8.8}
PUBLIC_NETWORK=${PUBLIC_NETWORK:-public_network}
CONTROL_SUBNET_CIDR=${CONTROL_SUBNET_CIDR:-172.18.10.0/24}

openstack network create control-network
openstack subnet create --network control-network --subnet-range ${CONTROL_SUBNET_CIDR} \
--dns-nameserver ${OCP3_DNS_NAMESERVER} control-subnet
openstack router create control-router
openstack router add subnet control-router control-subnet
neutron router-gateway-set control-router ${PUBLIC_NETWORK}

In a similar fashion, create the tenant network. This network is for internal use only and does not have a gateway to connect to other networks.

Create Tenant Network

#!/bin/sh
TENANT_SUBNET_CIDR=${TENANT_SUBNET_CIDR:-172.18.20.0/24}

openstack network create tenant-network
openstack subnet create --network tenant-network \
  --subnet-range ${TENANT_SUBNET_CIDR} --gateway none tenant-subnet

At this point, three networks exist. Two of them (the control and tenant networks) were created by the operator, while the public network is provided by the RHOSP administrator. The control network is bound to a router that provides external access.

Verify control and tenant networks

openstack network list
+--------------------------------------+-----------------+--------------------------------------+
| ID                                   | Name            | Subnets                              |
+--------------------------------------+-----------------+--------------------------------------+
| c913bfd7-94e9-4caa-abe5-df9dcd0c0eab | control-network | e804600d-67fc-40bb-a7a8-2f1ce49a81e6 |
| 53377111-4f85-401e-af9d-5166a4be7f99 | tenant-network  | 830845ca-f35c-44d9-be90-abe532a32cbe |
| 957dae95-5b93-44aa-a2c3-37eb16c1bd2f | public_network  | 4d07867c-34d8-4a7c-84e3-fdc485de66be |
+--------------------------------------+-----------------+--------------------------------------+

5.4. Network Security Groups

RHOSP networking allows the user to define inbound and outbound traffic filters that can be applied to each instance on a network. This allows the user to limit network traffic to each instance based on the function of the instance's services rather than depending on host-based filtering.

This section describes the ports and services required for each type of host and how to create the security groups in RHOSP.

The details of the communication ports and their purposes are defined in the Prerequisites section of the OpenShift Container Platform Installation and Configuration Guide.

5.4.1. Required Network Services

Each of the instance types in the RHOCP service communicates using a well-defined set of network ports. This procedure includes defining a network security group for each instance type.

All of these instances allow icmp (ping) packets to test that the hosts are booted and connected to their respective networks.

All the hosts accept inbound ssh connections for provisioning by Ansible and for ops access during normal operations.

5.4.2. Bastion Host Security Group

The bastion host only needs to allow inbound ssh. This host exists to give operators and automation a stable base to monitor and manage the rest of the services.

All other security groups accept ssh inbound from this security group.

The following commands create and define the bastion network security group.

Table 5.3. Bastion Security Group TCP ports

Port/Protocol   Service   Purpose

22/TCP          SSH       Secure shell login

Create Bastion Security Group

#!/bin/sh
openstack security group create bastion-sg
openstack security group rule create --ingress --protocol icmp bastion-sg
openstack security group rule create --protocol tcp \
--dst-port 22 bastion-sg
#Verification of security group
openstack security group show bastion-sg

5.4.3. Master Host Security Group

The RHOCP master service requires the most complex network access controls.

In addition to the secure http ports used by the developers, these hosts contain the etcd servers that form the cluster.

These commands create and define master host network security group.

Table 5.4. Master Host Security Group Ports

Port/Protocol   Service      Purpose

22/TCP          SSH          secure shell login
53/TCP          DNS          internal name services (pre 3.2)
53/UDP          DNS          internal name services (pre 3.2)
2379/TCP        etcd         client → server connections
2380/TCP        etcd         server → server cluster communications
4789/UDP        SDN          pod to pod communications
8053/TCP        DNS          internal name services (3.2+)
8053/UDP        DNS          internal name services (3.2+)
8443/TCP        HTTPS        Master WebUI and API
10250/TCP       kubernetes   kubelet communications
24224/TCP       fluentd      Docker logging

Create Master Security Group

#!/bin/sh
openstack security group create master-sg
openstack security group rule create --protocol icmp master-sg
neutron security-group-rule-create master-sg \
 --protocol tcp --port-range-min 22 --port-range-max 22 \
 --remote-group-id bastion-sg


neutron security-group-rule-create master-sg \
 --protocol tcp --port-range-min 2380 --port-range-max 2380 \
 --remote-group-id master-sg

# Port 2380 is restricted to the master-sg group above and is not opened globally.
for PORT in 53 2379 8053 8443 10250 24224
do
  openstack security group rule create --protocol tcp --dst-port $PORT master-sg
done

for PORT in 53 4789 8053 24224
do
  openstack security group rule create --protocol udp --dst-port $PORT master-sg
done
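
Optionally, review the resulting rules with the same command used for the bastion group:

openstack security group show master-sg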

5.4.4. Node Host Security Group

The node hosts execute the containers within the RHOCP service. The nodes are broken into two sets. The first set runs the Red Hat OpenShift Container Platform infrastructure, the Red Hat OpenShift Container Platform router, and a local docker registry. The second set runs the developer applications.

5.4.4.1. Infrastructure Node Security Group

The infrastructure nodes run the Red Hat OpenShift Container Platform router and the local registry. They must accept inbound connections on the web ports that are forwarded to their destinations.

Table 5.5. Infrastructure Node Security Group Ports

Port/Protocol   Service      Purpose

22/TCP          SSH          secure shell login
80/TCP          HTTP         cleartext application web traffic
443/TCP         HTTPS        encrypted application web traffic
4789/UDP        SDN          pod to pod communications
10250/TCP       kubernetes   kubelet communications

Infrastructure Node Security Group

#!/bin/sh
openstack security group create infra-node-sg
openstack security group rule create --protocol icmp infra-node-sg
neutron security-group-rule-create infra-node-sg \
  --protocol tcp --port-range-min 22 --port-range-max 22 \
  --remote-group-id bastion-sg

for PORT in 80 443 10250
do
 openstack security group rule create --protocol tcp --dst-port $PORT infra-node-sg
done
openstack security group rule create --protocol udp --dst-port 4789 infra-node-sg

5.4.4.2. App Node Security Group

The application nodes only accept traffic from the control and tenant networks: ssh from the bastion host, the kubelet control connections, and the SDN traffic for container communications.

Table 5.6. Application Node Security Group Ports

Port/Protocol   Service      Purpose

22/TCP          SSH          secure shell login
4789/UDP        SDN          pod to pod communications
10250/TCP       kubernetes   kubelet communications

Create App Node Security Group

#!/bin/sh
openstack security group create app-node-sg
openstack security group rule create --protocol icmp app-node-sg
neutron security-group-rule-create app-node-sg \
     --protocol tcp --port-range-min 22 --port-range-max 22 \
     --remote-group-id bastion-sg
openstack security group rule create --protocol tcp --dst-port 10250 app-node-sg
openstack security group rule create --protocol udp --dst-port 4789 app-node-sg

5.5. Host Instances

With the networks established, begin building the RHOSP instances to build the RHOCP environment.

This reference environment consists of:

  • 1 bastion host
  • 3 master nodes
  • 2 infrastructure nodes (infra nodes)
  • 3 application nodes (app nodes)

With three exceptions, the specifications of all the instances in this service are identical: each instance has a unique name, each node host has external storage, and each instance is assigned a security group based on its function. The common elements are presented first, followed by the differences.

Note

The commands to create a new instance are presented only once. Repeat them for each instance, altering the instance name each time.

5.5.1. Host Names, DNS and cloud-init

RHOCP uses the host names of all the nodes to identify, control and monitor the services on them. It is critical that the instance hostnames, the DNS hostnames and IP addresses are mapped correctly and consistently.

Without any inputs, RHOSP uses the nova instance name as the hostname and novalocal as the domain, so the bastion host's FQDN would be bastion.novalocal. This would be sufficient if RHOSP populated a DNS service with these names, allowing each instance to find IP addresses by name.

Using the .novalocal domain name requires creating a zone named .novalocal in the external DNS service. However, since RHOSP instance names are unique only within a project, this risks name collisions with other projects. Instead, create a subdomain for the control and tenant networks under the project domain, i.e. ocp3.example.com.

Table 5.7. Subdomains for RHOCP internal networks

Domain Name                 Description

control.ocp3.example.com    all interfaces on the control network
tenant.ocp3.example.com     all interfaces on the tenant network

The nova instance name and the instance hostname are the same. Both are in the control subdomain. The floating IPs are assigned to the top level domain ocp3.example.com.

Table 5.8. Sample FQDNs

Full Name                            Description

master-0.control.ocp3.example.com    name of the control network interface on the master-0 instance
master-0.tenant.ocp3.example.com     name of the tenant network interface on the master-0 instance
master-0.ocp3.example.com            name of the floating IP for the master-0 instance

RHOSP provides a way for users to pass in information to be applied when an instance boots. The --user-data switch to openstack server create makes the contents of the provided file available to the instance through cloud-init. cloud-init is a set of init scripts for cloud instances. It is available via the rhel-7-server-rh-common-rpms repository. It queries a standard URL for the user-data file and processes the contents to initialize the OS.

This deployment process uses cloud-init to control three values:

  1. hostname
  2. fully qualified domain name (FQDN)
  3. enable sudo via ssh

The user-data file is a multi-part MIME file with two parts. One is the cloud-config section that sets the hostname and FQDN and writes the eth1 interface configuration. The other is a short one-line shell script.

The user-data is placed in files named <hostname>.yaml where '<hostname>' is the name of the instance. The script below generates user-data for all of the instances.

Create user-data directory

mkdir -p ~/user-data

Generate user-data for instance hostnames

#!/bin/sh

#set OCP3_DOMAIN and CONTROL_DOMAIN to override
OCP3_DOMAIN=${OCP3_DOMAIN:-ocp3.example.com}
CONTROL_DOMAIN=${CONTROL_DOMAIN:-control.${OCP3_DOMAIN}}

MASTER_COUNT=${MASTER_COUNT:-3}
INFRA_NODE_COUNT=${INFRA_NODE_COUNT:-2}
APP_NODE_COUNT=${APP_NODE_COUNT:-3}

BASTION="bastion"
MASTERS=$(for M in $(seq 0 $(($MASTER_COUNT-1))) ; do echo master-$M ; done)
INFRA_NODES=$(for I in $(seq 0 $(($INFRA_NODE_COUNT-1))) ; do echo infra-node-$I ; done)
APP_NODES=$(for A in $(seq 0 $(($APP_NODE_COUNT-1))) ; do echo app-node-$A ; done)
ALL_NODES="$INFRA_NODES $APP_NODES"
ALL_HOSTS="$BASTION $MASTERS $ALL_NODES"

function generate_userdata_mime() {
  cat <<EOF
From nobody Fri Oct  7 17:05:36 2016
Content-Type: multipart/mixed; boundary="===============6355019966770068462=="
MIME-Version: 1.0

--===============6355019966770068462==
MIME-Version: 1.0
Content-Type: text/cloud-config; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="$1.yaml"

#cloud-config
hostname: $1
fqdn: $1.$2

write_files:
  - path: "/etc/sysconfig/network-scripts/ifcfg-eth1"
    permissions: "0644"
    owner: "root"
    content: |
      DEVICE=eth1
      TYPE=Ethernet
      BOOTPROTO=dhcp
      ONBOOT=yes
      DEFROUTE=no
      PEERDNS=no

--===============6355019966770068462==
MIME-Version: 1.0
Content-Type: text/x-shellscript; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="allow-sudo-ssh.sh"

#!/bin/sh
sed -i "/requiretty/s/^/#/" /etc/sudoers

--===============6355019966770068462==--

EOF
}

[ -d user-data ] || mkdir -p user-data
for HOST in $ALL_HOSTS
do
  generate_userdata_mime ${HOST} ${CONTROL_DOMAIN} > user-data/${HOST}.yaml
done

5.5.2. Create Bastion Host Instance

This script creates a single instance used to access and control all the nodes within the RHOCP environment. Set the OCP3_DOMAIN environment variable before invoking these scripts to customize the target domain for the entire deployment.

Note

The script requires the operator to know the image name, key name and the network ids of the control and tenant network. The following commands provide that information.

$ openstack image list
$ openstack keypair list
$ openstack network list

Create Bastion Host instance

#!/bin/sh
OCP3_DOMAIN=${OCP3_DOMAIN:-ocp3.example.com}
OCP3_CONTROL_DOMAIN=${OCP3_CONTROL_DOMAIN:-control.${OCP3_DOMAIN}}
IMAGE=${IMAGE:-rhel7}
FLAVOR=${FLAVOR:-m1.small}
OCP3_KEY_NAME=${OCP3_KEY_NAME:-ocp3}
netid1=$(openstack network list | awk "/control-network/ { print \$2 }")
netid2=$(openstack network list | awk "/tenant-network/ { print \$2 }")
openstack server create --flavor ${FLAVOR} --image ${IMAGE} \
--key-name ${OCP3_KEY_NAME} \
--nic net-id=$netid1 \
--nic net-id=$netid2 \
--security-group bastion-sg --user-data=user-data/bastion.yaml \
bastion.${OCP3_CONTROL_DOMAIN}

Note

This reference environment uses rhel7 as the image name, ocp3 as the key name, and the network ids found within openstack network list.

5.5.3. Create Master Host instances

The following script creates three RHOCP master instances.

Create Master Host instances

#!/bin/sh
OCP3_DOMAIN=${OCP3_DOMAIN:-ocp3.example.com}
OCP3_CONTROL_DOMAIN=${OCP3_CONTROL_DOMAIN:-control.${OCP3_DOMAIN}}
OCP3_KEY_NAME=${OCP3_KEY_NAME:-ocp3}
IMAGE=${IMAGE:-rhel7}
FLAVOR=${FLAVOR:-m1.small}
MASTER_COUNT=${MASTER_COUNT:-3}
netid1=$(openstack network list | awk "/control-network/ { print \$2 }")
netid2=$(openstack network list | awk "/tenant-network/ { print \$2 }")
for HOSTNUM in $(seq 0 $(($MASTER_COUNT-1))) ; do
    openstack server create --flavor ${FLAVOR} --image ${IMAGE} \
      --key-name ${OCP3_KEY_NAME} \
      --nic net-id=$netid1 --nic net-id=$netid2 \
      --security-group master-sg --user-data=user-data/master-${HOSTNUM}.yaml \
      master-${HOSTNUM}.${OCP3_CONTROL_DOMAIN}
done

Note

This reference environment uses rhel7 as the image name, ocp3 as the key name, and the network ids found within openstack network list.

5.5.4. Cinder Volumes for /var/lib/docker

All of the node instances need a cinder volume. These volumes are attached to the instances to provide additional space for Docker image and container storage.

Create a volume for each node instance:

Create Node Volumes

#!/bin/sh
VOLUME_SIZE=${VOLUME_SIZE:-8}
BASTION="bastion"
INFRA_NODE_COUNT=${INFRA_NODE_COUNT:-2}
APP_NODE_COUNT=${APP_NODE_COUNT:-3}

INFRA_NODES=$(for I in $(seq 0 $(($INFRA_NODE_COUNT-1))) ; do echo infra-node-$I ; done)
APP_NODES=$(for I in $(seq 0 $(($APP_NODE_COUNT-1))) ; do echo app-node-$I ; done)
ALL_NODES="$INFRA_NODES $APP_NODES"

for NODE in $ALL_NODES ; do
    cinder create --name ${NODE}-docker ${VOLUME_SIZE}
done

openstack volume list

5.5.5. Create the Node instances

The openstack server create command accepts a --block-device-mapping argument to attach cinder volumes to new instances. This argument requires the volume ID rather than the volume name, so the scripts below look up the ID for each volume by name.

The script below creates two infrastructure nodes, mounting the matching cinder volume on device vdb.
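
The lookup is performed inline with cinder list and awk. The same logic can be wrapped in a small helper function; this is a sketch only, equivalent to the inline pipeline used below:

#!/bin/sh
# Sketch of a reusable lookup, mirroring the inline cinder list / awk pipeline in the scripts below.
volume_id_by_name() {
  # $1 = volume name, e.g. infra-node-0-docker
  cinder list | awk "/ ${1} / { print \$2 }"
}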

Create the Infrastructure Node instances

#!/bin/sh
OCP3_DOMAIN=${OCP3_DOMAIN:-ocp3.example.com}
OCP3_CONTROL_DOMAIN=${OCP3_CONTROL_DOMAIN:-control.${OCP3_DOMAIN}}
OCP3_KEY_NAME=${OCP3_KEY_NAME:-ocp3}
IMAGE=${IMAGE:-rhel7}
FLAVOR=${FLAVOR:-m1.small}
netid1=$(openstack network list | awk "/control-network/ { print \$2 }")
netid2=$(openstack network list | awk "/tenant-network/ { print \$2 }")
INFRA_NODE_COUNT=${INFRA_NODE_COUNT:-2}
for HOSTNUM in $(seq 0 $(($INFRA_NODE_COUNT-1))) ; do
    HOSTNAME=infra-node-$HOSTNUM
    VOLUMEID=$(cinder list | awk "/${HOSTNAME}-docker/ { print \$2 }")
    openstack server create --flavor ${FLAVOR} --image ${IMAGE} \
       --key-name ${OCP3_KEY_NAME} \
       --nic net-id=$netid1 --nic net-id=$netid2 \
       --security-group infra-node-sg --user-data=user-data/${HOSTNAME}.yaml \
       --block-device-mapping vdb=${VOLUMEID} \
       ${HOSTNAME}.${OCP3_CONTROL_DOMAIN}
done

The script below creates three application nodes, mounting the matching Cinder volume on device vdb.

Create the App Node instances

#!/bin/sh
OCP3_DOMAIN=${OCP3_DOMAIN:-ocp3.example.com}
OCP3_CONTROL_DOMAIN=${OCP3_CONTROL_DOMAIN:-control.${OCP3_DOMAIN}}
OCP3_KEY_NAME=${OCP3_KEY_NAME:-ocp3}
IMAGE=${IMAGE:-rhel7}
FLAVOR=${FLAVOR:-m1.small}
netid1=$(openstack network list | awk "/control-network/ { print \$2 }")
netid2=$(openstack network list | awk "/tenant-network/ { print \$2 }")
APP_NODE_COUNT=${APP_NODE_COUNT:-3}
for HOSTNUM in $(seq 0 $(($APP_NODE_COUNT-1))) ; do
    HOSTNAME=app-node-${HOSTNUM}
    VOLUMEID=$(cinder list | awk "/${HOSTNAME}-docker/ { print \$2 }")
    openstack server create --flavor ${FLAVOR} --image ${IMAGE} \
        --key-name ${OCP3_KEY_NAME} \
        --nic net-id=$netid1 --nic net-id=$netid2 \
        --security-group app-node-sg --user-data=user-data/${HOSTNAME}.yaml \
        --block-device-mapping vdb=${VOLUMEID} \
        ${HOSTNAME}.${OCP3_CONTROL_DOMAIN}
done

Note

This reference environment uses rhel7 as the image name, ocp3 as the key name, and the network ids found within openstack network list.
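
At this point all of the instances should exist. Optionally list them to confirm their names and network addresses before continuing:

openstack server list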

5.5.6. Disable Port Security on Nodes

In this installation, flannel provides the software-defined network (SDN). Current versions of neutron enforce port security on ports by default, which prevents a port from sending or receiving packets with a MAC address different from the one on the port itself. flannel creates virtual MAC and IP addresses and must send and receive packets on the port, so port security must be disabled on the ports that carry flannel traffic.

This configuration runs flannel on a private network that is bound to the eth1 device on the node instances.

Disable Port Security on Tenant Network Ports

#!/bin/sh
OCP3_DOMAIN=${OCP3_DOMAIN:-ocp3.example.com}
OCP3_CONTROL_DOMAIN=${OCP3_CONTROL_DOMAIN:-control.${OCP3_DOMAIN}}
TENANT_NETWORK=${TENANT_NETWORK:-tenant-network}

MASTER_COUNT=${MASTER_COUNT:-3}
INFRA_NODE_COUNT=${INFRA_NODE_COUNT:-2}
APP_NODE_COUNT=${APP_NODE_COUNT:-3}

MASTERS=$(for M in $(seq 0 $(($MASTER_COUNT-1))) ; do echo master-$M ; done)
INFRA_NODES=$(for I in $(seq 0 $(($INFRA_NODE_COUNT-1))) ; do echo infra-node-$I ; done)
APP_NODES=$(for A in $(seq 0 $(($APP_NODE_COUNT-1))) ; do echo app-node-$A ; done)

function tenant_ip() {
  # HOSTNAME=$1
  nova show ${1} | grep ${TENANT_NETWORK} | cut -d\| -f3 | cut -d, -f1 | tr -d ' '
}

function port_id_by_ip() {
  # IP=$1
  neutron port-list --field id --field fixed_ips | grep $1 | cut -d' ' -f2
}

for NAME in $MASTERS $INFRA_NODES $APP_NODES
do
  TENANT_IP=$(tenant_ip ${NAME}.${OCP3_CONTROL_DOMAIN})
  PORT_ID=$(port_id_by_ip $TENANT_IP)
  neutron port-update $PORT_ID --no-security-groups --port-security-enabled=False
done
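
Optionally, confirm the change for each port. The check below is a sketch that reuses the PORT_ID value from the loop above; add it inside the loop after the port-update, or substitute a specific port ID:

neutron port-show $PORT_ID | grep port_security_enabled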

5.5.7. Adding Floating IP addresses

The bastion, master and infrastructure nodes all need to be accessible from outside the control network. An IP address on the public network is required.

Create and Assign Floating IP Addresses

#!/bin/sh
OCP3_DOMAIN=${OCP3_DOMAIN:-ocp3.example.com}
OCP3_CONTROL_DOMAIN=${OCP3_CONTROL_DOMAIN:-control.${OCP3_DOMAIN}}
PUBLIC_NETWORK=${PUBLIC_NETWORK:-public_network}
MASTER_COUNT=${MASTER_COUNT:-3}
INFRA_NODE_COUNT=${INFRA_NODE_COUNT:-2}
APP_NODE_COUNT=${APP_NODE_COUNT:-3}

BASTION="bastion"
MASTERS=$(for M in $(seq 0 $(($MASTER_COUNT-1))) ; do echo master-$M ; done)
INFRA_NODES=$(for I in $(seq 0 $(($INFRA_NODE_COUNT-1))) ; do echo infra-node-$I ; done)
for HOST in $BASTION $MASTERS $INFRA_NODES
do
  openstack floating ip create ${PUBLIC_NETWORK}
  FLOATING_IP=$(openstack floating ip list | awk "/None/ { print \$4 }")
  openstack server add floating ip ${HOST}.${OCP3_CONTROL_DOMAIN} ${FLOATING_IP}
done

5.6. Publishing Host IP Addresses

While end users only interact with the RHOCP services through the public interfaces, it is important to assign a name to each interface so that each RHOCP component can identify and communicate with the others across the environment. The simplest way to accomplish this is to update the DNS entries for each interface of each RHOSP instance.

All of the RHOSP instances connect to the control-network and the tenant-network. The bastion, master and infra nodes also have floating IPs assigned from the public_network, which must be published in a DNS zone. This reference environment uses the ocp3.example.com zone. The IP addresses on the control and tenant networks reside in subdomains named for those networks. DNS updates require the IP address of the primary DNS server for the zone and a key that enables updates.

When using BIND (Berkeley Internet Name Domain), the DNS server included in Red Hat Enterprise Linux, the name service is provided by the named program. When using the named service, the update key is located in the /etc/rndc.key file. Copy the contents of that file to a local file such as ocp3_dns.key. To apply updates, the bind-utils RPM must be installed; it provides the /usr/bin/nsupdate binary, which implements dynamic updates.

Prior to making DNS updates, acquire the hostname, FQDN and floating IP for each instance.

On a server that has access to the key, i.e. /etc/rndc.key, perform the following for all nodes:

Change the IP address of a DNS name

nsupdate -k ocp3_dns.key <<EOF
server <dns-server-ip>
zone ocp3.example.com
update add <FQDN> 3600 A <ip-address>
.
.
.
update add <FQDN> 3600 A <ip-address>
send
quit
EOF

The following example adds the control network and floating IP entries for a particular node.

Adding Master Node entries

nsupdate -k ocp3_dns.key <<EOF
server <dns-server-ip>
zone ocp3.example.com
update add master-0.control.ocp3.example.com 3600 A 172.x.10.7
update add master-0.ocp3.example.com 3600 A 10.x.0.166
send
quit
EOF
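
Optionally, confirm that the new records resolve. The example below assumes dig from the bind-utils package and the same DNS server used for the updates:

dig +short master-0.ocp3.example.com @<dns-server-ip>
dig +short master-0.control.ocp3.example.com @<dns-server-ip>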

Table 5.9. Example Hostname/IP Address Mappings

IP Address    FQDN                             Comment

10.x.0.151    bastion.ocp3.example.com         provides deployment and operations access
10.x.0.166    master-0.ocp3.example.com        load-balanced developer access
10.x.0.167    master-1.ocp3.example.com        load-balanced developer access
10.x.0.168    master-2.ocp3.example.com        load-balanced developer access
10.x.0.164    infra-node-0.ocp3.example.com    runs OpenShift router
10.x.0.164    infra-node-1.ocp3.example.com    runs OpenShift router
10.x.0.164    infra-node-2.ocp3.example.com    runs OpenShift router

Sample Host Table - DNS entries

10.19.114.153 bastion.ocp3.example.com bastion
172.18.10.6 bastion.control.ocp3.example.com

10.19.114.114 master-0.ocp3.example.com
172.18.10.7 master-0.control.ocp3.example.com

10.19.114.115 master-1.ocp3.example.com
172.18.10.8 master-1.control.ocp3.example.com

10.19.114.116 master-2.ocp3.example.com
172.18.10.9 master-2.control.ocp3.example.com

10.19.114.120 infra-node-0.ocp3.example.com
172.18.10.10 infra-node-0.control.ocp3.example.com

10.19.114.121 infra-node-1.ocp3.example.com
172.18.10.11 infra-node-1.control.ocp3.example.com

172.18.10.17 app-node-0.control.ocp3.example.com
172.18.10.18 app-node-1.control.ocp3.example.com
172.18.10.19 app-node-2.control.ocp3.example.com

5.7. Load-balancer

This reference environment consists of a standalone server running an HAProxy instance with the following configuration. Include the floating IPs of the master and infrastructure nodes in the load-balancer configuration.

The file that follows is a sample configuration for HAProxy.

/etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
listen  stats :9000
        stats enable
        stats realm Haproxy\ Statistics
        stats uri /haproxy_stats
        stats auth admin:password
        stats refresh 30
        mode http

frontend  main80 *:80
    default_backend             router80

backend router80
    balance source
    mode tcp
    server infra-node-0.ocp3.example.com 10.19.114.120:80 check
    server infra-node-1.ocp3.example.com 10.19.114.121:80 check
    server infra-node-2.ocp3.example.com 10.19.114.122:80 check

frontend  main443 *:443
    default_backend             router443

backend router443
    balance source
    mode tcp
    server infra-node-0.ocp3.example.com 10.19.114.120:443 check
    server infra-node-1.ocp3.example.com 10.19.114.121:443 check
    server infra-node-2.ocp3.example.com 10.19.114.122:443 check


frontend  main8443 *:8443
    default_backend             mgmt8443

backend mgmt8443
    balance source
    mode tcp
    server master-0.ocp3.example.com 10.19.114.114:8443 check
    server master-1.ocp3.example.com 10.19.114.115:8443 check
    server master-2.ocp3.example.com 10.19.114.116:8443 check
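
After writing the configuration on the load-balancer host, validate the file and then enable and start the service. This is a sketch and assumes the haproxy package is already installed on that host:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl enable haproxy
sudo systemctl start haproxy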

5.8. Host Preparation

With access to the RHOSP instances, this section turns its focus to running the openshift-ansible playbooks. For a successful deployment of RHOCP, the RHOSP instances require additional configuration.

The additional configuration on the RHOCP nodes is all done via the bastion host. The bastion host acts as a jump server to access all RHOCP nodes via an ssh session using the control network.

Note

The bastion host is the only node that is accessible via ssh from outside the cluster.

The RHOCP master nodes require software updates and access to the repositories needed to install Ansible. The infra and app nodes require access to multiple software repositories and a cinder volume, i.e. /dev/vdb, to store the Docker container files.

Bastion Customization

  • Register for Software Updates
  • Install openshift-ansible-playbooks RPM package
  • Copy the private key to the cloud-init user

Common Customization (master, infra, app nodes)

  • Enable sudo via ssh
  • Register for software repositories
  • Enable eth1 (tenant network)

Infra and App Node Customization

  • Mount cinder volume
  • Install docker RPM

5.8.1. Logging into the Bastion Host

Log into the bastion host as follows:

Log into Bastion Host

ssh -i ocp3.pem  cloud-user@bastion.control.ocp3.example.com

Note

This example assumes the private key is in a file named ocp3.pem.

5.8.1.1. Register for Software Updates

Red Hat software installation and updates require a valid subscription and registration. Access to the RHOCP and RHOSP repositories may require a specific subscription pool.

Register For Software Updates (RHN)

#!/bin/sh
# Set RHN_USERNAME, RHN_PASSWORD RHN_POOL_ID for your environment
sudo subscription-manager register \
  --username $RHN_USERNAME \
  --password $RHN_PASSWORD
sudo subscription-manager subscribe --pool $RHN_POOL_ID

Enable the standard server repositories and RHOCP software specific repository as follows:

Enable Required Repositories

#!/bin/sh
OCP3_VERSION=${OCP3_VERSION:-3.4}
sudo subscription-manager repos --disable="*"
sudo subscription-manager repos \
  --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-optional-rpms \
  --enable=rhel-7-server-ose-${OCP3_VERSION}-rpms

Note

The RHOCP software is in a specific product repository. The version this reference environment uses is version 3.4.

5.8.1.2. Ansible Preparation

  • Install openshift-ansible-playbooks

5.8.1.3. Install Ansible and OpenShift Ansible

Installing the Ansible components requires installing the openshift-ansible-playbooks RPM. This package pulls in all of the required dependencies.

Install openshift-ansible-playbooks

#!/bin/sh
sudo yum install -y openshift-ansible-playbooks
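
Optionally, confirm that the playbooks and their Ansible dependency are installed:

rpm -q openshift-ansible-playbooks ansible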

5.8.2. Preparing to Access the RHOCP Instances

The bastion host requires access to all the RHOCP instances as the cloud-user. This ensures that Ansible can log into each instance and make the appropriate changes when running the Ansible playbooks. The following sections show how to enable access to all the RHOCP instances.

The first step is to install the RHOSP ssh private key for the cloud-user account on the bastion host. The second step is to enable the ssh command to call sudo without opening a shell. This allows Ansible to execute commands as root on all the RHOCP instances directly from the bastion host.

5.8.2.1. Set the Host Key On the Bastion

A copy of the private key is needed on the bastion host to allow ssh access from the bastion host to the RHOCP instances.

On the system with the private key file, copy the private key to the bastion host and place it in .ssh/id_rsa. An example is shown below.

Copy the SSH key to the bastion host

scp -i ocp3.pem ocp3.pem cloud-user@bastion.control.ocp3.example.com:.ssh/id_rsa

ssh does not accept key files that are readable by other users. The key file should be readable and writable only by its owner. Log into the bastion host and change the key file's permissions.

Restrict access to the SSH key on the bastion host

chmod 600 ~/.ssh/id_rsa

5.8.3. Instance Types and Environment Variables

The RHOCP instances are separated into three categories: master, infrastructure and application nodes. The following sections run a series of short script fragments that contain environment variables to allow for customization and to configure multiple hosts.

Below are the variables that are found in the script fragments and the conventions for their use.

Host Class Environment Variables

MASTERS="master-0 master-1 master-2"
INFRA_NODES="infra-node-0 infra-node-1"
APP_NODES="app-node-0 app-node-1 app-node-2"
ALL_HOSTS="$MASTERS $INFRA_NODES $APP_NODES"

5.8.3.1. Log into RHOCP Instances

The RHOCP service instances only allow inbound ssh from the bastion host. The connections are made over the control network interface.

In the DNS entries for each instance, the control network IP address is placed in the .control sub-domain in this reference environment. This means that the control network name for an instance is <hostname>.control.ocp3.example.com.

Once the hostname and FQDN DNS entries are defined, cloud-init puts the control network subdomain in the DNS search path in /etc/resolv.conf. This enables the bastion host to log into any instance using its hostname.

ssh master-0

Note

Log in to each instance to verify connectivity and to accept the host key.
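
The check can also be scripted from the bastion host. The sketch below accepts each host key automatically and prints the FQDN reported by each instance; it assumes the host-class variables from the previous section are set:

#!/bin/sh
for H in $ALL_HOSTS
do
  ssh -o StrictHostKeyChecking=no $H hostname -f
done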

5.8.3.2. Enable sudo via ssh

By default, running sudo commands via ssh is forbidden because sudo requires a TTY (the requiretty setting). This restriction must be relaxed so that the Ansible playbooks can run all of the commands directly from the bastion host.

For each master, infrastructure and app node, log in and modify the /etc/sudoers file:

From the bastion host as the cloud-user,

Enable sudo via ssh

ssh <node-name>
sudo cp /etc/sudoers{,.orig}
sudo sed -i "/requiretty/s/^/#/" /etc/sudoers
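
Alternatively, the same edit can be applied to every node in one pass from the bastion host. This sketch assumes ssh -t to allocate a TTY, which satisfies the requiretty setting while it is still in place:

#!/bin/sh
for H in $ALL_HOSTS
do
  ssh -t $H "sudo cp /etc/sudoers{,.orig} && sudo sed -i '/requiretty/s/^/#/' /etc/sudoers"
done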

To confirm that commands can be executed as the root user on the RHOCP instances, run the following command on the bastion host for each RHOCP instance. The example below uses ssh to run the id command with sudo on the master-0 node.

Demonstrate sudo via ssh

ssh master-0 sudo id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

5.8.4. Common Configuration Steps

The following steps within these subsections are to be executed on all of the RHOCP instances.

5.8.4.1. Enable Software Repositories

Register all of the RHOCP instances for software updates before attempting to install and update the environments.

Register All instances for Software Updates

#!/bin/sh
RHN_USERNAME=${RHN_USERNAME:-changeme}
RHN_PASSWORD=${RHN_PASSWORD:-changeme}
RHN_POOL_ID=${RHN_POOL_ID:-changeme}

for H in $ALL_HOSTS
do
  ssh $H sudo subscription-manager register \
      --username ${RHN_USERNAME} \
      --password ${RHN_PASSWORD}
  ssh $H sudo subscription-manager attach \
      --pool ${RHN_POOL_ID}
done

Disable the default enabled repositories, and re-enable the minimal standard set of repositories.

Note

The RHOCP software repository is set to the version that is used within this reference environment, version 3.4.

Enable Standard Software Repositories

#!/bin/sh
OCP3_VERSION=${OCP3_VERSION:-3.4}

for H in $ALL_HOSTS
do
  ssh $H sudo subscription-manager repos --disable="*"
  ssh $H sudo subscription-manager repos \
      --enable="rhel-7-server-rpms" \
      --enable="rhel-7-server-extras-rpms" \
      --enable="rhel-7-server-optional-rpms" \
      --enable="rhel-7-server-ose-${OCP3_VERSION}-rpms"
done

For all nodes, install the following base packages:

Install Base Packages

#!/bin/sh
for H in $ALL_HOSTS
do
  ssh $H sudo yum install -y \
    wget git net-tools bind-utils iptables-services \
    bridge-utils bash-completion atomic-openshift-excluder \
    atomic-openshift-docker-excluder
done

5.8.5. Node Configuration Steps

The infrastructure and application nodes require configuration updates not needed on the bastion host or the master nodes.

The infrastructure and application nodes must be able to communicate over a private network, and mount an external volume to hold the Docker container and image storage.

5.8.5.1. Enable Tenant Network Interface

The containers within the RHOCP service communicate over a private back-end network called the tenant network. The infrastructure and application nodes are configured with a second interface, eth1, to carry this traffic. RHOCP uses flannel to route the traffic within and between nodes.

The file below configures that second interface on each node. RHOSP assigns each node an address on the tenant subnet via DHCP.

This file is copied to each instance and then the interface is started. When RHOCP is installed, the configuration file must specify this interface name for flannel.

ifcfg-eth1

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="dhcp"
BOOTPROTOv6="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="no"
IPV6INIT="yes"
PERSISTENT_DHCLIENT="1"

The script below copies the ifcfg-eth1 file to the hosts and brings the interface up.

On the bastion host,

Copy and Install eth1 configuration

#!/bin/sh
for H in $ALL_HOSTS ; do
    scp ifcfg-eth1 $H:
    ssh $H sudo cp ifcfg-eth1 /etc/sysconfig/network-scripts
    ssh $H sudo ifup eth1
done
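
Optionally, verify that each node received a DHCP address on the tenant subnet:

#!/bin/sh
for H in $ALL_HOSTS
do
  ssh $H ip -4 addr show eth1
done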

5.8.5.2. Add Docker Storage

The infrastructure and application nodes each get an external cinder volume that provides local space for the Docker images and live containers. Docker must be installed before the storage is configured and mounted.

5.8.5.3. Install Docker

Install the docker RPM package on all the infrastructure and app nodes, but do not enable or start the service yet.

Install Docker

#!/bin/sh
for H in $INFRA_NODES $APP_NODES
do
  ssh $H sudo yum install -y docker
done

5.8.5.4. Configure and Enable the LVM Metadata Daemon

The LVM metadata caching daemon (lvmetad) keeps LVM device metadata current as new block devices, such as the attached cinder volume, appear. Enable and start it on the infrastructure and app nodes as follows.

Enable the LVM metadata daemon

#!/bin/sh
for H in $INFRA_NODES $APP_NODES
do
  ssh $H sudo systemctl enable lvm2-lvmetad
  ssh $H sudo systemctl start lvm2-lvmetad
done

5.8.5.5. Set the Docker Storage Location

The Docker storage configuration is set by a file named /etc/sysconfig/docker-storage-setup. This file specifies that Docker storage is placed on the device /dev/vdb and that a volume group called docker-vg is created on it.

docker-storage-setup

DEVS=/dev/vdb
VG=docker-vg

This script fragment copies the storage configuration to the infrastructure and app nodes and then runs docker-storage-setup to configure the storage.

Copy and Install Storage Setup Config

#!/bin/sh
for H in $INFRA_NODES $APP_NODES
do
  scp docker-storage-setup $H:
  ssh $H sudo cp ./docker-storage-setup /etc/sysconfig
  ssh $H sudo /usr/bin/docker-storage-setup
done
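
Optionally, confirm that the docker-vg volume group was created on each node:

#!/bin/sh
for H in $INFRA_NODES $APP_NODES
do
  ssh $H sudo vgs docker-vg
done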

5.8.5.6. Enable and Start Docker

This script fragment starts and enables the docker service on all infrastructure and app nodes.

Enable and Start Docker

#!/bin/sh
for H in $INFRA_NODES $APP_NODES
do
  ssh $H sudo systemctl enable docker
  ssh $H sudo systemctl start docker
done

5.9. Deploy Red Hat OpenShift Container Platform

The input to the OpenShift Ansible playbooks is separated into three files.

  • inventory
  • group_vars/OSEv3.yml
  • ansible.cfg

The inventory is an ini-formatted set of host names and groupings of hosts with common configuration attributes. It defines the configuration variables for each group and individual host. The user defines a set of groups at the top level and fills out a section for each group defined. Each group section lists the hosts that are members of that group.

The OSEv3.yml file is a yaml formatted data file. It defines a structured set of key/value pairs that are used to configure all of the RHOSP instances.

The ansible.cfg file is an ini formatted file that defines Ansible run-time customizations.

Note

It is recommended to create a directory on the bastion host as the cloud-user, e.g. mkdir -p ~/ansible_files/group_vars, to hold all three files, including the group_vars directory.

5.9.1. Ansible inventory File

The inventory file defines the set of servers for Ansible to configure. The servers are grouped into classes, and the members of a given class get the same configuration.

Ansible control file: inventory

[OSEv3:children]
masters
nodes
etcd

[masters]
master-0.control.ocp3.example.com openshift_public_hostname=master-0.ocp3.example.com
master-1.control.ocp3.example.com openshift_public_hostname=master-1.ocp3.example.com
master-2.control.ocp3.example.com openshift_public_hostname=master-2.ocp3.example.com

[masters:vars]
openshift_schedulable=false
openshift_router_selector="region=infra"
openshift_registry_selector="region=infra"

[etcd]
master-0.control.ocp3.example.com openshift_public_hostname=master-0.ocp3.example.com
master-1.control.ocp3.example.com openshift_public_hostname=master-1.ocp3.example.com
master-2.control.ocp3.example.com openshift_public_hostname=master-2.ocp3.example.com

[infra-nodes]
infra-node-0.control.ocp3.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_public_hostname=infra-node-0.ocp3.example.com
infra-node-1.control.ocp3.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_public_hostname=infra-node-1.ocp3.example.com

[app-nodes]
app-node-[0:2].control.ocp3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

[nodes]
master-0.control.ocp3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_schedulable=false openshift_public_hostname=master-0.ocp3.example.com
master-1.control.ocp3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_schedulable=false openshift_public_hostname=master-1.ocp3.example.com
master-2.control.ocp3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_schedulable=false openshift_public_hostname=master-2.ocp3.example.com

[nodes:children]
infra-nodes
app-nodes

Note

The above is a sample inventory file for reference. For more examples, see the Installation and Configuration Guide, Multiple Masters section.

5.9.1.1. group_vars and the OSEv3.yml file

The following defines the global values in the OSEv3 section and all of its children in a file named group_vars/OSEv3.yml.

Note

The group_vars/OSEv3.yml file resides in the ~/ansible_files/group_vars directory previously created on the bastion host.

Global configuration values: group_vars/OSEv3.yml

deployment_type: openshift-enterprise
openshift_master_default_subdomain: apps.ocp3.example.com 1

# Developer access to WebUI and API
openshift_master_cluster_hostname: devs.ocp3.example.com 2
openshift_master_cluster_public_hostname: devs.ocp3.example.com 3
openshift_master_cluster_method: native

openshift_override_hostname_check: true
openshift_set_node_ip: true
openshift_use_dnsmasq: false
osm_default_node_selector: 'region=primary'

# Enable Flannel and set interface
openshift_use_openshift_sdn: false
openshift_use_flannel: true
flannel_interface: eth1

openshift_cloudprovider_kind: openstack
openshift_cloudprovider_openstack_auth_url: http://10.0.0.1:5000/v2.0 4
openshift_cloudprovider_openstack_username: <username> 5
openshift_cloudprovider_openstack_password: <password> 6
openshift_cloudprovider_openstack_tenant_name: <tenant name> 7

#If enabling a certificate, uncomment the following line.
#openshift_master_ldap_ca_file: /path/to/certificate.crt

# NOTE: Ensure to include the certificate name i.e certificate.crt within the
# ca field under openshift_master_identity_providers
# Change insecure value from true to false if using a certificate in the
# openshift_master_identity_providers section

openshift_master_identity_providers:
  - name: ldap_auth
    kind: LDAPPasswordIdentityProvider
    challenge: true
    login: true
    bindDN: cn=openshift,cn=users,dc=example,dc=com 8
    bindPassword: password 9
    url: ldap://ad1.example.com:389/cn=users,dc=example,dc=com?sAMAccountName 10
    attributes:
      id: ['dn']
      email: ['mail']
      name: ['cn']
      preferredUsername: ['sAMAccountName']
    ca: ''
    insecure: True

1    Apps are named <name>.apps.ocp3.example.com. Adjust as required.

     Note: this name must resolve to the IP address of the app load-balancer.

2 3  Developers access the WebUI and API at this name. Adjust as required.

     Note: this name must resolve to the IP address of the master load-balancer.

4    The URL of the RHOSP API service. Adjust as required.

5    The RHOSP username.

6    The RHOSP password.

7    The RHOSP project that contains the RHOCP service.

8    An LDAP user DN authorized to make LDAP user queries.

9    The password for the LDAP query user.

10   The query URL for LDAP authentication. Adjust the server name and the user base DN as required.

5.9.1.2. Ansible Configuration file: ansible.cfg

The final file is ansible.cfg. The following is a representation of the reference environment’s ansible.cfg file.

Note

Within this reference environment, this file resides in the bastion host /home/cloud-user/ansible_files directory.

Ansible customization: ansible.cfg

# config file for ansible -- http://ansible.com/
# ==============================================
[defaults]
remote_user = cloud-user
forks = 50
host_key_checking = False
gathering = smart
retry_files_enabled = false
nocows = true

[privilege_escalation]
become = True

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=900s -o GSSAPIAuthentication=no
control_path = /var/tmp/%%h-%%r
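
With all three files in place, an optional connectivity test confirms that Ansible can reach, and become root on, every host in the inventory before the full installation is run:

#!/bin/sh
cd ~/ansible_files
ansible -i inventory OSEv3 -m ping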

5.9.2. Run Ansible Installer

Prior to deploying RHOCP with the OpenShift Ansible playbooks, update all the RHOCP nodes and install the atomic-openshift-utils package.

Preinstallation packages update and required packages install

#!/bin/sh
ansible nodes -i ~/ansible_files/inventory -b -m yum -a "name=* state=latest"
ansible nodes -i ~/ansible_files/inventory -b -m yum -a "name=atomic-openshift-utils state=latest"

With the configurations on all the nodes set, invoke the OpenShift Ansible playbooks to deploy RHOCP.

Install OpenShift Container Platform

#!/bin/sh
export OCP_ANSIBLE_ROOT=${OCP_ANSIBLE_ROOT:-/usr/share/ansible}
export ANSIBLE_ROLES_PATH=${ANSIBLE_ROLES_PATH:-${OCP_ANSIBLE_ROOT}/openshift-ansible/roles}
export ANSIBLE_HOST_KEY_CHECKING=False

ansible-playbook -i ~/ansible_files/inventory \
  ${OCP_ANSIBLE_ROOT}/openshift-ansible/playbooks/byo/config.yml

The final step is to configure iptables to allow traffic on the master, infrastructure, and app nodes. These commands use Ansible to run on all the nodes.

  1. Create a copy of the iptables rules

    Backup iptables rules

    #!/bin/sh
    ansible nodes -i ~/ansible_files/inventory -b -m copy -a "remote_src=true src=/etc/sysconfig/iptables dest=/etc/sysconfig/iptables.orig"

  2. Run the following commands to accept traffic on the DOCKER chain, allow POSTROUTING masquerading on the flannel interface (eth1), and persist the iptables rules

    Apply and persist iptables rules

    #!/bin/sh
    ansible nodes -i ~/ansible_files/inventory -b -m shell \
      -a 'iptables -A DOCKER -p all -j ACCEPT'
    ansible nodes -i ~/ansible_files/inventory -b -m shell \
      -a 'iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE'
    ansible nodes -i ~/ansible_files/inventory -b -m shell \
      -a "tac /etc/sysconfig/iptables.orig | sed -e '0,/:DOCKER -/ s/:DOCKER -/:DOCKER ACCEPT/' | awk '"\!"p && /POSTROUTING/{print \"-A POSTROUTING -o eth1 -j MASQUERADE\"; p=1} 1' | tac > /etc/sysconfig/iptables"

Note

eth1 is the flannel interface within the RHOCP nodes.
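
Optionally, verify that the rules are active on the nodes:

#!/bin/sh
ansible nodes -i ~/ansible_files/inventory -b -m shell \
  -a 'iptables -S DOCKER; iptables -t nat -S POSTROUTING'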

5.9.3. Post Installation Tasks

Once OpenShift Container Platform has been installed, there are some post-installation tasks that need to be performed.

5.9.3.1. Configure bastion host

To perform administrative tasks or interact with the OpenShift Container Platform cluster, the bastion host cloud-user can be configured to log in as system:admin using the following commands:

Configure Bastion Host for oc cli

#!/bin/sh
mkdir ~/.kube/
ssh master-0 sudo cat /etc/origin/master/admin.kubeconfig > ~/.kube/config
sudo yum -y install atomic-openshift-clients
oc whoami
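
An optional quick check confirms that the credentials work and that all nodes registered with the cluster:

oc get nodes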

5.9.3.2. Create StorageClass

OpenShift Container Platform can provide dynamic storage to pods that require persistent storage (via persistent volume claims) backed by cinder volumes, by creating StorageClasses.

Note

This process is performed transparently for the user.

This requires the RHOCP cluster to be configured to interact with the RHOSP API, which the Ansible installer did earlier using the openshift_cloudprovider_* parameters.

The following command creates a StorageClass named "standard" in the "nova" OSP availability zone:

Configure StorageClass

#!/bin/sh
oc create -f - <<API
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
API
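
A claim can then request storage from this class. The sketch below uses the beta storage-class annotation appropriate for this OCP release; the claim name and size are examples only:

#!/bin/sh
oc create -f - <<API
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: standard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
API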

Note

A default StorageClass is a feature included in later OCP releases.