Chapter 4. Prerequisites and Preparation

This procedure depends on a number of services and settings that must be in place before proceeding.

This procedure assumes:

  • Red Hat OpenStack Platform 10 or later
  • Port security controls enabled in the neutron service
  • An increased keystone token expiration (see Section 4.5)
  • A RHOSP user account and credentials
  • A Red Hat Enterprise Linux 7.3 cloud image pre-loaded in glance
  • Red Hat Enterprise Linux subscription credentials (username/password or a Satellite server)
  • An SSH keypair pre-loaded in nova
  • A publicly available neutron network for inbound access
  • A pool of floating IP addresses to provide inbound access points for instances
  • A host running haproxy to provide load balancing
  • A host running bind with a properly delegated sub-domain for publishing, and a key for dynamic updates
  • An existing LDAP or Active Directory service for user identification and authentication

4.1. Red Hat OpenStack Platform

This procedure was developed and tested on Red Hat OpenStack Platform 10. While it may apply to other releases, no claims are made for installations on versions other than RHOSP 10.

This procedure makes use of the following RHOSP service components:

  • keystone - Identification and Authentication
  • nova - Compute Resources and Virtual Machines
  • neutron - Software-Defined Networks
  • glance - Bootable Image Storage
  • cinder - Block Storage
  • ceilometer - Resource Metering
  • gnocchi - Time-Series Database
  • aodh - Alarms and Responses
  • heat - Deployment Orchestration

Note

Red Hat OpenStack Platform 10 requires several patches to support the process detailed here. See Appendix C.

4.2. Red Hat OpenStack Platform User Account and Credentials

All of the RHOSP commands in this reference environment are executed using the CLI tools. The user needs a RHOSP account, and a project to contain the service.

For the manual procedure, the user needs only the _member_ role on the project. Running the installation using heat also requires the heat_stack_owner role.
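
If the account does not yet hold these roles, a RHOSP administrator can grant them. A minimal sketch, using the sample user and project names from the ks_ocp3 file shown later in this section:

# Run as a RHOSP administrator; the user (ocp3ops) and project (ocp3) names are examples
openstack role add --project ocp3 --user ocp3ops _member_
openstack role add --project ocp3 --user ocp3ops heat_stack_owner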

The RHOSP credentials are commonly stored in a file that is sourced to set the required shell environment variables.

A script fragment downloaded from the RHOSP keystone web console is shown below.

Keystone Web Console - Download OCP

A simpler format can be used and is provided in the following code snippet:

Sample ks_ocp3 credentials file

unset OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_PROJECT_NAME OS_TENANT_NAME OS_REGION_NAME
# Replace the IP address with the RHOSP 10 keystone server IP
export OS_AUTH_URL=http://10.1.0.100:5000/v2.0
export OS_USERNAME=ocp3ops
export OS_PASSWORD=password
export OS_TENANT_NAME=ocp3
export OS_REGION_NAME=RegionOne

To use these values for RHOSP CLI commands, source the file into the environment.

Note

Replace the values here with those for the Red Hat OpenStack Platform account.

source ./ks_ocp3
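
To confirm the credentials are valid, run a read-only command after sourcing the file, for example:

# Request a token and list the images visible to the project
openstack token issue
openstack image list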

4.3. Red Hat Enterprise Linux 7 Base Image

This procedure expects the instances to be created from the stock Red Hat Enterprise Linux 7.3 cloud image.

The Red Hat Enterprise Linux 7.3 cloud image is in the rhel-guest-image-7 package that is located in the rhel-7-server-rh-common-rpms repository. Install the package on any Red Hat Enterprise Linux system and load the image file into glance. For the purposes of this procedure, the image is named rhel7.

Load the RHEL7 image into Glance

sudo subscription-manager repos --enable rhel-7-server-rh-common-rpms
sudo yum -y install rhel-guest-image-7
mkdir -p ~/images
sudo cp /usr/share/rhel-guest-image-7/*.qcow2 ~/images
cd ~/images
qemu-img convert -f qcow2 -O raw rhel-guest-image-<ver>.x86_64.qcow2 ./rhel7.raw
openstack image create rhel7 \
--disk-format=raw \
--container-format=bare \
--public < rhel7.raw

Confirm the image has been created as follows:

openstack image list

4.4. SSH Key Pair

RHOSP uses cloud-init to place an ssh public key on each instance as it is created to allow ssh access to the instance. RHOSP expects the user to hold the private key.

Generate a Key Pair

openstack keypair create ocp3 > ~/ocp3.pem
chmod 600 ~/ocp3.pem

Note

ssh refuses to use private keys if the file permissions are too open. Be sure to set the permissions on ocp3.pem to 600 (user read/write).
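
To confirm that the key pair is registered with nova, list the stored key pairs:

openstack keypair list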

4.5. Keystone: Increase Token Expiration

Note

This step changes the global configuration of the RHOSP service. It must be done by an RHOSP service administrator.

When creating a heat stack, the heat engine obtains a keystone token so that it can continue taking actions on behalf of the user. By default the token is valid for only one hour (3600 seconds); once it expires, the heat engine can no longer proceed.

The RHOCP 3 heat stack deployment may take more than one hour. To ensure deployment completion, increase the expiration time on the RHOSP controllers and restart the httpd service.

Increase Keystone token expiration

# On the RHOSP controller:
sudo sed -i -e 's/#expiration = .*/expiration = 7200/' /etc/keystone/keystone.conf
sudo systemctl restart httpd
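
After httpd restarts, the new lifetime can be checked by requesting a token and inspecting its expiration time:

# The expires value should now be two hours after the token is issued
openstack token issue -c expires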

4.6. Public Network

The RHOCP service must be connected to an external public network so that users can reach it. In general, this requires a physical network or at least a network that is otherwise out of the control of the RHOSP service.

The control network is connected to the public network via a router during the network setup.

Each instance is attached to the control network when it is created. Instances that accept direct inbound connections (the bastion, masters and infrastructure nodes) are given floating IP addresses from a pool on the public network.

The provisioner must have the permissions necessary to create and attach the router to the public network and to allocate floating IPs from the pool on that network.

The floating IP pool must be large enough to accommodate all of the externally accessible hosts.
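
A quick check, assuming the public network is named public_network as in Table 4.2, confirms that the network is visible to the project and that a floating IP can be allocated from it:

openstack network show public_network
openstack floating ip create public_network
# Release the test address afterwards with: openstack floating ip delete <ip>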

4.7. DNS service

Before beginning the provisioning process, the hostname for developer access and the wild card DNS entry for applications must be in place. Both of these should point to the IP address of the load balancer.

Both the IP address and the FQDN for the OpenShift master name and the application wild card entry are known before the provisioning process begins.

Note

A wildcard DNS entry is a record where, instead of a leading hostname, the record has an asterisk (*), i.e. *.wildcard.example.com. The IP address associated with a wildcard DNS record is returned for any query that matches the suffix. All of the following examples return the same IP address:

  • mystuff.wildcard.example.com
  • foobar.wildcard.example.com
  • random.wildcard.example.com

Each host in the RHOCP service must have a DNS entry for the interfaces on the control and tenant networks. These names are only used internally. Additionally, each host with a floating IP must have a DNS entry with a public name in order to be found by the load-balancer host.

Since the instances are created and addressed dynamically, the name/IP pairs cannot be known before the provisioning process begins. The entries are added using dynamic DNS after each instance is created.

Dynamic DNS updates use a tool called nsupdate found in the bind-utils RPM. The update requests are authorized using a symmetric key found on the DNS host in /etc/rndc.key.
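
A minimal sketch of such an update, assuming the key is in /etc/rndc.key and the update host is ns1.ocp3.example.com from Table 4.2 (the record name and address below are placeholders only):

# Add an A record with a 300 second TTL; run on a host with bind-utils installed
nsupdate -k /etc/rndc.key <<EOF
server ns1.ocp3.example.com
update add bastion.ocp3.example.com 300 A 172.18.10.10
send
EOF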

The dynamic domains must be created and configured to allow dynamic updates before the provisioning process begins. Creating DNS zones and enabling dynamic updates on them is outside the scope of this paper.

Appendix B contains a procedure to create a heat stack running a compliant DNS service.

4.8. haproxy Service

A load-balancer provides a single entry point for the OpenShift master services and for the applications. The IP address of the load-balancer must be known before beginning. The master services and the applications use different TCP ports, so a single TCP load balancer can handle all of the inbound connections.

The load-balanced DNS name that developers use must be in a DNS A record pointing to the haproxy server before installation. For applications, a wildcard DNS entry must point to the haproxy host.

The configuration of the load-balancer is created after all of the RHOCP instances have been created and floating IP addresses assigned.
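
As an illustration only (not the generated configuration), a TCP listener for the master API on port 8443 might look like the following; the server names and addresses are placeholders:

# Illustrative haproxy.cfg fragment for the master API; addresses are placeholders
listen openshift-master-api
    bind *:8443
    mode tcp
    balance source
    server master-0 192.0.2.10:8443 check
    server master-1 192.0.2.11:8443 check
    server master-2 192.0.2.12:8443 check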

When installing RHOCP using Heat, the load-balancer can be created and configured as part of the heat stack.

4.9. LDAP/AD Service

RHOCP can use an external LDAP or Active Directory service to manage developers from a single user base.

The hostname or IP address of the LDAP service must be known before commencing. This process requires the base DN and a small set of parameters to allow LDAP authentication by the RHOCP service.

Table 4.1. LDAP credentials example

Parameter             Value                                        Description
LDAP server           dc.example.com                               An Active Directory domain controller or LDAP server
User DN               cn=users,dc=example,dc=com                   The DN of the user database
Preferred Username    sAMAccountName                               The field used to find user entries
Bind DN               cn=openshift,cn=users,dc=example,dc=com      The user allowed to query
Bind Password         <password>                                   The password for the user allowed to query
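
Connectivity and the bind credentials can be verified from any host with the openldap-clients package installed. A minimal check using the example values above (the account testuser is a placeholder):

# Bind as the query user and look up a single account
ldapsearch -x -H ldap://dc.example.com \
  -D "cn=openshift,cn=users,dc=example,dc=com" -w '<password>' \
  -b "cn=users,dc=example,dc=com" "(sAMAccountName=testuser)"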

4.10. Collected Information

In summary, this table lists all of the information and settings that an operator requires to begin a RHOCP deployment on RHOSP.

Table 4.2. Configuration Values

Parameter                 Value                                        Comments
DNS Domain                ocp3.example.com                             Base domain for all hosts and services
Bastion Hostname          bastion.ocp3.example.com
Load Balancer Hostname    proxy1.ocp3.example.com
LDAP Server Hostname      ldap1.ocp3.example.com
RHOCP Master Hostname     devs.ocp3.example.com                        Developer access to Web UI and API
RHOCP App Sub-domain      *.apps.ocp3.example.com                      All applications get a name under this domain
RHOSP User                user provided                                User account for access to RHOSP
RHOSP Password            user provided                                Password for the RHOSP account
RHOSP Project             user provided                                RHOSP project name for the RHOCP deployment
RHOSP Roles (manual)      _member_                                     Roles required for the RHOSP user
RHOSP Roles (Heat)        _member_, heat_stack_owner                   Roles required for the RHOSP user
RHEL Base Image           RHEL 7.3 guest                               From the rhel-guest-image-7 RPM or downloaded from the Customer Portal
Glance Image Name         rhel7
SSH Keypair Name          ocp3
Public Network Name       public_network
Control Network Name      control-network
Control Network Base      172.18.10.0/24
Control Network DNS       X.X.X.X                                      External DNS IP address
Tenant Network Name       tenant-network
Tenant Network Base       172.18.20.0/24
DNS Update Host           ns1.ocp3.example.com
DNS Update Key            TBD
RHN Username              <username>
RHN Password              <password>
RHN Pool ID               <pool id>
LDAP Host                 dc.example.com
LDAP User DN              cn=users,dc=example,dc=com
Preferred Username        sAMAccountName
Bind DN                   cn=openshift,cn=users,dc=example,dc=com
Bind Password             <password>