Chapter 7. Deploying Using Heat

The installation created earlier showed how to create and configure the OSP resources that host an OCP service, and how to run the openshift-ansible installation process directly.

Note

Heat is the OpenStack orchestration system.

Orchestration allows the end user to describe the system to their specifications rather than the process used to create it. The OCP RPM suite includes a set of Heat templates for an OCP service. Using these templates, it is possible to create a working OCP service on OSP from a single input file.

7.1. Project Quotas

Each project in OSP has a set of resource quotas with default values. Several of these values must be increased for the OCP stack to fit.

Table 7.1. OSP Resource Minimum Quotas

Resource              Minimum   Recommended
--------------------  --------  -----------
Instances             9         20
VCPUs                 20        60
RAM (GB)              50        450
Floating IPs          9         15
Security Groups       5         5
Volumes               10        30
Volume Storage (GB)   800       2000

These numbers are for a basic general-purpose installation with low to moderate use, and allow for scaling up by 10 more application nodes. The correct values for a specific installation depend on the expected use and are calculated from a detailed analysis of the actual resources needed and available.
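The values in Table 7.1 can be applied with the OpenStack CLI. The sketch below builds a `quota set` command for the recommended column; the project name `ocp3` is a placeholder. Note that the quota API takes RAM in MB, so the table's GB value is converted. The command is printed rather than executed; remove the `echo` to apply it for real.

```shell
# Hypothetical project name; substitute your own.
PROJECT=ocp3

# Recommended quota values from Table 7.1.
INSTANCES=20
VCPUS=60
RAM_GB=450
FLOATING_IPS=15
VOLUMES=30
VOLUME_GB=2000

# The quota API takes RAM in MB, so convert from GB.
RAM_MB=$((RAM_GB * 1024))

# Print the resulting command; drop 'echo' to apply it.
echo openstack quota set "$PROJECT" \
    --instances "$INSTANCES" \
    --cores "$VCPUS" \
    --ram "$RAM_MB" \
    --floating-ips "$FLOATING_IPS" \
    --volumes "$VOLUMES" \
    --gigabytes "$VOLUME_GB"
```

Verify the resulting quotas afterwards with `openstack quota show <project>`.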

Note

The number of nodes, selection of instance flavors, and disk volume sizes here are for demonstration purposes. To deploy a production service, consult the OCP sizing guidelines for each resource.

7.2. Benefits of Heat

Heat orchestration and the Heat engine can add and remove nodes automatically in response to the running workloads. The Heat templates integrate the RHOCP service with the Ceilometer monitoring service in RHOSP. As Ceilometer monitors the environment, it can signal the Heat stack to increase or decrease the number of nodes to match workload requirements.

7.3. Installation of the Heat Templates

The installation of the Heat templates is best approached using the RHOSP heat CLI tool together with the OpenShift Heat templates. A successful installation depends on enabling the rhel-7-server-openstack-10-rpms repository. Once it is enabled, the Heat stack installation can be initiated from any host that can reach the RHOSP service with an RHOSP CLI client.

Enable the RHOSP repository and install the Heat RPMs:

sudo subscription-manager repos --enable rhel-7-server-openstack-10-rpms
sudo yum -y install python-heatclient openshift-heat-templates

The template files are installed at /usr/share/openshift-heat-templates.

7.4. Heat Stack Configuration

Heat uses one or more YAML input files to customize the stacks it creates.

This configuration uses an external DNS service, as detailed in Appendix B, and a dedicated loadbalancer created as part of the Heat stack. DNS records are created to direct inbound traffic for the masters and the OpenShift router through this loadbalancer. This configuration uses the flannel SDN. The Docker storage is set to a low value because this reference environment is for demonstration purposes only. When deploying in production environments, be sure to tune these values to accommodate the expected container density.

OpenShift Heat Stack Configuration File - openshift_parameters.yaml

# Invoke:
# heat stack-create ocp3-heat \
#      -e openshift_parameters.yaml \
#      -e /usr/share/openshift-heat-templates/env_loadbalancer_dedicated.yaml \
#      -f /usr/share/openshift-heat-templates/openshift.yaml
#
parameters:
  # OpenShift service characteristics 1
  deployment_type: openshift-enterprise
  domain_name: "ocp3.example.com"
  app_subdomain: "apps.ocp3.example.com"
  lb_hostname: "devs"
  loadbalancer_type: dedicated
  openshift_sdn: flannel
  deploy_router: true
  deploy_registry: true

  # Number of each server type 2
  master_count: 3
  infra_count: 3
  node_count: 2

  # OpenStack network characteristics 3
  external_network: public_network
  internal_subnet: "172.22.10.0/24"
  container_subnet: "172.22.20.0/24"

  # DNS resolver and updates 4
  dns_nameserver: 10.x.x.130,10.x.x.29,10.x.x.19
  dns_update_key: <HMAC:MD5 string>

  # Instance access 5
  ssh_key_name: ocp3
  ssh_user: cloud-user

  # Image selection 6
  bastion_image: rhel7
  master_image: rhel7
  infra_image: rhel7
  node_image: rhel7
  loadbalancer_image: rhel7

  # Docker Storage controls 7
  master_docker_volume_size_gb: 10
  infra_docker_volume_size_gb: 20
  node_docker_volume_size_gb: 100

  # OpenStack user credentials 8
  os_auth_url: http://10.x.x.62:5000/v2.0
  os_username: <username>
  os_password: <password>
  os_region_name: <region name>
  os_tenant_name: <project name>

  # Red Hat Subscription information 9
  rhn_username: "<username>"
  rhn_password: "<password>"
  rhn_pool: '<pool id containing openshift>'

parameter_defaults:
  # Authentication service information 10
  ldap_url: "ldap://ad.example.com:389/cn=users,dc=example,dc=com?sAMAccountName"
  ldap_preferred_username: "sAMAccountName"
  ldap_bind_dn: "cn=openshift,cn=users,dc=example,dc=com"
  ldap_bind_password: "password"
  ldap_insecure: true

resource_registry: 11
  # Adjust path for each entry
  OOShift::LoadBalancer: /usr/share/openshift-heat-templates/loadbalancer_dedicated.yaml
  OOShift::ContainerPort: /usr/share/openshift-heat-templates/sdn_flannel.yaml
  OOShift::IPFailover: /usr/share/openshift-heat-templates/ipfailover_keepalived.yaml
  OOShift::DockerVolume: /usr/share/openshift-heat-templates/volume_docker.yaml
  OOShift::DockerVolumeAttachment: /usr/share/openshift-heat-templates/volume_attachment_docker.yaml
  OOShift::RegistryVolume: /usr/share/openshift-heat-templates/registry_ephemeral.yaml

1. RHOCP Service Configuration: defines the RHOCP service itself. The parameters here set the public view for developers and application users.
2. Number of each instance type: determines how many of each type of component the RHOCP service contains.
3. RHOSP Network Definition: defines the internal networks and how the RHOCP servers connect to the public network.
4. DNS Services and Updates: defines the DNS servers and provides the means for the Heat templates to populate name services for new instances as they are created. See Generating an Update Key in the appendices for details.
5. Instance access: defines how Ansible accesses the instances to configure and manage them.
6. Base image for each instance type: selects the base image for each instance within the RHOCP deployment.
7. Docker Storage Sizing: controls the amount of storage allocated on each instance to hold Docker images and container runtime files.
8. RHOSP credentials: defines the credentials that allow the bastion host and Kubernetes on each node to communicate with the cloud provider to manage storage for containers. NOTE: It is critical that the values here match those in the Red Hat OpenStack Platform Credentials section. Incorrect values can result in installation failures that are difficult to diagnose.
9. Red Hat Subscription Credentials: defines the credentials used to enable software subscriptions and updates.
10. LDAP Authentication: defines the credentials that enable RHOCP to authenticate users against an existing LDAP server.
11. Sub-templates: defines a set of sub-templates that enable extensions or variations such as a dedicated loadbalancer or SDN selection.

7.5. Hostname Generation In Heat Stack

Two of the hostnames generated by the Heat stack installation are significant for users who need to find the service. They are generated from the domain_name and lb_hostname parameters in the YAML file and from the Heat stack name given on the CLI when the stack is created; including the stack name avoids naming conflicts when multiple stacks are created.

  • domain_name: ocp3.example.com
  • stack name: ocp3-heat
  • lb_hostname: devs

Table 7.2. OCP Service Host Names

Host            FQDN
--------------  -------------------------------
suffix          ocp3.example.com
master LB       ocp3-heat-devs.ocp3.example.com
application LB  *.apps.ocp3.example.com
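The composition of these names can be sketched in shell, using the example parameter values listed above:

```shell
# Example parameter values from this chapter.
STACK_NAME=ocp3-heat
DOMAIN_NAME=ocp3.example.com
LB_HOSTNAME=devs
APP_SUBDOMAIN=apps.ocp3.example.com

# Master loadbalancer FQDN: <stack name>-<lb_hostname>.<domain_name>
MASTER_LB="${STACK_NAME}-${LB_HOSTNAME}.${DOMAIN_NAME}"

# Application wildcard name served through the OpenShift router.
APP_WILDCARD="*.${APP_SUBDOMAIN}"

echo "$MASTER_LB"     # ocp3-heat-devs.ocp3.example.com
echo "$APP_WILDCARD"  # *.apps.ocp3.example.com
```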

The instances that make up the Heat deployment get names composed from the stack name and domain. The master and infrastructure nodes are distinguished from each other by simple integer serial numbers. The application nodes are handled differently because they can be created on demand when a load trigger event occurs: the Heat stack assigns each node a random string to distinguish them.

A listing of all the instance names can be found using nova list --fields name.

Table 7.3. OCP instance names in OSP Nova

Instance Type        Name Template                      Example
-------------------  ---------------------------------  ----------------------------------------
bastion              <stackname>-bastion.<domain>       ocp3-heat-bastion.ocp3.example.com
master               <stackname>-master-<num>.<domain>  ocp3-heat-master-0.ocp3.example.com
infrastructure node  <stackname>-infra-<num>.<domain>   ocp3-heat-infra-0.ocp3.example.com
application node     <stackname>-node-<hash>.<domain>   ocp3-heat-node-12345678.ocp3.example.com

The two service host names in Table 7.2 are the names developers and application users use to reach the OCP service. Both domain names must be registered in DNS and must point to the loadbalancer. The loadbalancer must be configured with the floating IP addresses of the master instances for port 8443, and with the addresses of the infrastructure nodes on ports 80 and 443, to provide access to the OCP service.
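As an illustration, the corresponding records in a BIND zone file for ocp3.example.com might look like the fragment below. <LB_FLOATING_IP> is a placeholder for the floating IP address assigned to the loadbalancer, and the wildcard-as-CNAME layout is one possible arrangement, not the only one.

```
; Fragment of the ocp3.example.com zone (illustrative only).
; Substitute the loadbalancer's real floating IP for <LB_FLOATING_IP>.
ocp3-heat-devs   IN  A      <LB_FLOATING_IP>
*.apps           IN  CNAME  ocp3-heat-devs.ocp3.example.com.
```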

7.6. Creating the Stack

Create the Heat Stack

heat stack-create ocp3-heat \
    --timeout 120 \
    -e openshift_parameters.yaml \
    -e /usr/share/openshift-heat-templates/env_loadbalancer_dedicated.yaml \
    -f /usr/share/openshift-heat-templates/openshift.yaml

7.7. Observing Deployment

Observe the Stack Creation

heat stack-list
heat resource-list ocp3-heat | grep CREATE_IN_PROGRESS

7.8. Verifying the OCP Service

7.8.1. Ops Access

Log into the bastion host using nova ssh:

nova ssh -i ocp3_rsa cloud-user@ocp3-heat-bastion.ocp3.example.com

7.8.2. WebUI Access

Browse to https://ocp3-heat-devs.ocp3.example.com:8443 and log in with user credentials from the LDAP/AD service. For this example, the username is "openshift" and the password is "password".

7.8.3. CLI Access

oc login ocp3-heat-devs.ocp3.example.com --username openshift --insecure-skip-tls-verify
Authentication required for https://ocp3-heat-devs.ocp3.example.com:8443 (openshift)
Username: openshift
Password:
Login successful.

Using project "test-project".