
Chapter 6. Reference architecture implementation

This section describes the deployed reference architecture.

Note

The reference architecture does not include step-by-step instructions for deploying Red Hat OpenShift Container Platform (RHOCP) 4.4 on Red Hat OpenStack Platform (RHOSP) 13 or 16.0.

For detailed installation steps, see Installing on OpenStack.

For simplicity, this reference architecture uses the director host, logged in as the stack user, to host the RHOCP installation. All files and actions described in the following section were created and performed on this host. Unlike the RHOCP 3.11 reference architecture, a dedicated installation/bastion host is not required.

6.1. Red Hat OpenStack Platform installation

This reference architecture uses the 13.0.11 and 16.0 releases of Red Hat OpenStack Platform (RHOSP). For RHOSP 13 deployments, the 13.0.11 maintenance release is required because it includes the enhancements to the Red Hat Ceph Object Gateway (RGW), the OpenStack Networking (neutron) service, and the OpenStack Image Service (glance) that are needed for installer-provisioned infrastructure deployments. RHOSP 16.0 includes all of these enhancements by default.

You can check your release version by viewing the release file on director and overcloud hosts:

(overcloud) [stack@undercloud ~]$ cat /etc/rhosp-release
Red Hat OpenStack Platform release 13.0.11 (Queens)

6.1.1. RHOSP deployment

The overcloud deployment consists of:

  • three monolithic Controller nodes without custom roles
  • three Compute nodes
  • three SSD-backed Storage nodes running Ceph

Endpoints

The Public API endpoint is created using Predictable VIPs (PublicVirtualFixedIPs). For more information, see the following documents:

The endpoint does not use a DNS hostname to access the overcloud through SSL/TLS, as described in the following documents:

SSL/TLS

To implement external TLS encryption, use a self-signed certificate and a certificate authority on the director host. This reference architecture follows the steps in the following procedures to create a self-signed certificate and a certificate authority file called ca.crt.pem:

The RHOCP installation program requires that, for these IP-based endpoints, the IP address is included in the certificate’s Subject Alternative Name (SAN).

The reference architecture also adds the ca.crt.pem file to the local CA trust on the director host, as described in the following reference:

This allows the undercloud to communicate with the overcloud endpoints that present the self-signed certificate, and allows the RHOCP installation program to share the private CA with the necessary RHOCP components during installation.
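
For reference, adding a certificate authority file to the local CA trust on a RHEL-based director host typically looks like the following sketch; the path matches the cacert location used later in clouds.yaml:

$ sudo cp /home/stack/ssl/ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract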

Storage

Director deploys Red Hat Ceph Storage using the configuration file /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml. This default configuration sets Red Hat Ceph Storage as the backend for the following services:

  • OpenStack Image Service (glance)
  • OpenStack Compute (nova)
  • OpenStack Block Storage (cinder)

This reference architecture deploys Red Hat Ceph Storage across three dedicated, entirely SSD-backed Storage nodes. This ensures that any storage provided to RHOCP nodes in this reference architecture is fast without any extra configuration. As mentioned earlier, your storage setup may differ and should be based on your individual requirements and your unique hardware.

This reference architecture deploys Red Hat Ceph Storage with BlueStore as the OSD backend. Red Hat recommends BlueStore for all new RHOSP 13 deployments that use Red Hat Ceph Storage 3.3 and later. BlueStore is the default OSD backend for RHOSP 16.0 with Red Hat Ceph Storage 4.x, so no specific actions are required there.
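
For RHOSP 13, BlueStore is typically selected through the ceph-ansible disk configuration in a custom environment file. The following is a minimal sketch only; the device names are examples and must match your Storage node hardware:

parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sdb
      - /dev/sdc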

Object storage

This reference architecture uses the Red Hat Ceph Object Gateway (RGW) for object storage, which is backed by Red Hat Ceph Storage. This reference architecture deploys RGW with the following default template provided by director, with no customizations:

/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml

Using this template automatically disables the default OpenStack Object Storage (swift) installation on the Controller nodes.

A RHOSP cloud administrator must explicitly allow access to Object Storage. To allow access to RGW, the administrator grants the tenant the “Member” role.

Roles

This reference architecture does not use custom roles.

Network

This reference architecture uses the standard network isolation provided by the /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml configuration file. This configuration creates the following five default networks: Storage, StorageMgmt, InternalApi, Tenant, and External. This reference architecture does not create any additional networks.

Director deploys the default Open vSwitch (OVS) plugin backend.

The external network is a provider network offering a range of routable addresses that can be added to DNS and are accessible from a client web browser.

This reference architecture uses a HostnameMap and pre-assigns IPs for all node types by using the configuration file ips-from-pool-all.yaml.
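
The following abbreviated sketch illustrates the shape of such an environment file; the hostnames and IP addresses are placeholders only, and a complete file contains entries for every node type and network:

parameter_defaults:
  HostnameMap:
    overcloud-controller-0: controller-0
    overcloud-novacompute-0: compute-0
  ControllerIPs:
    internal_api:
      - 172.16.2.201
    storage:
      - 172.16.1.201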

Image storage

This reference architecture uses Red Hat Ceph Storage as the backend for the OpenStack Image Service (glance). This allows images to be stored on redundant, fast storage that provides copy-on-write (CoW) cloning for faster boot times and optimal storage usage.

However, this also means that using QCOW2-formatted images is not advised: to boot VMs from an ephemeral back end or from a volume, the image format must be RAW. For more information, see Converting an image to RAW format.

The RHOCP installation program automatically downloads and uses a publicly available QCOW2-formatted image. You can change this by setting the clusterOSImage installation variable to the URL of an external RAW-formatted image, or to the name of an existing, pre-deployed RAW-formatted image already stored in the OpenStack Image Service.

The clusterOSImage variable is not available from the guided installation. You must manually add it to the install-config.yaml file.

6.1.2. Preparing the environment

We performed the following actions to prepare the environment after overcloud deployment.

6.1.2.1. RHOSP administration

The tasks described in this section are for illustrative purposes only, so that both RHOSP administrators and RHOCP deployers and administrators can understand all elements of the reference architecture. The steps were performed on a RHOSP 13 installation. Administrative access to a RHOSP cloud is not required to deploy and run RHOCP on it.

Create the public network

This reference architecture created an external flat network to provide external access and routable floating IPs:

$ openstack network create public --external --provider-network-type flat --provider-physical-network datacentre
$ openstack subnet create --dhcp --gateway 192.168.122.1 --network public --subnet-range 192.168.122.0/24 --allocation-pool start=192.168.122.151,end=192.168.122.200 public

This network is the source for IPs to use for the API, the applications, and the bootstrap virtual IPs (VIPs), and must be accessible by cloud tenants.

Create the flavor

This reference architecture created a suitable flavor to align with the minimum requirements detailed in the Resource guidelines for installing OpenShift Container Platform on OpenStack.

$ openstack flavor create --ram 16384 --disk 25 --vcpus 4 --public m1.large
Tip

Each node type can be configured by using a custom machine pool. You can use machine pools to set different flavors per node type by setting the type: value in the openstack: section of the node machine pool in install-config.yaml. For more information, see Custom machine pools.
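
For example, the relevant part of install-config.yaml might look like the following sketch, where m1.xlarge is a hypothetical flavor used only for worker nodes:

compute:
- name: worker
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3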

Create the user and project

This reference architecture created a simple project (tenant) called "shiftstack", and a user named "shiftstack_user".

$ openstack project create shiftstack
$ openstack user create --password 'redhat' shiftstack_user

Add the user to a basic role in the project to allow them to use the cloud:

$ openstack role add --user shiftstack_user --project shiftstack _member_
Note

RHOCP projects and users do not need the admin role to access your cloud.

Grant access to tenants to use object storage

The cloud administrator for this reference architecture grants access to tenants to use Red Hat Ceph Object Gateway (RGW) by granting the "Member" role. The "Member" role is a special role created by the Red Hat Ceph Storage installation specifically for granting access to RGW.

Note

“Member” for Ceph RGW and _member_ for the shiftstack_user are distinct roles, and both are required for our purposes.

$ openstack role add --user shiftstack_user --project shiftstack Member

Quotas

This reference architecture changed the default quotas to meet the resource requirements of the RHOCP nodes. Each RHOCP node requires 4 VCPUs and 16 GB RAM, and the default installation briefly runs seven instances (three masters, three workers, and the temporary bootstrap machine), so the project needs at least 28 VCPUs and 112 GB RAM.

$ openstack quota set --cores 28 --ram 120000 shiftstack

The reference architecture now has a RHOSP user and project with the ability to save Ignition files in the RGW object store, and use a public, external provider network with routable floating IPs and plenty of resources available.

6.2. RHOCP tenant operations

Before starting the RHOCP installation program, the shiftstack_user performs the following tasks from the director host, logged in as the stack user.

Download the OpenStack RC file

An OpenStack RC file is an environment file that sets the environment variables required to use the RHOSP command line clients.

The reference architecture uses the shiftstack_user’s RC file downloaded from the OpenStack Dashboard (horizon) GUI and placed on the deployment host. The file was not modified in any way.
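
Sourcing the RC file sets the required OS_* environment variables for the command line clients; the file name below is an example of what the Dashboard typically provides, and openstack token issue is simply one way to confirm the credentials work:

$ source ~/shiftstack_user-openrc.sh
$ openstack token issue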

Downloading files from OpenStack Dashboard

Download and modify clouds.yaml

The clouds.yaml file contains the required configuration for connecting to one or more clouds. Download it from the same location as the OpenStack RC file.

The default location for clouds.yaml is ~/.config/openstack/, and later versions of RHOSP create the file there during installation. This reference architecture places clouds.yaml in the home directory of the stack user on the director host, in the same location as the OpenStack RC file. You can place the file in alternative locations; for more information, see OpenStack Credentials.

For more information, see clouds.yaml.

This reference architecture makes the following modifications to the clouds.yaml file:

  • Adds a password, as the installation program requires a password and one is not present in the downloaded file:

    password: "redhat"
  • Adds support for the self-signed external TLS, by adding the Certificate Authority file, ca.crt.pem:

    cacert: /home/stack/ssl/ca.crt.pem

    This change is necessary before running the RHOCP installation program, because the installation program uses this configuration to:

    • Interact with the RHOSP TLS-enabled endpoints directly from the host it is running from.
    • Share the self-signed Certificate Authority with some of the cluster operators during the installation.
Note

RHOCP tenants must run the RHOCP installation program from a host on which the certificate authority file has been placed, as described in the following references:

The following example shows the edited clouds.yaml file:

# This is a clouds.yaml file, which can be used by OpenStack tools as a source
# of configuration on how to connect to a cloud. If this is your only cloud,
# just put this file in ~/.config/openstack/clouds.yaml and tools like
# python-openstackclient will just work with no further config. (You will need
# to add your password to the auth section)
# If you have more than one cloud account, add the cloud entry to the clouds
# section of your existing file and you can refer to them by name with
# OS_CLOUD=openstack or --os-cloud=openstack
clouds:
  openstack:
    auth:
      auth_url: http://192.168.122.150:5000/v3
      username: "shiftstack_user"
      password: "redhat"
      project_id: 1bfe23368dc141068a79675b9dea3960
      project_name: "shiftstack"
      user_domain_name: "Default"
    cacert: /home/stack/ssl/ca.crt.pem
    region_name: "regionOne"
    interface: "public"
    identity_api_version: 3

Reserve floating IPs

This reference architecture reserved two floating IPs from the global pool:

  • 192.168.122.167 for ingress (Apps)
  • 192.168.122.152 for API access
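
The pre-allocated floating IPs can be reserved with commands similar to the following sketch; the --floating-ip-address option requests a specific address from the public network:

$ openstack floating ip create --floating-ip-address 192.168.122.152 public
$ openstack floating ip create --floating-ip-address 192.168.122.167 public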

The RHOCP installation program also needs a third floating IP for the bootstrap machine. However, since the bootstrap host is temporary, this floating IP does not need to be pre-allocated or in the DNS. Instead, the installation program chooses an IP from the floating pool and allocates that to the host to allow access for troubleshooting during the installation. Once the bootstrap is destroyed, the IP is returned to the pool.

Obtain the installation program and pull secret

This reference architecture uses the downloaded RHOCP installation program, client, and pull secret from the Red Hat OpenShift Cluster Manager. You can also download the latest images, installation program, and client from the Red Hat Customer Portal.

Downloading files from Red Hat OpenShift Cluster Manager

Note

New clusters are automatically registered with a 60-day evaluation subscription. Evaluation subscriptions do not include support from Red Hat. For non-evaluation use, you should attach a subscription that includes support. For more information, see OpenShift Container Platform 3 to 4 oversubscriptions during cluster migration explained.

6.3. Red Hat OpenShift Container Platform installation

Before running the installation program, create a directory to store the install-config.yaml, and a directory to store the cluster assets. For example, the following two directories are for the "ocpra" cluster:

  • "ocpra-config": Stores the “master copy” of install-config.yaml.
  • "ocpra": The asset directory of the cluster.

The installation program can manage multiple clusters, so use a unique directory for each cluster to keep its asset files separate.
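
Creating the two directories is a one-line step on the deployment host:

$ mkdir ocpra-config ocpra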

This reference architecture uses the guided installation to generate an install-config.yaml file directly into the ocpra-config directory:

$ openshift-install --dir=ocpra-config create install-config
? SSH Public Key /home/stack/.ssh/id_rsa.pub
? Platform openstack
? Cloud openstack
? ExternalNetwork public
? APIFloatingIPAddress 192.168.122.152
? FlavorName m1.large
? Base Domain example.com
? Cluster Name ocpra
? Pull Secret [? for help] ******************************************

The guided installation populates the following variables:

  • SSH Public Key: The specified key for all RHOCP nodes. It is saved under the “core” user (connect as core@IP). This example uses the public key of the stack user. For production deployments, provide separate keys in line with good security practices and your company’s security policies.
  • Platform: The cloud provider platform you are installing RHOCP on to. This reference architecture installs on “openstack”.
  • Cloud: This is the cloud you want to use, as defined in clouds.yaml.
  • ExternalNetwork: The network defined with the “--external” flag. The installation program presents available options to choose from.
  • APIFloatingIPAddress: The floating IP designated for the API. The installation program presents all allocated floating IPs in the project to choose from. This reference architecture allocates 192.168.122.152. This Floating IP is attached to the load balancer (haproxy) in front of the cluster. It is not a physical node.
  • FlavorName: The flavor to use for all instances. This reference architecture uses the m1.large flavor created earlier with the required resource limits.
  • Base Domain: The base domain name for the cluster. This reference architecture uses example.com.
  • Cluster Name: The name of the cluster. This name is prepended to the Base Domain as <clustername>.<basedomain>. This reference architecture uses “ocpra”.
  • Pull Secret: Your unique value, copied from the "Pull Secret" section on https://cloud.redhat.com/openshift/install/openstack/installer-provisioned.
Tip

You can regenerate the install-config.yaml for a running cluster by running the following command:

$ openshift-install --dir=<asset directory of the cluster> create install-config

For example:

$ openshift-install --dir=ocpra create install-config

Customize install-config.yaml

Using the guided installation creates a fully supported production cluster. However, you can still manually add specific values to install-config.yaml to create an opinionated, supported, production-ready deployment. This reference architecture adds the following values to the platform: openstack: section:

  • externalDNS: The RHOSP cloud this reference architecture creates does not provide a DNS server by default to tenant-created subnets. Instead, this reference architecture manually sets the externalDNS value so that the installer automatically adds a specific DNS server to the subnet it creates.
  • clusterOSImage: This reference architecture sets this value to a URL pointing to a RAW-formatted version of the RHCOS QCOW2 image. This overrides the default QCOW2 image downloaded by the installation program, which is not suitable for the Ceph backend. The RAW image is hosted on an internal webserver. For more information, see Converting an image to RAW format; a conversion sketch follows the note below.

    Note

    Due to an open issue, the image is given a QCOW label when uploaded by the RHOCP installation program, and the “disk_format” field is returned as “qcow2”. This is incorrect. To confirm that the uploaded image is RAW, inspect the cached file that the installation program uploaded from:

    $ qemu-img info ~/.cache/openshift-installer/image_cache/7efb520ee8eb9ccb339fa223329f8b69
    image: /home/stack/.cache/openshift-installer/image_cache/7efb520ee8eb9ccb339fa223329f8b69
    file format: raw
    virtual size: 16G (17179869184 bytes)
    disk size: 16G
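
For completeness, converting the published RHCOS QCOW2 image to RAW for hosting on a local webserver can be done with qemu-img; the input file name here is an assumption derived from the RAW file name used in clusterOSImage:

$ qemu-img convert -f qcow2 -O raw rhcos-4.4.0-rc.1-x86_64-openstack.x86_64.qcow2 rhcos-4.4.0-rc.1-x86_64-openstack.x86_64.raw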

The resulting install-config.yaml file for this reference architecture is as follows:

$ cat ocpra-config/install-config.yaml
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: ocpra
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16

  # The networkType is set by default to OpenShiftSDN, even if Octavia
  # is detected and you plan to use Kuryr. Set to "Kuryr" to use Kuryr.
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  openstack:
    cloud: openstack
    computeFlavor: m1.large
    externalDNS: ['8.8.8.8']
    externalNetwork: public

    # lbFloatingIP is populated with the APIFloatingIPAddress value
    # specified in the guided installation.
    lbFloatingIP: 192.168.122.152

    # octaviaSupport is set to '0' by the installation program
    # as Octavia was not detected.
    octaviaSupport: "0"
    region: ""
    trunkSupport: "1"
    clusterOSImage: http://10.11.173.1/pub/rhcos-4.4.0-rc.1-x86_64-openstack.x86_64.raw
publish: External
pullSecret: 'xxx'
sshKey: |
  ssh-rsa AAAAB3NzaC1yc2EsdfsdfsdsfewX0lTpqvkL/rIk5dGYVZGCHYse65W8tKT... stack@undercloud.redhat.local
Note

The installation program defaults both the worker and master replica counts to 3. Production installations do not support more or fewer than three masters, because HA and over-the-air updates require three masters.

Prepare the DNS settings

This reference architecture prepared the DNS entries as follows:

  • api.ocpra.example.com resolves to 192.168.122.152
  • *.apps.ocpra.example.com resolves to 192.168.122.167
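
For illustration only, in a BIND-style zone file for example.com these records might look like the following sketch; adapt it to whatever DNS server you use:

api.ocpra      IN  A  192.168.122.152
*.apps.ocpra   IN  A  192.168.122.167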

Specific DNS server administration is beyond the scope of this document.

Install RHOCP

Prior to installation, we copied the install-config.yaml file to the asset directory of the cluster:

$ cp ocpra-config/install-config.yaml ocpra/

We are now ready to run the RHOCP installation program, specifying the asset directory containing the copy of the install-config.yaml file:

$ openshift-install --dir=ocpra create cluster --log-level=debug
Note

You do not need to use --log-level=debug for the installation. This reference architecture uses it to clearly see the most verbose output for the installation process.

6.4. Red Hat OpenShift Container Platform on Red Hat OpenStack Platform deployment

The following images show the running RHOCP on RHOSP deployment using the OpenStack Dashboard.

Master and worker node instances

RHCOS image

Network topology

Routers

Security groups

Object storage registry

6.5. Post installation

To make the RHOCP on RHOSP deployment production ready, the reference architecture performed the following post-installation tasks:

  • Attach the ingress floating IP to the ingress port to make it available.
  • Place master nodes on separate Compute nodes.
  • Verify the cluster status.

6.5.1. Make the ingress floating IP available

The ingress floating IP, for use by the applications running on RHOCP, was reserved as 192.168.122.167 before the installation and has an entry in DNS.

To make the RHOCP ingress access available, you need to manually attach the ingress floating IP to the ingress port once the cluster is created, following the guidance in Configuring application access with floating IP addresses.

Note

The ingress IP is a fixed IP address managed by keepalived. It has no record in the OpenStack Networking database and is therefore not visible to RHOSP, so it remains in a “DOWN” state when queried.

Run the following command to check the ingress port ID:

$ openstack port list | grep ingress
| 28282230-f90e-4b63-a5c3-a6e2faddbd15 | ocpra-6blbm-ingress-port  | fa:16:3e:4e:b8:bc | ip_address='10.0.0.7', subnet_id='cb3dbf4a-8fb3-4b2e-bc2d-ad12606d849a'  | DOWN   |

Run the following command to attach the floating IP address to the port:

$ openstack floating ip set --port 28282230-f90e-4b63-a5c3-a6e2faddbd15 192.168.122.167

6.5.2. Place master nodes on separate Compute nodes

The RHOCP installation program does not include support for RHOSP anti-affinity rules or availability zones. Therefore, you need to move each master node to its own Compute node after installation. This reference architecture uses live migration to ensure one master per Compute node, as detailed in the following references:

Future releases are planned to support RHOSP anti-affinity rules and availability zones.
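
As an illustration only, an administrator can check which Compute node hosts a master and then live migrate it. The server and host names below are examples, and the exact migration flags vary between OpenStack client versions (newer clients use --live-migration with --host instead of --live):

$ openstack server show ocpra-6blbm-master-0 -c OS-EXT-SRV-ATTR:host
$ openstack server migrate --live overcloud-novacompute-1.localdomain ocpra-6blbm-master-0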

6.5.3. Verify the cluster status

This reference architecture follows the procedure described in Verifying cluster status to verify that the cluster is running correctly.
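
For example, a quick check from the deployment host, using the kubeconfig generated in the cluster asset directory, might look like the following:

$ export KUBECONFIG=~/ocpra/auth/kubeconfig
$ oc get nodes
$ oc get clusteroperators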

You can access the RHOCP console using the URL associated with the DNS. For this reference architecture the console is deployed to: https://console-openshift-console.apps.ocpra.example.com/

RHOCP Login

RHOCP Dashboard