Chapter 4. Reference architecture

This chapter describes the deployed reference architecture. It includes architectural details for:

  • OpenStack overcloud configuration
  • OpenStack tenants (also known as projects)
  • DNS configuration
  • OpenShift roles
  • OpenShift service placement
  • Storage

The reference architecture does not include step-by-step instructions for deploying OpenShift or OpenStack; refer to the official product documentation for those procedures.

4.1. Reference architecture diagram

Figure 7 depicts the OpenShift components of the deployed reference architecture for OpenShift on OpenStack. Refer to this diagram to visualize the relationships between OpenShift roles and services, including the underlying OpenStack networks described in the later sections.

Figure 7: OpenShift on OpenStack

4.2. OpenStack tenant networking

OpenShift Container Platform is installed into an OpenStack project. Prior to running openshift-ansible, the tenant must meet the prerequisites described in OpenShift Container Platform 3.11: Configuring for OpenStack.

Listed below are the types of tenant networks used in this reference architecture:

  • Public network: This network is reachable by the outside world. It is an OpenStack provider network that maps to a physical network that exists in the data centre. The public network includes a range of floating IP addresses that can be dynamically assigned to OpenStack instances. It is defined by an OpenStack administrator.
  • Deployment network: An internal network created by the tenant user. All OpenShift instances are created on this network; it is used to deploy and manage the OpenStack instances that run OpenShift. Instances on the deployment network cannot be reached from the outside world unless they are given a floating IP address.
  • OpenShift pod network: Created by openshift-ansible during the provisioning phase. It is an internal network for OpenShift inter-pod communication. When Kuryr is enabled, master nodes also have access to the pod network.
  • OpenShift service network: The control plane network for OpenShift service communication. It is also created by openshift-ansible.

A neutron router running in OpenStack routes L3 traffic between the deployment (bastion), pod, and service networks. It also acts as a gateway for instances on internal networks that need to access the outside world.
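As a sketch, the tenant networking described above could be prepared with the OpenStack CLI before running openshift-ansible. The network, subnet, and router names and the subnet range below are illustrative, and public is assumed to be the provider network defined by the OpenStack administrator:

```shell
# Create the internal deployment network and its subnet
# (names and addresses are illustrative).
openstack network create deployment_net
openstack subnet create deployment_subnet \
  --network deployment_net \
  --subnet-range 192.168.0.0/24

# Create a neutron router, set its gateway to the public provider
# network, and attach the deployment subnet so instances can reach
# the outside world through it.
openstack router create deployment_router
openstack router set deployment_router --external-gateway public
openstack router add subnet deployment_router deployment_subnet
```

The pod and service networks are not created this way; openshift-ansible provisions them during installation as described above.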

4.3. Deployment host

By default, openshift-ansible assigns floating IP addresses to all OpenStack instances during installation. As a result, all instances are accessible from the outside world. You will need to consider whether this approach is acceptable in your deployment.

This reference architecture uses a deployment host. The deployment host has a network interface on the internal network as well as an externally accessible floating IP. The tenant user runs openshift-ansible from the deployment host.

A deployment host is recommended for production deployments for the following reasons:

  • Security hardening — OpenShift instances should not be accessible to the outside world or users from other tenants except via the exposed console and router endpoints.
  • Resource allocation — Floating IP addresses are a finite resource. They should be reserved and allocated to tenant users only when needed.
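Under this approach, only the deployment host needs a floating IP address. A minimal sketch, assuming a provider network named public and a deployment host instance named bastion:

```shell
# Allocate a floating IP from the public pool (assumes a provider
# network named "public" with a floating IP range defined).
openstack floating ip create public

# Attach the allocated address to the deployment host instance
# (replace 203.0.113.10 with the address returned above).
openstack server add floating ip bastion 203.0.113.10
```

The remaining OpenShift instances stay reachable only from within the tenant networks, for example via the deployment host.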

4.4. DNS configuration

DNS is provided by a BIND 9 server accessible on the external (public) network. The DNS server is authoritative for the domain and resolves the public address records specified in openshift-ansible; records are updated using nsupdate. The following example shows a public nsupdate key:

key_secret: "<secret_key>"

key_name: ""

key_algorithm: "HMAC-SHA256"

server: ""

The DNS server is configured to forward addresses it cannot resolve to an external DNS server.
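With a TSIG key such as the one above, records can be added or changed manually using nsupdate. The key name, server address, zone, hostname, and address in this sketch are all illustrative:

```shell
# Send a dynamic DNS update signed with the TSIG key.
# Key name, server, zone, and addresses are illustrative.
nsupdate -y "hmac-sha256:public-key:<secret_key>" <<'EOF'
server 203.0.113.5
zone example.com
update add bastion.example.com 300 A 192.168.0.10
send
EOF
```

This is the same mechanism openshift-ansible uses to populate the zone during installation.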

The OpenShift instance hostnames resolve to addresses on the bastion_net network. A record for the deployment server must be added manually, as openshift-ansible does not add it automatically. The following example shows a zone file populated by openshift-ansible:


* A
app-node-0 A
app-node-1 A
app-node-2 A
console A
infra-node-0 A
infra-node-1 A
infra-node-2 A
master-0 A
master-1 A
master-2 A
ns1 A
bastion A

After running install.yml, an address record is added for the console hostname that resolves to an externally accessible address on the public network. This address is from the OpenStack floating IP pool.

The install.yml playbook also creates a wildcard record. This record is used to access exposed OpenShift applications from clients outside the internal OpenShift cluster network. It resolves to a floating IP assigned to the OpenShift router pod.
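Resolution of the console and wildcard application records can be verified from outside the cluster with dig; the domain and application names shown are illustrative:

```shell
# Query the authoritative DNS server for the records created by
# install.yml (domain names are illustrative).
dig +short console.example.com

# Any name under the wildcard should resolve to the router's
# floating IP, even before the application exists.
dig +short myapp.apps.example.com
```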

4.5. OpenShift infrastructure roles

OpenShift hosts are either masters or nodes. The masters run the control plane components including the API server, controller manager server, and the etcd state database. Nodes provide the runtime environment for containers. They run the services required to be managed by the master and to run pods. The master schedules pods to be run on the nodes.
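The role of each host can be inspected from a master; in OpenShift 3.11, roles are recorded as node labels:

```shell
# List the cluster hosts and their readiness.
oc get nodes

# The node-role.kubernetes.io/* labels distinguish masters,
# infrastructure nodes, and compute (application) nodes.
oc get nodes --show-labels
```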

This reference architecture uses the following node roles: masters, infrastructure nodes, and application nodes.
The RAM, CPU, and disk recommendations in this reference architecture should be considered minimums; adjust them as needed. Hawkular metrics, the Prometheus cluster monitoring service, and the EFK stack for log aggregation in particular require additional disk and memory depending on polling intervals and storage retention length. Specific guidelines for configuring these services are environment specific and beyond the scope of this document.

Red Hat OpenStack Platform can be configured to boot instances from persistent Cinder volumes backed by Ceph RBD, although this was not done in this reference architecture. Images for instances booted from persistent volumes backed by Ceph RBD should be converted to RAW format prior to booting. Instances booted this way gain additional data protection features such as live migration, snapshots, and mirroring.

In this reference architecture, three master nodes are deployed for high availability to help ensure the cluster has no single point of failure. An Octavia load balancer distributes load across the master API endpoints. The controller manager server runs in an active-passive configuration, with one instance elected as cluster leader at a time.

Etcd is a distributed key-value store that OpenShift Container Platform uses for configuration. The OpenShift API server consults the etcd state database for node status, network configuration, secrets, and more. In this reference architecture the etcd state database is co-located with the master nodes to optimize the data path between the API server and the database.
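Cluster health for the co-located etcd members can be checked from any master. The certificate paths below follow the usual OpenShift 3.11 layout and may differ in your deployment:

```shell
# From a master, query etcd cluster health. Certificate paths and the
# endpoint hostname are assumptions based on the default 3.11 layout.
etcdctl --cert-file=/etc/etcd/peer.crt \
        --key-file=/etc/etcd/peer.key \
        --ca-file=/etc/etcd/ca.crt \
        --endpoints=https://master-0:2379 \
        cluster-health
```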

This reference architecture also uses infrastructure nodes. These are OpenShift nodes that run the OpenShift Container Platform infrastructure components, including routers, the cluster monitoring operator, the registry endpoint, and the Kuryr controller. Two infrastructure nodes are required for high availability; however, three nodes are deployed in this reference architecture to support the logging and metrics services. The sharded Elasticsearch database that backs the container log aggregation pods in particular requires three nodes for high availability.

4.5.1. OpenShift pod placement

Chapter 9, Appendix A: Pod placement by role describes where the pods are scheduled to roles in this reference architecture. In some cases, multiple replicas of a pod are scheduled across all nodes of that role; in other cases a single pod runs.

For example, the kuryr-controller pod runs on a single infrastructure node. Kuryr-cni (listener) pods run on all nodes regardless of role. They run as a daemonset that links the containers running on a node to the neutron network.
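This placement can be observed with oc. The kuryr namespace used below is an assumption and may differ in your deployment:

```shell
# Show the single kuryr-controller pod and the node it landed on,
# plus the kuryr-cni daemonset with one pod per node.
# (The "kuryr" namespace is an assumption.)
oc -n kuryr get pods -o wide
oc -n kuryr get daemonset
```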

In addition to the service pods listed in the appendix, application nodes also run the application payload pods defined by users' pod definitions, deployment configs, and stateful sets.

4.6. OpenStack storage

This reference architecture uses OpenStack storage to:

  • Back the docker filesystem on instances.
  • Provide persistent volumes to pods.
  • Store the internal registry.

4.6.1. Instance volumes

A Cinder persistent volume is provisioned for each master and node by the openshift-ansible installer. The volume is mounted to /var/lib/docker, as the following example demonstrates.

[openshift@master-0 ~]$ mount | grep dockerlv
/dev/mapper/docker--vol-dockerlv on /var/lib/docker type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)

The container storage volume in this reference architecture is 15 GB and attached as /dev/vdb on all instances. This is handled automatically by openshift-ansible during instance provisioning. The following example shows the list of Cinder volumes attached to each OpenShift instance.

[cloud-user@bastion ~]$ openstack volume list -f table -c Size -c "Attached to"
+------+-------------------------+
| Size | Attached to             |
+------+-------------------------+
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
| 15   | Attached to on /dev/vdb |
+------+-------------------------+
Adjust the size based on the number and size of containers each node will run. You can change the container volume size with the `openshift_openstack_docker_volume_size` Ansible variable in provision.yml.
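For example, the size could be raised at provisioning time by overriding the variable on the command line. The inventory and playbook paths below are illustrative:

```shell
# Provision instances with 30 GB container storage volumes instead of
# the 15 GB used in this reference architecture.
# (Inventory and playbook paths are illustrative.)
ansible-playbook -i inventory \
  -e openshift_openstack_docker_volume_size=30 \
  playbooks/openstack/openshift-cluster/provision.yml
```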

4.6.2. Persistent volumes for pods

The openshift-ansible installer automates the creation of a Cinder storage class for dynamic persistent volume creation. The following example shows the storage class:

[openshift@master-0 ~]$ oc get storageclass
standard (default) 1d

In this reference architecture the storage class allocates persistent volumes for logging and metrics data storage. Persistent volume claims are issued to the storage class and fulfilled during the installation phase of the openshift-ansible installer. For example:

[openshift@master-0 ~]$ oc get pv
25Gi RWO openshift-infra/metrics-cassandra-1 standard
30Gi RWO openshift-logging/logging-es-0 standard
30Gi RWO openshift-logging/logging-es-1 standard
30Gi RWO openshift-logging/logging-es-2 standard

The Cinder storage class access mode is RWO only; this is sufficient for most use cases. RWX (shared) access mode will be addressed in a future reference architecture. There are multiple ways to address the shared use case with OpenShift Container Platform 3.11 and Red Hat OpenStack Platform 13 but they are beyond the scope of this document.
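An application can request its own RWO volume from the same storage class by issuing a persistent volume claim. The claim name and size in this sketch are illustrative:

```shell
# Request a 5 GiB ReadWriteOnce volume from the "standard" Cinder
# storage class (claim name and size are illustrative).
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi
EOF
```

Because the class is the default, omitting storageClassName would have the same effect.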

See the OpenShift Container Platform 3.11 Persistent Storage documentation for more information about storage classes and access modes.

4.6.3. Internal registry

OpenShift Container Platform can build container images from source code, deploy them, and manage their lifecycle. To enable this, OpenShift Container Platform provides an internal, integrated container image registry to locally manage images.

The openshift-ansible installer can configure a persistent volume for internal registry storage or use an S3-compatible object store. OpenStack can provide the persistent volume using Cinder, or object storage through Swift backed by Ceph Rados Gateway. The following example shows the openshift-registry object storage container that openshift-ansible creates automatically when install.yml is run.

(shiftstack) [cloud-user@bastion ~]$ openstack container list -f value


This reference architecture uses Ceph Rados Gateway. When a Cinder persistent volume is used instead, it is attached to a single infrastructure node, which can become a single point of failure.

Note that by default the OpenStack user specified to configure Ceph Rados Gateway in install.yml must have the admin or Member Keystone role.
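If the user lacks the required role, an OpenStack administrator can grant it. The user and project names below are illustrative:

```shell
# Grant the Member role on the tenant project to the user that
# install.yml will use (user and project names are illustrative).
openstack role add --user shiftstack_user --project shiftstack Member
```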