Chapter 3. Design considerations

Consider the following questions when adapting this reference architecture to your environment:

  • What installation tooling will you use?
  • Is high availability required?
  • Which storage and network backends will you use?
  • How will DNS be provided?
  • What security and authentication features are needed?

This section describes how this reference architecture addresses each of these design considerations. The recommendations in this reference architecture were developed by Red Hat field consultants for deploying OpenShift on OpenStack in production.

3.1. Installation tools

Perform the installation using the tools recommended and supported by Red Hat: Red Hat OpenStack Platform director and openshift-ansible.

3.1.1. Red Hat OpenStack Platform director

Red Hat OpenStack Platform director is a management and installation tool based on the OpenStack TripleO (OpenStack on OpenStack) project. This reference architecture uses director to install Red Hat OpenStack Platform.

The fundamental concept behind Red Hat OpenStack Platform director is that there are two clouds. The first cloud (called the undercloud) is a standalone OpenStack deployment. The undercloud can be deployed on a single physical server or virtual machine. The administrator uses the undercloud’s OpenStack services to define and deploy the production OpenStack cloud. Director is also used for day two management operations, such as applying software updates and upgrading between OpenStack versions.

Figure 3: Relationship between the undercloud, the overcloud, and the OpenStack nodes

The second cloud (called the overcloud) is the full-featured production environment deployed by the undercloud. The overcloud comprises physical servers with various roles:

  • Controller nodes - run the OpenStack API endpoints. They also store OpenStack’s stateful configuration database and messaging queues.
  • Compute nodes - run virtual machine hypervisors. They host the computing resources allocated for user workloads.
  • Storage nodes - provide block, object, or software defined storage for the user workloads.

Figure 3 depicts the relationship between the undercloud, overcloud, and OpenStack nodes. Red Hat OpenShift Container Platform runs on the overcloud.
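For example, the number of overcloud nodes deployed in each role is typically set through a director environment file passed to the overcloud deploy command. The following minimal sketch assumes the default Controller, Compute, and CephStorage roles; the file name and counts are illustrative and must be adapted to the available hardware.

# node-counts.yaml (hypothetical environment file for the overcloud deploy command)
parameter_defaults:
  ControllerCount: 3      # three controllers for OpenStack HA (see Section 3.2.1)
  ComputeCount: 3         # hypervisors that host the OpenShift virtual machines
  CephStorageCount: 3     # Ceph OSD nodes (see Section 3.3.1)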

Note

Director is required for all production Red Hat OpenStack Platform deployments. Red Hat engineering’s tested and validated configuration settings are embedded into director’s deployment tooling.

3.1.2. Openshift-ansible

Once OpenStack is installed, virtual machines are built within OpenStack to run OpenShift. The virtual machines are configured using Ansible roles from openshift-ansible. Openshift-ansible is a set of Ansible playbooks that orchestrate complex deployment tasks including:

  • Configuring the container runtime environment on virtual machines.
  • Provisioning storage for an internal registry.
  • Configuring the OpenShift SDN.
  • Connecting to authentication systems.

Figure 4: openshift-ansible deployment workflow

Openshift-ansible deploys OpenShift on OpenStack using two playbooks:

  • provision.yml - deploys the OpenStack virtual machines. It uses the Ansible OpenStack cloud modules to build OpenStack resources through direct calls to the OpenStack Heat API. The all.yml Ansible vars file defines how OpenStack should be configured.
  • install.yml - installs the OpenShift cluster on the virtual machines. It uses the OSEv3.yml Ansible vars file to define how OpenShift should be deployed. An abbreviated example of both vars files follows this list.
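The following excerpt is a minimal sketch of the kind of variables set in each file. The values are illustrative only; the variable names follow the openshift-ansible OpenStack provisioning documentation and should be verified against the release being deployed.

# all.yml (excerpt) - OpenStack provisioning variables (illustrative values)
openshift_openstack_keypair_name: "openshift-keypair"
openshift_openstack_external_network_name: "public"
openshift_openstack_default_image_name: "rhel-7.6-server"
openshift_openstack_num_masters: 3
openshift_openstack_num_infra: 3
openshift_openstack_num_nodes: 3

# OSEv3.yml (excerpt) - OpenShift deployment variables (illustrative values)
openshift_deployment_type: openshift-enterprise
openshift_master_default_subdomain: "apps.openshift.example.io"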

More information on using openshift-ansible can be found in the official OpenShift Container Platform 3.11 documentation.

Openshift-ansible uses a dynamic Ansible inventory script to issue commands against the hosts in each role. The inventory is automatically generated as output from provision.yml. The following command output shows an OpenShift host list generated from the dynamic inventory:

(shiftstack) [cloud-user@bastion openstack]$ ./inventory.py --list | jq .OSEv3.hosts
[
  "master-1.openshift.example.io",
  "app-node-1.openshift.example.io",
  "app-node-2.openshift.example.io",
  "infra-node-1.openshift.example.io",
  "master-2.openshift.example.io",
  "master-0.openshift.example.io",
  "app-node-0.openshift.example.io",
  "infra-node-0.openshift.example.io",
  "infra-node-2.openshift.example.io"
]

Openshift-ansible can also deploy OpenShift directly onto physical servers using the OpenStack Ironic API. This reference architecture deploys to virtual machines as that is the more common deployment model.

3.2. High Availability

High availability is a requirement for any production deployment. A crucial consideration for high availability is the removal of single points of failure. This reference architecture is highly available at both the OpenStack and OpenShift layers.

3.2.1. OpenStack HA

Red Hat OpenStack Platform director deploys three controller nodes. Multiple instances of the OpenStack services and APIs run simultaneously on all three controllers. HAProxy load balances connections across the controller API endpoints to ensure service availability. The controllers also run the OpenStack state database and message bus. A Galera cluster protects the state database, and RabbitMQ queues are mirrored across all controller nodes to protect the message bus. This is the default level of high availability enforced by Red Hat OpenStack Platform director.

3.2.2. OpenShift HA

OpenShift is also deployed for high availability. In this reference architecture, the etcd state database is co-located on the three master nodes. etcd requires a minimum of three nodes for high availability.

This reference architecture also uses three infrastructure nodes. Infrastructure nodes host OpenShift infrastructure components such as the registry and the containers for log aggregation and metrics. A minimum of three infrastructure nodes is needed for high availability when the aggregated logging database is sharded, and to ensure that services are not interrupted when a node reboots.

In deployments with three or more OpenStack Compute nodes, the OpenShift virtual machines should be scheduled with anti-affinity rules. Anti-affinity helps ensure high availability by preventing virtual machines of the same role from running on the same Compute node, so the failure of a single Compute node cannot bring down the OpenShift cluster.
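The underlying OpenStack mechanism is a Nova server group with an anti-affinity policy, referenced by each instance through a scheduler hint. The following abbreviated Heat snippet is a simplified sketch for illustration only, not the exact template used by openshift-ansible; the server name, image, and flavor are hypothetical, and properties such as networks are omitted.

heat_template_version: queens

resources:
  master_anti_affinity_group:
    type: OS::Nova::ServerGroup
    properties:
      name: openshift-masters
      policies: ['anti-affinity']    # never place two group members on the same Compute node

  master_0:
    type: OS::Nova::Server
    properties:
      name: master-0.openshift.example.io   # hypothetical name
      image: rhel-7.6-server                # hypothetical image
      flavor: m1.master                     # hypothetical flavor
      scheduler_hints:
        group: { get_resource: master_anti_affinity_group }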

Figure 5: Highly available OpenShift on OpenStack deployment

Figure 5 shows an example of a highly available OpenShift on OpenStack deployment. The OpenStack Compute nodes and Ceph OSDs are grouped into availability zones on a per-rack basis. The virtual machines are all members of the same OpenStack tenant. Anti-affinity rules spread the virtual machines of each role across the physical Compute nodes.

3.2.3. Hardware HA

You should also take care to eliminate single points of failure at the hardware layer. Hardware fault tolerance recommendations implemented in this architecture include:

  • RAID on server internal hard disks.
  • Ceph OSD disks should be configured as RAID 0 or without RAID. The OSDs are protected by Ceph’s native data replication.
  • Redundant server power supplies connected to different power sources.
  • Bonded network interfaces connected to redundant network switches. The following code sample shows a network bond definition for an OpenStack controller:
- type: ovs_bridge
  name: br-vlan
  members:
    - type: linux_bond
      name: bond2
      bonding_options:
        get_param: BondInterfaceOvsOptions
      members:
        - type: interface
          name: nic4
          primary: true
        - type: interface
          name: nic5
...

BondInterfaceOvsOptions: 'mode=1 miimon=150'

If the OpenStack deployment spans multiple racks, the racks should be connected to different PDUs. OpenStack Compute nodes should be divided into availability zones by PDU.

Note

A complete description of how to configure hardware fault tolerance is beyond the scope of this document.

3.3. Storage and networking

Plan storage and networking carefully. Select storage and networking backends that meet the needs of the OpenShift applications while minimizing complexity. Both Red Hat OpenStack Platform and OpenShift Container Platform have independent and mature storage and networking solutions. However, layering the native solutions of the two platforms can add complexity and unwanted performance overhead. Avoid duplicating storage or networking components. For example, do not run the OpenShift SDN on top of an OpenStack SDN, and do not run OpenShift Container Storage on top of Ceph.

3.3.1. Red Hat Ceph Storage

Red Hat Ceph Storage provides scalable cloud storage for this reference architecture. Ceph is tightly integrated with the complete Red Hat cloud software stack. It provides block, object, and file storage capabilities.

Ceph cluster servers are divided into monitor and Object Storage Device (OSD) nodes. Ceph monitors run the monitor daemon, which keeps a master copy of the cluster topology. Clients query the Ceph monitors in order to read and write data to the cluster. Ceph OSD nodes store the data, which is replicated across OSDs on different nodes to protect against disk and node failures.

Figure 6: Ceph monitor and OSD nodes deployed for high availability

A minimum of three Ceph monitors and three or more Ceph OSD nodes are needed to ensure high availability in production. This is depicted in figure 6. Other than recommending the minimum number of Ceph nodes for high availability, Ceph hardware sizing guidelines are beyond the scope of this document. They vary depending on the workload size and characteristics. Refer to the Ceph Hardware Selection Guide for additional information.
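When Ceph is deployed by director with ceph-ansible, the replication level and OSD disk layout are typically set through environment parameters. The following sketch uses illustrative values only; the parameter names are assumptions based on the Red Hat OpenStack Platform 13 Ceph integration and the device paths are hypothetical.

parameter_defaults:
  CephPoolDefaultSize: 3            # three-way replication across OSD nodes
  CephAnsibleDisksConfig:
    osd_scenario: collocated
    devices:                        # hypothetical OSD data disks on each OSD node
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd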

This reference architecture also provides S3-compatible object storage using the Ceph RADOS Gateway. A minimum of three Object Gateway nodes is required for external access to Ceph object storage.

Note

It is recommended that all Ceph nodes run on dedicated physical servers.

3.3.2. Neutron networking

The OpenStack Neutron API provides a common abstraction layer for various backend network plugins. Red Hat OpenStack Platform 13 supports multiple backend network plugins, including ML2 OVS, OVN, and commercial SDNs. This reference architecture uses the default ML2 OVS plugin. VXLAN encapsulation carries layer 2 traffic between nodes. All L3 traffic is routed through centralized Neutron agents running on the controller nodes.
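With the ML2 OVS plugin, the tenant network type and tunnel encapsulation are controlled through director parameters. The following minimal sketch shows illustrative values; the bridge mapping is an assumption and depends on the network environment files used in the deployment.

parameter_defaults:
  NeutronNetworkType: 'vxlan'                 # tenant networks use VXLAN encapsulation
  NeutronTunnelTypes: 'vxlan'
  NeutronBridgeMappings: 'datacentre:br-ex'   # provider (external) network bridge mapping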

A range of externally accessible floating IP addresses are bridged to the tenant instances using a provider network. The floating IP addresses are used by remote clients to access exposed OpenShift services, the OpenShift API endpoint, and the web console.

3.3.3. OpenShift SDN

The OpenShift SDN connects pods across all node hosts, providing a unified cluster network. OpenShift natively provides several options for inter-pod communication, including the ovs-multitenant and ovs-networkpolicy plug-ins. More information on OpenShift 3 SDN concepts can be found in the OpenShift 3 SDN documentation.

This reference architecture uses Kuryr for OpenShift networking. Kuryr allows OpenShift pods direct access to the underlying OpenStack Neutron networks. It directly plumbs OpenShift pods and OpenStack virtual machines to the same software defined networks. Kuryr is described further in the integration section of this document.
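Kuryr is enabled through the openshift-ansible OpenStack provisioning variables. The following excerpt is a hedged sketch based on the OpenShift Container Platform 3.11 Kuryr documentation; verify the variable names and values against the release being deployed.

# all.yml (excerpt) - enable Kuryr SDN (illustrative)
openshift_use_kuryr: true
openshift_use_openshift_sdn: false
use_trunk_ports: true               # pods attach to Neutron networks through trunk subports
os_sdn_network_plugin_name: cni
openshift_node_proxy_mode: userspace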

3.4. DNS

OpenShift Container Platform requires a fully functional DNS server. This section describes design considerations related to providing OpenShift with DNS through OpenStack.

3.4.1. OpenShift DNS

Internally, OpenShift uses DNS to resolve container names. OpenShift nodes run local dnsmasq services to resolve the internal addresses.

OpenShift also requires external DNS; openshift-ansible resolves OpenShift node hostnames while validating certificate signing requests during installation. The OpenShift console and containerized applications exposed through the external router must also be externally resolvable through DNS.

Openshift-ansible uses floating IP addresses for the externally resolvable OpenShift addresses, which is not ideal for statically configured external DNS: every deletion or redeployment of the OpenShift environment requires a change to the external DNS records. Therefore, in this reference architecture openshift-ansible is configured to dynamically push the OpenShift console and application wildcard addresses to an external DNS server using nsupdate.
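Dynamic DNS updates are configured through the openshift_openstack_external_nsupdate_keys variable in all.yml. The following is a minimal sketch based on the openshift-ansible OpenStack provisioning documentation; the server addresses and key material are placeholders.

# all.yml (excerpt) - push console and wildcard records via nsupdate (illustrative)
openshift_openstack_external_nsupdate_keys:
  public:
    server: '203.0.113.53'            # external DNS server (documentation address)
    key_name: 'update-key'            # placeholder TSIG key name
    key_algorithm: 'hmac-md5'
    key_secret: '<TSIG key secret>'   # placeholder
  private:
    server: '192.0.2.53'              # internal DNS server (documentation address)
    key_name: 'update-key'
    key_algorithm: 'hmac-md5'
    key_secret: '<TSIG key secret>'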

3.4.2. OpenStack DNS

OpenStack can provide DNS for OpenShift in multiple different ways. By default, OpenStack is configured for a provider-type environment. Tenants are expected to bring their own DNS. The external DNS server may exist outside of OpenStack, or it can be deployed within the tenant, forwarding addresses it cannot resolve to an external DNS server. The external DNS server address is pushed to the OpenStack instances using the Neutron subnet configuration for the tenant.
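When openshift-ansible provisions the tenant network, the external DNS server address can be pushed to the instances through the Neutron subnet by setting the nameserver list in all.yml. A brief sketch with a placeholder address; the variable name follows the openshift-ansible OpenStack provisioning documentation.

# all.yml (excerpt) - DNS servers configured on the Neutron subnet (illustrative)
openshift_openstack_dns_nameservers:
  - 203.0.113.53        # tenant BIND server or external DNS server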

Red Hat OpenStack Platform 13 also supports native DNS resolution through Neutron, although it is not enabled by default. When enabled, Neutron associates DNS names with ports, and the port names are internally resolvable by the dnsmasq instances running on the OpenStack hypervisors. Dnsmasq can be configured to forward addresses it cannot resolve to a remote DNS server, or to the external DNS server used by the OpenStack hypervisor. Note that this approach changes the default DNS suffix for all OpenStack tenants. In practice this is not a problem: namespace isolation allows the same name to be used in multiple tenants.

This reference architecture implements the first approach. A BIND server was deployed within the tenant to provide local DNS resolution. “Bring your own DNS” is the more common case. The internal Neutron DNS approach involves changing the default OpenStack configuration deployed by Director. If internal Neutron DNS resolution is not configured during the initial OpenStack deployment, enabling it later can be an invasive process.

3.5. Security and authentication

This section shares design considerations related to security and authentication when running OpenShift on OpenStack.

OpenShift and OpenStack are enterprise open source software. Both support Role Based Access Control (RBAC) and flexible options for integrating with existing user authentication systems. In addition, both inherit the native security features of Red Hat Enterprise Linux, such as SELinux.

3.5.1. Authentication

By default, the OpenStack Identity service (keystone) stores user credentials in its state database, or it can use an LDAP-compliant directory server. Authentication and authorization in OpenStack can also be delegated to an external identity service.

OpenShift master nodes issue tokens to authenticate user requests against the API. OpenShift supports various identity providers, including HTPasswd and LDAP.

There is no formal integration between OpenShift and OpenStack identity providers and authentication. However, both services can be configured to use the same LDAP directory servers, providing a consistent authentication user experience across the platforms.
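For example, OpenShift can be pointed at an LDAP directory through the openshift_master_identity_providers variable in OSEv3.yml. The following sketch uses placeholder distinguished names, hostnames, and credentials.

# OSEv3.yml (excerpt) - LDAP identity provider (illustrative values)
openshift_master_identity_providers:
  - name: ldap_provider
    kind: LDAPPasswordIdentityProvider
    challenge: true
    login: true
    insecure: false
    bindDN: 'uid=openshift,cn=users,dc=example,dc=io'               # placeholder bind account
    bindPassword: '<bind password>'                                 # placeholder
    url: 'ldaps://idm.example.io:636/cn=users,dc=example,dc=io?uid' # placeholder directory URL
    attributes:
      id: ['dn']
      email: ['mail']
      name: ['cn']
      preferredUsername: ['uid']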

3.5.2. Security

The Red Hat OpenStack Platform 13 Security and Hardening Guide recommends practices to help harden OpenStack security. Many of these practices are embedded into a default Red Hat OpenStack Platform deployment. In the field, Red Hat OpenStack Platform has been hardened to various levels of standard security compliance, such as ANSSI and FedRAMP. This reference architecture is not a comprehensive resource for securing OpenStack.

Multiple OpenStack Platform security features are used to help harden OpenShift in this reference architecture. For example, openshift-ansible creates security groups for each role that allow only port- and protocol-level access to the services running on that role. In addition, TLS/SSL certificates are used to encrypt the OpenStack public API endpoints. The certificate chain can be imported into OpenShift either during installation or manually, allowing openshift-ansible to access the encrypted endpoints.

Note that if TLS/SSL encryption of the OpenStack public endpoints is enabled, the certificates must be imported on any host issuing commands against the endpoints, including the deployment host where openshift-ansible is run.

3.5.3. OpenStack Barbican

Barbican is the OpenStack Key Manager service API. It is designed for the secure storage, provisioning, and management of secrets such as passwords, encryption keys, and X.509 certificates. Barbican does not appear in this reference architecture, but it was tested during reference architecture development. Potential use cases for Barbican with OpenShift on OpenStack include:

  • Storing signing keys for signed Glance images used to build OpenShift instances.
  • Encrypting object storage buckets for the OpenShift internal registry in a multi-tenant deployment.
  • Encrypting Cinder volumes backed by Ceph in a multi-tenant deployment.

Information related to installing and configuring Barbican can be found in the official OpenStack Platform 13 product documentation.