Chapter 2. Reference Architecture Overview

Red Hat OpenShift Container Platform is a container application development and hosting platform that provides compute services on demand. It allows developers to create and deploy applications by delivering a consistent environment for both development and the run-time life-cycle, one that requires no server management.

The reference environment described in this document consists of a single bastion host, three master hosts, and five node hosts running Docker containers on behalf of the platform's users. The node hosts are separated into two functional classes: infrastructure nodes and app nodes. The two infrastructure nodes run the internal RHOCP services, the OpenShift Router and the local registry. The remaining three app nodes host the user container processes.
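
For illustration only, this topology could be sketched as an openshift-ansible inventory along the following lines; the hostnames and node labels are assumptions, not values prescribed by this reference environment.

# Hypothetical sketch of the host topology as an openshift-ansible inventory.
# Hostnames and node labels are illustrative assumptions.
cat > inventory.ini <<'EOF'
[masters]
master-[0:2].ocp3.example.com

[nodes]
master-[0:2].ocp3.example.com
infra-[0:1].ocp3.example.com openshift_node_labels="{'region': 'infra'}"
app-[0:2].ocp3.example.com   openshift_node_labels="{'region': 'primary'}"
EOF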

A pictorial representation of this reference environment is shown below.

Figure 2.1. OpenShift Container Platform Architecture

The OpenShift Container Platform architecture diagram is best viewed as four layers.

The first layer consists of the prerequisites: a DNS server, LDAP, a load-balancer, the public network, and a floating IP pool that provides public IPs for the nodes to be created.

The second layer consists of the bastion server, which runs Ansible, and the IP router, which routes traffic across all of the nodes via the control network.

The third layer consists of all the nodes: the OpenShift master nodes, infrastructure nodes, and app nodes, and shows the connectivity between them.

The fourth layer consists of Docker storage and Cinder storage connected to their respective nodes.

The main distinction between the infrastructure nodes and the app nodes is the placement of the OpenShift Router and the presence of a public interface. The infrastructure nodes require a public IP address in order to communicate with the load-balancer. Once traffic reaches the OpenShift Router, the router forwards it to the containers on the app nodes, eliminating the need for a public interface on the app nodes.

All of the participating instances have two active network interfaces, connected to two networks with distinct purposes and traffic. The control network carries all of the communication between OpenShift service processes. The floating IPs of the bastion host, the master nodes, and the infrastructure nodes come from a floating pool routed to this network. The tenant network carries and isolates the container-to-container traffic, which is directed through a flannel service that carries packets from one container to another.
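
As a sketch of how these two networks might be created in neutron (the network names and CIDR ranges are illustrative assumptions, not prescribed values):

# Hypothetical example: create the control and tenant networks and subnets.
# Network names and CIDR ranges are illustrative assumptions.
neutron net-create control-network
neutron subnet-create control-network 172.18.10.0/24 --name control-subnet
neutron net-create tenant-network
neutron subnet-create tenant-network 172.18.20.0/24 --name tenant-subnet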

Each node instance creates and mounts a cinder volume. The volume holds the Docker run-time disk overhead for normal container operations. Mounting from cinder allows the size of the disk space to be chosen independently, without the need to define a new OpenStack flavor. Each node also offers cinder access to the containers it hosts through RHOCP.
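
A minimal sketch of this pattern for a single node is shown below; the volume size, the names, and the /dev/vdb device path are assumptions for illustration.

# Hypothetical example: create a cinder volume for Docker storage and attach
# it to an app node. Size, names, and device path are illustrative assumptions.
cinder create --display-name app-node-0-docker 40
nova volume-attach app-node-0 $VOLUME_ID /dev/vdb   # $VOLUME_ID from cinder create

# On the instance, point docker-storage-setup at the attached device.
cat > /etc/sysconfig/docker-storage-setup <<'EOF'
DEVS=/dev/vdb
VG=docker-vg
EOF
docker-storage-setup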

2.1. Internal Networks

RHOCP establishes a network for communication between containers on separate nodes. In the architecture diagram this is designated as the tenant network. However, the tenant network is actually two networks: one network encapsulated within another. The outer network (the tenant network) is the neutron network that carries traffic between nodes. The inner network carries the container-to-container traffic.

RHOCP offers two mechanisms to operate the internal network: openshift_sdn and flannel. This reference environment uses the flannel network for the reasons detailed below.

The default mechanism is the openshift_sdn. This is a custom Open vSwitch (OVS) SDN that offers fine-grained dynamic routing and traffic isolation. On a bare-metal installation, openshift_sdn offers both control and high performance.

If the default mechanism (openshift_sdn) is used to operate the RHOCP internal network, all RHOCP inter-node traffic carried over OpenStack neutron undergoes double encapsulation, causing significant network performance degradation.

flannel replaces the default openshift_sdn mechanism, eliminating the second OVS instance. flannel is used in host-gateway mode, which routes packets using a lookup table for the destination instance and then forwards them directly to that instance, encapsulated with only a single UDP header. This provides a performance benefit over OVS double encapsulation, but it comes at a cost.
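
As a sketch, this replacement can be selected at install time through openshift-ansible inventory variables; the eth1 interface name (the tenant network interface) is an assumption.

# Hypothetical inventory variables selecting flannel instead of openshift_sdn.
# The eth1 tenant-network interface name is an illustrative assumption.
cat >> inventory.ini <<'EOF'
[OSEv3:vars]
openshift_use_openshift_sdn=false
openshift_use_flannel=true
flannel_interface=eth1
EOF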

flannel uses a single IP network space for all of the containers, allocating a contiguous subset of the space to each instance. Consequently, nothing prevents a container from attempting to contact any IP address in the same network space. This hinders multi-tenancy because the network cannot be used to isolate the containers of one application from those of another.
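
For example, the per-node allocation can be seen in flannel's subnet environment file on any node; the network ranges below are illustrative assumptions.

# Hypothetical example: flannel records each node's slice of the single flat
# network in /run/flannel/subnet.env (values shown are assumptions).
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.20.0.0/16
# FLANNEL_SUBNET=10.20.3.1/24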

In this reference environment, performance is given precedence over tenant isolation.

2.2. Users and Views - the faces of RHOCP

RHOCP is a software container service. When fully deployed in a cloud environment, it presents three operational views for three different classes of users. These users work with RHOCP in different ways, performing distinct tasks, and each sees a different view of the service.

When deploying RHOCP, it is important to understand these views and how the service presents itself to different users.

Operations Users use a bastion host to reach the service internals. The bastion host serves as a stable trusted platform for service monitoring and managing updates.

Application Developers use the CLI or web interfaces of the RHOCP service to create and manage their applications.

Application Users see RHOCP applications as a series of web services published under a blanket DNS domain name. These all forward to the OpenShift Router, an internal service that creates the connections between external users and internal containers.

Figure 2.2. RHOCP User Views

When the deployment process is complete, each type of user is able to reach and use their resources. The RHOCP service makes the best possible use of the OpenStack resources and, where appropriate, makes them available to the users.

Each of these views corresponds to a host or virtual host (via a load-balancer) that the user sees as their gateway to the RHOCP service.

Table 2.1. Sample User Portal Hostname

Role                  | Activity                                           | Hostname
Operator              | Monitoring and management of RHOCP service         | bastion.ocp3.example.com
Application Developer | Creating, deploying and monitoring user services   | devs.ocp3.example.com
Application User      | Accessing apps and services to do business tasks   | *.apps.ocp3.example.com

2.3. Load-Balancers and Hostnames

An external DNS server is used to make the service views visible. Most installations contain an existing DNS infrastructure capable of dynamic updates.

The developer interface, provided by the OpenShift master servers, can be spread across multiple instances to provide both load-balancing and high-availability properties. This guide uses an external load-balancer running haproxy to offer a single entry point for developers.

Application traffic passes through the OpenShift Router on its way to the container processes inside the service. The OpenShift Router is really a reverse proxy service that multiplexes the traffic to the multiple containers making up a scaled application running inside RHOCP. The router itself can also accept inbound traffic on multiple hosts. The load-balancer used for developer HA acts as the public view for the RHOCP applications, forwarding traffic to the OpenShift Router endpoints.

Since the load-balancer is an external component and the hostnames for the OpenShift master service and application domain are known beforehand, it can be configured into the DNS prior to starting the deployment procedure.

The load-balancer can handle both the master and application traffic. The master web and REST interfaces are on port 8443/TCP. The applications use ports 80/TCP and 443/TCP. A single load-balancer can forward both sets of traffic to different destinations.
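
A minimal haproxy sketch of this arrangement is shown below; the backend names and IP addresses are assumptions for illustration, not the addresses used in this reference environment.

# Hypothetical haproxy fragment: one load-balancer forwarding master API
# traffic (8443/TCP) and application traffic (80/TCP and 443/TCP) to
# different backends. Names and addresses are illustrative assumptions.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend masters
    bind *:8443
    mode tcp
    default_backend openshift-masters

backend openshift-masters
    mode tcp
    balance source
    server master-0 192.168.0.10:8443 check
    server master-1 192.168.0.11:8443 check
    server master-2 192.168.0.12:8443 check

frontend apps-http
    bind *:80
    mode tcp
    default_backend openshift-routers-http

backend openshift-routers-http
    mode tcp
    balance source
    server infra-0 192.168.0.20:80 check
    server infra-1 192.168.0.21:80 check

frontend apps-https
    bind *:443
    mode tcp
    default_backend openshift-routers-https

backend openshift-routers-https
    mode tcp
    balance source
    server infra-0 192.168.0.20:443 check
    server infra-1 192.168.0.21:443 check
EOF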

With a load-balancer host address of 10.19.x.y, the two DNS records can be added as follows:

Table 2.2. Load Balancer DNS records

IP Address | Hostname                | Purpose
10.19.x.y  | devs.ocp3.example.com   | Developer access to OpenShift master web UI and REST API
10.19.x.y  | *.apps.ocp3.example.com | User access to application web services

The second record is called a wildcard record. It allows all hostnames under that subdomain to have the same IP address without needing to create a separate record for each name.

This allows RHOCP to add applications with arbitrary names as long as they are under that subdomain. For example, DNS name lookups for tax.apps.ocp3.example.com and home-goods.apps.ocp3.example.com would both return the same IP address: 10.19.x.y. All of the traffic is forwarded to the OpenShift Routers, which examine the HTTP headers of the requests and forward them to the correct destination.
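
Since the DNS infrastructure supports dynamic updates, the two records might be added with nsupdate, as sketched below; the name server, key file, and TTL are assumptions, and 10.19.x.y stands in for the real load-balancer address.

# Hypothetical example: add the developer and wildcard application records by
# dynamic DNS update. Name server, key file, and TTL are assumptions; replace
# 10.19.x.y with the actual load-balancer address before running.
nsupdate -k /etc/rndc.key <<'EOF'
server ns1.ocp3.example.com
zone ocp3.example.com
update add devs.ocp3.example.com. 300 IN A 10.19.x.y
update add *.apps.ocp3.example.com. 300 IN A 10.19.x.y
send
EOF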

The destination for the master and application traffic is set in the load-balancer configuration after each instance is created and the floating IP address is assigned.

2.4. Deployment Methods

This document describes two ways of deploying RHOCP. The first is a manual process that creates and prepares the run-time environment in RHOSP and then installs RHOCP on the instances. The manual process is presented first to provide a detailed explanation of all the steps involved in the deployment process and a better understanding of all the components and how they work together.

The second method uses Heat orchestration to deploy the RHOCP service. Heat is the OpenStack service that automates complex infrastructure and service deployments. With Heat, the user describes the structure of a set of OpenStack resources in a template, and the Heat engine creates a stack from the template, configuring all of the networks, instances, and volumes in a single step.

The RHOCP on RHOSP Heat templates, along with some customization parameters, define the RHOCP service, and the Heat engine executes the deployment process.
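
As a sketch, launching the deployment from those templates might look like the following; the template and environment file names are assumptions, not the actual file names shipped with the reference architecture.

# Hypothetical example: create the stack from a Heat template and an
# environment file of customization parameters. File names are assumptions.
heat stack-create -f openshift.yaml -e env.yaml ocp3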

Heat orchestration offers several benefits, including removing the operator from the process, thus providing a consistent, repeatable result.