Chapter 5. Key integration points

OpenShift Container Platform and Red Hat OpenStack Platform are co-engineered to form a robust, scalable, and secure platform for running self-service virtual machines and containers. This section of the reference architecture describes the key integration points between the products in greater detail.

Figure 8 maps the key integration points from an OpenStack service to the corresponding OpenShift components.

Figure 8: Key integration points between OpenStack services and OpenShift components

The integrations are divided into three categories: compute, network, and storage.

5.1. Compute

Heat is OpenStack’s orchestration service. It can launch composite cloud applications based on text-file templates that can be managed as code. The openshift-ansible installer makes native calls to the Heat API to deploy OpenShift instances, networks, and storage.

Figure 9: openshift-ansible provisioning workflow through the Heat API

The installer uses the Ansible OpenStack cloud modules to manage OpenStack resources. The Ansible playbook provision.yml makes the calls to Heat. This is depicted in Figure 9.
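As a point of reference, the provisioning playbook is invoked like any other Ansible playbook. The following is a minimal sketch: the playbook path reflects the openshift-ansible repository layout for the 3.11 release, and the inventory and credentials file names are illustrative, not installer defaults.

$ source ~/overcloudrc                # illustrative OpenStack credentials file
$ ansible-playbook -i inventory \
      openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml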

Heat is responsible for orchestrating the Nova compute service and the Ironic bare metal service. Nova creates virtual machines for running the various OpenShift roles based on predefined flavors and images. Ironic can also be used to push operating system images to bare metal servers that meet or exceed the minimum hardware requirements for the role’s flavor.
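The flavors, images, and resulting instances can be inspected with the standard OpenStack CLI. In the sketch below, the server name filter is illustrative, and the baremetal command only applies when Ironic is deployed.

$ openstack flavor list                      # flavors referenced by the OpenShift roles
$ openstack image list                       # guest images available to Nova and Ironic
$ openstack server list --name openshift     # instances created for the cluster (illustrative name filter)
$ openstack baremetal node list              # bare metal nodes, only relevant when Ironic is in use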

Once the OpenShift cloud stack is deployed, it can also be scaled using the same facilities. The following listing shows the Heat resources in a deployed stack:

(shiftstack) [cloud-user@bastion ~]$ openstack stack resource list openshift-cluster -c resource_name -c resource_type -c resource_status
+--------------------------+------------------------------+-----------------+
| resource_name            | resource_type                | resource_status |
+--------------------------+------------------------------+-----------------+
| compute_nodes            | OS::Heat::ResourceGroup      | CREATE_COMPLETE |
| api_lb                   | OS::Octavia::LoadBalancer    | CREATE_COMPLETE |
| router_lb_pool_http      | OS::Octavia::Pool            | CREATE_COMPLETE |
| common-secgrp            | OS::Neutron::SecurityGroup   | CREATE_COMPLETE |
| service_subnet           | OS::Neutron::Subnet          | CREATE_COMPLETE |
| api_lb_pool              | OS::Octavia::Pool            | CREATE_COMPLETE |
| router_lb_floating_ip    | OS::Neutron::FloatingIP      | CREATE_COMPLETE |
| infra-secgrp             | OS::Neutron::SecurityGroup   | CREATE_COMPLETE |
| cns                      | OS::Heat::ResourceGroup      | CREATE_COMPLETE |
| lb-secgrp                | OS::Neutron::SecurityGroup   | CREATE_COMPLETE |
| infra_nodes              | OS::Heat::ResourceGroup      | CREATE_COMPLETE |
| router_lb_listener_https | OS::Octavia::Listener        | CREATE_COMPLETE |
| router_lb_pool_https     | OS::Octavia::Pool            | CREATE_COMPLETE |
| etcd-secgrp              | OS::Neutron::SecurityGroup   | CREATE_COMPLETE |
| router_lb                | OS::Octavia::LoadBalancer    | CREATE_COMPLETE |
| etcd                     | OS::Heat::ResourceGroup      | CREATE_COMPLETE |
| service_net              | OS::Neutron::Net             | CREATE_COMPLETE |
| pod_subnet               | OS::Neutron::Subnet          | CREATE_COMPLETE |
| pod_access_sg            | OS::Neutron::SecurityGroup   | CREATE_COMPLETE |
| master-secgrp            | OS::Neutron::SecurityGroup   | CREATE_COMPLETE |
| pod_net                  | OS::Neutron::Net             | CREATE_COMPLETE |
| api_lb_listener          | OS::Octavia::Listener        | CREATE_COMPLETE |
| router_lb_listener_http  | OS::Octavia::Listener        | CREATE_COMPLETE |
| masters                  | OS::Heat::ResourceGroup      | CREATE_COMPLETE |
| service_router_port      | OS::Neutron::Port            | CREATE_COMPLETE |
| pod_subnet_interface     | OS::Neutron::RouterInterface | CREATE_COMPLETE |
| api_lb_floating_ip       | OS::Neutron::FloatingIP      | CREATE_COMPLETE |
| internal_api_lb_listener | OS::Octavia::Listener        | CREATE_COMPLETE |
| node-secgrp              | OS::Neutron::SecurityGroup   | CREATE_COMPLETE |
| service_subnet_interface | OS::Neutron::RouterInterface | CREATE_COMPLETE |
+--------------------------+------------------------------+-----------------+

Abstracting API calls to the native OpenStack services has multiple benefits:

  • Administrators do not have to learn Heat or any other OpenStack tools to deploy OpenShift on OpenStack.
  • OpenStack administrators familiar with Heat can use the tools they already know to examine and manage the deployed stack. The preceding code block shows a Heat resource listing from a stack deployed by openshift-ansible; additional stack inspection commands are shown after this list.
  • Heat provides a scalable and reliable interface for automating OpenShift installations.
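For example, the deployed stack can be examined with standard Heat CLI calls. The following minimal sketch reuses the openshift-cluster stack name and the masters resource from the listing above.

$ openstack stack show openshift-cluster                     # overall stack status, parameters, and outputs
$ openstack stack event list openshift-cluster               # chronological orchestration events
$ openstack stack resource show openshift-cluster masters    # details of a single resource in the stack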

5.2. Storage

The persistent volume framework allows OpenShift users to request block storage volumes without any knowledge of the underlying storage that is used. The storage must exist in the underlying infrastructure before it can be used in OpenShift.

The openshift-ansible installer configures a Cinder StorageClass so that persistent volumes can be dynamically provisioned and attached when they are requested. This is depicted in Figure 10.

Figure 10: Dynamic persistent volume provisioning with Cinder

OpenShift API events are modeled as actions taken against API objects. The Cinder provisioner monitors the OpenShift API server for CRUD events related to persistent volume requests. When a persistent volume claim is created for a pod, the Cinder provisioner creates the corresponding Cinder volume.

Once the volume is created, a subsequent call to the Nova API attaches it to the instance where the pod is scheduled. The volume is then mounted into the pod. The Cinder provisioner continues monitoring the OpenShift API server for volume release or delete requests and initiates those actions when they are received.
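The end-to-end flow can be exercised with a simple claim. The following is a minimal sketch that assumes the installer-created Cinder StorageClass is marked as the cluster default (otherwise add a storageClassName to the claim); the claim name and size are illustrative.

$ oc get storageclass                  # verify the Cinder-backed StorageClass exists
$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
$ oc get pvc example-claim             # reports Bound once the volume is provisioned
$ openstack volume list                # the matching Cinder volume appears on the OpenStack side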

The openshift-ansible installer also interacts with Cinder to create persistent block storage for the OpenShift instances. The master and node instances each contain a volume to store container images. The purpose of this container image volume is to ensure that container images do not exhaust local storage or degrade node performance.

5.2.1. Internal registry

The OpenShift image registry requires persistent storage to ensure that images are preserved if the registry pod migrates to another node. The openshift-ansible installer can automatically create a Cinder volume for internal registry persistent storage, or it can point to a pre-created volume. Alternatively, openshift-ansible can use an S3-compatible object store to hold container images. Either OpenStack Swift or Ceph RADOS Gateway is a suitable object store for the internal container registry.
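The registry’s storage configuration can be verified from the default project with standard oc calls, as in the sketch below; the claim name reported depends on the installer variables used for the deployment.

$ oc get pvc -n default                          # persistent volume claims backing the registry
$ oc set volume dc/docker-registry -n default    # lists the registry volumes and their sources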

5.3. Network

There are two primary network integration points: Kuryr, which gives OpenShift pods direct access to OpenStack Neutron networks, and Octavia, which automates load balancer creation and configuration for OpenShift services.

5.3.1. Kuryr

Kuryr is a CNI plugin that uses OpenStack Neutron and Octavia to provide networking for pods and services. It is primarily designed for OpenShift clusters running on OpenStack virtual machines. Figure 11 is a block diagram of a Kuryr architecture.

Figure 11: Kuryr architecture

Kuryr components are deployed as pods in the kuryr namespace. The kuryr-controller is a single-container service pod that runs on an infrastructure node as an OpenShift Deployment. The kuryr-cni container installs and configures the Kuryr CNI driver on each of the OpenShift masters, infrastructure nodes, and compute nodes, and runs as a DaemonSet.
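These components can be verified with standard oc commands, as sketched below. The namespace follows the text above; exact pod names vary by release.

$ oc get deployment,daemonset -n kuryr       # kuryr-controller Deployment and kuryr-cni DaemonSet
$ oc get pods -n kuryr -o wide               # one kuryr-cni pod per node, kuryr-controller on an infra node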

The Kuryr controller watches the OpenShift API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift using Kuryr. This includes open source solutions such as OVS and OVN as well as Neutron-compatible commercial SDNs. That is also the reason the openvswitch firewall driver is a prerequisite instead of the ovs-hybrid firewall driver.
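On the OpenStack side, the trunk ports used to attach pod subports to the node ports can be listed with the standard Neutron trunk CLI, and the firewall driver setting lives in the Neutron OVS agent configuration. The configuration path shown is the conventional location; containerized director-based deployments keep it under /var/lib/config-data.

$ openstack network trunk list               # trunk ports carrying pod subports for the OpenShift nodes
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
# [securitygroup]
# firewall_driver = openvswitch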

Kuryr is recommended for OpenShift deployments on encapsulated OpenStack tenant networks, where it avoids double encapsulation: running an encapsulated OpenShift SDN on top of an already encapsulated OpenStack network, as is the case whenever VXLAN, GRE, or GENEVE is required. Implementing Kuryr does not make sense when provider networks, tenant VLANs, or a third-party commercial SDN such as Cisco ACI or Juniper Contrail is used.

The OpenShift Network Policy SDN is recommended for unencapsulated OpenStack networks such as tenant VLANs or provider networks. Kuryr is also not needed when OpenStack Neutron is backed by a commercial SDN with hardware acceleration.

More information on OpenShift Container Platform 3.11 SDN can be found in the Configuring the SDN section of the official documentation.

5.3.2. Octavia

Octavia is an operator-grade, open source, scalable load balancer. It implements load balancing by launching virtual machine appliances, called amphorae, in the OpenStack service project. The amphorae run HAProxy. Figure 12 depicts the Octavia architecture and its relationship to OpenShift.

Figure 12: Octavia architecture and its relationship to OpenShift

The load balancer is Octavia’s front end service. Each load balancer has a listener that maps a virtual IP address and port combination to a pool of pods. The openshift-ansible installer configures Octavia load balancers for the API server, cluster routers, and registry access.

(shiftstack) [stack@undercloud ~]$ openstack loadbalancer list -f table -c name -c vip_address -c provider
+-----------------------------------------------+----------------+----------+
| name                                          | vip_address    | provider |
+-----------------------------------------------+----------------+----------+
| openshift-ansible-openshift.example.io-api-lb | 172.30.0.1     | octavia  |
| openshift-cluster-router_lb-gc2spptdjh46      | 172.16.1.5     | octavia  |
| default/docker-registry                       | 172.30.95.176  | octavia  |
| default/registry-console                      | 172.30.164.5   | octavia  |
| default/router                                | 172.30.220.98  | octavia  |
| openshift-monitoring/grafana                  | 172.30.242.217 | octavia  |
| openshift-web-console/webconsole              | 172.30.10.9    | octavia  |
| openshift-monitoring/prometheus-k8s           | 172.30.134.157 | octavia  |
| openshift-console/console                     | 172.30.235.94  | octavia  |
| openshift-metrics-server/metrics-server       | 172.30.237.31  | octavia  |
| openshift-monitoring/alertmanager-main        | 172.30.189.57  | octavia  |
| openshift-logging/logging-kibana              | 172.30.226.107 | octavia  |
| openshift-infra/hawkular-cassandra            | 172.30.207.19  | octavia  |
+-----------------------------------------------+----------------+----------+

This command output shows the OpenStack Octavia load balancers that correspond to the underlying OpenShift services. Notice that all VIP addresses belong to the OpenShift service network except for the cluster router load balancer, which has a VIP on the deployment network.
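The listeners, pools, and members behind any of these load balancers can be examined with the standard Octavia CLI. The sketch below uses the default/router load balancer from the listing; <pool> is a placeholder for a pool name or ID taken from the output.

$ openstack loadbalancer show default/router     # listeners and pools attached to this load balancer
$ openstack loadbalancer pool list               # pools across all load balancers
$ openstack loadbalancer member list <pool>      # members are the addresses the pool balances across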

Octavia is also used by OpenShift to load balance across pod replica sets. When an OpenShift service is exposed as a LoadBalancer type, an Octavia load balancer is automatically created to distribute client connections across the service pods. This also happens when a ClusterIP service is created. Namespace isolation is enforced when Kuryr creates ClusterIP services; it is not enforced for LoadBalancer services because they are meant to be externally accessible. For LoadBalancer services, a floating IP is also attached to the load balancer VIP port.
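A minimal sketch of this flow, using a hypothetical example-app deployment configuration and port, looks like the following; the external IP appears once Octavia finishes creating the load balancer and the floating IP is attached.

$ oc expose dc/example-app --type=LoadBalancer --name=example-app-lb --port=8080
$ oc get svc example-app-lb                  # EXTERNAL-IP is populated when the load balancer is ready
$ openstack loadbalancer list                # a new Octavia load balancer appears for the service
$ openstack floating ip list                 # shows the floating IP attached to the load balancer VIP port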