Chapter 2. Infrastructure Components

2.1. Kubernetes Infrastructure

2.1.1. Overview

Within OpenShift Dedicated, Kubernetes manages containerized applications across a set of containers or hosts and provides mechanisms for deployment, maintenance, and application scaling. The container runtime packages, instantiates, and runs containerized applications. A Kubernetes cluster consists of one or more masters and a set of nodes.

You can optionally configure your masters for high availability (HA) to ensure that the cluster has no single point of failure.

Note

OpenShift Dedicated uses Kubernetes 1.11 and Docker 1.13.1.

2.1.2. Masters

The master is the host or hosts that contain the control plane components, including the API server, controller manager server, and etcd. The master manages nodes in its Kubernetes cluster and schedules pods to run on those nodes.

Table 2.1. Master Components

  • API Server: The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also assigns pods to nodes and synchronizes pod information with service configuration.
  • etcd: etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the desired state. etcd can optionally be configured for high availability, typically deployed with 2n+1 peer services.
  • Controller Manager Server: The controller manager server watches etcd for changes to replication controller objects and then uses the API to enforce the desired state. Several such processes create a cluster with one active leader at a time.
  • HAProxy: Optional. Used when configuring highly available masters with the native method to balance load between API master endpoints.

2.1.2.1. Control Plane Static Pods

The core control plane components, the API server and the controller manager, run as static pods operated by the kubelet.

For masters that have etcd co-located on the same host, etcd is also moved to static pods. RPM-based etcd is still supported on etcd hosts that are not also masters.

In addition, the node components openshift-sdn and openvswitch now run as DaemonSets instead of systemd services.
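
If you can view the relevant namespaces, you can confirm this layout with standard commands. The following is a minimal sketch, assuming the default OpenShift Dedicated 3.11 namespace names:

# Control plane static pods run in the kube-system namespace on each master host.
$ oc get pods -n kube-system

# The SDN and Open vSwitch node components run as DaemonSets.
$ oc get daemonsets -n openshift-sdn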

Figure 2.1. Control plane host architecture changes

2.1.2.2. High Availability Masters

When using the native HA method with HAProxy, master components have the following availability:

Table 2.2. Availability Matrix with HAProxy

  • etcd (Active-active): Fully redundant deployment with load balancing.
  • API Server (Active-active): Managed by HAProxy.
  • Controller Manager Server (Active-passive): One instance is elected as a cluster leader at a time.
  • HAProxy (Active-passive): Balances load between API master endpoints.

2.1.3. Nodes

A node provides the runtime environments for containers. Each node in a Kubernetes cluster has the required services to be managed by the master. Nodes also have the required services to run pods, including the container runtime, a kubelet, and a service proxy.

OpenShift Dedicated creates nodes from a cloud provider, physical systems, or virtual systems. Kubernetes interacts with node objects that are a representation of those nodes. The master uses the information from node objects to validate nodes with health checks. A node is ignored until it passes the health checks, and the master continues checking nodes until they are valid. The Kubernetes documentation has more information on node statuses and management.
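
For example, you can inspect node status and the recorded health-check conditions with standard oc commands (the node name shown is illustrative):

# List nodes and the status the master has recorded for each.
$ oc get nodes

# Show a node's conditions (Ready, MemoryPressure, DiskPressure, and so on) and recent events.
$ oc describe node node1.example.com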

2.1.3.1. Kubelet

Each node has a kubelet that updates the node as specified by a container manifest, which is a YAML file that describes a pod. The kubelet uses a set of manifests to ensure that its containers are started and that they continue to run.

A container manifest can be provided to a kubelet in any of the following ways (an example manifest follows this list):

  • A file path on the command line that is checked every 20 seconds.
  • An HTTP endpoint passed on the command line that is checked every 20 seconds.
  • The kubelet watching an etcd server, such as /registry/hosts/$(hostname -f), and acting on any changes.
  • The kubelet listening for HTTP and responding to a simple API to submit a new manifest.
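
The following is a minimal example of a pod manifest that a kubelet could read from a watched file path. The file name, directory, and image are assumptions for illustration; the directory shown is the default static pod path in OpenShift Dedicated 3.11:

# Hypothetical static pod manifest written to a directory the kubelet watches.
$ cat <<'EOF' > /etc/origin/node/pods/hello-static.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-static
spec:
  containers:
  - name: hello
    image: registry.example.com/examples/hello:latest
EOF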

2.1.3.2. Service Proxy

Each node also runs a simple network proxy that reflects the services defined in the API on that node. This allows the node to do simple TCP and UDP stream forwarding across a set of back ends.
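
As a sketch of what the proxy forwards, the following creates a simple service; the name, selector, and ports are assumptions for the example:

# Hypothetical service: the proxy on each node forwards connections to the
# service's port 80 on to port 8080 of the pods selected by app=example.
$ oc create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF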

2.1.3.3. Node Object Definition

The following is an example node object definition in Kubernetes:

apiVersion: v1                     # 1
kind: Node                         # 2
metadata:
  creationTimestamp: null
  labels:                          # 3
    kubernetes.io/hostname: node1.example.com
  name: node1.example.com          # 4
spec:
  externalID: node1.example.com    # 5
status:
  nodeInfo:
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: ""
    kubeletVersion: ""
    machineID: ""
    osImage: ""
    systemUUID: ""

1. apiVersion defines the API version to use.
2. kind set to Node identifies this as a definition for a node object.
3. metadata.labels lists any labels that have been added to the node.
4. metadata.name is a required value that defines the name of the node object. This value is shown in the NAME column when running the oc get nodes command.
5. spec.externalID defines the fully qualified domain name where the node can be reached. Defaults to the metadata.name value when empty.
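
To compare this example with a live object, you can retrieve a node's full definition from the API:

# Print the complete node object in YAML form.
$ oc get node node1.example.com -o yaml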

2.1.3.4. Node Bootstrapping

A node’s configuration is bootstrapped from the master, which means nodes pull their pre-defined configuration and client and server certificates from the master. This allows faster node start-up by reducing the differences between nodes, as well as centralizing more configuration and letting the cluster converge on the desired state. Certificate rotation and centralized certificate management are enabled by default.

Figure 2.2. Node bootstrapping workflow overview

When node services are started, the node checks if the /etc/origin/node/node.kubeconfig file and other node configuration files exist before joining the cluster. If they do not, the node pulls the configuration from the master, then joins the cluster.

Node configuration is stored in the cluster as ConfigMaps, which populate the configuration file on the node host at /etc/origin/node/node-config.yaml.
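
As a sketch, you can inspect the stored node configuration, assuming the default node group ConfigMaps in the openshift-node namespace (the ConfigMap names can differ per cluster):

# List the node configuration ConfigMaps.
$ oc get configmaps -n openshift-node

# View the rendered configuration for one node group, for example node-config-compute.
$ oc describe configmap node-config-compute -n openshift-node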

2.2. Container Registry

2.2.1. Overview

OpenShift Dedicated can utilize any server implementing the container image registry API as a source of images, including the Docker Hub, private registries run by third parties, and the integrated OpenShift Dedicated registry.

2.2.2. Integrated OpenShift Container Registry

OpenShift Dedicated provides an integrated container image registry called OpenShift Container Registry (OCR) that adds the ability to automatically provision new image repositories on demand. This provides users with a built-in location for their application builds to push the resulting images.

Whenever a new image is pushed to OCR, the registry notifies OpenShift Dedicated about the new image, passing along all the information about it, such as the namespace, name, and image metadata. Different pieces of OpenShift Dedicated react to new images, creating new builds and deployments.
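
For example, a deployment configuration can react to a new image by defining an image change trigger. The following command is a sketch; the deployment configuration, image stream tag, and container names are assumptions:

# Hypothetical: redeploy the "web" container of dc/frontend whenever the
# myproject/frontend:latest image stream tag is updated.
$ oc set triggers dc/frontend --from-image=myproject/frontend:latest -c web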

2.2.3. Third Party Registries

OpenShift Dedicated can create containers using images from third party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift Dedicated registry. In this situation, OpenShift Dedicated fetches tags from the remote registry when the image stream is created. To refresh the fetched tags, run oc import-image <stream>. When new images are detected, the previously described build and deployment reactions occur.
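
For example, to create an image stream that tracks an image in an external registry and later refresh its tags (the registry host and names are hypothetical):

# Create an image stream that points at an image in a third party registry.
$ oc import-image mystream --from=registry.example.com/team/app:v1 --confirm

# Later, re-import the stream to pick up new or updated tags.
$ oc import-image mystream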

2.2.3.1. Authentication

OpenShift Dedicated can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Dedicated to push and pull images to and from private repositories. The Authentication topic has more information.
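
A minimal sketch of supplying such credentials, with a secret name and registry host chosen for the example:

# Create a docker-registry secret that holds the credentials.
$ oc create secret docker-registry my-registry-secret \
    --docker-server=registry.example.com \
    --docker-username=<user> \
    --docker-password=<password>

# Allow the default service account to use the secret when pulling images.
$ oc secrets link default my-registry-secret --for=pull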

2.2.4. Red Hat Quay Registries

If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced registry features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images.

Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. Alternatively, refer to Getting Started with Red Hat Quay for information about setting up your own Red Hat Quay registry.

You can access your Red Hat Quay registry from OpenShift Dedicated like any remote container image registry. To learn how to set up credentials to access Red Hat Quay as a secured registry, refer to Allowing Pods to Reference Images from Other Secured Registries.

2.2.5. Authentication Enabled Red Hat Registry

All container images available through the Red Hat Container Catalog are hosted on an image registry, registry.access.redhat.com. With OpenShift Dedicated 3.11, the Red Hat Container Catalog moved from registry.access.redhat.com to registry.redhat.io.

The new registry, registry.redhat.io, requires authentication for access to images and hosted content on OpenShift Dedicated. Following the move to the new registry, the existing registry will be available for a period of time.

Note

OpenShift Dedicated pulls images from registry.redhat.io, so you must configure your cluster to use it.

The new registry uses standard OAuth mechanisms for authentication, with the following methods:

  • Authentication token. Tokens are generated by administrators for service accounts, which give systems the ability to authenticate against the container image registry. Because service accounts are not affected by changes in user accounts, the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters.
  • Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com. While it is possible to use this authentication method with OpenShift Dedicated, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Dedicated.

You can use docker login with your credentials, either username and password or authentication token, to access content on the new registry.
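
For example, on a host with Docker installed (the service account name and token are placeholders):

# Log in interactively with your web username and password, or
# non-interactively with a registry service account token.
$ docker login registry.redhat.io
$ docker login -u='<registry-service-account>' -p=<token> registry.redhat.io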

All image streams point to the new registry. Because the new registry requires authentication for access, there is a new secret in the OpenShift namespace called imagestreamsecret.

You must place your credentials in two places:

  • OpenShift namespace. Your credentials must exist in the OpenShift namespace so that the image streams in the OpenShift namespace can import images.
  • Your host. Your credentials must exist on your host because Kubernetes uses the credentials from your host when it pulls images.

To access the new registry (example verification commands follow this list):

  • Verify that the image import secret, imagestreamsecret, exists in your OpenShift namespace. This secret contains the credentials that allow you to access the new registry.
  • Verify that all of your cluster nodes have a /var/lib/origin/.docker/config.json file, copied from the master, that allows you to access the Red Hat registry.
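
A sketch of the corresponding checks, using the namespace and file path described above:

# Confirm the image import secret exists in the openshift namespace.
$ oc get secret imagestreamsecret -n openshift

# On each node, confirm the credentials file copied from the master is present.
$ ls -l /var/lib/origin/.docker/config.json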

2.3. Web Console

2.3.1. Overview

The OpenShift Dedicated web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of projects.

Note

JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets.

From the About page in the web console, you can check the cluster’s version number.

2.3.2. Project Overviews

After logging in, the web console provides developers with an overview of the currently selected project:

Figure 2.3. Web Console Project Overview

  • The project selector allows you to switch between projects you have access to.
  • To quickly find services from within the project view, type in your search criteria.
  • Create new applications using a source repository or a service from the service catalog.
  • Notifications related to your project.
  • The Overview tab (currently selected) visualizes the contents of your project with a high-level view of each component.
  • Applications tab: Browse and perform actions on your deployments, pods, services, and routes.
  • Builds tab: Browse and perform actions on your builds and image streams.
  • Resources tab: View your current quota consumption and other resources.
  • Storage tab: View persistent volume claims and request storage for your applications.
  • Monitoring tab: View logs for builds, pods, and deployments, as well as event notifications for all objects in your project.
  • Catalog tab: Quickly get to the catalog from within a project.

2.3.3. JVM Console

For pods based on Java images, the web console also exposes access to a hawt.io-based JVM console for viewing and managing any relevant integration components. A Connect link is displayed in the pod’s details on the Browse → Pods page, provided the container has a port named jolokia.
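
For reference, the following is a hypothetical pod definition with a container port named jolokia; the image is a placeholder, and 8778 is the conventional Jolokia agent port:

# Hypothetical pod whose container exposes a port named "jolokia",
# which makes the Connect link appear in the web console.
$ oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-jvm-app
spec:
  containers:
  - name: app
    image: registry.example.com/examples/java-app:latest
    ports:
    - name: jolokia
      containerPort: 8778
      protocol: TCP
EOF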

Figure 2.4. Pod with a Link to the JVM Console

After connecting to the JVM console, different pages are displayed depending on which components are relevant to the connected pod.

Figure 2.5. JVM Console

The following pages are available:

  • JMX: View and manage JMX domains and MBeans.
  • Threads: View and monitor the state of threads.
  • ActiveMQ: View and manage Apache ActiveMQ brokers.
  • Camel: View and manage Apache Camel routes and dependencies.
  • OSGi: View and manage the JBoss Fuse OSGi environment.