Chapter 2. Architecture of OpenShift Container Storage
Red Hat OpenShift Container Storage provides services for, and can run internally on, Red Hat OpenShift Container Platform.
Figure: Red Hat OpenShift Container Storage architecture
Red Hat OpenShift Container Storage supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned infrastructure or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process. For more information about the interoperability of components for Red Hat OpenShift Container Storage and Red Hat OpenShift Container Platform, see the Interoperability matrix.
For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture.
2.1. About operators
Red Hat OpenShift Container Storage comprises three main operators, which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated. Administrators define the desired end state of the cluster, and the OpenShift Container Storage operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention.
OpenShift Container Storage (OCS) operator
A meta-operator that codifies and enforces the recommendations and requirements of a supported Red Hat OpenShift Container Storage deployment by drawing on other operators in specific, tested ways. This operator provides the storage cluster resource that wraps resources provided by the Rook-Ceph and NooBaa operators.
Rook-Ceph operator
This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services object bucket claims made against it in on-premises environments.
Additionally, for internal mode clusters, it provides the Ceph cluster resource, which manages the deployments and services representing the following:
- Object storage daemons (OSDs)
- Monitors (MONs)
- Manager (MGR)
- Metadata servers (MDS)
- Object gateways (RGW), on-premises only
Multicloud Object Gateway (MCG) operator
This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway object service. It creates an object storage class and services object bucket claims made against it.
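An object bucket claim is itself a Kubernetes resource. As a minimal sketch, a claim serviced by the Multicloud Object Gateway might look like the following (the name and namespace are illustrative; the storage class name is the one typically created by the operators):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim          # illustrative name
  namespace: my-app              # illustrative namespace
spec:
  # The operator generates a unique bucket name from this prefix.
  generateBucketName: my-bucket
  # Object storage class typically created for the Multicloud Object Gateway.
  storageClassName: openshift-storage.noobaa.io
```

Once the claim is bound, applications read the bucket endpoint and credentials from the ConfigMap and Secret that the operator creates alongside it.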
Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint.
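The operators above reconcile custom resources such as the storage cluster resource. As a hedged sketch of what a minimal internal-mode StorageCluster might look like (device set name, count, size, and the underlying platform storage class are illustrative assumptions):

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset          # illustrative name
    count: 1                     # number of device sets
    replica: 3                   # one OSD per rack or availability zone
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Ti         # illustrative capacity per device
        storageClassName: gp2    # underlying platform storage class (assumption)
        volumeMode: Block
```

The OCS operator expands this single resource into the Rook-Ceph and NooBaa resources described above.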
2.2. Storage cluster deployment approaches
Flexibility is a core tenet of Red Hat OpenShift Container Storage, as evidenced by its growing list of operating modalities. This section provides information to help you select the most appropriate approach for your environments. OpenShift Container Storage can be deployed entirely within OpenShift Container Platform (Internal approach) or deployed to make available the services of a cluster running outside of OpenShift Container Platform (External approach).
2.2.1. Internal approach
Deployment of Red Hat OpenShift Container Storage entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management. There are two different deployment modalities available when Red Hat OpenShift Container Storage is running entirely within Red Hat OpenShift Container Platform:
Simple deployment
Red Hat OpenShift Container Storage services run co-resident with applications, managed by operators in Red Hat OpenShift Container Platform.
A simple deployment is best for situations where:
- Storage requirements are not clear
- OpenShift Container Storage services will run co-resident with applications
- Creating a node instance of a specific size is difficult (bare metal)
In order for Red Hat OpenShift Container Storage to run co-resident with applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, for example, EBS volumes on EC2, or vSphere Virtual Volumes on VMware.
Optimized deployment
OpenShift Container Storage services run on dedicated infrastructure nodes managed by Red Hat OpenShift Container Platform.
An optimized approach is best for situations when:
- Storage requirements are clear
- OpenShift Container Storage services run on dedicated infrastructure nodes
- Creating a node instance of a specific size is easy (Cloud, Virtualized environment, etc.)
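In either modality, applications consume the storage through the storage classes the operators create. As an illustrative sketch, a persistent volume claim against the block storage class might look like this (the claim name and size are assumptions; the storage class name is the one typically created for block storage):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                 # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # illustrative size
  # Block storage class typically created by the OpenShift Container Storage operators.
  storageClassName: ocs-storagecluster-ceph-rbd
```

The same claim shape works in both simple and optimized deployments; only where the backing storage pods run differs.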
2.2.2. External approach
Red Hat OpenShift Container Storage exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes.
The external approach is best used when:
- Storage requirements are significant (600+ storage devices)
- Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster.
- Another team (SRE, Storage, etc.) needs to manage the external cluster providing storage services. The external cluster may be pre-existing.
After installing the OpenShift Container Storage operator, you can create a storage cluster resource from the OpenShift Console using an installation wizard, or submit one manually, in either internal or external mode.
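When submitted manually, an external-mode storage cluster resource might be sketched as follows (the resource name is illustrative; external mode also requires connection details for the external Red Hat Ceph Storage cluster, supplied separately and not shown here):

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-external-storagecluster   # illustrative name
  namespace: openshift-storage
spec:
  externalStorage:
    # Consume services from a Red Hat Ceph Storage cluster running
    # outside the OpenShift Container Platform cluster.
    enable: true
```

Because no storage device sets are declared, no Ceph daemons are deployed inside the cluster; the operators only expose the external cluster's services as storage classes.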
2.3. Node types
Nodes run the container runtime and the services that ensure containers are running, and they maintain network communication and separation between pods. In OpenShift Container Storage, there are three types of nodes.
Table 2.1. Types of nodes
Master nodes
These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers.
Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters, but recommended for OpenShift Container Storage in virtualized and cloud environments.
To create infra nodes, you can provision new nodes labeled as infra. See How to label Red Hat OpenShift Storage nodes as an infra node?
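As an illustrative fragment, an infra node dedicated to storage workloads typically carries a label and taint along these lines (key and value names follow common OpenShift Container Storage conventions, but verify them against your version's documentation):

```yaml
# Fragment of a Node resource (not a complete manifest).
metadata:
  labels:
    node-role.kubernetes.io/infra: ""                  # marks the node as infra
    cluster.ocs.openshift.io/openshift-storage: ""     # allows storage pods to be scheduled here
spec:
  taints:
  - key: node.ocs.openshift.io/storage
    value: "true"
    effect: NoSchedule    # keeps non-storage application pods off the node
```

The taint keeps general application pods off the node, while the storage pods tolerate it, which is what makes the subscription distinction in the next paragraphs possible.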
Worker nodes are also known as application nodes since they run applications.
When OpenShift Container Storage is deployed in internal mode, a minimal cluster of three worker nodes is required, where the nodes are spread across three different racks or availability zones to ensure availability. For OpenShift Container Storage to run on worker nodes, the nodes must have either local storage devices or portable storage devices attached to them dynamically.
When OpenShift Container Storage is deployed in external mode, it runs on multiple nodes so that Kubernetes can reschedule its pods onto available nodes in case of a failure.
Examples of portable storage devices are EBS volumes on EC2, or vSphere Virtual Volumes on VMware.
Nodes that run only storage workloads require a subscription for Red Hat OpenShift Container Storage. Nodes that run other workloads in addition to storage workloads require both Red Hat OpenShift Container Storage and Red Hat OpenShift Container Platform subscriptions. See Chapter 6, Subscriptions for more information.