
Chapter 5. Resource considerations and limitations

5.1. Disk

Review the resource requirements in Resource guidelines for installing OpenShift Container Platform on OpenStack before implementation. This section highlights additional disk considerations relevant to this reference architecture.

It is essential that you ensure fast, redundant disk is available to your Red Hat OpenShift Container Platform (RHOCP) installation. To make this possible on RHOCP on RHOSP, you need to consider the minimum disk requirements for etcd, and how you are going to provide the disk to the RHOCP nodes.

5.1.1. Minimum disk requirements for etcd

Because the control plane virtual machines run the etcd key-value store, they must meet the etcd resource requirements. Fast disks are the most critical factor for etcd stability and performance, because etcd is sensitive to disk write latency.

Minimum disk requirement
50 sequential IOPS, for example, a 7200 RPM disk
Minimum disk requirement for heavily loaded clusters
500 sequential IOPS, for example, a typical local SSD or a high-performance virtualized block device

For more information on the etcd disk requirements, see Hardware recommendations - Disks.

For this reference architecture, all masters need access to fast disk (SSD or better) to ensure stability and guarantee supportability. Contact your Red Hat account team for assistance with reviewing your disk performance needs.
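As a rough, hypothetical sanity check of write latency on a candidate etcd disk, you can time small synchronous writes, which approximate etcd's write-ahead-log append pattern. A full fio benchmark with fdatasync enabled is the more faithful measurement; the directory path below is a placeholder.

```shell
# Rough write-latency sanity check for a prospective etcd data disk.
# Writes 1000 x 512-byte blocks with O_DSYNC, approximating etcd's
# synchronous WAL appends. For a proper benchmark, use fio with
# --fdatasync=1 as recommended upstream.
TESTDIR="${TESTDIR:-.}"   # placeholder: point at the etcd data mount
dd if=/dev/zero of="$TESTDIR/etcd-disk-test" bs=512 count=1000 oflag=dsync
rm -f "$TESTDIR/etcd-disk-test"
echo "latency check complete"
```

Slow results from even this quick check indicate the disk is unlikely to meet the 50 sequential IOPS minimum.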

5.1.2. Providing disk to RHOCP nodes

The full range of options for providing storage to your cluster is outside the scope of this document. The following sections provide a brief analysis of the most popular options.

Ephemeral on Compute nodes

This is the easiest way to provide disk to the masters, as it requires no additional configuration of RHOSP or RHOCP. All instance volumes are stored directly on the disk local to the Compute node. As long as that disk is fast (SSD or better), the requirement for fast disk is fulfilled.

However, while etcd does have resiliency built into the software, choosing to use local Compute disks means that any loss of the underlying physical node destroys both the instance and its etcd member, which removes a layer of protection.

Ceph-backed ephemeral

This is the default configuration for a director-deployed Red Hat Ceph Storage environment, which uses the configuration file /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml. This file sets NovaEnableRBDBackend to True, ensuring that the Compute service uses Red Hat Ceph Block Devices (RBD) for instances. The volumes are still ephemeral, but they are sourced from the Red Hat Ceph Storage cluster, which can itself provide resilience. You can also build the Red Hat Ceph Storage cluster on fast disks to meet the etcd disk requirement, and the cluster can include tiering options.

Using this method allows for added resilience for volumes. Additionally, instances backed with Red Hat Ceph Storage can be live migrated, which provides resilience and ensures correct RHOCP master node placement across the Compute nodes.
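The key setting from the ceph-ansible.yaml environment file can be sketched as the following excerpt. This is an illustration only; the shipped file contains additional parameters.

```yaml
parameter_defaults:
  # Store Compute (nova) ephemeral disks as RBD images in the
  # Red Hat Ceph Storage cluster instead of on local Compute disk.
  NovaEnableRBDBackend: true
```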

This reference architecture uses the default Ceph-backed ephemeral configuration. This allows the Red Hat Ceph Storage cluster to be configured to meet the fast disk requirements of etcd.

Volumes provided by Red Hat OpenStack Block Storage

Red Hat OpenStack Block Storage (cinder) provides an API front end that uses multiple backend storage providers, including Red Hat Ceph Storage. Compute node instances can request a volume from OpenStack Block Storage, which in turn requests the storage from the backend plugin.

RHOSP deployments with Red Hat Ceph Storage create a default setup that can be used with the RHOCP installation program to provide RHOCP nodes with OpenStack Block Storage (cinder) backed by Red Hat Ceph Block Devices (RBD). This is achieved by using machine pools. You can use machine pools to customize some aspects of each node type in an installer-provisioned infrastructure.

For example, a default RHOSP 13 installation provides the following preconfigured OpenStack Block Storage volume type:

$ openstack volume type list
+--------------------------------------+---------+-----------+
| ID                                   | Name    | Is Public |
+--------------------------------------+---------+-----------+
| 37c2ed76-c9f7-4b0d-9a35-ab02ca6bcbb2 | tripleo | True      |
+--------------------------------------+---------+-----------+

This volume type can then be manually added to a machine pool subsection of the install-config.yaml file for the node type. In the following example, we request that the RHOCP installation program create a 25 GB root volume on all masters.

controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    openstack:
      type: m1.large
      rootVolume:
        size: 25
        type: tripleo
  replicas: 1

The RHOCP installation program creates the volume for the RHOSP instance on deployment:

$ openstack volume list
+--------------------------------------+-----------------------+--------+------+-----------------------------------------------+
| ID                                   | Name                  | Status | Size | Attached to                                   |
+--------------------------------------+-----------------------+--------+------+-----------------------------------------------+
| b555cc99-3317-4334-aa19-1184a78ee3f8 | ostest-8vt7s-master-0 | in-use |   25 | Attached to ostest-8vt7s-master-0 on /dev/vda |
+--------------------------------------+-----------------------+--------+------+-----------------------------------------------+

The instance is built and the volume is used by the instance:

[core@ostest-8vt7s-master-0 ~]$ df -h
Filesystem                            Size  Used Avail Use% Mounted on
devtmpfs                              7.8G     0  7.8G   0% /dev
tmpfs                                 7.9G   84K  7.9G   1% /dev/shm
tmpfs                                 7.9G   18M  7.9G   1% /run
tmpfs                                 7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/mapper/coreos-luks-root-nocrypt   25G   19G  6.2G  75% /sysroot
none                                  2.0G  435M  1.6G  22% /var/lib/etcd
/dev/vda1                             364M   84M  257M  25% /boot
/dev/vda2                             127M  3.0M  124M   3% /boot/efi
tmpfs                                 1.6G  4.0K  1.6G   1% /run/user/1000

Using OpenStack Block Storage for volume backends, combined with the installer-provisioned infrastructure method, makes it easy to use flexible storage backends across installer-provisioned infrastructure deployments. With OpenStack Block Storage, administrators can create complex and refined tiers (classes) of storage to present to the different nodes. For example, you can set different values in the machine pools for workers and masters:

compute:
- hyperthreading: Enabled
  name: worker
  platform:
    openstack:
      type: m1.large
      rootVolume:
        size: 25
        type: tripleo-ssd
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    openstack:
      type: m1.large
      rootVolume:
        size: 25
        type: tripleo-nvme
  replicas: 3
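Tiered volume types such as these could be created ahead of time by an RHOSP administrator. The following commands are a hypothetical sketch; the type names and volume_backend_name values are assumptions that must match the backend sections defined in your cinder configuration.

```shell
# Hypothetical: create tiered Block Storage volume types for the
# machine pools above. Backend names must match cinder.conf sections.
if command -v openstack >/dev/null 2>&1; then
  openstack volume type create tripleo-ssd --property volume_backend_name=ceph-ssd
  openstack volume type create tripleo-nvme --property volume_backend_name=ceph-nvme
  result="created"
else
  echo "openstack CLI not found; run these commands on a live RHOSP host"
  result="skipped"
fi
```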

5.2. Limitations

This reference architecture is intentionally opinionated and prescriptive. You should customize your deployment only by using the install-config.yaml file, and you should not manually add further infrastructure customization after deployment.

This reference architecture does not attempt to handle all uses for RHOCP on RHOSP. You should consider your individual requirements and undertake a detailed analysis of the capabilities and restrictions of an installer-provisioned infrastructure-based installation.

The following sections detail the frequently asked questions that our field teams have received about limitations when using an installer-provisioned infrastructure-based installation. This list is not exhaustive and may change over time. Contact Red Hat Support or your Red Hat Specialized Solutions Architect before proceeding with the implementation of this reference architecture if you have questions about specific functionality.

5.2.1. Internal TLS (TLS Everywhere) with Red Hat Identity Management

This reference architecture has not been tested with the documented procedures for enabling internal TLS (TLS Everywhere) with Red Hat Identity Management.

Contact Red Hat Support if you want to use this functionality with RHOCP on RHOSP.

5.2.2. RHOCP installer-provisioned infrastructure subnets

With the installer-provisioned infrastructure method, do not edit the network ranges presented in the networking section from their default subnet values. Future changes to the networking structure for installer-provisioned infrastructure may allow more flexibility here.
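For reference, the networking stanza of install-config.yaml with its common default values typically resembles the following sketch. The values shown are the usual RHOCP 4 defaults and should be left unchanged for installer-provisioned infrastructure on RHOSP.

```yaml
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
```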

The installer-provisioned infrastructure method creates all networks required by the installation. This means that modifying those networks with special settings is not possible unless done manually after the installation.

Settings for values such as the MTU must be configured from the RHOSP deployment, as described in the RHOSP Networking Guide.
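As an illustration, an MTU change could be applied at RHOSP deployment time through a director environment file such as the following sketch. The value of 9000 is an example for jumbo frames, not a recommendation.

```yaml
parameter_defaults:
  # Example only: set a jumbo-frame MTU for physical networks at
  # deployment time, before RHOCP is installed on top of RHOSP.
  NeutronGlobalPhysnetMtu: 9000
```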

5.2.3. ReadWriteMany (RWX) PersistentVolumes (PVs)

When deploying RHOCP on RHOSP, there is no support for ReadWriteMany (RWX) PersistentVolumes. An RWX volume is a volume that can be mounted as read-write by many nodes. For more information, see Understanding persistent storage.

While there are many PersistentVolume plugins for RHOCP, the only one currently tested for RHOCP on RHOSP is Red Hat OpenStack Block Storage (cinder). OpenStack Block Storage provides a convenient front end for many different storage providers. Using OpenStack Block Storage ensures that the underlying storage requests are managed using the OpenStack APIs, like all other cloud resources. This provides consistent management for infrastructure administrators across all elements of the on-demand infrastructure, and allows easier management and tracking of resource allocations.
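For illustration, a PersistentVolumeClaim backed by OpenStack Block Storage uses the ReadWriteOnce access mode. The storage class name below ("standard") is the common default for cinder-backed clusters and may differ in your environment; the claim name is hypothetical.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-pvc-example
spec:
  accessModes:
  - ReadWriteOnce   # RWX is not available with cinder-backed volumes
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```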

OpenStack Block Storage does not support RWX; therefore, RHOCP 4.4 on RHOSP deployments cannot offer RWX PersistentVolumes. Support for RWX PersistentVolumes is planned for a future release of RHOCP on RHOSP through the OpenStack Shared File Systems service (manila) CSI plugin.

5.2.4. Red Hat OpenShift Container Storage 4

Red Hat OpenShift Container Storage is persistent software-defined storage, integrated with and optimized for RHOCP. OpenShift Container Storage offers storage to a RHOCP installation through a containerized solution that is run within RHOCP directly.

For RHOCP, OpenShift Container Storage provides a fully self-contained containerized Red Hat Ceph Storage deployment by using underlying storage to create a unique Red Hat Ceph Storage cluster for the RHOCP installation it runs within. OpenShift Container Storage uses the upstream Rook-Ceph operator to do this.

At the time of this writing, using OpenShift Container Storage in a RHOCP on RHOSP environment is not supported. Support for this scenario is planned for a future release of OpenShift Container Storage.