Chapter 1. Integrating an overcloud with containerized Red Hat Ceph Storage

You can use Red Hat OpenStack Platform (RHOSP) director to integrate your cloud environment, which director calls the overcloud, with Red Hat Ceph Storage. If you integrate the overcloud with an existing Red Hat Ceph Storage cluster instead of deploying one with director, you manage and scale that cluster outside of the overcloud configuration.

For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.

This guide contains instructions for deploying a containerized Red Hat Ceph Storage cluster with your overcloud. Director uses Ansible playbooks provided through the ceph-ansible package to deploy a containerized Ceph Storage cluster. Director also manages the configuration and scaling operations of the cluster.
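
For example, a deployment that includes containerized Ceph Storage typically passes the ceph-ansible environment file to the overcloud deployment command. The following is a minimal sketch; the ceph-overrides.yaml file name is a placeholder for your own custom environment file:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
      -e /home/stack/templates/ceph-overrides.yaml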

For more information about containerized services in the Red Hat OpenStack Platform, see Configuring a basic overcloud with the CLI tools in Director Installation and Usage.

1.1. Ceph Storage clusters

Red Hat Ceph Storage is a distributed data object store designed to provide excellent performance, reliability, and scalability. Distributed object stores accommodate unstructured data so clients can use modern object interfaces and legacy interfaces simultaneously. At the core of every Ceph deployment is the Ceph Storage cluster, which consists of several types of daemons, but primarily, these two:

Ceph OSD (Object Storage Daemon)
Ceph OSDs store data on behalf of Ceph clients. Additionally, Ceph OSDs use the CPU and memory of Ceph nodes to perform data replication, rebalancing, recovery, monitoring, and reporting functions.
Ceph Monitor
A Ceph monitor maintains a master copy of the Ceph storage cluster map with the current state of the storage cluster.
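
In a containerized deployment, both daemon types run as containers on the overcloud nodes. As a quick way to see them in practice, you can list the Ceph containers and query the cluster status from a Controller node with commands similar to the following. The container name ceph-mon-controller-0 is an example only and depends on the hostname of your node:

    $ sudo podman ps --filter name=ceph
    $ sudo podman exec ceph-mon-controller-0 ceph -s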

For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.

1.2. Requirements to deploy a containerized Ceph Storage cluster with your overcloud

Before you deploy a containerized Ceph Storage cluster with your overcloud, your environment must contain the following configuration:

Important

The Ceph monitor service installs on the overcloud Controller nodes, so you must provide adequate resources to avoid performance issues. Ensure that the Controller nodes in your environment have at least 16 GB of RAM and solid-state drive (SSD) storage for the Ceph monitor data. For a medium to large Ceph installation, provide at least 500 GB of space for Ceph monitor data. This space is necessary to avoid LevelDB growth if the cluster becomes unstable. The following examples are common sizes for Ceph Storage clusters:

  • Small: 250 terabytes
  • Medium: 1 petabyte
  • Large: 2 petabytes or more

1.3. Ceph Storage node requirements

If you use Red Hat OpenStack Platform (RHOSP) director to create Red Hat Ceph Storage nodes, there are additional requirements.

For information about how to select a processor, memory, network interface cards (NICs), and disk layout for Ceph Storage nodes, see Hardware selection recommendations for Red Hat Ceph Storage in the Red Hat Ceph Storage Hardware Guide.

Each Ceph Storage node also requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server.

Note

RHOSP director uses ceph-ansible, which does not support installing the OSD on the root disk of Ceph Storage nodes. This means that you need at least two disks for a supported Ceph Storage node.
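
Because the OSDs cannot use the root disk, you typically map them to additional block devices in a custom environment file. The following is a minimal sketch, assuming the node boots from /dev/sda and provides /dev/sdb and /dev/sdc for Ceph; the device paths are examples only and must match your hardware:

    parameter_defaults:
      CephAnsibleDisksConfig:
        osd_scenario: lvm
        devices:
          - /dev/sdb
          - /dev/sdc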

Ceph Storage nodes and RHEL compatibility

Red Hat Ceph Storage compatibility

  • RHOSP 16.2 supports Red Hat Ceph Storage 4.

Placement Groups (PGs)

  • Ceph Storage uses placement groups (PGs) to facilitate dynamic and efficient object tracking at scale. In the case of OSD failure or cluster rebalancing, Ceph can move or replicate a placement group and its contents, which means a Ceph Storage cluster can rebalance and recover efficiently.
  • The default placement group count that director creates is not always optimal, so it is important to calculate the correct placement group count according to your requirements. You can use the placement group calculator to calculate the correct count. To use the PG calculator, enter the predicted storage usage per service as a percentage, as well as other properties of your Ceph cluster, such as the number of OSDs. The calculator returns the optimal number of PGs per pool; see the worked example after this list. For more information, see Placement Groups (PGs) per Pool Calculator.
  • Auto-scaling is an alternative way to manage placement groups. With the auto-scale feature, you set the expected Ceph Storage requirements per service as a percentage instead of a specific number of placement groups. Ceph automatically scales placement groups based on how the cluster is used. For more information, see Auto-scaling placement groups in the Red Hat Ceph Storage Strategies Guide.
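
As a rough illustration of the rule of thumb that the calculator automates, the total PG count is often estimated as the number of OSDs multiplied by 100, divided by the replica count, and rounded up to the nearest power of two. For example, with 100 OSDs and a replica count of 3:

    (100 OSDs x 100) / 3 replicas = 3333.3
    Rounded up to the nearest power of two = 4096 PGs

Treat this estimate only as a starting point; the calculator and the auto-scale feature also account for how data is distributed across multiple pools.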

Processor

  • 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.

Network Interface Cards

  • A minimum of one 1 Gbps Network Interface Card (NIC), although Red Hat recommends that you use at least two NICs in a production environment. Use additional NICs for bonded interfaces or to delegate tagged VLAN traffic. Use a 10 Gbps interface for storage nodes, especially if you want to create a Red Hat OpenStack Platform (RHOSP) environment that serves a high volume of traffic.

Power management

  • Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server.

1.4. Ansible playbooks to deploy Ceph Storage

The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file instructs director to use playbooks derived from the ceph-ansible project. These playbooks are installed in /usr/share/ceph-ansible/ on the undercloud. In particular, the following file contains all the default settings that the playbooks apply:

  • /usr/share/ceph-ansible/group_vars/all.yml.sample

Warning

Although ceph-ansible uses playbooks to deploy containerized Ceph Storage, do not edit these files to customize your deployment. Instead, use heat environment files to override the defaults set by these playbooks. If you edit the ceph-ansible playbooks directly, your deployment fails.
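
For example, instead of editing group_vars/all.yml.sample, you can override defaults through heat parameters in your own environment file and pass that file to the deployment command with -e after ceph-ansible.yaml. The following is a minimal sketch; the file name and parameter values are examples only:

    # /home/stack/templates/ceph-overrides.yaml
    parameter_defaults:
      CephPoolDefaultSize: 3
      CephPoolDefaultPgNum: 128
      CephConfigOverrides:
        global:
          osd_pool_default_min_size: 2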

For information about the default settings applied by director for containerized Ceph Storage, see the heat templates in /usr/share/openstack-tripleo-heat-templates/deployment/ceph-ansible.

Note

Reading these templates requires a deeper understanding of how environment files and heat templates work in director. For more information, see Understanding Heat Templates and Environment Files.

For more information about containerized services in RHOSP, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide.