Chapter 1. Introduction
Red Hat OpenStack Platform director creates a cloud environment called the overcloud. The director provides the ability to configure extra features for an overcloud, including integration with Red Hat Ceph Storage, either with Ceph Storage clusters created by the director or with existing Ceph Storage clusters.
1.1. Introduction to Ceph Storage
Red Hat Ceph Storage is a distributed data object store designed to provide excellent performance, reliability, and scalability. Distributed object stores are the future of storage because they accommodate unstructured data and because clients can use modern object interfaces and legacy interfaces simultaneously. At the core of every Ceph deployment is the Ceph Storage cluster, which consists of several types of daemons. The two primary types, which you can check on a running cluster with the commands shown after this list, are:
- Ceph OSD (Object Storage Daemon)
- Ceph OSDs store data on behalf of Ceph clients. Additionally, Ceph OSDs use the CPU and memory of Ceph nodes to perform data replication, rebalancing, recovery, monitoring, and reporting functions.
- Ceph Monitor
- A Ceph monitor maintains a master copy of the Ceph storage cluster map with the current state of the storage cluster.
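For example, on an existing cluster you can confirm that both daemon types are present and healthy with the ceph command-line client. This is a minimal check, assuming that you run the commands from a node that has administrative Ceph credentials:

    # Display overall cluster health, monitor quorum, and OSD status
    ceph -s
    # List the monitors in the cluster
    ceph mon stat
    # Show the OSDs and their placement in the CRUSH hierarchy
    ceph osd tree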
For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.
1.2. Defining the scenario
This guide provides instructions on integrating an existing Ceph Storage cluster with an overcloud. This means that director configures the overcloud to use the Ceph Storage cluster for its storage needs. You manage and scale the cluster itself outside of the overcloud configuration.
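As an illustration only, and not a definitive deployment procedure, a deploy command for this scenario typically includes the ceph-ansible-external.yaml environment file together with a custom environment file that describes your existing cluster. The file name ceph-external-params.yaml is a placeholder for your own custom environment file:

    # Run from the undercloud as the stack user; additional environment files
    # and options depend on your deployment
    openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
      -e ceph-external-params.yaml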
1.3. Deploy the Shared File Systems service with external CephFS through NFS
When Red Hat OpenStack Platform director deploys the Shared File Systems service with CephFS through NFS, it deploys the NFS-Ganesha gateway on Controller nodes managed by Pacemaker (PCS). PCS manages cluster availability by using an active-passive configuration.
This feature enables director to deploy the Shared File Systems service with an external Ceph Storage cluster. In this type of deployment, NFS-Ganesha still runs on the Controller nodes managed by PCS.
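For example, after deployment you can inspect the Pacemaker-managed resources on a Controller node. This is a minimal sketch; the exact resource name of the NFS-Ganesha gateway depends on your deployment:

    # Run on a Controller node; the NFS-Ganesha gateway appears as a
    # Pacemaker-managed resource that is active on one Controller at a time
    sudo pcs status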
You must edit the parameters in the ceph-ansible-external.yaml file to integrate the Shared File Systems service with an external Ceph Storage cluster. You must also edit the ceph-ansible-external.yaml file to configure any other OpenStack Platform service to use an external Ceph Storage cluster. For more information about how to configure the ceph-ansible-external.yaml file, see Integrating with the existing Ceph cluster.
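A minimal sketch of the kind of parameters involved is shown below. The values are placeholders only; use the FSID, client key, and monitor addresses of your own external cluster, and see Integrating with the existing Ceph cluster for the authoritative parameter list:

    parameter_defaults:
      # FSID of the existing external Ceph Storage cluster (placeholder value)
      CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
      # cephx key that the overcloud services use (placeholder value)
      CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
      # Comma-separated list of external monitor IP addresses
      CephExternalMonHost: '172.16.1.7, 172.16.1.8'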
This feature is supported with Ceph Storage 4.1 or later. After you upgrade to Ceph Storage 4.1, you must install the latest version of the ceph-ansible package on the undercloud. For more information about how to determine the Ceph Storage release installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions. For more information about how to update the ceph-ansible package on the undercloud, see Installing the ceph-ansible package.
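For example, you can check and update the package on the undercloud with dnf. The repository name shown below is an assumption; confirm the correct Ceph Storage 4 tools repository for your subscription before you enable it:

    # Check the ceph-ansible version that is currently installed on the undercloud
    sudo dnf info ceph-ansible
    # Enable the Ceph Storage 4 tools repository (name assumed) and update the package
    sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
    sudo dnf update -y ceph-ansible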
Prerequisites
The following prerequisites are required to configure the Shared File Systems service (manila) with an external Ceph Storage cluster:
- The external Ceph Storage cluster must have an active Metadata Server (MDS).
- The external Ceph Storage cluster must have a CephFS file system that is based on the values of the CephFS data pool (ManilaCephFSDataPoolName) and CephFS metadata pool (ManilaCephFSMetadataPoolName) parameters. For more information, see Creating a custom environment file. Example commands appear after this list.
- The external Ceph Storage cluster must have a cephx client key for the Shared File Systems service and NFS-Ganesha. For more information, see Creating a custom environment file.
- You must have the cephx ID and client key to configure the Shared File Systems service and NFS-Ganesha. For more information, see Creating a custom environment file.
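The following commands are a minimal sketch of how these prerequisites might be satisfied on the external cluster. The pool names manila_data and manila_metadata match the default values of ManilaCephFSDataPoolName and ManilaCephFSMetadataPoolName; the file system name cephfs, the placement group counts, the client ID manila, and the capabilities shown are assumptions that you must adjust to your environment:

    # Confirm that the external cluster has an active MDS
    ceph mds stat
    # Create the CephFS data and metadata pools (pool names and PG counts assumed)
    ceph osd pool create manila_data 64
    ceph osd pool create manila_metadata 16
    # Create the CephFS file system from the metadata and data pools
    ceph fs new cephfs manila_metadata manila_data
    # Create a cephx client key for the Shared File Systems service and NFS-Ganesha
    # (client ID and capabilities are examples)
    ceph auth get-or-create client.manila mon 'allow r' mgr 'allow rw'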
For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage File System Guide.
For more information about CephFS through NFS, see Deploying the Shared File Systems service with CephFS through NFS.