Chapter 1. Integrating an overcloud with Ceph Storage

Red Hat OpenStack Platform director creates a cloud environment called the overcloud. You can use director to configure extra features for an overcloud, such as integration with Red Hat Ceph Storage. You can integrate your overcloud with Ceph Storage clusters created with director or with existing Ceph Storage clusters.

For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.

1.1. Red Hat Ceph Storage compatibility

Red Hat OpenStack Platform (RHOSP) 16.2 supports connection to external Red Hat Ceph Storage 4 and Red Hat Ceph Storage 5 clusters.

1.2. Deploying the Shared File Systems service with external CephFS

You can deploy the Shared File Systems service (manila) with CephFS by using Red Hat OpenStack Platform (RHOSP) director. You can use the Shared File Systems service with the NFS protocol or the native CephFS protocol.

Important

You cannot use the Shared File Systems service with the CephFS native driver to serve shares to Red Hat OpenShift Container Platform through Manila CSI. Red Hat does not support this type of deployment. For more information, contact Red Hat Support.

The Shared File Systems service with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large-scale deployments. For more information about CSI workload recommendations, see https://access.redhat.com/articles/6667651.
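
The exact deployment command depends on your environment, but a minimal sketch for CephFS through NFS with an external Ceph Storage cluster might look like the following. The environment file paths are the typical locations shipped with openstack-tripleo-heat-templates, and the custom parameter file name is a placeholder; verify both against your undercloud.

# Illustrative sketch only: deploy the Shared File Systems service with
# CephFS through NFS against an external Ceph Storage cluster. Verify the
# environment file paths on your undercloud; the last file is a placeholder
# for your own parameters (monitor hosts, FSID, client key, and so on).
$ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml \
    -e /home/stack/templates/ceph-external-params.yaml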

To use native CephFS shared file systems, clients require access to the Ceph public network. When you integrate an overcloud with an existing Ceph Storage cluster, director does not create an isolated storage network to designate as the Ceph public network. This network is assumed to already exist. Do not provide direct access to the Ceph public network; instead, allow tenants to create a router to connect to the Ceph public network.
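
The following sketch illustrates one way to provide that routed access. All names, the VLAN ID, and the subnet range are placeholders, and whether tenants can attach the shared subnet to their own routers depends on your Networking service policy.

# Hypothetical sketch: publish the existing Ceph public network as a shared
# provider network (network names, VLAN ID, and subnet range are placeholders).
$ openstack network create ceph-public --share \
    --provider-network-type vlan \
    --provider-physical-network datacentre \
    --provider-segment 201
$ openstack subnet create ceph-public-subnet --network ceph-public \
    --subnet-range 172.16.1.0/24 --no-dhcp
# Tenants then reach the Ceph public network through a router instead of
# attaching instances to it directly.
$ openstack router create ceph-access
$ openstack router add subnet ceph-access <tenant_subnet>
$ openstack router add subnet ceph-access ceph-public-subnet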

NFS-Ganesha gateway

When you use CephFS through the NFS protocol, director deploys the NFS-Ganesha gateway on the Controller nodes, where Pacemaker (PCS) manages it in an active-passive configuration to provide cluster availability.
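
After deployment, you can confirm that Pacemaker owns the gateway resource. The resource name shown in the sketch below is typical for director-deployed NFS-Ganesha but is an assumption; check the full output on your Controller nodes.

# On a Controller node, confirm that Pacemaker manages the NFS-Ganesha
# gateway; the resource is commonly named ceph-nfs, but verify on your cloud.
$ sudo pcs status resources | grep -i -A 2 nfs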

The NFS-Ganesha gateway is supported with Red Hat Ceph Storage 4.x (Ceph package 14.x) and Red Hat Ceph Storage 5.x (Ceph package 16.x). For information about how to determine the Ceph Storage release installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
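
If you have a working Ceph client configuration for the external cluster, you can also confirm the release directly from the daemon version map: 14.2.x corresponds to Red Hat Ceph Storage 4 (Nautilus) and 16.2.x corresponds to Red Hat Ceph Storage 5 (Pacific).

# Report the Ceph package versions that the cluster daemons are running.
# 14.2.x indicates Red Hat Ceph Storage 4; 16.2.x indicates Red Hat Ceph Storage 5.
$ ceph versions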

You must install the latest version of the ceph-ansible package on the undercloud, as described in Installing the ceph-ansible package.
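
On a Red Hat Enterprise Linux 8 undercloud, the installation typically amounts to enabling the Ceph Storage Tools repository and installing the package. The repository ID shown below is the one commonly used for Red Hat Ceph Storage 4 and is given as an assumption; follow the linked procedure for the repository that matches your release.

# On the undercloud, enable the Ceph Storage Tools repository and install
# ceph-ansible. The repository ID shown is for Red Hat Ceph Storage 4 on
# RHEL 8; confirm the correct repository for your release.
$ sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
$ sudo dnf install -y ceph-ansible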

Prerequisites

Before you configure the Shared File Systems service with an external Ceph Storage cluster, complete the following prerequisites:

  • Verify that your external Ceph Storage cluster has an active Metadata Server (MDS):

    $ ceph -s
  • The external Ceph Storage cluster must have a CephFS file system backed by CephFS data and metadata pools.

    • Verify the pools in the CephFS file system:

      $ ceph fs ls
    • Note the names of these pools to configure the director parameters ManilaCephFSDataPoolName and ManilaCephFSMetadataPoolName, as shown in the sketch after this list. For more information about this configuration, see Creating a custom environment file.
  • The external Ceph Storage cluster must have a cephx client name and key for the Shared File Systems service.

    • Verify the keyring:

      $ ceph auth get client.<client_name>
      • Replace <client_name> with your cephx client name.
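
For example, if ceph fs ls reports data and metadata pools named manila_data and manila_metadata, you could record them in a custom environment file similar to the following sketch. The file path and pool names are placeholders; see Creating a custom environment file for the complete procedure.

# Hypothetical example: record the pool names reported by `ceph fs ls` as
# director parameters. The file path and pool names are placeholders.
$ cat > /home/stack/templates/manila-cephfs-pools.yaml <<EOF
parameter_defaults:
  ManilaCephFSDataPoolName: manila_data
  ManilaCephFSMetadataPoolName: manila_metadata
EOF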

1.3. Configuring Ceph Object Store to use an external Ceph Object Gateway

Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).
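
On the Ceph side, this typically means setting the rgw_keystone_* options so that RGW can validate tokens and map user roles. The following commands are an illustrative sketch, not the full procedure: the RGW client name, endpoint, credentials, and accepted roles are placeholders, and the authoritative option list is in the guide referenced below.

# Illustrative sketch of the RGW-side Keystone settings. The client name
# (client.rgw.controller-0), endpoint, user, project, and roles are placeholders.
$ ceph config set client.rgw.controller-0 rgw_keystone_url http://<keystone_endpoint>:5000
$ ceph config set client.rgw.controller-0 rgw_keystone_api_version 3
$ ceph config set client.rgw.controller-0 rgw_keystone_admin_user swift
$ ceph config set client.rgw.controller-0 rgw_keystone_admin_password <password>
$ ceph config set client.rgw.controller-0 rgw_keystone_admin_project service
$ ceph config set client.rgw.controller-0 rgw_keystone_admin_domain Default
$ ceph config set client.rgw.controller-0 rgw_keystone_accepted_roles "member,admin"
$ ceph config set client.rgw.controller-0 rgw_s3_auth_use_keystone true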

For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.