Chapter 2. Native CephFS deployment

A typical native Ceph file system (CephFS) installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following components:

  • RHOSP Controller nodes that run containerized Ceph metadata server (MDS), Ceph monitor (MON), and Shared File Systems (manila) services. Some of these services can coexist on the same node, or they can run on one or more dedicated nodes.
  • Ceph Storage cluster with containerized object storage daemons (OSDs) that run on Ceph Storage nodes.
  • An isolated storage network that serves as the Ceph public network on which the clients can communicate with Ceph service daemons. To facilitate this, the storage network is made available as a provider network for users to connect their VMs and mount CephFS shares.
Important

You cannot use the Shared File Systems service (manila) with the CephFS native driver to serve shares to OpenShift Container Platform through Manila CSI, because Red Hat does not support this type of deployment. For more information, contact Red Hat Support.

The Shared File Systems (manila) service provides APIs that allow tenants to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver, allows the Shared File Systems service to use native CephFS as a back end. You can install native CephFS in an integrated deployment managed by director.
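For example, after deployment a tenant can request a share through the standard manila CLI. The following is a minimal sketch; the share type name (cephfs) and share name are assumptions, so substitute the share type that your administrator configures for the CephFS back end:

$ manila create --name my-cephfs-share --share-type cephfs CephFS 10   # request a 10 GB native CephFS share
$ manila list                                                          # confirm the share reaches the 'available' state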

When director deploys the Shared File Systems service with a CephFS back end on the overcloud, it automatically creates the required data center storage network. However, you must create the corresponding storage provider network on the overcloud. For more information, see Post-deployment configuration.
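The following is a minimal sketch of creating the storage provider network and subnet after deployment. The VLAN ID (30), physical network name (datacentre), and subnet range are assumptions and must match your actual isolated storage network:

$ openstack network create Storage --share \
  --provider-network-type vlan \
  --provider-physical-network datacentre \
  --provider-segment 30

$ openstack subnet create StorageSubnet --network Storage \
  --subnet-range 172.16.1.0/24 \
  --allocation-pool start=172.16.1.200,end=172.16.1.250 \
  --gateway none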

For more information about network planning, see Overcloud networks in the Director Installation and Usage guide.

Although you can manually configure the Shared File Systems service by editing the /var/lib/config-data/puppet-generated/manila/etc/manila/manila.conf file for the node, any settings can be overwritten by the Red Hat OpenStack Platform director in future overcloud updates. Red Hat only supports deployments of the Shared File Systems service that are managed by director.
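For reference, a director-managed deployment renders a native CephFS back-end section in this file similar to the following sketch. The exact options and values depend on your environment and release, so treat this as illustrative only and do not edit the file directly:

[cephfs]
driver_handles_share_servers = False
share_backend_name = cephfs
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_protocol_helper_type = CEPHFS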

2.1. Requirements

You can deploy a native CephFS back end with new or existing Red Hat OpenStack Platform (RHOSP) environments if you meet the following requirements:

Important

The RHOSP Shared File Systems service (manila) with the native CephFS back end is supported for use with Red Hat Ceph Storage version 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
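For example, you can verify the version from the undercloud and from a Controller node. The container name pattern in the second command is an assumption based on a typical RHOSP 16 deployment, where ceph-ansible names the MON container after the node:

[stack@undercloud-0 ~]$ rpm -q ceph-ansible                                        # ceph-ansible version on the undercloud
[heat-admin@controller-0 ~]$ sudo podman exec ceph-mon-$(hostname -s) ceph --version   # Ceph version of the running MON container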

  • Install the Shared File Systems service on a Controller node. This is the default behavior.
  • Use only a single instance of a CephFS back end for the Shared File Systems service.

2.2. File shares

File shares are handled differently between the Shared File Systems service (manila), Ceph File System (CephFS), and CephFS through NFS.

The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage inherently allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect.
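For example, with the native CephFS back end, access is granted to a named cephx client through the same access control APIs. The share and client names in this sketch are illustrative:

$ manila access-allow my-cephfs-share cephx alice   # grant read-write access to the cephx user 'alice'
$ manila access-list my-cephfs-share                # the access_key column holds the secret that the client mounts with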

With CephFS, a share is considered a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size of the share that the Shared File Systems service creates. Access to CephFS shares is determined by MDS authentication capabilities.

With native CephFS, file shares are provisioned and accessed through the CephFS protocol. Access control is performed with a CephX authentication scheme that uses CephFS usernames.
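The following is a minimal sketch of how a client then mounts the share with the CephFS kernel client. The monitor addresses, export path, and access key are placeholders taken from the share export location and the access rule created earlier:

$ sudo mkdir -p /mnt/share
$ sudo mount -t ceph 172.16.1.11:6789,172.16.1.12:6789:/volumes/_nogroup/<share-uuid> /mnt/share \
  -o name=alice,secret=<access-key>
$ getfattr -n ceph.quota.max_bytes /mnt/share   # the share size is enforced as a CephFS directory quota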

2.3. Isolated network used by native CephFS

Native CephFS deployments use the isolated storage network deployed by director as the Ceph public network. Clients use this network to communicate with various Ceph infrastructure service daemons. For more information about isolating networks, see Basic network isolation in the Advanced Overcloud Customization guide.
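For reference, the Storage network entry in the default network_data.yaml file resembles the following sketch. The VLAN ID and subnet are the upstream defaults and might differ if you customized your network layout:

- name: Storage
  vip: true
  vlan: 30
  name_lower: storage
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]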

2.4. Installing the ceph-ansible package

Install the ceph-ansible package on an undercloud node to deploy containerized Ceph.

Procedure

  1. Log in to an undercloud node as the stack user.
  2. Install the ceph-ansible package:

    [stack@undercloud-0 ~]$ sudo dnf install -y ceph-ansible
    [stack@undercloud-0 ~]$ sudo dnf list ceph-ansible
    ...
    Installed Packages
    ceph-ansible.noarch    4.0.23-1.el8cp      @rhelosp-ceph-4-tools

2.5. Deploying the environment

When you are ready to deploy the environment, use the openstack overcloud deploy command with the custom environments and roles required to configure the native CephFS back end.

The openstack overcloud deploy command has the following options in addition to other required options.

  • Specify the network configuration with network_data.yaml.
    Option: -n /usr/share/openstack-tripleo-heat-templates/network_data.yaml
    Additional information: You can use a custom environment file to override values for the default networks specified in this network data environment file. This is the default network data file that is available when you use isolated networks. You can omit this file from the openstack overcloud deploy command for brevity.

  • Deploy the Ceph daemons with ceph-ansible.yaml.
    Option: -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
    Additional information: See Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide.

  • Deploy the Ceph metadata server with ceph-mds.yaml.
    Option: -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml
    Additional information: See Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide.

  • Deploy the Shared File Systems service (manila) with the native CephFS back end.
    Option: -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml
    Additional information: See Environment file.

The following example shows an openstack overcloud deploy command that includes options to deploy a Ceph cluster, Ceph MDS, the native CephFS back end, and the networks required for the Ceph cluster:

[stack@undercloud ~]$ openstack overcloud deploy \
...
-n /usr/share/openstack-tripleo-heat-templates/network_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml   \
-e /home/stack/network-environment.yaml  \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml  \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml  \
-e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml

For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide.

2.5.1. Environment file

The integrated environment file that defines a native CephFS back end is located in the following path of an undercloud node: /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.

The manila-cephfsnative-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service. The back end default settings should work for most environments.

The example shows the default values that director uses during deployment of the Shared File Systems service:

[stack@undercloud ~]$ cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml

# A Heat environment file which can be used to enable a
# a Manila CephFS Native driver backend.
resource_registry:
  OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml
  OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml
  # Only manila-share is pacemaker managed:
  OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml
  OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml

parameter_defaults:
  ManilaCephFSBackendName: cephfs 1
  ManilaCephFSDriverHandlesShareServers: false 2
  ManilaCephFSCephFSAuthId: 'manila' 3
  ManilaCephFSCephFSEnableSnapshots: true 4
  ManilaCephFSCephVolumeMode: '0755'  5
  # manila cephfs driver supports either native cephfs backend - 'CEPHFS'
  # (users mount shares directly from ceph cluster), or nfs-ganesha backend -
  # 'NFS' (users mount shares through nfs-ganesha server)
  ManilaCephFSCephFSProtocolHelperType: 'CEPHFS'  6

The parameter_defaults header signifies the start of the configuration. Specifically, settings under this header let you override default values set in resource_registry. This includes values set by OS::TripleO::Services::ManilaBackendCephFs, which sets defaults for a CephFS back end.

1
ManilaCephFSBackendName sets the name of the manila back end configuration for your CephFS back end. In this case, the default back end name is cephfs.
2
ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false, the driver does not handle the lifecycle. This is the only supported option for CephFS back ends.
3
ManilaCephFSCephFSAuthId defines the Ceph auth ID that the director creates for the manila service to access the Ceph cluster.
4
ManilaCephFSCephFSEnableSnapshots controls snapshot activation. Snapshots are supported with Ceph Storage 4.1 and later, but the value of this parameter defaults to false. You can set the value to true to ensure that the driver reports the snapshot_support capability to the manila scheduler.
5
ManilaCephFSCephVolumeMode controls the UNIX permissions set on the manila share created on the native CephFS back end. The value defaults to 0755.
6
ManilaCephFSCephFSProtocolHelperType must be set to CEPHFS to use the native CephFS driver.
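
If you need to change any of these defaults, do not edit the shipped environment file. Instead, create a small custom environment file and pass it to the openstack overcloud deploy command after manila-cephfsnative-config.yaml so that its values take precedence. The file name and values in this sketch are assumptions:

[stack@undercloud ~]$ cat /home/stack/manila-cephfsnative-overrides.yaml
parameter_defaults:
  ManilaCephFSCephFSEnableSnapshots: true
  ManilaCephFSCephVolumeMode: '0750'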

For more information about environment files, see Environment Files in the Director Installation and Usage guide.