Chapter 2. CephFS through NFS-Ganesha Installation
A typical Ceph file system (CephFS) through NFS installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following configurations:
- OpenStack Controller nodes running containerized Ceph metadata server (MDS), Ceph monitor (MON), manila, and NFS-Ganesha services. Some of these services can coexist on the same node or can have one or more dedicated nodes.
- Ceph storage cluster with containerized object storage daemons (OSDs) running on Ceph storage nodes.
- An isolated StorageNFS network that provides access from projects to the NFS-Ganesha services for NFS share provisioning.
The Shared File Systems service (manila) with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large-scale deployments. For important recommendations, see https://access.redhat.com/articles/6667651.
The Shared File Systems service (manila) provides APIs that allow projects to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, `manila.share.drivers.cephfs.driver.CephFSDriver`, enables you to use the Shared File Systems service with CephFS as a back end. RHOSP director configures the driver to deploy the NFS-Ganesha gateway so that the CephFS shares are presented through the NFS 4.1 protocol.
Using RHOSP director to deploy the Shared File Systems service with a CephFS back end on the overcloud automatically creates the required storage network defined in the heat template. For more information about network planning, see Overcloud networks in the Director Installation and Usage guide.
Although you can manually configure the Shared File Systems service by editing the node's `/etc/manila/manila.conf` file, RHOSP director can override any settings in future overcloud updates. The recommended method for configuring a Shared File Systems back end is through director. Use RHOSP director to create the extra StorageNFS network for storage traffic.
Adding CephFS through NFS to an externally deployed Ceph cluster, which was not configured by Red Hat OpenStack Platform (RHOSP) director, is supported. Currently, only one CephFS back end can be defined in director. For more information, see Integrate with an existing Ceph Storage cluster in the Integrating an Overcloud with an Existing Red Hat Ceph Storage Cluster guide.
2.1. CephFS through NFS-Ganesha installation requirements
CephFS through NFS has been fully supported since Red Hat OpenStack Platform (RHOSP) version 13. The RHOSP Shared File Systems service with CephFS through NFS for RHOSP 16.0 and later is supported for use with Red Hat Ceph Storage version 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
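To confirm that your deployment meets the Red Hat Ceph Storage 4.1 requirement, you can query the installed packages or the running cluster. The following is an illustrative sketch; the `ceph-mon` container name and the node name are assumptions that vary by deployment:

```
# On a Ceph Storage or Controller node, check the installed Ceph packages:
$ rpm -q ceph-common

# Or query the daemon versions in the running cluster
# (the ceph-mon container name varies by deployment):
$ sudo podman exec ceph-mon-controller-0 ceph versions
```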
Prerequisites
- You install the Shared File Systems service on Controller nodes, as is the default behavior.
- You install the NFS-Ganesha gateway service on the Pacemaker cluster of the Controller nodes.
- You configure only one instance of a CephFS back end for the Shared File Systems service. You can use other non-CephFS back ends alongside the single CephFS back end.
2.2. File shares
File shares are handled differently between the OpenStack Shared File Systems service (manila), Ceph File System (CephFS), and CephFS through NFS.
The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage inherently allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect.
With CephFS, a share is considered a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size of the share that the Shared File Systems service creates. Access to CephFS through NFS shares is provided by specifying the IP address of the client.
With CephFS through NFS, file shares are provisioned and accessed through the NFS protocol. The NFS protocol also handles security.
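As a sketch of the workflow described above, a cloud user creates an NFS share through the Shared File Systems service and grants a client access by IP address. The share name, size, and client IP address in this example are hypothetical:

```
$ manila create nfs 10 --name share01
$ manila access-allow share01 ip 203.0.113.10
$ manila share-export-location-list share01
```

The client can then mount the export path returned by the last command over NFS 4.1.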
2.3. Installing the ceph-ansible package
Install the `ceph-ansible` package on an undercloud node to deploy containerized Ceph.
Procedure
- Log in to an undercloud node as the `stack` user and install the `ceph-ansible` package:
```
[stack@undercloud-0 ~]$ sudo dnf install -y ceph-ansible
[stack@undercloud-0 ~]$ sudo dnf list ceph-ansible
...
Installed Packages
ceph-ansible.noarch    4.0.23-1.el8cp    @rhelosp-ceph-4-tools
```
2.4. Generating the custom roles file
For security, isolate NFS traffic to a separate network when using CephFS through NFS so that the Ceph NFS server is accessible only through the isolated network. Deployers can constrain the isolated network to a select group of projects in the cloud. Red Hat OpenStack director ships with support to deploy a dedicated StorageNFS network. To configure and use the StorageNFS network, a custom Controller role is required.
It is possible to omit the creation of an isolated network for NFS traffic. However, Red Hat strongly discourages such setups for production deployments that have untrusted clients. When you omit the StorageNFS network, director can connect the Ceph NFS server on any shared non-isolated network, such as the external network. Shared non-isolated networks are typically routable to all user private networks in the cloud. When the NFS server is on such a network, you cannot control access to OpenStack Shared File Systems service (manila) shares through specific client IP access rules. Users would have to use the generic `0.0.0.0/0` IP to allow access to their shares, and the shares would then be mountable by anyone who discovers the export path.
The `ControllerStorageNfs` custom role configures the isolated `StorageNFS` network. This role is similar to the default `Controller.yaml` role file, with the addition of the `StorageNFS` network and the `CephNfs` service, indicated by the `OS::TripleO::Services::CephNfs` resource.
```
[stack@undercloud ~]$ cd /usr/share/openstack-tripleo-heat-templates/roles
[stack@undercloud roles]$ diff Controller.yaml ControllerStorageNfs.yaml
16a17
> - StorageNFS
50a45
> - OS::TripleO::Services::CephNfs
```
For more information about the `openstack overcloud roles generate` command, see Roles in the Advanced Overcloud Customization guide.

The `openstack overcloud roles generate` command creates a custom `roles_data.yaml` file that includes the services specified after `-o`. In the following example, the `roles_data.yaml` file that is created contains the services for `ControllerStorageNfs`, `Compute`, and `CephStorage`.
If you have an existing `roles_data.yaml` file, modify it to add the `ControllerStorageNfs`, `Compute`, and `CephStorage` services to the configuration file. For more information, see Roles in the Advanced Overcloud Customization guide.
Procedure
- Log in to an undercloud node as the `stack` user and use the `openstack overcloud roles generate` command to create the `roles_data.yaml` file:

```
[stack@undercloud ~]$ openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml ControllerStorageNfs Compute CephStorage
```
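Optionally, you can confirm that the generated roles file contains the StorageNFS network and the CephNfs service before you deploy. This quick check is not part of the documented procedure:

```
[stack@undercloud ~]$ grep -n 'StorageNFS\|CephNfs' /home/stack/roles_data.yaml
```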
2.5. Deploying the updated environment
When you are ready to deploy your environment, use the `openstack overcloud deploy` command with the custom environments and roles required to run CephFS with NFS-Ganesha. The overcloud deploy command has the following options in addition to other required options.
| Action | Option | Additional information |
|---|---|---|
| Add the extra StorageNFS network with `network_data_ganesha.yaml` | `-n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml` | See The StorageNFS and network_data_ganesha.yaml file. You can omit this option if you do not want to isolate NFS traffic to a separate network. For more information, see Generating the custom roles file. |
| Add the custom roles defined in the `roles_data.yaml` file | `-r /home/stack/roles_data.yaml` | You can omit this option if you do not want to isolate NFS traffic to a separate network. For more information, see Generating the custom roles file. |
| Deploy the Ceph daemons with `ceph-ansible.yaml` | `-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml` | See Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide. |
| Deploy the Ceph metadata server with `ceph-mds.yaml` | `-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml` | See Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide. |
| Deploy the Shared File Systems service (manila) with the CephFS through NFS back end and configure NFS-Ganesha with director | `-e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml` | See The CephFS back-end environment file. |
The following example shows an `openstack overcloud deploy` command with options to deploy CephFS through NFS-Ganesha, the Ceph cluster, the Ceph MDS, and the isolated StorageNFS network:

```
[stack@undercloud ~]$ openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/containers-default-parameters.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
```
For more information about the `openstack overcloud deploy` command, see Deployment command in the Director Installation and Usage guide.
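After the deploy command completes, you can check the overcloud state from the undercloud. This is an optional verification sketch; the stack name `overcloud` is the default and might differ in your environment:

```
[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ openstack overcloud status
[stack@undercloud ~]$ openstack stack list
```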
2.5.1. The StorageNFS and network_data_ganesha.yaml file
Use composable networks to define custom networks and assign them to any role. Instead of using the standard `network_data.yaml` file, you can configure the StorageNFS composable network with the `network_data_ganesha.yaml` file. Both of these files are available in the `/usr/share/openstack-tripleo-heat-templates` directory.
IMPORTANT: If you do not define the StorageNFS network, director defaults to the external network. Although the external network can be useful in test and prototype environments, security on the external network is insufficient for production environments. For example, if you expose the NFS service on the external network, a denial of service (DoS) attack can disrupt Controller API access for all cloud users, not only consumers of NFS shares. By contrast, when you deploy the NFS service on a dedicated StorageNFS network, potential DoS attacks can target only NFS shares in the cloud. In addition to the security risks, deploying the NFS service on an external network requires additional routing configuration for precise access control to shares. On the StorageNFS network, however, you can use the client IP address on the network to achieve precise access control.
The `network_data_ganesha.yaml` file contains an additional section that defines the isolated StorageNFS network. Although the default settings work for most installations, you must edit the YAML file to add your network settings, including the VLAN ID, subnet, and other settings.
```
name: StorageNFS
enabled: true
vip: true
name_lower: storage_nfs
vlan: 70
ip_subnet: '172.17.0.0/20'
allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]
ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::4', 'end': 'fd00:fd00:fd00:7000::fffe'}]
```
For more information about composable networks, see Using Composable Networks in the Advanced Overcloud Customization guide.
2.5.2. The CephFS back-end environment file
The integrated environment file for defining a CephFS back end, `manila-cephfsganesha-config.yaml`, is located in `/usr/share/openstack-tripleo-heat-templates/environments/`.
The `manila-cephfsganesha-config.yaml` environment file contains settings relevant to the deployment of the Shared File Systems service (manila). The back-end default settings work for most environments. The following example shows the default values that director uses during deployment of the Shared File Systems service:
```
[stack@undercloud ~]$ cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
# A Heat environment file which can be used to enable a
# a Manila CephFS-NFS driver backend.
resource_registry:
  OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml
  OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml
  # Only manila-share is pacemaker managed:
  OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml
  OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml
  # ceph-nfs (ganesha) service is installed and configured by ceph-ansible
  # but it's still managed by pacemaker
  OS::TripleO::Services::CephNfs: ../deployment/ceph-ansible/ceph-nfs.yaml

parameter_defaults:
  ManilaCephFSBackendName: cephfs 1
  ManilaCephFSDriverHandlesShareServers: false 2
  ManilaCephFSCephFSAuthId: 'manila' 3
  ManilaCephFSCephFSEnableSnapshots: true 4
  # manila cephfs driver supports either native cephfs backend - 'CEPHFS'
  # (users mount shares directly from ceph cluster), or nfs-ganesha backend -
  # 'NFS' (users mount shares through nfs-ganesha server)
  ManilaCephFSCephFSProtocolHelperType: 'NFS'
```
The `parameter_defaults` header signifies the start of the configuration. To override default values set in `resource_registry`, copy this `manila-cephfsganesha-config.yaml` environment file to your local environment file directory, `/home/stack/templates/`, and edit the parameter settings as required by your environment. This includes values set by `OS::TripleO::Services::ManilaBackendCephFs`, which sets defaults for a CephFS back end.
1. `ManilaCephFSBackendName` sets the name of the manila configuration of your CephFS back end. In this case, the default back-end name is `cephfs`.
2. `ManilaCephFSDriverHandlesShareServers` controls the lifecycle of the share server. When set to `false`, the driver does not handle the lifecycle. This is the only supported option.
3. `ManilaCephFSCephFSAuthId` defines the Ceph auth ID that director creates for the `manila` service to access the Ceph cluster.
4. `ManilaCephFSCephFSEnableSnapshots` controls snapshot activation. Snapshots are supported with Ceph Storage 4.1 and later, but the value of this parameter defaults to `false`. Set the value to `true` to ensure that the driver reports the `snapshot_support` capability to the Shared File Systems scheduler.
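For example, to enable snapshot support as described in callout 4, you might create a small override file in your local environment file directory and pass it with `-e` after `manila-cephfsganesha-config.yaml` on the deploy command line. The file name used here is hypothetical:

```
# /home/stack/templates/manila-cephfsganesha-overrides.yaml (hypothetical name)
# Later -e files take precedence, so these values override the defaults
# set in manila-cephfsganesha-config.yaml.
parameter_defaults:
  ManilaCephFSCephFSEnableSnapshots: true
```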
For more information about environment files, see Environment Files in the Director Installation and Usage guide.