Chapter 2. CephFS via NFS Installation
2.1. CephFS with NFS-Ganesha deployment
A typical Ceph file system (CephFS) via NFS installation in an OpenStack environment includes:
- OpenStack controller nodes running containerized Ceph metadata server (MDS), Ceph monitor (MON), manila, and NFS-Ganesha services. Some of these services may coexist on the same node or may have one or more dedicated nodes.
- Ceph storage cluster with containerized object storage daemons (OSDs) running on Ceph storage nodes.
- An isolated StorageNFS network that provides access from tenants to the NFS-Ganesha services for NFS share provisioning.
The Shared File System (manila) service provides APIs that allow tenants to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver, allows the Shared File System service to use CephFS as a back end. The Red Hat OpenStack Platform director configures the driver to deploy the NFS-Ganesha gateway so that the CephFS shares are presented through the NFS 4.1 protocol. In this document, this configuration is referred to as CephFS via NFS.
Using OpenStack director to deploy the Shared File System with a CephFS back end on the overcloud automatically creates the required storage network (defined in the heat template). For more information about network planning, refer to the Planning Networks section of the Director Installation and Usage Guide.
While you can manually configure the Shared File System service by editing its node's /etc/manila/manila.conf file, the Red Hat OpenStack Platform director can overwrite any settings in future overcloud updates. The recommended method for configuring a Shared File System back end is through the director.
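For orientation, the back-end section that the director renders into /etc/manila/manila.conf looks similar to the following sketch. This is an illustrative excerpt under assumed defaults, not literal output from any deployment; the option names are standard manila CephFS driver options, but the values depend on your environment files.

```ini
# Hypothetical excerpt of director-rendered /etc/manila/manila.conf.
# Values shown are illustrative assumptions.
[DEFAULT]
enabled_share_backends = cephfs
enabled_share_protocols = NFS

[cephfs]
share_backend_name = cephfs
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
driver_handles_share_servers = False
cephfs_protocol_helper_type = NFS
cephfs_auth_id = manila
cephfs_conf_path = /etc/ceph/ceph.conf
```

Because the director owns this file, treat such an excerpt as read-only reference: change behavior through director environment files rather than by editing the rendered configuration.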
This section describes how to install CephFS via NFS in an integrated deployment managed by director.
Adding CephFS to an externally deployed Ceph cluster that was not configured by Red Hat OpenStack director is not supported at this time. Currently, only one CephFS back end can be defined in director at a time.
2.1.1. Requirements
To use CephFS via NFS, you need a Red Hat OpenStack Platform version 13 or newer environment, which can be an existing or new OpenStack environment. CephFS works with Red Hat Ceph Storage version 3. See the Deploying an Overcloud with Containerized Red Hat Ceph Guide for instructions on how to deploy such an environment.
This document assumes that:
- The Shared File System service will be installed on controller nodes, as is the default behavior.
- The NFS-Ganesha gateway service will be installed on the Pacemaker cluster of the controller nodes.
- Only a single instance of a CephFS back end will be used by the Shared File System Service. Other non-CephFS back ends can be used with the single CephFS back end.
- An extra network (StorageNFS) is created by OpenStack Platform director and used for storage traffic.
- A new Red Hat Ceph Storage version 3 cluster is configured at the same time as CephFS via NFS.
2.1.2. File shares
File shares are handled slightly differently in the OpenStack Shared File System service (manila), in the Ceph File System (CephFS), and in CephFS via NFS.
The Shared File System service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size (for example, a subdirectory with a quota). Shared file system storage enables multiple clients to connect because the file system is configured before access is requested, unlike block storage, which is configured at the time it is requested.
With CephFS, a share is considered a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size of the share that the Shared File System service creates. Access to CephFS shares is determined by MDS authentication capabilities.
With CephFS via NFS, file shares are provisioned and accessed through the NFS protocol. The NFS protocol also handles security.
2.1.3. Isolated network used by CephFS via NFS
CephFS via NFS deployments use an extra isolated network, StorageNFS. This network is deployed so that users can mount shares over NFS on that network without accessing the Storage or Storage Management networks, which are reserved for infrastructure traffic.
For more information about isolating networks, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/advanced_overcloud_customization/basic-network-isolation in the Director Installation and Usage guide.
2.2. Installing OpenStack with CephFS via NFS and a custom network_data file
Installing CephFS via NFS involves:
- Installing the ceph-ansible package.
- Preparing the overcloud container images with the openstack overcloud container image prepare command.
- Generating the custom roles file (roles_data.yaml) and the network_data.yaml file.
- Deploying Ceph, the Shared File System service (manila), and CephFS using the openstack overcloud deploy command with custom roles and environments.
- Configuring the isolated StorageNFS network and creating the default share type.
The examples use the standard stack user in the OpenStack environment. Perform these tasks in conjunction with an OpenStack installation or environment update.
2.2.1. Installing the ceph-ansible package
The OpenStack director requires the ceph-ansible package to be installed on an undercloud node in order to deploy containerized Ceph.
Procedure
- Log in to an undercloud node.
- Install the ceph-ansible package using yum install with elevated privileges:

[stack@undercloud-0 ~]$ sudo yum install -y ceph-ansible
[stack@undercloud-0 ~]$ sudo yum list ceph-ansible
...
Installed Packages
ceph-ansible.noarch    3.1.0-0.1.el7    rhelosp-13.
2.2.2. Preparing overcloud container images
Because all services are containerized in OpenStack, Docker images must be prepared for the overcloud using the openstack overcloud container image prepare command. Running this command with the additional options adds default images for the ceph and manila services to the docker registry. The Ceph MDS and NFS-Ganesha services use the same Ceph base container image.
For additional information on container images, refer to the Container Images for Additional Services section in the Director Installation and Usage Guide.
Procedure
- From the undercloud, run the openstack overcloud container image prepare command with -e to include these environment files:

$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/manila.yaml \
  ...

- Use grep to verify that the default images for the ceph and manila services are available in the containers-default-parameters.yaml file:

[stack@undercloud-0 ~]$ grep -E 'ceph|manila' composable_roles/docker-images.yaml
DockerCephDaemonImage: 192.168.24.1:8787/rhceph:3-12
DockerManilaApiImage: 192.168.24.1:8787/rhosp13/openstack-manila-api:2018-08-22.2
DockerManilaConfigImage: 192.168.24.1:8787/rhosp13/openstack-manila-api:2018-08-22.2
DockerManilaSchedulerImage: 192.168.24.1:8787/rhosp13/openstack-manila-scheduler:2018-08-22.2
DockerManilaShareImage: 192.168.24.1:8787/rhosp13/openstack-manila-share:2018-08-22.2
2.2.2.1. Generating the custom roles file
The ControllerStorageNFS custom role is used to set up the isolated StorageNFS network. This role is similar to the default Controller.yaml role file, with the addition of the StorageNFS network and the CephNfs service (indicated by OS::TripleO::Services::CephNfs).

[stack@undercloud ~]$ cd /usr/share/openstack-tripleo-heat-templates/roles
[stack@undercloud roles]$ diff Controller.yaml ControllerStorageNfs.yaml
16a17
>   - StorageNFS
50a45
>   - OS::TripleO::Services::CephNfs
For information about the openstack overcloud roles generate
command, refer to the Roles section of the Advanced Overcloud Customization Guide.
Procedure
The openstack overcloud roles generate command creates a custom roles_data.yaml file that includes the services specified after -o. In the example below, the generated roles_data.yaml file contains the services for the ControllerStorageNfs, Compute, and CephStorage roles.
If you have an existing roles_data.yaml file, modify it to add the ControllerStorageNfs, Compute, and CephStorage services to the configuration file. Refer to the Roles section of the Advanced Overcloud Customization Guide.
- Log in to an undercloud node.
- Use the openstack overcloud roles generate command to create the roles_data.yaml file:

[stack@undercloud ~]$ openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml ControllerStorageNfs Compute CephStorage
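Before deploying, it can be worth confirming that the generated file actually carries the StorageNFS network and the CephNfs service. A quick check, assuming the output path used above:

```
grep -n 'StorageNFS\|CephNfs' /home/stack/roles_data.yaml
```

If either pattern is missing, regenerate the file or re-check the edits made to an existing roles_data.yaml.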
2.2.3. Deploying the updated environment
When you are ready to deploy your environment, use the openstack overcloud deploy
command with the custom environments and roles required to run CephFS with NFS-Ganesha. These environments and roles are explained below.
Your overcloud deploy command will have the options below in addition to other required options.
Action | Option | Additional Information
---|---|---
Add the updated default containers from the undercloud | -e /home/stack/containers-default-parameters.yaml |
Add the extra StorageNFS network with network_data_ganesha.yaml | -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml | Section 2.2.3.1, "StorageNFS and network_data_ganesha.yaml file"
Add the custom roles defined in roles_data.yaml | -r /home/stack/roles_data.yaml |
Deploy the Ceph daemons with ceph-ansible.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml | Initiating Overcloud Deployment in Deploying an Overcloud with Containerized Red Hat Ceph
Deploy the Ceph metadata server with ceph-mds.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml | Initiating Overcloud Deployment in Deploying an Overcloud with Containerized Red Hat Ceph
Deploy the manila service with the CephFS via NFS back end; configures NFS-Ganesha via director | -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml |
The example below shows an openstack overcloud deploy command incorporating options to deploy CephFS via NFS-Ganesha, the Ceph cluster, the Ceph MDS, and the isolated StorageNFS network:

[stack@undercloud ~]$ openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
-r /home/stack/roles_data.yaml \
-e /home/stack/containers-default-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
For additional information on the openstack overcloud deploy command, refer to Creating the Overcloud with the CLI Tools section in the Director Installation and Usage Guide.
2.2.3.1. StorageNFS and network_data_ganesha.yaml file
Composable networks let you define custom networks and assign them to any role. Instead of using the standard network_data.yaml file, the StorageNFS composable network is configured using the network_data_ganesha.yaml file. Both of these files are available in the /usr/share/openstack-tripleo-heat-templates directory.
The network_data_ganesha.yaml file contains an additional section that defines the isolated StorageNFS network. Although the default settings work for most installations, you must still edit the YAML file to add your network settings, including the VLAN ID, subnet, and other parameters.
- name: StorageNFS
  enabled: true
  vip: true
  name_lower: storage_nfs
  vlan: 70
  ip_subnet: '172.16.4.0/24'
  allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]
For more information on composable networks, refer to the Using Composable Networks section in the Advanced Overcloud Customization Guide.
2.2.3.2. manila-cephfsganesha-config.yaml
The integrated environment file for defining a CephFS back end is located in the following path of an undercloud node:
/usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
The manila-cephfsganesha-config.yaml environment file contains settings relevant to the deployment of the Shared File System service. The default back end settings work for most environments. The following example shows the default values that the director uses when deploying the Shared File System service:
[stack@undercloud ~]$ cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
# A Heat environment file which can be used to enable a
# a Manila CephFS-NFS driver backend.
resource_registry:
  OS::TripleO::Services::ManilaApi: ../docker/services/manila-api.yaml
  OS::TripleO::Services::ManilaScheduler: ../docker/services/manila-scheduler.yaml
  # Only manila-share is pacemaker managed:
  OS::TripleO::Services::ManilaShare: ../docker/services/pacemaker/manila-share.yaml
  OS::TripleO::Services::ManilaBackendCephFs: ../puppet/services/manila-backend-cephfs.yaml
  # ceph-nfs (ganesha) service is installed and configured by ceph-ansible
  # but it's still managed by pacemaker
  OS::TripleO::Services::CephNfs: ../docker/services/ceph-ansible/ceph-nfs.yaml

parameter_defaults:
  ManilaCephFSBackendName: cephfs 1
  ManilaCephFSDriverHandlesShareServers: false 2
  ManilaCephFSCephFSAuthId: 'manila' 3
  ManilaCephFSCephFSEnableSnapshots: false 4
  # manila cephfs driver supports either native cephfs backend - 'CEPHFS'
  # (users mount shares directly from ceph cluster), or nfs-ganesha backend -
  # 'NFS' (users mount shares through nfs-ganesha server)
  ManilaCephFSCephFSProtocolHelperType: 'NFS'
The parameter_defaults header signifies the start of the configuration. Specifically, settings under this header let you override default values set in resource_registry, including values set by OS::TripleO::Services::ManilaBackendCephFs, which sets defaults for a CephFS back end.
1. ManilaCephFSBackendName sets the name of the manila configuration of your CephFS back end. In this case, the default back end name is cephfs.
2. ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false, the driver does not handle the lifecycle. This is the only supported option.
3. ManilaCephFSCephFSAuthId defines the Ceph auth ID that the director creates for the manila service to access the Ceph cluster.
4. ManilaCephFSCephFSEnableSnapshots controls snapshot activation. A value of false indicates that snapshots are not enabled. This feature is currently not supported.
For more information about environment files, see Environment Files in the Advanced Overcloud Customization guide.
2.2.4. Completing post-deployment configuration
Two post-deployment configuration tasks must be completed before users are allowed access:
- The neutron StorageNFS network must be mapped to the isolated data center StorageNFS network.
- The default share type must be created.
Once these steps are completed, tenant compute instances can create, allow access to, and mount NFS shares.
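That end-user workflow follows the standard manila pattern. A sketch, assuming a share named share-01 and a client identified by an illustrative IP address on the StorageNFS network:

```
# Create a 10 GB NFS share.
manila create nfs 10 --name share-01

# Grant access to a client by its IP on the StorageNFS network.
manila access-allow share-01 ip 172.16.4.111

# Look up the export location, then mount it from inside the tenant instance.
manila share-export-location-list share-01
mount -t nfs -o vers=4.1 <export-path> /mnt/share-01
```

The export path placeholder must be replaced with the actual location reported by the NFS-Ganesha gateway; the exact value depends on your deployment.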
2.2.4.1. Configuring the isolated network
The new isolated StorageNFS network must be mapped to a neutron-shared provider network. The Compute VMs will attach to this neutron network to access share export locations provided by the NFS-Ganesha gateway.
For general information about network security with the Shared File System service, refer to the section Hardening the Shared File System Service in the Security and Hardening Guide.
Procedure
The openstack network create command defines the configuration for the StorageNFS neutron network. Run this command with the following options:
- For --provider-physical-network, use the default value datacentre, unless you have set another tag for the br-isolated bridge via NeutronBridgeMappings in your tripleo-heat-templates.
- For the value of --provider-segment, use the vlan value set for the StorageNFS isolated network in the Heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml. This value is 70 unless the deployer has modified the isolated network definitions.
- For --provider-network-type, use the value vlan.
- On an undercloud node, source the overcloud credentials:

[stack@undercloud ~]$ source ~/overcloudrc

- Run the openstack network create command to create the StorageNFS network:
[stack@undercloud-0 ~]$ openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2018-09-17T21:12:49Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | cd272981-0a5e-4f3d-83eb-d25473f5176e |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | StorageNFS                           |
| port_security_enabled     | True                                 |
| project_id                | 3ca3408d545143629cd0ec35d34aea9c     |
| provider-network-type     | vlan                                 |
| provider-physical-network | datacentre                           |
| provider-segment          | 70                                   |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2018-09-17T21:12:49Z                 |
+---------------------------+--------------------------------------+
2.2.4.2. Setting up the shared provider StorageNFS network
Create a corresponding StorageNFSSubnet on the neutron shared provider network. Ensure that the subnet matches the storage_nfs_subnet in the undercloud, but make sure that the allocation range of this subnet does not overlap with that of the corresponding undercloud subnet. No gateway is required because this subnet is dedicated to serving NFS shares.
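The non-overlap requirement can be checked mechanically before running the subnet create command. A small sketch in shell; the undercloud pool shown assumes the network_data_ganesha.yaml allocation pool has been trimmed to end at 172.16.4.149, leaving 172.16.4.150-250 for the overcloud subnet (all boundaries are example values to substitute with your own):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to an integer for range comparison.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Undercloud StorageNFS allocation pool (assumed, trimmed example values).
uc_start=$(ip2int 172.16.4.4);   uc_end=$(ip2int 172.16.4.149)
# Overcloud StorageNFSSubnet allocation pool (example values from this section).
oc_start=$(ip2int 172.16.4.150); oc_end=$(ip2int 172.16.4.250)

# Two ranges overlap unless one ends strictly before the other begins.
if [ "$uc_end" -lt "$oc_start" ] || [ "$oc_end" -lt "$uc_start" ]; then
  echo "pools do not overlap"
else
  echo "pools overlap"
fi
```

With the example boundaries above, the script reports that the pools do not overlap; if it reports an overlap for your values, adjust one of the allocation ranges before creating the subnet.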
Requirements
- The starting and ending IP addresses for the allocation pool
- The subnet IP range
Procedure
- Log in to an undercloud node.
- Use the sample command below to provision the network, updating values where needed:
  - Replace the start=172.16.4.150,end=172.16.4.250 IP values with the ones for your network.
  - Replace the 172.16.4.0/24 subnet range with the correct one for your network.

[stack@undercloud-0 ~]$ openstack subnet create --allocation-pool start=172.16.4.150,end=172.16.4.250 --dhcp --network StorageNFS --subnet-range 172.16.4.0/24 --gateway none StorageNFSSubnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 172.16.4.150-172.16.4.250            |
| cidr              | 172.16.4.0/24                        |
| created_at        | 2018-09-17T21:22:14Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | True                                 |
| gateway_ip        | None                                 |
| host_routes       |                                      |
| id                | 8c696d06-76b7-4d77-a375-fd2e71e3e480 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | StorageNFSSubnet                     |
| network_id        | cd272981-0a5e-4f3d-83eb-d25473f5176e |
| project_id        | 3ca3408d545143629cd0ec35d34aea9c     |
| revision_number   | 0                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| tags              |                                      |
| updated_at        | 2018-09-17T21:22:14Z                 |
+-------------------+--------------------------------------+
2.2.4.3. Setting up a default share type
The Shared File System service allows you to define share types that you can use to create shares with specific settings. Share types work like Block Storage volume types: each type has associated settings (namely, extra specifications), and invoking the type during share creation applies those settings.
The OpenStack director expects a default share type. This default share type must be created before the cloud is opened for user access. For CephFS with NFS, use the manila type-create command:

manila type-create default false
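The second positional argument sets the required driver_handles_share_servers extra spec of the type to false, matching the deployed CephFS back end (see the ManilaCephFSDriverHandlesShareServers setting in Section 2.2.3.2). A short verification sketch; the listing output shape varies by client version:

```
# Create the default share type with driver_handles_share_servers = false.
manila type-create default false

# Confirm the type exists and inspect its extra specs.
manila type-list
manila extra-specs-list
```

If user-created shares later fail to schedule, a mismatch between the type's driver_handles_share_servers value and the back end configuration is a common first thing to check.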
For information about share types, refer to the section Create and Manage Share Types of the Storage Guide.