Chapter 3. Deploying Ceph File Systems
This chapter describes how to create and mount Ceph File Systems.
To deploy a Ceph File System:
- Create a Ceph file system on a Monitor node. See Section 3.2, “Creating the Ceph File Systems” for details.
- If you use the cephx authentication, create a client keyring with capabilities that specify client access rights and permissions, and copy the keyring to the node where the Ceph File System will be mounted. See Section 3.3, “Creating Ceph File System Client Users” for details.
- Mount CephFS on a dedicated node. Choose one of the following methods:
- Mounting CephFS as a kernel client. See Section 3.4, “Mounting the Ceph File System as a kernel client”.
- Mounting CephFS as a FUSE client. See Section 3.5, “Mounting the Ceph File System as a FUSE Client”.
3.1. Prerequisites
- Deploy a Ceph Storage Cluster if you do not have one. For details, see the Installation Guide for Red Hat Enterprise Linux or Ubuntu.
- Install and configure Ceph Metadata Server daemons (ceph-mds). For details, see the Installation Guide for Red Hat Enterprise Linux or Ubuntu and Chapter 2, Configuring Metadata Server Daemons.
3.2. Creating the Ceph File Systems
This section describes how to create a Ceph File System on a Monitor node.
By default, you can create only one Ceph File System in the Ceph Storage Cluster. See Section 1.3, “CephFS Limitations” for details.
Prerequisites
On the Monitor node, enable the Red Hat Ceph Storage 3 Tools repository.
On Red Hat Enterprise Linux, use:
[root@monitor ~]# subscription-manager repos enable rhel-7-server-rhceph-3-tools-rpms
On Ubuntu, use:
[user@monitor ~]$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
[user@monitor ~]$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
[user@monitor ~]$ sudo apt-get update
Procedure
Run the following commands on a Monitor host as the root user.
Create two pools, one for storing data and one for storing metadata:
ceph osd pool create <name> <pg_num>
Specify the pool name and the number of placement groups (PGs), for example:
[root@monitor ~]# ceph osd pool create cephfs-data 64
[root@monitor ~]# ceph osd pool create cephfs-metadata 64
Typically, the metadata pool can start with a conservative number of PGs as it will generally have far fewer objects than the data pool. It is possible to increase the number of PGs if needed. Size the data pool proportional to the number and sizes of files you expect in the file system.
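Ceph generally recommends that pg_num be a power of two for even data distribution. As a minimal sketch, a rough per-pool target (the target values below are illustrative, not from this guide) can be rounded up to the next power of two like this:

```shell
# Round a rough PG target up to the next power of two (Ceph recommends
# power-of-two pg_num values). Check the result against the cluster's
# overall PG budget before creating the pool.
next_pow2() {
    n=$1
    p=1
    while [ "$p" -lt "$n" ]; do p=$((p * 2)); done
    echo "$p"
}

next_pow2 50    # prints 64 - a modest starting point for a metadata pool
next_pow2 200   # prints 256
```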
Important: For the metadata pool, consider using:
- A higher replication level because any data loss to this pool can make the whole file system inaccessible
- Storage with lower latency such as Solid-state Drive (SSD) disks because this directly affects the observed latency of file system operations on clients
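As a sketch of the two recommendations above, assuming the example pool name cephfs-metadata and a pre-existing SSD-backed CRUSH rule (the rule name ssd-rule is hypothetical for your cluster):

```shell
# Raise the replication level of the metadata pool (run on a Monitor
# node as root).
ceph osd pool set cephfs-metadata size 3

# Move the metadata pool onto an SSD-backed CRUSH rule. "ssd-rule" is a
# hypothetical rule name; it must already exist in the CRUSH map.
ceph osd pool set cephfs-metadata crush_rule ssd-rule
```

These are cluster configuration commands and take effect immediately; the second command triggers data movement onto the OSDs selected by the rule.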
Install the ceph-common package.
On Red Hat Enterprise Linux, use:
[root@monitor ~]# yum install ceph-common
On Ubuntu, use:
[user@monitor ~]$ sudo apt-get install ceph-common
Create the Ceph File System:
ceph fs new <name> <metadata-pool> <data-pool>
Specify the name of the Ceph File System and the metadata and data pools, for example:
[root@monitor ~]# ceph fs new cephfs cephfs-metadata cephfs-data
Verify that one or more MDS daemons enter the active state, based on your configuration.
ceph fs status <name>
Specify the name of the Ceph File System, for example:
[root@monitor ~]# ceph fs status cephfs
cephfs - 0 clients
======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | node1 |  Reqs:  0 /s  |   10  |   12  |
+------+--------+-------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs-metadata | metadata |  4638 | 26.7G |
|   cephfs-data   |   data   |    0  | 26.7G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
|    node3    |
|    node2    |
+-------------+
Additional Resources
- The Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux
- The Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage 3 Installation Guide for Ubuntu
- The Pools chapter in the Storage Strategies guide for Red Hat Ceph Storage 3
3.3. Creating Ceph File System Client Users
If you use the cephx authentication, you must create a user for Ceph File System clients with the correct authentication capabilities on a Monitor node and copy the keyring to the node where the Ceph File System will be mounted.
Procedure
On a Monitor host, create a client user.
ceph auth get-or-create client.<id> <capabilities>
Specify the client ID and desired capabilities.
To restrict the client to only mount and work within a certain directory:
ceph auth get-or-create client.1 mon 'allow r' mds 'allow r, allow rw path=<directory>' osd 'allow rw'
For example, to restrict the client to only mount and work within the /home/cephfs/ directory:
[root@monitor ~]# ceph auth get-or-create client.1 mon 'allow r' mds 'allow r, allow rw path=/home/cephfs' osd 'allow rw'
[client.1]
    key = AQACNoZXhrzqIRAABPKHTach4x03JeNadeQ9Uw==
To restrict the client to only write to and read from a particular pool in the cluster:
ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=<pool>'
For example, to restrict the client to only write to and read from the data pool:
[root@monitor ~]# ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=data'
To prevent the client from modifying the data pool that is used for files and directories:
[root@monitor ~]# ceph auth get-or-create client.1 mon 'allow r' mds 'allow rwp' osd 'allow rw'
Verify the created key:
ceph auth get client.<id>
For example:
[root@monitor ~]# ceph auth get client.1
Copy the client keyring from the Monitor host to the /etc/ceph/ directory on the client host.
scp root@<monitor>:/etc/ceph/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring
Replace <monitor> with the Monitor host name or IP, for example:
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring
Set the appropriate permissions for the keyring file.
chmod 644 <keyring>
Specify the path to the keyring, for example:
[root@client ~]# chmod 644 /etc/ceph/ceph.client.1.keyring
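To double-check the result, the mode can be read back with stat. The sketch below uses a temporary file as a stand-in for the keyring path, so it is safe to run anywhere:

```shell
# Sketch: verify that a keyring file ends up with mode 644. A temporary
# file stands in for /etc/ceph/ceph.client.1.keyring.
keyring="$(mktemp)"
chmod 644 "$keyring"
mode="$(stat -c '%a' "$keyring")"
[ "$mode" = "644" ] && echo "keyring permissions OK"
rm -f "$keyring"
```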
Additional Resources
- The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3
3.4. Mounting the Ceph File System as a kernel client
You can mount the Ceph File System as a kernel client.
Clients on Linux distributions other than Red Hat Enterprise Linux (RHEL) are permitted but not supported. If issues are found in the cluster (for example, in the MDS) when using these clients, Red Hat will address them, but if the cause is found to be on the client side, the issue must be addressed by the kernel vendor.
3.4.1. Prerequisites
On the client node, enable the Red Hat Ceph Storage 3 Tools repository:
On Red Hat Enterprise Linux, use:
[root@client ~]# subscription-manager repos enable rhel-7-server-rhceph-3-tools-rpms
On Ubuntu, use:
[user@client ~]$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
[user@client ~]$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
[user@client ~]$ sudo apt-get update
- If you use the cephx authentication, create and copy the client keyring to the client node. See Section 3.3, “Creating Ceph File System Client Users” for details.
- Copy the Ceph configuration file from a Monitor node to the client node.
scp root@<monitor>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace <monitor> with the Monitor host name or IP, for example:
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file.
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
3.4.2. Manually Mounting the Ceph File System as a kernel Client
To manually mount the Ceph File System as a kernel client, use the mount utility.
Procedure
Create a mount directory.
mkdir -p <mount-point>
For example:
[root@client]# mkdir -p /mnt/cephfs
Mount the Ceph File System. To specify multiple Monitor addresses, either separate them with commas in the mount command, or configure a DNS server so that a single host name resolves to multiple IP addresses, and pass that host name to the mount command. If you use the cephx authentication, specify the user and the path to the secret file.
mount -t ceph <monitor1-host-name>:6789,<monitor2-host-name>:6789,<monitor3-host-name>:6789:/ <mount-point> -o name=<user-name>,secretfile=<path>
For example:
[user@client ~]$ mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=user,secretfile=/etc/ceph/user.secret
Verify that the file system is successfully mounted:
stat -f <mount-point>
For example:
[user@client ~]$ stat -f /mnt/cephfs
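On GNU coreutils (Red Hat Enterprise Linux and Ubuntu), stat -f can also report the file system type directly; on a kernel CephFS mount, the %T format prints ceph. The command below runs against / only to show the form:

```shell
# Print the file system type of a mount point. On a CephFS kernel mount
# (for example /mnt/cephfs) this prints "type=ceph"; against / it prints
# the local root file system type instead.
stat -f -c 'type=%T' /
```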
Additional Resources
- The mount(8) manual page
- The DNS Servers chapter in the Networking Guide for Red Hat Enterprise Linux 7
- The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3
3.4.3. Automatically Mounting the Ceph File System as a kernel Client
To automatically mount a Ceph File System at startup, edit the /etc/fstab file.
Prerequisites
- Consider mounting the file system manually first. See Section 3.4.2, “Manually Mounting the Ceph File System as a kernel Client” for details.
- If you want to use the secretfile= mounting option, install the ceph-common package.
Procedure
On the client host, create a new directory for mounting the Ceph File System.
mkdir -p <mount-point>
For example:
[root@client ~]# mkdir -p /mnt/cephfs
Edit the /etc/fstab file as follows:
#DEVICE                PATH            TYPE   OPTIONS
<host-name>:<port>:/,  <mount-point>   ceph   name=<user-name>,
<host-name>:<port>:/,                         secret=<key>|
<host-name>:<port>:/                          secretfile=<file>,
                                              [<mount-options>]
Specify the Monitor host names, the ports they use, and the mount point. If you use the cephx authentication, specify the user name and the path to its secret or secret file. Note that the secretfile option requires the ceph-common package to be installed. In addition, specify the required mount options. Consider using the _netdev option, which ensures that the file system is mounted after the networking subsystem, to prevent networking issues. For example:
#DEVICE        PATH           TYPE   OPTIONS
mon1:6789:/,   /mnt/cephfs    ceph   name=admin,
mon2:6789:/,                         secretfile=/home/secret.key,
mon3:6789:/                          _netdev,noatime  0 0
The file system will be mounted on the next boot.
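The fstab entry can also be assembled from variables, which makes typos easier to avoid. This sketch writes to a temporary file instead of /etc/fstab so it is safe to try; the host names, user, and paths mirror the example values, not a real cluster:

```shell
# Build a CephFS /etc/fstab entry from variables. On a real client,
# append the entry to /etc/fstab and test it with "mount <mount-point>"
# (which reads fstab) before rebooting.
mons="mon1:6789,mon2:6789,mon3:6789"
mount_point="/mnt/cephfs"
opts="name=admin,secretfile=/home/secret.key,_netdev,noatime"

entry="${mons}:/ ${mount_point} ceph ${opts} 0 0"
fstab="$(mktemp)"
printf '%s\n' "$entry" >> "$fstab"
cat "$fstab"
```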
3.5. Mounting the Ceph File System as a FUSE Client
You can mount the Ceph File System as a File System in User Space (FUSE) client:
3.5.1. Prerequisites
On the client node, enable the Red Hat Ceph Storage 3 Tools repository:
On Red Hat Enterprise Linux, use:
[root@client ~]# subscription-manager repos enable rhel-7-server-rhceph-3-tools-rpms
On Ubuntu, use:
[user@client ~]$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
[user@client ~]$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
[user@client ~]$ sudo apt-get update
- If you use the cephx authentication, create and copy the client keyring to the client node. See Section 3.3, “Creating Ceph File System Client Users” for details.
- Copy the Ceph configuration file from a Monitor node to the client node.
scp root@<monitor>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace <monitor> with the Monitor host name or IP, for example:
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file.
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
3.5.2. Manually Mounting the Ceph File System as a FUSE Client
To mount a Ceph File System as a File System in User Space (FUSE) client, use the ceph-fuse utility.
Prerequisites
On the node where the Ceph File System will be mounted, install the ceph-fuse package.
On Red Hat Enterprise Linux, use:
[root@client ~]# yum install ceph-fuse
On Ubuntu, use:
[user@client ~]$ sudo apt-get install ceph-fuse
Procedure
Create a directory to serve as a mount point. Note that if you used the path option with MDS capabilities, the mount point must be within what is specified by path.
mkdir <mount-point>
For example:
[root@client ~]# mkdir /mnt/mycephfs
Use the ceph-fuse utility to mount the Ceph File System.
ceph-fuse -n client.<client-name> <mount-point>
For example:
[root@client ~]# ceph-fuse -n client.1 /mnt/mycephfs
If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client.<client-name/id>.keyring, use the --keyring option to specify the path to the user keyring, for example:
[root@client ~]# ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs
If you restricted the client to only mount and work within a certain directory, use the -r option to instruct the client to treat that path as its root:
ceph-fuse -n client.<client-name/id> <mount-point> -r <path>
For example, to instruct the client with ID 1 to treat the /home/cephfs/ directory as its root:
[root@client ~]# ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs
Verify that the file system is successfully mounted:
stat -f <mount-point>
For example:
[user@client ~]$ stat -f /mnt/cephfs
Additional Resources
- The ceph-fuse(8) manual page
- The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3
3.5.3. Automatically Mounting the Ceph File System as a FUSE Client
To automatically mount a Ceph File System at startup, edit the /etc/fstab file.
Prerequisites
- Consider mounting the file system manually first. See Section 3.5.2, “Manually Mounting the Ceph File System as a FUSE Client” for details.
Procedure
On the client host, create a new directory for mounting the Ceph File System.
mkdir -p <mount-point>
For example:
[root@client ~]# mkdir -p /mnt/cephfs
Edit the /etc/fstab file as follows:
#DEVICE   PATH            TYPE        OPTIONS
none      <mount-point>   fuse.ceph   ceph.id=<user-id>
                                      [,ceph.conf=<path>],
                                      _netdev,defaults  0 0
Specify the user ID, for example admin, not client.admin, and the mount point. Use the ceph.conf option if you store the Ceph configuration file somewhere other than in the default location. In addition, specify the required mount options. Consider using the _netdev option, which ensures that the file system is mounted after the networking subsystem, to prevent networking issues. For example:
#DEVICE   PATH        TYPE        OPTIONS
none      /mnt/ceph   fuse.ceph   ceph.id=admin,
                                  ceph.conf=/etc/ceph/cluster.conf,
                                  _netdev,defaults  0 0
The file system will be mounted on the next boot.
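As with the kernel client, the fuse.ceph entry can be assembled from variables and inspected before it is appended to /etc/fstab. The ID, configuration path, and mount point below are the example values, not real ones:

```shell
# Build a ceph-fuse /etc/fstab entry from variables and write it to a
# temporary file for inspection (append to /etc/fstab on a real client).
user_id="admin"                  # ceph.id, without the "client." prefix
conf="/etc/ceph/cluster.conf"    # example non-default config location
mount_point="/mnt/ceph"

entry="none ${mount_point} fuse.ceph ceph.id=${user_id},ceph.conf=${conf},_netdev,defaults 0 0"
fstab="$(mktemp)"
printf '%s\n' "$entry" >> "$fstab"
cat "$fstab"
```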
