Chapter 3. Creating Ceph File Systems

3.1. Prerequisites

To use the Ceph File System, you must have:

  • a working Ceph Storage Cluster

    See the Installation Guide for Red Hat Enterprise Linux and Installation Guide for Ubuntu for details.
  • at least one Ceph Metadata Server

    See Installing and Configuring Ceph Metadata Server (MDS) for details.
  • at least two pools, one for data and one for metadata

When configuring these pools, consider:

  • Using a higher replication level for the metadata pool, as any data loss in this pool can render the whole file system inaccessible.
  • Using storage with lower latency such as Solid-state Drive (SSD) disks for the metadata pool, because this directly affects the observed latency of file system operations on clients.

See the Pools chapter in the Storage Strategies guide for details on pools.
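For example, the two pools can be created ahead of time with ceph osd pool create, and the replication level of the metadata pool raised with ceph osd pool set. The pool names, placement group count (64), and replica count (3) below are illustrative assumptions, not required values:

    $ ceph osd pool create cephfs-data 64
    $ ceph osd pool create cephfs-metadata 64
    $ ceph osd pool set cephfs-metadata size 3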

3.2. Creating Ceph File Systems

Before creating the Ceph File System, ensure that the ceph-common package is installed. If it is not, install it:

  • On Red Hat Enterprise Linux:

    # yum install ceph-common
  • On Ubuntu:

    $ sudo apt-get install ceph-common

To create a Ceph File System:

ceph fs new <file_system_name> <metadata_pool> <data_pool>

Specify the name of the new Ceph File System, then the metadata pool, then the data pool, for example:

$ ceph fs new cephfs cephfs-metadata cephfs-data

Once the file system is created, the Ceph Metadata Server (MDS) enters the active state:

$ ceph mds stat
e5: 1/1/1 up {0=a=up:active}
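You can also confirm that the file system exists by listing the file systems in the cluster. The command prints the file system name together with its metadata and data pools:

    $ ceph fs ls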

After creating the Ceph File System, mount it. See Mounting Ceph File Systems for details.
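As a quick sketch, the file system can be mounted with the kernel client. The monitor host placeholder, the mount point /mnt/cephfs, and the secret file path below are assumptions for illustration; see the mounting documentation for the full procedure:

    # mkdir -p /mnt/cephfs
    # mount -t ceph <monitor_host>:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret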

Note

By default, only one Ceph File System can be created in a cluster. See Section 1.2, “Limitations” for details.