Chapter 4. Ceph File System administration

As a storage administrator, you can perform the common Ceph File System (CephFS) administrative tasks described in the sections that follow.

4.1. Prerequisites

  • A running and healthy Red Hat Ceph Storage cluster.
  • Installation and configuration of the Ceph Metadata Server daemons (ceph-mds).
  • Create and mount the Ceph File System.

4.2. Unmounting Ceph File Systems mounted as kernel clients

This section describes how to unmount a Ceph File System that is mounted as a kernel client.

Prerequisites

  • Root-level access to the node doing the mounting.

Procedure

  1. To unmount a Ceph File System mounted as a kernel client:

    Syntax

    umount MOUNT_POINT

    Example

    [root@client ~]# umount /mnt/cephfs
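
    If the unmount fails because the mount point is busy, you can identify what is still using it before retrying. A minimal sketch, assuming the /mnt/cephfs mount point from the example above and that the findmnt and fuser utilities are installed on the client:

    Example

    [root@client ~]# findmnt --target /mnt/cephfs    # confirm the mount point and its file system type
    [root@client ~]# fuser -vm /mnt/cephfs           # list processes that keep the mount point busy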

Additional Resources

  • The umount(8) manual page

4.3. Unmounting Ceph File Systems mounted as FUSE clients

This section describes how to unmount a Ceph File System that is mounted as a File System in User Space (FUSE) client.

Prerequisites

  • Root-level access to the FUSE client node.

Procedure

  1. To unmount a Ceph File System mounted in FUSE:

    Syntax

    fusermount -u MOUNT_POINT

    Example

    [root@client ~]# fusermount -u /mnt/cephfs
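
    If you are not sure which directories are FUSE mounts, you can list them before unmounting. A minimal sketch, assuming the ceph-fuse mount from the example above; as the root user, the umount command shown in the previous section also works for FUSE-mounted file systems:

    Example

    [root@client ~]# mount | grep ceph-fuse    # list active ceph-fuse mounts and their mount points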

Additional Resources

  • The ceph-fuse(8) manual page

4.4. Mapping directory trees to Metadata Server daemon ranks

You can map a directory and its subdirectories to a particular active Metadata Server (MDS) rank so that their metadata is managed only by the MDS daemon holding that rank. This approach enables you to evenly spread application load, or to limit the impact of users' metadata requests on the entire storage cluster.

Important

An internal balancer already dynamically spreads the application load. Therefore, only map directory trees to ranks for certain carefully chosen applications.

In addition, when a directory is mapped to a rank, the balancer cannot split it. Consequently, a large number of operations within the mapped directory can overload the rank and the MDS daemon that manages it.

Prerequisites

  • At least two active MDS daemons.
  • User access to the CephFS client node.
  • Verify that the attr package is installed on the CephFS client node with a mounted Ceph File System.

Procedure

  1. Add the p flag to the Ceph user’s capabilities:

    Syntax

    ceph fs authorize FILE_SYSTEM_NAME client.CLIENT_NAME /DIRECTORY CAPABILITY [/DIRECTORY CAPABILITY] ...

    Example

    [user@client ~]$ ceph fs authorize cephfs_a client.1 /temp rwp
    
    client.1
      key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
      caps: [mds] allow r, allow rwp path=/temp
      caps: [mon] allow r
      caps: [osd] allow rw tag cephfs data=cephfs_a

  2. Set the ceph.dir.pin extended attribute on a directory:

    Syntax

    setfattr -n ceph.dir.pin -v RANK DIRECTORY

    Example

    [user@client ~]$ setfattr -n ceph.dir.pin -v 2 /temp

    This example assigns the /temp directory and all of its subdirectories to rank 2.
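
    You can read the attribute back to confirm the mapping. A brief check, using the /temp directory from the example above; the command should report the rank you assigned:

    Example

    [user@client ~]$ getfattr -n ceph.dir.pin /temp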

4.5. Disassociating directory trees from Metadata Server daemon ranks

Disassociate a directory from a particular active Metadata Server (MDS) rank.

Prerequisites

  • User access to the Ceph File System (CephFS) client node.
  • Ensure that the attr package is installed on the client node with a mounted CephFS.

Procedure

  1. Set the ceph.dir.pin extended attribute to -1 on a directory:

    Syntax

    setfattr -n ceph.dir.pin -v -1 DIRECTORY

    Example

    [user@client ~]$ setfattr -n ceph.dir.pin -v -1 /home/ceph-user

    Note

    Any separately mapped subdirectories of /home/ceph-user/ are not affected.
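
    To confirm that the directory is no longer pinned, you can read the attribute back. A brief check, using the /home/ceph-user directory from the example above; a value of -1 means the directory again follows the default balancing behavior:

    Example

    [user@client ~]$ getfattr -n ceph.dir.pin /home/ceph-user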

4.6. Adding data pools

The Ceph File System (CephFS) supports adding more than one pool to be used for storing data. This can be useful for:

  • Storing log data on reduced redundancy pools
  • Storing user home directories on an SSD or NVMe pool
  • Basic data segregation

Before using another data pool in the Ceph File System, you must add it as described in this section.

By default, for storing file data, CephFS uses the initial data pool that was specified during its creation. To use a secondary data pool, you must also configure a part of the file system hierarchy to store file data in that pool or optionally within a namespace of that pool, using file and directory layouts.

Prerequisites

  • Root-level access to the Ceph Monitor node.

Procedure

  1. Create a new data pool:

    Syntax

    ceph osd pool create POOL_NAME PG_NUMBER

    Replace:

    • POOL_NAME with the name of the pool.
    • PG_NUMBER with the number of placement groups (PGs).

    Example

    [root@mon ~]# ceph osd pool create cephfs_data_ssd 64
    pool 'cephfs_data_ssd' created

  2. Add the newly created pool under the control of the Metadata Servers:

    Syntax

    ceph fs add_data_pool FS_NAME POOL_NAME

    Replace:

    • FS_NAME with the name of the file system.
    • POOL_NAME with the name of the pool.

    Example

    [root@mon ~]# ceph fs add_data_pool cephfs cephfs_data_ssd
    added data pool 6 to fsmap

  3. Verify that the pool was successfully added:

    Example

    [root@mon ~]# ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]

  4. If you use cephx authentication, make sure that clients can access the new pool.
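
    Clients that were authorized with the ceph fs authorize command receive OSD capabilities of the form allow rw tag cephfs data=FS_NAME, which cover every data pool that carries the file system's cephfs application tag; pools added with ceph fs add_data_pool are normally tagged automatically. A quick way to review a client's capabilities, assuming the client.1 user from the earlier example:

    Example

    [root@mon ~]# ceph auth get client.1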

4.7. Working with Ceph File System quotas

As a storage administrator, you can view, set, and remove quotas on any directory in the file system. You can place quota restrictions on the number of bytes or the number of files within the directory.

4.7.1. Prerequisites

  • Make sure that the attr package is installed.

4.7.2. Ceph File System quotas

The Ceph File System (CephFS) quotas allow you to restrict the number of bytes or the number of files stored in the directory structure.

Limitations

  • CephFS quotas rely on the cooperation of the client mounting the file system to stop writing data when it reaches the configured limit. However, quotas alone cannot prevent an adversarial, untrusted client from filling the file system.
  • Once processes that write data to the file system reach the configured limit, a short period of time elapses between when the amount of data reaches the quota limit, and when the processes stop writing data. The time period generally measures in the tenths of seconds. However, processes continue to write data during that time. The amount of additional data that the processes write depends on the amount of time elapsed before they stop.
  • Previously, quotas were only supported with the userspace FUSE client. With Linux kernel version 4.17 or newer, the CephFS kernel client supports quotas against Ceph mimic or newer clusters. Those version requirements are met by Red Hat Enterprise Linux 8 and Red Hat Ceph Storage 4, respectively. The userspace FUSE client can be used on older and newer OS and cluster versions. The FUSE client is provided by the ceph-fuse package.
  • When using path-based access restrictions, be sure to configure the quota on the directory to which the client is restricted, or to a directory nested beneath it. If the client has restricted access to a specific path based on the MDS capability, and the quota is configured on an ancestor directory that the client cannot access, the client will not enforce the quota. For example, if the client cannot access the /home/ directory and the quota is configured on /home/, the client cannot enforce that quota on the directory /home/user/.
  • Snapshot file data that has been deleted or changed does not count towards the quota.

4.7.3. Viewing quotas

Use the getfattr command and the ceph.quota extended attributes to view the quota settings for a directory.

Note

If the attributes appear on a directory inode, then that directory has a configured quota. If the attributes do not appear on the inode, then the directory does not have a quota set, although its parent directory might have a quota configured. If the value of the extended attribute is 0, the quota is not set.

Prerequisites

  • Make sure that the attr package is installed.

Procedure

  1. To view CephFS quotas:

    1. Using a byte-limit quota:

      Syntax

      getfattr -n ceph.quota.max_bytes DIRECTORY

      Example

      [root@fs ~]# getfattr -n ceph.quota.max_bytes /cephfs/

    2. Using a file-limit quota:

      Syntax

      getfattr -n ceph.quota.max_files DIRECTORY

      Example

      [root@fs ~]# getfattr -n ceph.quota.max_files /cephfs/

Additional Resources

  • See the getfattr(1) manual page for more information.

4.7.4. Setting quotas

This section describes how to use the setfattr command and the ceph.quota extended attributes to set the quota for a directory.

Prerequisites

  • Make sure that the attr package is installed.

Procedure

  1. To set CephFS quotas:

    1. Using a byte-limit quota:

      Syntax

      setfattr -n ceph.quota.max_bytes -v BYTE_LIMIT DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_bytes -v 100000000 /cephfs/

      In this example, 100000000 bytes equals 100 MB.

    2. Using a file-limit quota:

      Syntax

      setfattr -n ceph.quota.max_files -v FILE_LIMIT DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_files -v 10000 /cephfs/

      In this example, 10000 equals 10,000 files.
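
    You can set both a byte limit and a file limit on the same directory; each limit is enforced independently. A short sketch, assuming a hypothetical /cephfs/projects directory on the mounted file system:

    Example

    [root@fs ~]# setfattr -n ceph.quota.max_bytes -v 53687091200 /cephfs/projects    # 50 GiB byte limit
    [root@fs ~]# setfattr -n ceph.quota.max_files -v 100000 /cephfs/projects         # 100,000 file limit
    [root@fs ~]# getfattr -n ceph.quota.max_bytes /cephfs/projects                   # confirm the byte quota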

Additional Resources

  • See the setfattr(1) manual page for more information.

4.7.5. Removing quotas

This section describes how to use the setfattr command and the ceph.quota extended attributes to remove a quota from a directory.

Prerequisites

  • Make sure that the attr package is installed.

Procedure

  1. To remove CephFS quotas:

    1. Using a byte-limit quota:

      Syntax

      setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_bytes -v 0 /cephfs/

    2. Using a file-limit quota:

      Syntax

      setfattr -n ceph.quota.max_files -v 0 DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_files -v 0 /cephfs/

Additional Resources

  • See the setfattr(1) manual page for more information.

4.7.6. Additional Resources

  • See the getfattr(1) manual page for more information.
  • See the setfattr(1) manual page for more information.

4.8. Working with File and Directory Layouts

As a storage administrator, you can control how file or directory data is mapped to objects.

This section describes how to:

4.8.1. Prerequisites

  • The installation of the attr package.

4.8.2. Overview of file and directory layouts

This section explains what file and directory layouts are in the context of the Ceph File System.

The layout of a file or directory controls how its content is mapped to Ceph RADOS objects. Directory layouts serve primarily to set an inherited layout for new files created in that directory.

To view and set a file or directory layout, use virtual extended file attributes (xattrs). The name of the layout attribute depends on whether the object is a regular file or a directory:

  • The layout attribute for regular files is called ceph.file.layout.
  • The layout attribute for directories is called ceph.dir.layout.

The File and Directory Layout Fields table lists available layout fields that you can set on files and directories.

Layouts Inheritance

Files inherit the layout of their parent directory when you create them. However, subsequent changes to the parent directory layout do not affect existing children. If a directory does not have a layout set, files inherit the layout from the closest ancestor directory with a layout in the directory hierarchy.
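
The following sketch illustrates the inheritance behavior on a mounted file system. The /mnt/cephfs/archive directory and the cephfs_data_ssd pool are assumptions made for the example; files created after the directory layout is set inherit it, while files that already existed keep their previous layout:

Example

[root@client ~]# setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/archive    # set a directory layout
[root@client ~]# touch /mnt/cephfs/archive/new_file                                         # a new file inherits the layout
[root@client ~]# getfattr -n ceph.file.layout.pool /mnt/cephfs/archive/new_file             # reports cephfs_data_ssd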

4.8.3. Setting file and directory layout fields

Use the setfattr command to set layout fields on a file or directory.

Important

When you modify the layout fields of a file, the file must be empty, otherwise an error occurs.

Prerequisites

  • Root-level access to the node.

Procedure

  1. To modify layout fields on a file or directory:

    Syntax

    setfattr -n ceph.TYPE.layout.FIELD -v VALUE PATH

    Replace:

    • TYPE with file or dir.
    • FIELD with the name of the field.
    • VALUE with the new value of the field.
    • PATH with the path to the file or directory.

    Example

    [root@fs ~]# setfattr -n ceph.file.layout.stripe_unit -v 1048576 test
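
    You can set several layout fields in sequence on the same path. A brief sketch, assuming a hypothetical /mnt/cephfs/projects directory and an existing data pool named cephfs_data_ssd:

    Example

    [root@fs ~]# setfattr -n ceph.dir.layout.stripe_count -v 2 /mnt/cephfs/projects
    [root@fs ~]# setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/projects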

4.8.4. Viewing file and directory layout fields

Use the getfattr command to view layout fields on a file or directory.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all nodes in the storage cluster.

Procedure

  1. To view layout fields on a file or directory as a single string:

    Syntax

    getfattr -n ceph.TYPE.layout PATH

    Replace:
    • TYPE with file or dir.
    • PATH with the path to the file or directory.

    Example

    [root@mon ~]# getfattr -n ceph.dir.layout /home/test
    ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"

Note

A directory does not have an explicit layout until you set it. Consequently, attempting to view the layout without first setting it fails because there are no changes to display.

4.8.5. Viewing individual layout fields

Use the getfattr command to view individual layout fields for a file or directory.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all nodes in the storage cluster.

Procedure

  1. To view individual layout fields on a file or directory:

    Syntax

    getfattr -n ceph.TYPE.layout.FIELD PATH

    Replace
    • TYPE with file or dir.
    • FIELD with the name of the field.
    • PATH with the path to the file or directory.

    Example

    [root@mon ~]# getfattr -n ceph.file.layout.pool test
    ceph.file.layout.pool="cephfs_data"

    Note

    Pools in the pool field are indicated by name. However, newly created pools can be indicated by ID.

4.8.6. Removing directory layouts

Use the setfattr command to remove layouts from a directory.

Note

When you set a file layout, you cannot change or remove it.

Prerequisites

  • A directory with a layout.

Procedure

  1. To remove a layout from a directory:

    Syntax

    setfattr -x ceph.dir.layout DIRECTORY_PATH

    Example

    [user@client ~]$ setfattr -x ceph.dir.layout /home/cephfs

  2. To remove the pool_namespace field:

    Syntax

    setfattr -x ceph.dir.layout.pool_namespace DIRECTORY_PATH

    Example

    [user@client ~]$ setfattr -x ceph.dir.layout.pool_namespace /home/cephfs

    Note

    The pool_namespace field is the only field you can remove separately.

Additional Resources

  • The setfattr(1) manual page

4.9. Taking down a Ceph File System cluster

You can take down a Ceph File System (CephFS) cluster by setting the down flag to true. Doing so gracefully shuts down the Metadata Server (MDS) daemons by flushing journals to the metadata pool, and stops all client I/O.

You can also take the CephFS cluster down quickly, for example to test the deletion of a file system or to practice a disaster recovery scenario, and bring the Metadata Server (MDS) daemons down with it. Doing this sets the joinable flag to false to prevent standby MDS daemons from activating the file system.

Prerequisites

  • User access to the Ceph Monitor node.

Procedure

  1. To mark the CephFS cluster down:

    Syntax

    ceph fs set FS_NAME down true

    Example

    [root@mon]# ceph fs set cephfs down true

    1. To bring the CephFS cluster back up:

      Syntax

      ceph fs set FS_NAME down false

      Example

      [root@mon]# ceph fs set cephfs down false

    or

  1. To quickly take down a CephFS cluster:

    Syntax

    ceph fs fail FS_NAME

    Example

    [root@mon]# ceph fs fail cephfs
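
    After a quick take down, the file system remains marked as not joinable, so standby MDS daemons cannot activate it. A sketch for bringing it back, assuming the cephfs file system from the example above:

    Syntax

    ceph fs set FS_NAME joinable true

    Example

    [root@mon]# ceph fs set cephfs joinable true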

4.10. Removing a Ceph File System

You can remove a Ceph File System (CephFS). Before doing so, consider backing up all the data and verifying that all clients have unmounted the file system locally.

Warning

This operation is destructive and will make the data stored on the Ceph File System permanently inaccessible.

Prerequisites

  • Back up your data.
  • Root-level access to a Ceph Monitor node.

Procedure

  1. Display the Ceph File System status to determine the MDS ranks. You need the ranks to fail the MDS daemons in a later step.

    Syntax

    ceph fs status

    Example

    [root@mon ~]# ceph fs status
    cephfs - 0 clients
    ======
    +------+--------+----------------+---------------+-------+-------+
    | Rank | State  |      MDS       |    Activity   |  dns  |  inos |
    +------+--------+----------------+---------------+-------+-------+
    |  0   | active | cluster1-node6 | Reqs:    0 /s |   10  |   13  |
    +------+--------+----------------+---------------+-------+-------+
    +-----------------+----------+-------+-------+
    |       Pool      |   type   |  used | avail |
    +-----------------+----------+-------+-------+
    | cephfs_metadata | metadata | 2688k | 15.0G |
    |   cephfs_data   |   data   |    0  | 15.0G |
    +-----------------+----------+-------+-------+
    +----------------+
    |  Standby MDS   |
    +----------------+
    | cluster1-node5 |
    +----------------+

    In the example above, the rank is 0.

  2. Mark the Ceph File System as down:

    Syntax

    ceph fs set FS_NAME down true

    Replace FS_NAME with the name of the Ceph File System you want to remove.

    Example

    [root@mon]# ceph fs set cephfs down true
    marked down

  3. Display the status of the Ceph File System to determine it has stopped:

    Syntax

    ceph fs status

    Example

    [root@mon ~]# ceph fs status
    cephfs - 0 clients
    ======
    +------+----------+----------------+----------+-------+-------+
    | Rank |  State   |      MDS       | Activity |  dns  |  inos |
    +------+----------+----------------+----------+-------+-------+
    |  0   | stopping | cluster1-node6 |          |   10  |   12  |
    +------+----------+----------------+----------+-------+-------+
    +-----------------+----------+-------+-------+
    |       Pool      |   type   |  used | avail |
    +-----------------+----------+-------+-------+
    | cephfs_metadata | metadata | 2688k | 15.0G |
    |   cephfs_data   |   data   |    0  | 15.0G |
    +-----------------+----------+-------+-------+
    +----------------+
    |  Standby MDS   |
    +----------------+
    | cluster1-node5 |
    +----------------+

    After some time, the MDS is no longer listed:

    Example

    [root@mon ~]# ceph fs status
    cephfs - 0 clients
    ======
    +------+-------+-----+----------+-----+------+
    | Rank | State | MDS | Activity | dns | inos |
    +------+-------+-----+----------+-----+------+
    +------+-------+-----+----------+-----+------+
    +-----------------+----------+-------+-------+
    |       Pool      |   type   |  used | avail |
    +-----------------+----------+-------+-------+
    | cephfs_metadata | metadata | 2688k | 15.0G |
    |   cephfs_data   |   data   |    0  | 15.0G |
    +-----------------+----------+-------+-------+
    +----------------+
    |  Standby MDS   |
    +----------------+
    | cluster1-node5 |
    +----------------+

  4. Fail all MDS ranks shown in the status of step one:

    Syntax

    ceph mds fail RANK

    Replace RANK with the rank of the MDS daemons to fail.

    Example

    [root@mon]# ceph mds fail 0

  5. Remove the Ceph File System:

    Syntax

    ceph fs rm FS_NAME --yes-i-really-mean-it

    Replace FS_NAME with the name of the Ceph File System you want to remove.

    Example

    [root@mon]# ceph fs rm cephfs --yes-i-really-mean-it

  6. Verify that the file system has been successfully removed:

    Syntax

    ceph fs ls

    Example

    [root@mon ~]# ceph fs ls
    No filesystems enabled

  7. Optional: Remove the data and metadata pools associated with the removed file system.

    1. Delete the CephFS metadata pool:

      Syntax

      ceph osd pool delete CEPH_METADATA_POOL CEPH_METADATA_POOL --yes-i-really-really-mean-it

      Replace CEPH_METADATA_POOL with the pool CephFS used for metadata storage. You must include it twice.

      Example

      [root@mon ~]# ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
      pool 'cephfs_metadata' removed

    2. Delete the CephFS data pool:

      Syntax

      ceph osd pool delete CEPH_DATA_POOL CEPH_DATA_POOL --yes-i-really-really-mean-it

      Replace CEPH_DATA_POOL with the pool CephFS used for data storage. You must include it twice.

      Example

      [root@mon ~]# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
      pool 'cephfs_data' removed

Additional Resources

  • See the Delete a pool section in the Red Hat Ceph Storage Storage Strategies Guide.

4.11. Setting a minimum client version

You can set a minimum version of Ceph that a third-party client must be running to connect to a Red Hat Ceph Storage Ceph File System (CephFS). Set the min_compat_client parameter to prevent older clients from mounting the file system. CephFS will also automatically evict currently connected clients that use an older version than the version set with min_compat_client.

The rationale for this setting is to prevent older clients which might include bugs or have incomplete feature compatibility from connecting to the cluster and disrupting other clients. For example, some older versions of CephFS clients might not release capabilities properly and cause other client requests to be handled slowly.

The values of min_compat_client are based on the upstream Ceph versions. Red Hat recommends that third-party clients use the same major upstream version as the Red Hat Ceph Storage cluster is based on. See the following table for the upstream versions and the corresponding Red Hat Ceph Storage versions.

Table 4.1. min_compat_client values

Value      Upstream Ceph version    Red Hat Ceph Storage version
luminous   12.2                     Red Hat Ceph Storage 3
mimic      13.2                     not applicable
nautilus   14.2                     Red Hat Ceph Storage 4

Important

If you use Red Hat Enterprise Linux 7, do not set min_compat_client to a version later than luminous. Red Hat Enterprise Linux 7 is considered a luminous client, and if you set a later version, CephFS does not allow it to access the mount point.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed

Procedure

  1. Set the minimum client version:

    Syntax

    ceph fs set FS_NAME min_compat_client RELEASE

    Replace FS_NAME with the name of the Ceph File System and RELEASE with the minimum client version. For example, to restrict clients to the nautilus upstream version at minimum on the cephfs Ceph File System:

    Example

    $ ceph fs set cephfs min_compat_client nautilus

    See Table 4.1, “min_compat_client values” for the full list of available values and how they correspond with Red Hat Ceph Storage versions.
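
    To verify the new setting, you can inspect the file system map, which records the minimum compatible client release. A brief check, assuming the cephfs file system from the example above; the exact output format can vary between releases:

    Example

    $ ceph fs get cephfs | grep min_compat_client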

4.12. Using the ceph mds fail command

Use the ceph mds fail command to:

  • Mark an MDS daemon as failed. If the daemon was active and a suitable standby daemon was available, and if the standby daemon was active after disabling the standby-replay configuration, using this command forces a failover to the standby daemon. Disabling the standby-replay configuration prevents new standby-replay daemons from being assigned.
  • Restart a running MDS daemon. If the daemon was active and a suitable standby daemon was available, the "failed" daemon becomes a standby daemon.

Prerequisites

  • Installation and configuration of the Ceph MDS daemons.

Procedure

  1. To fail a daemon:

    Syntax

    ceph mds fail MDS_NAME

    Where MDS_NAME is the name of the standby-replay MDS node.

    Example

    [root@mds ~]# ceph mds fail example01

    Note

    You can find the Ceph MDS name from the ceph fs status command.

4.13. Ceph File System client evictions

When a Ceph File System (CephFS) client is unresponsive or misbehaving, it might be necessary to forcibly terminate it, or evict it, from accessing the CephFS. Evicting a CephFS client prevents it from communicating further with Metadata Server (MDS) daemons and Ceph OSD daemons. If a CephFS client is buffering I/O to the CephFS at the time of eviction, any unflushed data is lost. The CephFS client eviction process applies to all client types: FUSE mounts, kernel mounts, NFS gateways, and any process using the libcephfs API library.

You can evict CephFS clients automatically, if they fail to communicate promptly with the MDS daemon, or manually.

Automatic Evictions

These scenarios cause an automatic CephFS client eviction:

  • If a CephFS client has not communicated with the active MDS daemon for over the default 300 seconds, or as set by the session_autoclose option.
  • If the mds_cap_revoke_eviction_timeout option is set, and a CephFS client has not responded to the cap revoke messages for over the set amount of seconds. The mds_cap_revoke_eviction_timeout option is disabled by default.
  • During MDS startup or failover, the MDS daemon goes through a reconnect phase, waiting for all CephFS clients to connect to the new MDS daemon. If any CephFS client fails to reconnect within the default time window of 45 seconds, or within the time set by the mds_reconnect_timeout option, it is evicted.
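
These timeouts are standard Ceph configuration options, so you can adjust them centrally from a Ceph Monitor node. A minimal sketch; the values shown are only illustrative:

Example

[root@mon ~]# ceph config set mds mds_cap_revoke_eviction_timeout 300    # evict clients that ignore capability revoke messages for 300 seconds
[root@mon ~]# ceph config set mds session_autoclose 600                  # extend the idle-session timeout from the 300-second default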

4.14. Blacklist Ceph File System clients

Ceph File System client blacklisting is enabled by default. When you send an eviction command to a single Metadata Server (MDS) daemon, it propagates the blacklist to the other MDS daemons. This prevents the CephFS client from accessing any data objects, so it is necessary to update the other CephFS clients and MDS daemons with the latest Ceph OSD map, which includes the blacklisted client entries.

An internal “osdmap epoch barrier” mechanism is used when updating the Ceph OSD map. The purpose of the barrier is to verify that the CephFS clients receiving the capabilities have a sufficiently recent Ceph OSD map before any capabilities are assigned that might allow access to the same RADOS objects, so that they do not race with cancelled operations, such as those resulting from ENOSPC conditions or from clients blacklisted by evictions.

If you are experiencing frequent CephFS client evictions due to slow nodes or an unreliable network, and you cannot fix the underlying issue, then you can ask the MDS to be less strict. It is possible to respond to slow CephFS clients by simply dropping their MDS sessions, while still permitting them to re-open sessions and continue talking to Ceph OSDs. Setting the mds_session_blacklist_on_timeout and mds_session_blacklist_on_evict options to false enables this mode.
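
A minimal sketch of enabling this more permissive mode from a Ceph Monitor node, using the two options named above; apply it only if you accept the trade-offs described in this section:

Example

[root@mon ~]# ceph config set mds mds_session_blacklist_on_timeout false
[root@mon ~]# ceph config set mds mds_session_blacklist_on_evict false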

Note

When blacklisting is disabled, evicting a CephFS client affects only the MDS daemon you send the command to. On a system with multiple active MDS daemons, you need to send an eviction command to each active daemon.

4.15. Manually evicting a Ceph File System client

You might want to manually evict a Ceph File System (CephFS) client if the client is misbehaving and you do not have access to the client node, or if a client has died and you do not want to wait for its session to time out.

Prerequisites

  • User access to the Ceph Monitor node.

Procedure

  1. Review the client list:

    Syntax

    ceph tell DAEMON_NAME client ls

    Example

    [root@mon]# ceph tell mds.0 client ls
    [
        {
            "id": 4305,
            "num_leases": 0,
            "num_caps": 3,
            "state": "open",
            "replay_requests": 0,
            "completed_requests": 0,
            "reconnecting": false,
            "inst": "client.4305 172.21.9.34:0/422650892",
            "client_metadata": {
                "ceph_sha1": "ae81e49d369875ac8b569ff3e3c456a31b8f3af5",
                "ceph_version": "ceph version 12.0.0-1934-gae81e49 (ae81e49d369875ac8b569ff3e3c456a31b8f3af5)",
                "entity_id": "0",
                "hostname": "senta04",
                "mount_point": "/tmp/tmpcMpF1b/mnt.0",
                "pid": "29377",
                "root": "/"
            }
        }
    ]

  2. Evict the specified CephFS client:

    Syntax

    ceph tell DAEMON_NAME client evict id=ID_NUMBER

    Example

    [root@mon]# ceph tell mds.0 client evict id=4305

4.16. Removing a Ceph File System client from the blacklist

In some situations, it can be useful to allow a previously blacklisted Ceph File System (CephFS) client to reconnect to the storage cluster.

Important

Removing a CephFS client from the blacklist puts data integrity at risk, and does not guarantee a fully healthy and functional CephFS client as a result. The best way to get a fully healthy CephFS client back after an eviction is to unmount the CephFS client and do a fresh mount. If other CephFS clients access files to which the blacklisted CephFS client was performing buffered I/O, data corruption can result.

Prerequisites

  • User access to the Ceph Monitor node.

Procedure

  1. Review the blacklist:

    Example

    [root@mon]# ceph osd blacklist ls
    listed 1 entries
    127.0.0.1:0/3710147553 2020-03-19 11:32:24.716146

  2. Remove the CephFS client from the blacklist:

    Syntax

    ceph osd blacklist rm CLIENT_NAME_OR_IP_ADDR

    Example

    [root@mon]# ceph osd blacklist rm 127.0.0.1:0/3710147553
    un-blacklisting 127.0.0.1:0/3710147553

  3. Optionally, to have FUSE-based CephFS clients automatically try to reconnect when they are removed from the blacklist, set the following option to true on the FUSE client:

    client_reconnect_stale = true
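
    Because this is a client-side option, one way to apply it persistently is to add it to the [client] section of the Ceph configuration file on the FUSE client node. A minimal sketch, assuming the default /etc/ceph/ceph.conf path:

    [client]
    client_reconnect_stale = true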

4.17. Additional Resources