Chapter 11. Ceph File System mirrors

As a storage administrator, you can replicate a Ceph File System (CephFS) to a remote Ceph File System on another Red Hat Ceph Storage cluster. A Ceph File System supports asynchronous replication of directory snapshots.

11.1. Prerequisites

  • The source and the target storage clusters must be running Red Hat Ceph Storage 5.0 or later.

11.2. Ceph File System mirroring

The Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote Ceph File System on another Red Hat Ceph Storage cluster. Snapshot synchronization copies snapshot data to a remote Ceph File System, and creates a new snapshot on the remote target with the same name. You can configure specific directories for snapshot synchronization.

CephFS mirroring is managed by the CephFS mirroring daemon (cephfs-mirror). Snapshot data is synchronized by doing a bulk copy to the remote CephFS. Snapshot pairs are synchronized in their creation order, as given by the snap-id.
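The creation-order scheduling described above can be sketched in Python. This is an illustration only, with made-up snapshot data, not output from a real daemon:

```python
# Snapshot pairs are picked for synchronization in creation order,
# which the snap-id encodes. The data below is illustrative only.
snapshots = [
    {"snap_id": 124, "name": "daily.2"},
    {"snap_id": 120, "name": "daily.1"},
    {"snap_id": 131, "name": "daily.3"},
]

# Sorting by snap-id yields the order in which snapshots are synchronized.
sync_order = sorted(snapshots, key=lambda s: s["snap_id"])
print([s["name"] for s in sync_order])  # oldest snapshot first
```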

Important

Hard linked files get synchronized as separate files.

Important

Red Hat supports running only one cephfs-mirror daemon per storage cluster.

Ceph Manager Module

The Ceph Manager mirroring module is disabled by default. It provides interfaces for managing directory snapshot mirroring, and is responsible for assigning directories to the cephfs-mirror daemon for synchronization. The Ceph Manager mirroring module also provides a family of commands to control mirroring of directory snapshots. The mirroring module does not manage the cephfs-mirror daemon. Stopping, starting, restarting, and enabling the cephfs-mirror daemon is controlled by systemctl, but managed by cephadm.

11.3. Configuring a snapshot mirror for a Ceph File System

You can configure a Ceph File System (CephFS) for mirroring to replicate snapshots to another CephFS on a remote Red Hat Ceph Storage cluster.

Note

The time taken to synchronize to the remote storage cluster depends on the file size and the total number of files in the mirroring path.

Prerequisites

  • The source and the target storage clusters must be healthy and running Red Hat Ceph Storage 5.0 or later.
  • Root-level access to a Ceph Monitor node in the source and the target storage clusters.
  • At least one deployment of a Ceph File System.

Procedure

  1. On the source storage cluster, deploy the CephFS mirroring daemon:

    Syntax

    ceph orch apply cephfs-mirror ["NODE_NAME"]

    Example

    [root@mon ~]# ceph orch apply cephfs-mirror "node1.example.com"
    Scheduled cephfs-mirror update...

    This command creates a Ceph user called cephfs-mirror, and deploys the cephfs-mirror daemon on the given node.

  2. On the target storage cluster, create a user for each CephFS peer:

    Syntax

    ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps

    Example

    [root@mon ~]# ceph fs authorize cephfs client.mirror_remote / rwps
    [client.mirror_remote]
            key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==

  3. On the source storage cluster, enable the CephFS mirroring module:

    Example

    [root@mon ~]# ceph mgr module enable mirroring

  4. On the source storage cluster, enable mirroring on a Ceph File System:

    Syntax

    ceph fs snapshot mirror enable FILE_SYSTEM_NAME

    Example

    [root@mon ~]# ceph fs snapshot mirror enable cephfs

    1. Optional. To disable snapshot mirroring, use the following command:

      Syntax

      ceph fs snapshot mirror disable FILE_SYSTEM_NAME

      Example

      [root@mon ~]# ceph fs snapshot mirror disable cephfs

      Warning

      Disabling snapshot mirroring on a file system removes the configured peers. You have to import the peers again by bootstrapping them.

  5. Prepare the target peer storage cluster.

    1. On a target node, enable the mirroring Ceph Manager module:

      Example

      [root@mon ~]# ceph mgr module enable mirroring

    2. On the same target node, create the peer bootstrap:

      Syntax

      ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME

      The SITE_NAME is a user-defined string to identify the target storage cluster.

      Example

      [root@mon ~]# ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site
      {"token": "eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ=="}

      Copy the token string between the double quotes for use in the next step.
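
      The bootstrap token is a base64-encoded JSON object describing the peer. It is normally passed to the import command unchanged; decoding the example token above, for instance with Python, only illustrates what the token carries:

```python
import base64
import json

# The example bootstrap token from the peer_bootstrap create output above.
token = "eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ=="

# The token carries the peer identity: cluster fsid, file system name,
# client user, site name, key, and monitor addresses.
peer = json.loads(base64.b64decode(token))
print(sorted(peer))
```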

  6. On the source storage cluster, import the bootstrap token from the target storage cluster:

    Syntax

    ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN

    Example

    [root@mon ~]# ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==

  7. On the source storage cluster, list the CephFS mirror peers:

    Syntax

    ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME

    Example

    [root@mon ~]# ceph fs snapshot mirror peer_list cephfs

    1. Optional. To remove a snapshot peer, use the following command:

      Syntax

      ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID

      Example

      [root@mon ~]# ceph fs snapshot mirror peer_remove cephfs a2dc7784-e7a1-4723-b103-03ee8d8768f8

      Note

      See Viewing the mirror status for a Ceph File System, linked in the Additional Resources section of this procedure, for how to find the peer UUID value.

  8. On the source storage cluster, configure a directory for snapshot mirroring:

    Syntax

    ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH

    Example

    [root@mon ~]# ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1

    Important

    Only absolute paths inside the Ceph File System are valid.

    Note

    The Ceph Manager mirroring module normalizes the path. For example, the path /d1/d2/../dN is equivalent to /d1/dN. Once a directory has been added for mirroring, its ancestor directories and subdirectories are prevented from being added for mirroring.

    1. Optional. To stop snapshot mirroring for a directory, use the following command:

      Syntax

      ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH

      Example

      [root@mon ~]# ceph fs snapshot mirror remove cephfs /home/user1
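
The path normalization mentioned in the note above is lexical. Python's os.path.normpath behaves similarly and can illustrate the idea; this is an illustration only, not the mirroring module's implementation:

```python
import os.path

# The mirroring module stores one canonical form per directory, so
# lexically equivalent paths refer to the same mirrored directory.
print(os.path.normpath("/d1/d2/../d3"))  # ".." component collapses
print(os.path.normpath("/d1/d2/"))       # trailing slash is dropped
```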

Additional Resources

11.4. Viewing the mirror status for a Ceph File System

The Ceph File System (CephFS) mirror daemon (cephfs-mirror) gets asynchronous notifications about changes in the CephFS mirroring status, along with peer updates. You can query the cephfs-mirror admin socket with commands to retrieve the mirror status and peer status.

Prerequisites

  • At least one deployment of a Ceph File System with mirroring enabled.
  • Root-level access to the node running the CephFS mirroring daemon.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Find the Ceph File System ID on the node running the CephFS mirroring daemon:

    Syntax

    ceph --admin-daemon PATH_TO_THE_ASOK_FILE help

    Example

    [ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok help
    {
        ...
        "fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e": "get peer mirror status",
        "fs mirror status cephfs@11": "get filesystem mirror status",
        ...
    }

    The Ceph File System ID in this example is cephfs@11.

  3. To view the mirror status:

    Syntax

    ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME@FILE_SYSTEM_ID

    Example

    [ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok fs mirror status cephfs@11
    {
      "rados_inst": "192.168.0.5:0/1476644347",
      "peers": {
          "1011435c-9e30-4db6-b720-5bf482006e0e": { 1
              "remote": {
                  "client_name": "client.mirror_remote",
                  "cluster_name": "remote-site",
                  "fs_name": "cephfs"
              }
          }
      },
      "snap_dirs": {
          "dir_count": 1
      }
    }

    1
    This is the unique peer UUID.
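
    The peer UUID used by commands such as peer_remove is the key of the peers map in this output. A minimal sketch of extracting it with Python from the example status above (callout marker removed):

```python
import json

# Mirror status JSON as returned by the admin socket command above.
status = json.loads("""
{
  "rados_inst": "192.168.0.5:0/1476644347",
  "peers": {
      "1011435c-9e30-4db6-b720-5bf482006e0e": {
          "remote": {
              "client_name": "client.mirror_remote",
              "cluster_name": "remote-site",
              "fs_name": "cephfs"
          }
      }
  },
  "snap_dirs": {
      "dir_count": 1
  }
}
""")

# Each key of the peers map is a peer UUID, usable with peer_remove
# or with the peer status admin socket command.
peer_uuids = list(status["peers"])
print(peer_uuids)
```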
  4. To view the peer status:

    Syntax

    ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror peer status FILE_SYSTEM_NAME@FILE_SYSTEM_ID PEER_UUID

    Example

    [ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e
    {
      "/home/user1": {
          "state": "idle", 1
          "last_synced_snap": {
              "id": 120,
              "name": "snap1",
              "sync_duration": 0.079997898999999997,
              "sync_time_stamp": "274900.558797s"
          },
          "snaps_synced": 2,
          "snaps_deleted": 0,
          "snaps_renamed": 0
      }
    }

    1
    The state can be one of these three values:

      • idle means the directory is currently not being synchronized.
      • syncing means the directory is currently being synchronized.
      • failed means the directory has hit the upper limit of consecutive failures.

The default number of consecutive failures is 10, and the default retry interval is 60 seconds.

The synchronization statistics snaps_synced, snaps_deleted, and snaps_renamed are reset when the cephfs-mirror daemon restarts.

Additional Resources