Chapter 5. Using Ceph Block Device Mirroring

As a technician, you can mirror Ceph Block Devices to protect the data storage in the block devices.

5.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph client’s command-line interface.

5.2. Ceph Block Device mirroring

Ceph Block Device mirroring is the asynchronous replication of Ceph block device images between two or more Ceph clusters.

Mirroring has these benefits:

  • Ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones, and flattening.
  • Serves primarily for recovery from a disaster.
  • Can run in either an active-passive or active-active configuration.
  • Uses mandatory exclusive locks and the journaling feature to record all modifications to an image in the order in which they occur, which ensures that a crash-consistent mirror of the remote image is available locally.

Before an image can be mirrored to a peer cluster, you must enable journaling.

Important

The CRUSH hierarchies supporting the local and remote pools that mirror block device images should have the same capacity and performance characteristics, and should have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MiB/s average write throughput to images in the primary cluster, the network connection to the secondary site must support N * X throughput, plus a safety factor of Y%, to mirror N images.
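As a worked example with hypothetical values: to mirror N = 10 images that each average X = 10 MiB/s of write throughput, with a safety factor of Y = 20%, the link to the secondary site needs at least 10 * 10 MiB/s * 1.2 = 120 MiB/s of sustained throughput.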

The rbd-mirror daemon

The rbd-mirror daemon is responsible for synchronizing images from one Ceph cluster to another. The rbd-mirror package provides the rbd-mirror daemon. Depending on the type of replication, rbd-mirror runs either on a single cluster or on all clusters that participate in mirroring:

One-way Replication
When data is mirrored from a primary cluster to a secondary cluster that serves as a backup, rbd-mirror runs only on the backup cluster. RBD mirroring can have multiple secondary sites in an active-passive configuration.
Two-way Replication
When data is mirrored from a primary cluster to a secondary cluster, and the secondary cluster can mirror back to the primary cluster, both clusters must have rbd-mirror running. Currently, two-way replication, also known as an active-active configuration, is supported only between two sites.
Important

In two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring.

Warning

Run only a single rbd-mirror daemon per Ceph Storage cluster.

Modes for mirroring

Mirroring is configured on a per-pool basis within peer clusters. Red Hat Ceph Storage supports two modes, depending on what images in a pool are mirrored:

Pool Mode
Mirror all images in a pool with the journaling feature enabled.
Image Mode
Mirror only a specific subset of images within a pool. You must enable mirroring for each image separately.

Image states

In an active-passive configuration, the mirrored images are:

  • Primary

    • These mirrored images can be modified.
  • Non-primary

    • These mirrored images cannot be modified.

Images are automatically promoted to primary when mirroring is first enabled on an image. Image promotion can happen implicitly or explicitly based on the mirroring mode. Image promotion happens implicitly when mirroring is enabled in pool mode. Image promotion happens explicitly when mirroring is enabled in image mode. It is also possible to demote primary images and promote non-primary images.
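For example, a minimal demote and promote sequence, assuming the data pool, the image1 image, and the local and remote cluster names used elsewhere in this chapter:

    [user@rbd-client ~]$ rbd mirror image demote data/image1 --cluster=local
    [user@rbd-client ~]$ rbd mirror image promote data/image1 --cluster=remote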

Asynchronous Red Hat Ceph Storage updates

When performing an asynchronous update to a storage cluster that uses Ceph Block Device mirroring, follow the installation instructions for the update. After the update completes successfully, restart the ceph-rbd-mirror instances.

Note

There is no required order for restarting the ceph-rbd-mirror instances. Red Hat recommends restarting the ceph-rbd-mirror instance pointing to the pool with primary images, followed by the ceph-rbd-mirror instance pointing to the mirrored pool.
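For example, a hypothetical restart of the daemon instances, assuming the client.local and client.remote users defined later in this chapter; choose the order according to the recommendation above:

    [root@monitor-local ~]# systemctl restart ceph-rbd-mirror@local
    [root@monitor-remote ~]# systemctl restart ceph-rbd-mirror@remote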

Additional Resources

5.3. Configuring Mirroring Between Storage Clusters With the Same Name

By default, the storage cluster name is ceph. Creating Ceph Storage clusters that use the same cluster name can cause a challenge for Ceph Block Device mirroring, because some Ceph functions expect a storage cluster named ceph. When both clusters have the same name, you currently must perform additional steps to configure rbd-mirror:

Prerequisites

  • Two running Red Hat Ceph Storage clusters located at different sites.
  • Access to the storage cluster or client node where the rbd-mirror daemon will be running.

Procedure

  1. As root, on both storage clusters, specify the storage cluster name by adding the CLUSTER option to the appropriate file.

    Example

    CLUSTER=master

    Red Hat Enterprise Linux

    Edit the /etc/sysconfig/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

    Ubuntu

    Edit the /etc/default/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

  2. As root, and only for the node running the rbd-mirror daemon, create a symbolic link to the ceph.conf file:

    [root@monitor ~]# ln -s /etc/ceph/ceph.conf /etc/ceph/master.conf
  3. Now when referring to the storage cluster, use the symbolic link name with the --cluster flag.

    Example

    --cluster master
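    For example, a hypothetical status query for a pool named data; any rbd command accepts the flag:

    [user@rbd-client ~]$ rbd mirror pool status data --cluster master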

5.4. Enabling Ceph Block Device Journaling for Mirroring

There are two ways to enable the Ceph Block Device journaling feature:

  • On image creation.
  • Dynamically on already existing images.
Important

Journaling depends on the exclusive-lock feature, which must also be enabled.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph client command-line interface.

Procedure

Enable on Image Creation

  1. As a normal user, execute the following command to enable journaling when creating an image:

    rbd create $IMAGE_NAME --size $MEGABYTES --pool $POOL_NAME --image-feature $FEATURE_NAME[,$FEATURE_NAME]

    Example

    [user@rbd-client ~]$ rbd create image-1 --size 1024 --pool pool-1 --image-feature exclusive-lock,journaling

Enable on an Existing Image

  1. As a normal user, execute the following command to enable journaling on an already existing image:

    rbd feature enable $POOL_NAME/$IMAGE_NAME $FEATURE_NAME

    Example

    [user@rbd-client ~]$ rbd feature enable pool-1/image-1 exclusive-lock
    [user@rbd-client ~]$ rbd feature enable pool-1/image-1 journaling

Setting the Default

  1. To enable journaling on all new images by default, add the following line to the Ceph configuration file, by default /etc/ceph/ceph.conf:

    rbd default features = 125
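    The value 125 is the sum of the feature bit values: layering (1) + exclusive-lock (4) + object-map (8) + fast-diff (16) + deep-flatten (32) + journaling (64) = 125; the striping bit (2) is left out. As a hypothetical alternative that enables only layering, exclusive-lock, and journaling by default:

    rbd default features = 69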

Additional Resources

5.5. Configuring Ceph Block Device Mirroring on a Pool

As a technician, you can enable or disable mirroring on a pool, add or remove a cluster peer, and view information on peers and pools.

5.5.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph client command-line interface.

5.5.2. Enabling Mirroring on a Pool

When enabling mirroring on an object pool, you must specify which mirroring mode to use.

Prerequisites
  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph client command-line interface.
  • An existing object pool.
Procedure
  1. As a normal user, execute the following command to enable mirroring on a pool:

    rbd mirror pool enable $POOL_NAME $MODE

    Example

    To enable pool mode:

    [user@rbd-client ~]$ rbd mirror pool enable data pool

    To enable image mode:

    [user@rbd-client ~]$ rbd mirror pool enable data image
Additional Resources

5.5.3. Disabling Mirroring on a Pool

Before disabling mirroring on a pool, you must remove the cluster peer.

Note

Disabling mirroring on a pool also disables mirroring on any images within the pool for which mirroring was enabled separately in image mode.

Prerequisites
  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph client command-line interface.
  • Removed as a cluster peer.
  • An existing object pool.
Procedure
  1. As a normal user, execute the following command to disable mirroring on a pool:

    rbd mirror pool disable $POOL_NAME

    Example

    [user@rbd-client ~]$ rbd mirror pool disable data

Additional Resources

5.5.4. Adding a cluster peer

In order for the rbd-mirror daemon to discover its peer cluster, you must register the peer to the pool.

Prerequisites

  • Two running Red Hat Ceph Storage clusters located at different sites.
  • Access to a Ceph client command-line interface.
  • An existing object pool.

Procedure

  1. As a normal user, execute the following command to add a cluster peer:

    rbd --cluster $CLUSTER_NAME mirror pool peer add $POOL_NAME $CLIENT_NAME@$TARGET_CLUSTER_NAME

    Examples

    Adding the remote cluster as a peer to the local cluster:

    [user@rbd-client ~]$ rbd --cluster local mirror pool peer add data client.remote@remote

Additional Resources

5.5.5. Removing a Cluster Peer

Remove the peer registration from the pool when the rbd-mirror daemon should no longer mirror from that peer cluster, for example before disabling mirroring on the pool.

Prerequisites
  • Two running Red Hat Ceph Storage clusters located at different sites.
  • Access to a Ceph client command-line interface.
  • An existing cluster peer.
Procedure
  1. Record the peer’s Universally Unique Identifier (UUID) for use in the next step. To view the peer’s UUID, execute the following command as a normal user:

    rbd mirror pool info $POOL_NAME
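    A hypothetical example for the data pool, showing the peer UUID that the next step removes; the exact output format can vary by release:

    [user@rbd-client ~]$ rbd mirror pool info data
    Mode: pool
    Peers:
      UUID                                 NAME   CLIENT
      55672766-c02b-4729-8567-f13a66893445 remote client.remote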
  2. As a normal user, execute the following command to remove a cluster peer:

    rbd mirror pool peer remove $POOL_NAME $PEER_UUID

    Example

    [user@rbd-client ~]$ rbd mirror pool peer remove data 55672766-c02b-4729-8567-f13a66893445

Additional Resources

5.5.6. Viewing Information About the Cluster Peers

Use this procedure to view basic information about the cluster peers.

Prerequisites
  • Two running Red Hat Ceph Storage clusters located at different sites.
  • Access to a Ceph client command-line interface.
  • An existing cluster peer.
Procedure
  1. As a normal user, execute the following command to view information about the cluster peers:

    rbd mirror pool info $POOL_NAME

    Example

    [user@rbd-client ~]$ rbd mirror pool info data
    Enabled: true
    Peers:
      UUID                                 NAME        CLIENT
      786b42ea-97eb-4b16-95f4-867f02b67289 ceph-remote client.admin

Additional Resources

5.5.7. Viewing Mirroring Status for a Pool

Use this procedure to view the Ceph Block Device mirroring status for a pool.

Prerequisites
  • Access to a Ceph client command-line interface.
  • An existing cluster peer.
  • An existing object storage pool.
Procedure
  1. As a normal user, execute the following command to view the mirroring status for a pool:

    rbd mirror pool status $POOL_NAME

    Example

    [user@rbd-client ~]$ rbd mirror pool status data
    health: OK
    images: 1 total

    Note

    To output more details for every mirrored image in a pool, use the --verbose option.
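    A hypothetical invocation that reuses the data pool from the example above:

    [user@rbd-client ~]$ rbd mirror pool status data --verbose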

Additional Resources

5.5.8. Additional Resources

5.6. Configuring Ceph Block Device Mirroring on an Image

As a technician, you can enable or disable mirroring on an image, add or remove a cluster peer, and view information on peers and pools.

5.6.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph client command-line interface.

5.7. Enabling Image Mirroring

This procedure enables Ceph Block Device mirroring on images.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph client command-line interface.
  • An existing image.

Procedure

  1. As a normal user, execute the following command to enable mirroring on an image:

    rbd mirror image enable $POOL_NAME/$IMAGE_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image enable data/image2
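    To confirm that mirroring is active on the image, you can query its status; a minimal check, reusing the image from the example above (the reported state depends on whether the peer rbd-mirror daemon has started replaying):

    [user@rbd-client ~]$ rbd mirror image status data/image2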

Additional Resources


5.7.1. Additional Resources

5.8. Configuring Two-Way Mirroring for Ceph Block Devices

Two-way mirroring is an effective active-active mirroring solution suitable for automatic failover.

Prerequisites

  • Two running Red Hat Ceph Storage clusters located at different sites.

    • Each storage cluster has the corresponding configuration files in the /etc/ceph/ directory.
  • One Ceph client, with a connection to both storage clusters.

    • Access to the Ceph client’s command-line interface.
  • An existing object storage pool and an image.

    • The same object pool name exists on each storage cluster.

Procedure

  1. Verify that all images within the object storage pool have exclusive-lock and journaling enabled:

    rbd info $POOL_NAME/$IMAGE_NAME

    Example

    [user@rbd-client ~]$ rbd info data/image1
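    The features line of the output must list both exclusive-lock and journaling. A hypothetical, abbreviated output (other fields vary by image and release):

    rbd image 'image1':
        ...
        features: layering, exclusive-lock, journaling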

  2. The rbd-mirror package is provided by the Red Hat Ceph Storage Tools repository. As root, on a Ceph Monitor node of the local and the remote storage clusters, install the rbd-mirror package:

    Red Hat Enterprise Linux

    [root@monitor-remote ~]# yum install rbd-mirror

    Ubuntu

    [user@monitor-remote ~]$ sudo apt-get install rbd-mirror
    Note

    The rbd-mirror daemon can run on any node in the storage cluster. It does not have to be a Ceph Monitor or OSD node. However, run only one rbd-mirror daemon per storage cluster.

  3. As root, on both storage clusters, specify the storage cluster name by adding the CLUSTER option to the appropriate file.

    Example

    CLUSTER=local

    Red Hat Enterprise Linux

    Edit the /etc/sysconfig/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

    Ubuntu

    Edit the /etc/default/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

    Note

    See the procedure on handling Ceph Block Device mirroring between two Ceph Storage clusters with the same name.

  4. As a normal user, on both storage clusters, create users with permissions to access the object storage pool and output their keyrings to a file.

    ceph auth get-or-create client.$STORAGE_CLUSTER_NAME mon 'profile rbd' osd 'profile rbd pool=$POOL_NAME' -o $PATH_TO_KEYRING_FILE --cluster $STORAGE_CLUSTER_NAME
    1. On the Ceph Monitor node in the local storage cluster, create the client.local user and output the keyring to the local.client.local.keyring file:

      Example

      [user@monitor-local ~]$ ceph auth get-or-create client.local mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/local.client.local.keyring --cluster local

    2. On the Ceph Monitor node in the remote storage cluster, create the client.remote user and output the keyring to the remote.client.remote.keyring file:

      Example

      [user@monitor-remote ~]$ ceph auth get-or-create client.remote mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/remote.client.remote.keyring --cluster remote

  5. As root, copy the Ceph configuration file and the newly created keyring file from each storage cluster to the other storage cluster and to any Ceph client nodes in both storage clusters.

    scp $PATH_TO_STORAGE_CLUSTER_CONFIG_FILE_NAME $SSH_USER_NAME@$MON_NODE:/etc/ceph/
    scp $PATH_TO_STORAGE_CLUSTER_KEYRING_FILE_NAME $SSH_USER_NAME@$CLIENT_NODE:/etc/ceph/

    Copying Local to Remote Example

    [root@monitor-local ~]# scp /etc/ceph/local.conf example@remote:/etc/ceph/
    [root@monitor-local ~]# scp /etc/ceph/local.client.local.keyring example@remote:/etc/ceph/

    Copying Remote to Local Example

    [root@monitor-remote ~]# scp /etc/ceph/remote.conf example@local:/etc/ceph/
    [root@monitor-remote ~]# scp /etc/ceph/remote.client.remote.keyring example@local:/etc/ceph/

    Copying both Local and Remote to Clients Example

    [root@monitor-local ~]# scp /etc/ceph/local.conf example@rbd-client:/etc/ceph/
    [root@monitor-local ~]# scp /etc/ceph/local.client.local.keyring example@rbd-client:/etc/ceph/

    [root@monitor-remote ~]# scp /etc/ceph/remote.conf example@rbd-client:/etc/ceph/
    [root@monitor-remote ~]# scp /etc/ceph/remote.client.remote.keyring example@rbd-client:/etc/ceph/
  6. As root, on the Ceph Monitor node of both storage clusters, enable and start the rbd-mirror daemon:

    systemctl enable ceph-rbd-mirror.target
    systemctl enable ceph-rbd-mirror@$CLIENT_ID
    systemctl start ceph-rbd-mirror@$CLIENT_ID

    The $CLIENT_ID is the Ceph Storage cluster user that the rbd-mirror daemon will use.

    Example:

    [root@monitor-remote ~]# systemctl enable ceph-rbd-mirror.target
    [root@monitor-remote ~]# systemctl enable ceph-rbd-mirror@remote
    [root@monitor-remote ~]# systemctl start ceph-rbd-mirror@remote

    Note

    The $CLIENT_ID user must have the appropriate cephx authentication access to the storage cluster.

Configuring Two-Way Mirroring for Pool Mode

  1. As a normal user, from any Ceph client node that has access to each storage cluster, enable pool mirroring of the object storage pool residing on both storage clusters:

    rbd mirror pool enable $POOL_NAME $MIRROR_MODE --cluster $STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror pool enable data pool --cluster local
    [user@rbd-client ~]$ rbd mirror pool enable data pool --cluster remote

    1. Verify that mirroring has been successfully enabled:

      rbd mirror pool status $POOL_NAME

      Example

      [user@rbd-client ~]$ rbd mirror pool status data
      health: OK
      images: 1 total

  2. As a normal user, add the storage clusters as a peer of the other storage cluster:

    rbd mirror pool peer add $POOL_NAME $CLIENT_NAME@$STORAGE_CLUSTER_NAME --cluster $PEER_STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror pool peer add data client.local@local --cluster remote
    [user@rbd-client ~]$ rbd mirror pool peer add data client.remote@remote --cluster local

    1. Verify that the storage cluster peer was successfully added:

      Example

      [user@rbd-client ~]$ rbd mirror pool info data --cluster local
      Mode: pool
      Peers:
        UUID                                 NAME  CLIENT
        de32f0e3-1319-49d3-87f9-1fc076c83946 remote client.remote
      
      [user@rbd-client ~]$ rbd mirror pool info data --cluster remote
      Mode: pool
      Peers:
        UUID                                 NAME   CLIENT
        87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local

Configuring Two-Way Mirroring for Image Mode

  1. As a normal user, enable image mirroring of the object storage pool on both storage clusters:

    rbd mirror pool enable $POOL_NAME $MIRROR_MODE --cluster $STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror pool enable data image --cluster local
    [user@rbd-client ~]$ rbd mirror pool enable data image --cluster remote

    1. Verify that mirroring has been successfully enabled:

      rbd mirror pool status $POOL_NAME

      Example

      [user@rbd-client ~]$ rbd mirror pool status data
      health: OK
      images: 1 total

  2. As a normal user, add the storage clusters as a peer of the other storage cluster:

    rbd mirror pool peer add $POOL_NAME $CLIENT_NAME@$STORAGE_CLUSTER_NAME --cluster $PEER_STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror pool peer add data client.local@local --cluster remote
    [user@rbd-client ~]$ rbd mirror pool peer add data client.remote@remote --cluster local

    1. Verify that the storage cluster peer was successfully added:

      rbd mirror pool info $POOL_NAME --cluster $STORAGE_CLUSTER_NAME

      Example

      [user@rbd-client ~]$ rbd mirror pool info data --cluster remote
      Mode: image
      Peers:
        UUID                                 NAME   CLIENT
        87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local
      
      [user@rbd-client ~]$ rbd mirror pool info data --cluster local
      Mode: image
      Peers:
        UUID                                 NAME   CLIENT
        de32f0e3-1319-49d3-87f9-1fc076c83946 remote client.remote

  3. As a normal user, on the local storage cluster, explicitly enable mirroring for the images:

    rbd mirror image enable $POOL_NAME/$IMAGE_NAME --cluster $STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image enable data/image1 --cluster local
    Mirroring enabled

    1. Verify that mirroring has been successfully enabled:

      [user@rbd-client ~]$ rbd mirror image status data/image1 --cluster local
      image1:
        global_id:   2c928338-4a86-458b-9381-e68158da8970
        state:       up+replaying
        description: replaying, master_position=[object_number=6, tag_tid=2,
      entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
      entry_tid=29598], entries_behind_master=0
        last_update: 2018-04-28 18:47:39
      
      [user@rbd-client ~]$ rbd mirror image status data/image1 --cluster remote
      image1:
        global_id:   2c928338-4a86-458b-9381-e68158da8970
        state:       up+replaying
        description: replaying, master_position=[object_number=6, tag_tid=2,
      entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
      entry_tid=29598], entries_behind_master=0
        last_update: 2018-04-28 18:47:39

Additional Resources

5.9. Delaying Replication Between Storage Clusters

Whether you are using one- or two-way replication, you can delay replication between Ceph Block Device mirroring images. You can implement a replication delay strategy as a cushion of time before unwanted changes to the primary image are propagated to the replicated secondary image. The replication delay can be configured globally or on individual images and must be configured on the destination storage cluster.

Prerequisites

  • Two running Red Hat Ceph Storage clusters located at different sites.
  • Access to the storage cluster or client node where the rbd-mirror daemon will be running.

Procedure

Setting the Replication Delay Globally

  1. As root, edit the Ceph configuration file, on the node running the rbd-mirror daemon, and add the following line:

    rbd_mirroring_replay_delay = $MINIMUM_DELAY_IN_SECONDS

    Example

    rbd_mirroring_replay_delay = 600

Setting the Replication Delay on an Image

  1. As a normal user, on a Ceph client node, set the replication delay for a specific primary image by executing the following command:

    rbd image-meta set $POOL_NAME/$IMAGE_NAME conf_rbd_mirroring_replay_delay $MINIMUM_DELAY_IN_SECONDS

    Example

    [user@rbd-client ~]$ rbd image-meta set data/image1 conf_rbd_mirroring_replay_delay 600
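    A hypothetical check that reads the setting back; rbd image-meta get prints the stored value:

    [user@rbd-client ~]$ rbd image-meta get data/image1 conf_rbd_mirroring_replay_delay
    600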

Additional Resources

5.10. Recovering From a Disaster

The following procedure shows how to fail over to the mirrored data on a secondary storage cluster after the primary storage cluster terminates in either an orderly or non-orderly manner.

Prerequisites

  • Two running Red Hat Ceph Storage clusters located at different sites.
  • One Ceph client, with a connection to both storage clusters.

    • Access to the Ceph client’s command-line interface.

Procedure

Failover After an Orderly Shutdown

  1. Stop all clients that use the primary image. This step depends on which clients are using the image.
  2. As a normal user, on a Ceph client node, demote the primary image located on the local storage cluster:

    rbd mirror image demote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image demote data/image1 --cluster=local

  3. As a normal user, on a Ceph client node, promote the non-primary image located on the remote storage cluster:

    rbd mirror image promote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image promote data/image1 --cluster=remote

  4. Resume the access to the peer image. This step depends on which clients are using the image.

Failover After a Non-Orderly Shutdown

  1. Verify that the primary storage cluster is down.
  2. Stop all clients that use the primary image. This step depends on which clients are using the image.
  3. As a normal user, on a Ceph client node, promote the non-primary image located on the remote storage cluster. Use the --force option, because the demotion cannot be propagated to the local storage cluster:

    rbd mirror image promote --force $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image promote --force data/image1 --cluster=remote

  4. Resume the access to the peer image. This step depends on which clients are using the image.

Failing Back to the Primary Storage Cluster

  1. Verify the primary storage cluster is available.
  2. If there was a non-orderly shutdown, as a normal user, on a Ceph client node, demote the primary image located on the local storage cluster:

    rbd mirror image demote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image demote data/image1 --cluster=local

  3. Resynchronize the image only if there was a non-orderly shutdown. As a normal user, on a Ceph client node, resynchronize the image:

    rbd mirror image resync $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image resync data/image1 --cluster=local

  4. Verify that resynchronization is complete and that the image is in the up+replaying state. As a normal user, on a Ceph client node, check the resynchronization status of the image:

    rbd mirror image status $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image status data/image1 --cluster=local
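    A hypothetical, abbreviated output once replay has resumed; the image should report the up+replaying state with entries_behind_master=0, as in the earlier two-way mirroring examples:

    image1:
      global_id:   2c928338-4a86-458b-9381-e68158da8970
      state:       up+replaying
      description: replaying, ... entries_behind_master=0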

  5. As a normal user, on a Ceph client node, demote the secondary image located on the remote storage cluster:

    rbd mirror image demote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image demote data/image1 --cluster=remote

  6. As a normal user, on a Ceph client node, promote the formerly primary image located on the local storage cluster:

    rbd mirror image promote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

    Example

    [user@rbd-client ~]$ rbd mirror image promote data/image1 --cluster=local

Additional Resources

5.11. Additional Resources