Chapter 4. Block Device Mirroring
RADOS Block Device (RBD) mirroring is the asynchronous replication of Ceph block device images between two or more Ceph clusters. Mirroring ensures point-in-time consistent replicas of all changes to an image, including writes, block device resizing, snapshots, clones, and flattening. Using mandatory exclusive locks and the RBD journaling feature, RBD records all modifications to an image in the order in which they occur, ensuring that a crash-consistent mirror of the remote image is available locally. Mirroring can run in either an active-passive or active-active configuration. Before an image can be mirrored to a peer cluster, you must enable journaling. See Section 4.1, “Enabling Journaling” for details.
Because it is the images stored in the local and remote pools associated with the block device that get mirrored, the CRUSH hierarchies for the local and remote pools should have the same storage capacity and performance characteristics. Additionally, the network connection between the local and remote sites must have sufficient bandwidth to ensure mirroring happens without excessive latency. For example, if you have an average write throughput of X MiB/s to images in the primary cluster, the network connection to the secondary site must support N * X MiB/s to mirror N images, plus a safety factor of Y%.
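As a hypothetical sizing sketch, the bandwidth rule above can be worked through numerically. The image count, per-image throughput, and safety factor below are illustrative assumptions, not recommendations:

```shell
# Hypothetical sizing: N images at X MiB/s average write throughput,
# with a Y% safety factor. All values are illustrative assumptions.
N=10        # number of mirrored images
X=30        # average write throughput per image, in MiB/s
Y=20        # safety factor, in percent

# Required bandwidth to the secondary site: N * X * (1 + Y/100).
# Integer arithmetic; scale by 100 to avoid fractions.
REQUIRED=$(( N * X * (100 + Y) / 100 ))
echo "${REQUIRED} MiB/s"
```

With these assumed values, the connection to the secondary site would need to sustain 360 MiB/s.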
Mirroring serves primarily for recovery from a disaster. See Section 4.7, “Recovering from a Disaster” for details.
The rbd-mirror Daemon
The rbd-mirror daemon is responsible for synchronizing images from one Ceph cluster to another.
Depending on the type of replication, rbd-mirror runs either on a single cluster or on all clusters that participate in mirroring:
One-way Replication
- When data is mirrored from a primary cluster to a secondary cluster that serves as a backup, rbd-mirror runs only on the backup cluster. RBD mirroring may have multiple secondary sites in an active-passive configuration.
Two-way Replication
- When data is mirrored from a primary cluster to a secondary cluster and the secondary cluster can mirror back to the primary, both clusters must have rbd-mirror running. Currently, two-way replication, also known as an active-active configuration, is supported only between two sites.
The rbd-mirror package provides the rbd-mirror daemon.
In two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring.
Run only a single rbd-mirror daemon per Ceph cluster.
Mirroring Modes
Mirroring is configured on a per-pool basis within peer clusters. Ceph supports two modes, depending on what images in a pool are mirrored:
- Pool Mode
- All images in a pool with the journaling feature enabled are mirrored. See Configuring Pool Mirroring for details.
- Image Mode
- Only a specific subset of images within a pool is mirrored and you must enable mirroring for each image separately. See Configuring Image Mirroring for details.
Image States
In an active-passive configuration, the mirrored images are:
- primary (can be modified)
- non-primary (cannot be modified)
Images are automatically promoted to primary when mirroring is first enabled on an image. The promotion can happen:
- implicitly by enabling mirroring in pool mode (see Section 4.2, “Pool Configuration”)
- explicitly by enabling mirroring of a specific image (see Section 4.3, “Image Configuration”)
It is possible to demote primary images and promote non-primary images. See Section 4.3, “Image Configuration” for details.
4.1. Enabling Journaling
You can enable the RBD journaling feature:
- when an image is created
- dynamically on already existing images
Journaling depends on the exclusive-lock feature, which must also be enabled. See the following steps.
To enable journaling when creating an image, use the --image-feature option:
rbd create <image-name> --size <megabytes> --pool <pool-name> --image-feature <feature>
For example:
$ rbd create image-1 --size 1024 --pool pool-1 --image-feature exclusive-lock,journaling
To enable journaling on already existing images, use the rbd feature enable command:
rbd feature enable <pool-name>/<image-name> <feature-name>
For example:
$ rbd feature enable pool-1/image-1 exclusive-lock
$ rbd feature enable pool-1/image-1 journaling
To enable journaling on all new images by default, add the following setting to the Ceph configuration file:
rbd default features = 125
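The value 125 is a bitmask, the sum of the RBD feature bit values (layering=1, striping=2, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32, journaling=64). As a sketch, 125 enables every feature except striping:

```shell
# RBD feature bit values: layering=1, striping=2, exclusive-lock=4,
# object-map=8, fast-diff=16, deep-flatten=32, journaling=64.
# Summing every bit except striping (2) yields the value used above.
FEATURES=$(( 1 + 4 + 8 + 16 + 32 + 64 ))
echo "rbd default features = ${FEATURES}"
```

If you need a different default feature set, sum only the bits for the features you want enabled on new images.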
4.2. Pool Configuration
This section shows how to do the following tasks:
Execute the following commands on both peer clusters.
Enabling Mirroring on a Pool
To enable mirroring on a pool:
rbd mirror pool enable <pool-name> <mode>
Examples
To enable mirroring of the whole pool named data:
$ rbd mirror pool enable data pool
To enable image mode mirroring on the pool named stack:
$ rbd mirror pool enable stack image
See Mirroring Modes for details.
Disabling Mirroring on a Pool
To disable mirroring on a pool:
rbd mirror pool disable <pool-name>
Example
To disable mirroring of a pool named data:
$ rbd mirror pool disable data
Before disabling mirroring, remove the peer clusters. See Section 4.2, “Pool Configuration” for details.
When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode. See Image Configuration for details.
Adding a Cluster Peer
In order for the rbd-mirror daemon to discover its peer cluster, you must register the peer to the pool:
rbd mirror pool peer add <pool-name> <client-name>@<cluster-name>
Example
To add the remote cluster as a peer to the local cluster or the other way around:
$ rbd --cluster local mirror pool peer add data client.remote@remote
$ rbd --cluster remote mirror pool peer add data client.local@local
Viewing Information about Peers
To view information about the peers:
rbd mirror pool info <pool-name>
Example
$ rbd mirror pool info data
Enabled: true
Peers:
UUID NAME CLIENT
786b42ea-97eb-4b16-95f4-867f02b67289 ceph-remote client.admin
Removing a Cluster Peer
To remove a mirroring peer cluster:
rbd mirror pool peer remove <pool-name> <peer-uuid>
Specify the pool name and the peer Universally Unique Identifier (UUID). To view the peer UUID, use the rbd mirror pool info command.
Example
$ rbd mirror pool peer remove data 55672766-c02b-4729-8567-f13a66893445
Getting Mirroring Status for a Pool
To get the mirroring pool summary:
rbd mirror pool status <pool-name>
Example
To get the status of the data pool:
$ rbd mirror pool status data
health: OK
images: 1 total
To output status details for every mirroring image in a pool, use the --verbose option.
4.3. Image Configuration
This section shows how to do the following tasks:
Execute the following commands on a single cluster only.
Enabling Image Mirroring
To enable mirroring of a specific image:
- Enable mirroring of the whole pool in image mode on both peer clusters. See Section 4.2, “Pool Configuration” for details.
Then explicitly enable mirroring for a specific image within the pool:
rbd mirror image enable <pool-name>/<image-name>
Example
To enable mirroring for the image2 image in the data pool:
$ rbd mirror image enable data/image2
Disabling Image Mirroring
To disable mirroring for a specific image:
rbd mirror image disable <pool-name>/<image-name>
Example
To disable mirroring of the image2 image in the data pool:
$ rbd mirror image disable data/image2
Image Promotion and Demotion
To demote an image to non-primary:
rbd mirror image demote <pool-name>/<image-name>
Example
To demote the image2 image in the data pool:
$ rbd mirror image demote data/image2
To promote an image to primary:
rbd mirror image promote <pool-name>/<image-name>
Example
To promote the image2 image in the data pool:
$ rbd mirror image promote data/image2
See Section 4.7, “Recovering from a Disaster” for details.
Use the --force option to force promote a non-primary image:
$ rbd mirror image promote --force data/image2
Use forced promotion when the demotion cannot be propagated to the peer Ceph cluster, for example because of cluster failure or communication outage. See Failover After a Non-Orderly Shutdown for details.
Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion.
Image Resynchronization
To request a resynchronization to the primary image:
rbd mirror image resync <pool-name>/<image-name>
Example
To request resynchronization of the image2 image in the data pool:
$ rbd mirror image resync data/image2
In case of an inconsistent state between the two peer clusters, the rbd-mirror daemon does not attempt to mirror the image that is causing the inconsistency. For details on fixing this issue, see Section 4.7, “Recovering from a Disaster”.
Getting Mirroring Status for a Single Image
To get status of a mirrored image:
rbd mirror image status <pool-name>/<image-name>
Example
To get the status of the image2 image in the data pool:
$ rbd mirror image status data/image2
image2:
  global_id: 2c928338-4a86-458b-9381-e68158da8970
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:47:39
4.4. Configuring One-Way Mirroring
One-way mirroring implies that a primary image in one cluster gets replicated in a secondary cluster. In the secondary or remote cluster, the replicated image is non-primary; that is, block device clients cannot write to the image.
One-way mirroring is appropriate for maintaining a crash-consistent copy of an image. One-way mirroring may not be appropriate for all situations, such as using the secondary image for automatic failover and failback with OpenStack, because the cluster cannot fail back when using one-way mirroring. In those scenarios, use two-way mirroring. See Section 4.5, “Configuring Two-Way Mirroring” for details.
The following procedures assume:
- Two Ceph clusters for replicating block device images one way; that is, replicating images from a primary cluster named local to a secondary cluster named remote, as used in this procedure. The clusters have corresponding configuration files located in the /etc/ceph/ directory - local.conf and remote.conf. For information on installing the Ceph Storage Cluster, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu. If you have two Ceph clusters with the same name, usually the default ceph name, see Configuring Mirroring Between Clusters With The Same Name for additional required steps.
- One block device client is connected to the local cluster - client.local. For information on installing Ceph clients, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
- The data pool is created on both clusters. See the Pools chapter in the Storage Strategies guide for Red Hat Ceph Storage 3 for details.
- The data pool on the local cluster contains images you want to mirror (in the procedures below named image1 and image2) and journaling is enabled on the images. See Enabling Journaling for details.
There are two ways to configure block device mirroring:
- Pool Mirroring: To mirror all images within a pool, use the Configuring Pool Mirroring procedure.
- Image Mirroring: To mirror select images within a pool, use the Configuring Image Mirroring procedure.
Configuring Pool Mirroring
- Ensure that all images within the data pool have exclusive lock and journaling enabled. See Section 4.1, “Enabling Journaling” for details.
- On the monitor node of the remote cluster, install the rbd-mirror package. The package is provided by the Red Hat Ceph Storage 3 Tools repository.
Red Hat Enterprise Linux
# yum install rbd-mirror
Ubuntu
$ sudo apt-get install rbd-mirror
Note: The rbd-mirror daemon can run on any host in the cluster. It does not have to be a Monitor or OSD host. However, run only one rbd-mirror daemon per secondary or remote cluster.
- On both clusters, specify the cluster name by adding the CLUSTER option to the appropriate file. On Red Hat Enterprise Linux, update the /etc/sysconfig/ceph file, and on Ubuntu, update the /etc/default/ceph file accordingly:
CLUSTER=local
CLUSTER=remote
- On both clusters, create users with permissions to access the data pool and output their keyrings to a <cluster-name>.client.<user-name>.keyring file.
On the monitor host in the local cluster, create the client.local user and output the keyring to the local.client.local.keyring file:
# ceph auth get-or-create client.local mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/local.client.local.keyring --cluster local
On the monitor host in the remote cluster, create the client.remote user and output the keyring to the remote.client.remote.keyring file:
# ceph auth get-or-create client.remote mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/remote.client.remote.keyring --cluster remote
- Copy the Ceph configuration file and the newly created keyring from the monitor host in the local cluster to the remote cluster and to the client hosts in the remote cluster:
# scp /etc/ceph/local.conf <user>@<remote_mon-host-name>:/etc/ceph/
# scp /etc/ceph/local.client.local.keyring <user>@<remote_mon-host-name>:/etc/ceph/
# scp /etc/ceph/local.conf <user>@<remote_client-host-name>:/etc/ceph/
# scp /etc/ceph/local.client.local.keyring <user>@<remote_client-host-name>:/etc/ceph/
- On the monitor host of the remote cluster, enable and start the rbd-mirror daemon:
systemctl enable ceph-rbd-mirror.target
systemctl enable ceph-rbd-mirror@<client-id>
systemctl start ceph-rbd-mirror@<client-id>
Where <client-id> is the Ceph Storage cluster user that the rbd-mirror daemon will use. The user must have the appropriate cephx access to the cluster. For detailed information, see the User Management chapter in the Administration Guide for Red Hat Ceph Storage 3.
For example:
# systemctl enable ceph-rbd-mirror.target
# systemctl enable ceph-rbd-mirror@remote
# systemctl start ceph-rbd-mirror@remote
- Enable pool mirroring of the data pool residing on the local and the remote cluster:
$ rbd mirror pool enable data pool --cluster local
$ rbd mirror pool enable data pool --cluster remote
And ensure that mirroring has been successfully enabled:
$ rbd mirror pool info data --cluster local
$ rbd mirror pool info data --cluster remote
- Add the local cluster as a peer of the remote cluster:
$ rbd mirror pool peer add data client.local@local --cluster remote
And ensure that the peer was successfully added:
$ rbd mirror pool info data --cluster remote
Mode: pool
Peers:
  UUID                                 NAME  CLIENT
  87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local
Configuring Image Mirroring
- Ensure that the select images to be mirrored within the data pool have exclusive lock and journaling enabled. See Section 4.1, “Enabling Journaling” for details.
- Follow the steps 2 - 5 in the Configuring Pool Mirroring procedure.
- Enable image mirroring of the data pool on the local cluster:
$ rbd --cluster local mirror pool enable data image
And ensure that mirroring has been successfully enabled:
$ rbd mirror pool info data --cluster local
- Add the local cluster as a peer of remote:
$ rbd --cluster remote mirror pool peer add data client.local@local
And ensure that the peer was successfully added:
$ rbd --cluster remote mirror pool info data
Mode: pool
Peers:
  UUID                                 NAME  CLIENT
  87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local
- On the local cluster, explicitly enable image mirroring of the image1 and image2 images:
$ rbd --cluster local mirror image enable data/image1
Mirroring enabled
$ rbd --cluster local mirror image enable data/image2
Mirroring enabled
And ensure that mirroring has been successfully enabled:
$ rbd mirror image status data/image1 --cluster local
image1:
  global_id: 2c928338-4a86-458b-9381-e68158da8970
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:47:39
$ rbd mirror image status data/image1 --cluster remote
image1:
  global_id: 2c928338-4a86-458b-9381-e68158da8970
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:47:39
$ rbd mirror image status data/image2 --cluster local
image2:
  global_id: 4c818438-4e86-e58b-2382-f61658dc8932
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:48:05
$ rbd mirror image status data/image2 --cluster remote
image2:
  global_id: 4c818438-4e86-e58b-2382-f61658dc8932
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:48:05
4.5. Configuring Two-Way Mirroring
The following procedures assume that:
- You have two Ceph clusters, named local and remote. The clusters have corresponding configuration files located in the /etc/ceph/ directory - local.conf and remote.conf. For information on installing the Ceph Storage Cluster, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu. If you have two Ceph clusters with the same name, usually the default ceph name, see Configuring Mirroring Between Clusters With The Same Name for additional required steps.
- One block device client is connected to each of the clusters - client.local and client.remote. For information on installing Ceph clients, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
- The data pool is created on both clusters. See the Pools chapter in the Storage Strategies guide for Red Hat Ceph Storage 3.
- The data pool on the local cluster contains images you want to mirror (in the procedures below named image1 and image2) and journaling is enabled on the images. See Enabling Journaling for details.
Configuring Pool Mirroring
- On both clients, install the rbd-mirror package. The package is provided by the Red Hat Ceph Storage 3 Tools repository.
Red Hat Enterprise Linux
# yum install rbd-mirror
Ubuntu
$ sudo apt-get install rbd-mirror
- On both client hosts, specify the cluster name by adding the CLUSTER option to the appropriate file. On Red Hat Enterprise Linux, update the /etc/sysconfig/ceph file, and on Ubuntu, update the /etc/default/ceph file accordingly:
CLUSTER=local
CLUSTER=remote
- On both clusters, create users with permissions to access the data pool and output their keyrings to a <cluster-name>.client.<user-name>.keyring file.
On the monitor host in the local cluster, create the client.local user and output the keyring to the local.client.local.keyring file:
# ceph auth get-or-create client.local mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/local.client.local.keyring --cluster local
On the monitor host in the remote cluster, create the client.remote user and output the keyring to the remote.client.remote.keyring file:
# ceph auth get-or-create client.remote mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/remote.client.remote.keyring --cluster remote
- Copy the Ceph configuration files and the newly created keyrings between the peer clusters.
Copy local.conf and local.client.local.keyring from the monitor host in the local cluster to the monitor host in the remote cluster and to the client hosts in the remote cluster:
# scp /etc/ceph/local.conf <user>@<remote_mon-host-name>:/etc/ceph/
# scp /etc/ceph/local.client.local.keyring <user>@<remote_mon-host-name>:/etc/ceph/
# scp /etc/ceph/local.conf <user>@<remote_client-host-name>:/etc/ceph/
# scp /etc/ceph/local.client.local.keyring <user>@<remote_client-host-name>:/etc/ceph/
Copy remote.conf and remote.client.remote.keyring from the monitor host in the remote cluster to the monitor host in the local cluster and to the client hosts in the local cluster:
# scp /etc/ceph/remote.conf <user>@<local_mon-host-name>:/etc/ceph/
# scp /etc/ceph/remote.client.remote.keyring <user>@<local_mon-host-name>:/etc/ceph/
# scp /etc/ceph/remote.conf <user>@<local_client-host-name>:/etc/ceph/
# scp /etc/ceph/remote.client.remote.keyring <user>@<local_client-host-name>:/etc/ceph/
If the Monitor and Mirroring daemons are not colocated on the same node, then copy local.client.local.keyring and local.conf from the monitor host in the local cluster to the mirroring hosts in the local and remote clusters:
# scp /etc/ceph/local.client.local.keyring <user>@<local-mirroring-host-name>:/etc/ceph/
# scp /etc/ceph/local.conf <user>@<local-mirroring-host-name>:/etc/ceph/
# scp /etc/ceph/local.client.local.keyring <user>@<remote-mirroring-host-name>:/etc/ceph/
# scp /etc/ceph/local.conf <user>@<remote-mirroring-host-name>:/etc/ceph/
And copy remote.client.remote.keyring and remote.conf from the monitor host in the remote cluster to the mirroring hosts in the remote and local clusters:
# scp /etc/ceph/remote.client.remote.keyring <user>@<remote-mirroring-host-name>:/etc/ceph/
# scp /etc/ceph/remote.conf <user>@<remote-mirroring-host-name>:/etc/ceph/
# scp /etc/ceph/remote.client.remote.keyring <user>@<local-mirroring-host-name>:/etc/ceph/
# scp /etc/ceph/remote.conf <user>@<local-mirroring-host-name>:/etc/ceph/
- On both client hosts, enable and start the rbd-mirror daemon:
systemctl enable ceph-rbd-mirror.target
systemctl enable ceph-rbd-mirror@<client-id>
systemctl start ceph-rbd-mirror@<client-id>
Where <client-id> is a unique client ID for use by the rbd-mirror daemon. The client must have the appropriate cephx access to the cluster. For detailed information, see the User Management chapter in the Administration Guide for Red Hat Ceph Storage 3.
For example:
# systemctl enable ceph-rbd-mirror.target
# systemctl enable ceph-rbd-mirror@local
# systemctl start ceph-rbd-mirror@local
- Enable pool mirroring of the data pool on both clusters:
$ rbd mirror pool enable data pool --cluster local
$ rbd mirror pool enable data pool --cluster remote
And ensure that mirroring has been successfully enabled:
$ rbd mirror pool status data
health: OK
images: 1 total
- Add the clusters as peers:
$ rbd mirror pool peer add data client.remote@remote --cluster local
$ rbd mirror pool peer add data client.local@local --cluster remote
And ensure that the peers were successfully added:
$ rbd mirror pool info data --cluster local
Mode: pool
Peers:
  UUID                                 NAME   CLIENT
  de32f0e3-1319-49d3-87f9-1fc076c83946 remote client.remote
$ rbd mirror pool info data --cluster remote
Mode: pool
Peers:
  UUID                                 NAME  CLIENT
  87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local
Configuring Image Mirroring
- Follow the steps 1 - 5 in the Configuring Pool Mirroring procedure.
- Enable image mirroring of the data pool on both clusters:
$ rbd --cluster local mirror pool enable data image
$ rbd --cluster remote mirror pool enable data image
And ensure that mirroring has been successfully enabled:
$ rbd mirror pool status data
health: OK
images: 2 total
- Add the clusters as peers:
$ rbd --cluster local mirror pool peer add data client.remote@remote
$ rbd --cluster remote mirror pool peer add data client.local@local
And ensure that the peers were successfully added:
$ rbd --cluster local mirror pool info data
Mode: pool
Peers:
  UUID                                 NAME   CLIENT
  de32f0e3-1319-49d3-87f9-1fc076c83946 remote client.remote
$ rbd --cluster remote mirror pool info data
Mode: pool
Peers:
  UUID                                 NAME  CLIENT
  87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local
- On the local cluster, explicitly enable image mirroring of the image1 and image2 images:
$ rbd --cluster local mirror image enable data/image1
Mirroring enabled
$ rbd --cluster local mirror image enable data/image2
Mirroring enabled
And ensure that mirroring has been successfully enabled:
$ rbd mirror image status data/image1 --cluster local
image1:
  global_id: 2c928338-4a86-458b-9381-e68158da8970
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:47:39
$ rbd mirror image status data/image1 --cluster remote
image1:
  global_id: 2c928338-4a86-458b-9381-e68158da8970
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:47:39
$ rbd mirror image status data/image2 --cluster local
image2:
  global_id: 4c818438-4e86-e58b-2382-f61658dc8932
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:48:05
$ rbd mirror image status data/image2 --cluster remote
image2:
  global_id: 4c818438-4e86-e58b-2382-f61658dc8932
  state: up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2, entry_tid=22598], mirror_position=[object_number=6, tag_tid=2, entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:48:05
Configuring Mirroring Between Clusters With The Same Name
Sometimes administrators create clusters using the same cluster name, usually the default ceph name. For example, Ceph 2.2 and earlier releases support OSD encryption, but the dm-crypt function expects a cluster named ceph. When both clusters have the same name, currently you must perform additional steps to configure rbd-mirror:
- Change the name of the cluster in the /etc/sysconfig/ceph file on the rbd-mirror node on cluster A. For example, CLUSTER=master. On Ubuntu, change the cluster name in the /etc/default/ceph file.
- Create a symlink to the ceph.conf file, only on the mirror node on cluster A. For example:
# ln -s ceph.conf master.conf
- Use the symlink files in each rbd-mirror setup. After this change, executing either of the following commands:
# ceph -s
# ceph -s --cluster master
refers to the same cluster.
4.6. Delayed Replication
Whether you are using one-way or two-way replication, you can delay replication between RADOS Block Device (RBD) mirror images. You may want to implement delayed replication to have a window of time in which to revert an unwanted change to the primary image before it is replicated to the secondary image.
To implement delayed replication, set the rbd mirroring replay delay = <minimum delay in seconds> configuration option for the rbd-mirror daemon within the destination cluster. This setting can either be applied globally within the ceph.conf file used by the rbd-mirror daemons, or on an individual image basis.
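For example, to apply the delay globally, a minimal ceph.conf fragment for the hosts running the rbd-mirror daemons might look like the following. The [client] section placement and the 600-second value are illustrative assumptions; adjust them for your deployment:

```ini
[client]
# Minimum delay, in seconds, before replayed events are applied
# to the non-primary image (illustrative value: 10 minutes).
rbd mirroring replay delay = 600
```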
To utilize delayed replication for a specific image, on the primary image, run the following rbd CLI command:
rbd image-meta set <image-spec> conf_rbd_mirroring_replay_delay <minimum delay in seconds>
For example, to set a 10 minute minimum replication delay on image vm-1 in the pool vms:
rbd image-meta set vms/vm-1 conf_rbd_mirroring_replay_delay 600
4.7. Recovering from a Disaster
The following procedures show how to fail over to the mirrored data on the secondary cluster (named remote) after the primary cluster (named local) has terminated, and how to fail back. The shutdown can be:
- orderly (Failover After an Orderly Shutdown)
- non-orderly (Failover After a Non-Orderly Shutdown).
If the shutdown is non-orderly, the Failback procedure requires resynchronizing the image.
The procedures assume that you have successfully configured either:
- pool mode mirroring (see Configuring Pool Mirroring),
- or image mode mirroring (see Configuring Image Mirroring).
Failover After an Orderly Shutdown
- Stop all clients that use the primary image. This step depends on what clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 10.
- Demote the primary image located on the local cluster. The following command demotes the image named image1 in the pool named stack:
$ rbd mirror image demote stack/image1 --cluster=local
See Section 4.3, “Image Configuration” for details.
- Promote the non-primary image located on the remote cluster:
$ rbd mirror image promote stack/image1 --cluster=remote
See Section 4.3, “Image Configuration” for details.
- Resume the access to the peer image. This step depends on what clients use the image.
Failover After a Non-Orderly Shutdown
- Verify that the primary cluster is down.
- Stop all clients that use the primary image. This step depends on what clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 10.
- Promote the non-primary image located on the remote cluster. Use the --force option, because the demotion cannot be propagated to the local cluster:
$ rbd mirror image promote --force stack/image1 --cluster remote
See Section 4.3, “Image Configuration” for details.
- Resume the access to the peer image. This step depends on what clients use the image.
Failback
When the formerly primary cluster recovers, failback to it.
- If there was a non-orderly shutdown, demote the old primary image on the local cluster. The following command demotes the image named image1 in the pool named stack on the local cluster:
$ rbd mirror image demote stack/image1 --cluster local
- Resynchronize the image only if there was a non-orderly shutdown. The following command resynchronizes the image named image1 in the pool named stack:
$ rbd mirror image resync stack/image1 --cluster local
See Section 4.3, “Image Configuration” for details.
- Before proceeding further, ensure that resynchronization is complete and the image is in the up+replaying state. The following command checks the status of the image named image1 in the pool named stack:
$ rbd mirror image status stack/image1 --cluster local
- Demote the secondary image located on the remote cluster. The following command demotes the image named image1 in the pool named stack:
$ rbd mirror image demote stack/image1 --cluster=remote
See Section 4.3, “Image Configuration” for details.
- Promote the formerly primary image located on the local cluster:
$ rbd mirror image promote stack/image1 --cluster=local
See Section 4.3, “Image Configuration” for details.
4.8. Updating Instances with Mirroring
When updating a cluster that uses Ceph Block Device mirroring with an asynchronous update, follow the installation instructions for the update. Then, restart the Ceph Block Device mirroring instances.
There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images followed by the instance pointing to the mirrored pool.