Chapter 4. Block Device Mirroring

RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph clusters. Mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones, and flattening. Mirroring can run in either an active-passive or active-active configuration. Using mandatory exclusive locks and the RBD journaling feature, RBD records all modifications to an image in the order in which they occur, which ensures that a crash-consistent mirror of the image is available at the remote site. Therefore, before an image can be mirrored to a peer cluster, you must enable journaling. See Section 4.1, “Enabling Journaling” for details.

Since it is the images stored in the local and remote pools associated with the block device that are mirrored, the CRUSH hierarchies for the local and remote pools should have the same storage capacity and performance characteristics. Additionally, the network connection between the local and remote sites should have sufficient bandwidth to ensure that mirroring happens without excessive latency.

Important

The CRUSH hierarchies supporting local and remote pools that mirror block device images should have the same capacity and performance characteristics, and the network connection should have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MiB/s average write throughput to images in the primary cluster, the network connection to the secondary site must support N * X throughput, plus a safety factor of Y%, to mirror N images.
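
As a hypothetical illustration of this sizing rule: with N = 10 mirrored images, an average write throughput of X = 10 MiB/s per image, and a safety factor of Y = 20%, the link to the secondary site would need roughly 10 * 10 MiB/s * 1.2 = 120 MiB/s of sustained bandwidth. Actual requirements depend on the workload and how bursty the writes are.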

Mirroring serves primarily for recovery from a disaster. See Section 4.6, “Recovering from a Disaster” for details.

The rbd-mirror Daemon

The rbd-mirror daemon is responsible for synchronizing images from one Ceph cluster to another.

Depending on the type of replication, rbd-mirror runs either on a single cluster or on all clusters that participate in mirroring:

  • One-way Replication

    • When data is mirrored from a primary cluster to a secondary cluster that serves as a backup, rbd-mirror runs only on the backup cluster. RBD mirroring may have multiple secondary sites in an active-passive configuration.
  • Two-way Replication

    • When data is mirrored from a primary cluster to a secondary cluster, and the secondary cluster can mirror back to the primary, both clusters must have rbd-mirror running. Currently, two-way replication, also known as an active-active configuration, is supported only between two sites.

The rbd-mirror package provides rbd-mirror.

Important

In two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring.
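
For example, assuming the clusters are named local and remote and the peer cluster's configuration file and keyring have already been copied to the rbd-mirror host (as described later in this chapter), you can verify connectivity to the peer cluster with the ceph status command:

# ceph -s --cluster remote     # run from the rbd-mirror host on the local site
# ceph -s --cluster local      # run from the rbd-mirror host on the remote site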

Warning

Only run a single rbd-mirror daemon per Ceph cluster.

Mirroring Modes

Mirroring is configured on a per-pool basis within peer clusters. Ceph supports two modes, depending on what images in a pool are mirrored:

Pool Mode
All images in a pool with the journaling feature enabled are mirrored. See Configuring Pool Mirroring for details.
Image Mode
Only a specific subset of images within a pool is mirrored and you must enable mirroring for each image separately. See Configuring Image Mirroring for details.

Image States

In an active-passive configuration, the mirrored images are:

  • primary (can be modified)
  • non-primary (cannot be modified).

Images are automatically promoted to primary when mirroring is first enabled on an image. The promotion can happen:

  • implicitly by enabling mirroring in pool mode
  • explicitly by enabling mirroring of a specific image

It is possible to demote primary images and promote non-primary images. See Section 4.3, “Image Configuration” for details.
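
To check whether a particular image is currently primary, you can inspect it with the rbd info command; when mirroring is enabled on the image, the output typically includes the image's mirroring information, including whether it is primary. The pool and image names here are illustrative:

$ rbd info data/image1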

4.1. Enabling Journaling

You can enable the RBD journaling feature:

  • when an image is created
  • dynamically on already existing images

Important

Journaling depends on the exclusive-lock feature, which must also be enabled. See the following steps.

To enable journaling when creating an image, use the --image-feature option:

rbd create <image-name> --size <megabytes> --pool <pool-name> --image-feature <feature>

For example:

$ rbd create image-1 --size 1024 --pool pool-1 --image-feature exclusive-lock,journaling

To enable journaling on already existing images, use the rbd feature enable command:

rbd feature enable <pool-name>/<image-name> <feature-name>

For example:

$ rbd feature enable pool-1/image-1 exclusive-lock
$ rbd feature enable pool-1/image-1 journaling

To enable journaling on all new images by default, add the following setting to the Ceph configuration file:

rbd default features = 125
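
The value 125 is the sum of the numeric RBD feature bits: layering (1), exclusive-lock (4), object-map (8), fast-diff (16), deep-flatten (32), and journaling (64). To confirm which features are enabled on an existing image, inspect it with the rbd info command and check the features line of the output. For example:

$ rbd info pool-1/image-1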

4.2. Pool Configuration

This section shows how to do the following tasks:

  • Enabling Mirroring on a Pool
  • Disabling Mirroring on a Pool
  • Adding a Cluster Peer
  • Viewing Information about Peers
  • Removing a Cluster Peer
  • Getting Mirroring Status for a Pool

Execute the following commands on both peer clusters.

Enabling Mirroring on a Pool

To enable mirroring on a pool:

rbd mirror pool enable <pool-name> <mode>

Examples

To enable mirroring of the whole pool named data:

$ rbd mirror pool enable data pool

To enable image mode mirroring on the pool named stack:

$ rbd mirror pool enable stack image

See Mirroring Modes for details.

Disabling Mirroring on a Pool

To disable mirroring on a pool:

rbd mirror pool disable <pool-name>

Example

To disable mirroring of a pool named data:

$ rbd mirror pool disable data

Before disabling mirroring, remove the peer clusters. See Removing a Cluster Peer for details.

Note

When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode. See Image Configuration for details.

Adding a Cluster Peer

In order for the rbd-mirror daemon to discover its peer cluster, you must register the peer to the pool:

rbd mirror pool peer add <pool-name> <client-name>@<cluster-name>

Example

To add the remote cluster as a peer of the local cluster, and the local cluster as a peer of the remote cluster:

$ rbd --cluster local mirror pool peer add data client.remote@remote
$ rbd --cluster remote mirror pool peer add data client.local@local

Viewing Information about Peers

To view information about the peers:

rbd mirror pool info <pool-name>

Example

$ rbd mirror pool info data
Enabled: true
Peers:
  UUID                                 NAME        CLIENT
  786b42ea-97eb-4b16-95f4-867f02b67289 ceph-remote client.admin

Removing a Cluster Peer

To remove a mirroring peer cluster:

rbd mirror pool peer remove <pool-name> <peer-uuid>

Specify the pool name and the peer Universally Unique Identifier (UUID). To view the peer UUID, use the rbd mirror pool info command.

Example

$ rbd mirror pool peer remove data 55672766-c02b-4729-8567-f13a66893445

Getting Mirroring Status for a Pool

To get the mirroring pool summary:

rbd mirror pool status <pool-name>

Example

To get the status of the data pool:

$ rbd mirror pool status data
health: OK
images: 1 total

To output status details for every mirroring image in a pool, use the --verbose option.

4.3. Image Configuration

This section shows how to do the following tasks:

  • Enabling Image Mirroring
  • Disabling Image Mirroring
  • Image Promotion and Demotion
  • Image Resynchronization
  • Getting Mirroring Status for a Single Image

Execute the following commands on a single cluster only.

Enabling Image Mirroring

To enable mirroring of a specific image:

  1. Enable mirroring of the whole pool in image mode on both peer clusters. See Section 4.2, “Pool Configuration” for details.
  2. Then explicitly enable mirroring for a specific image within the pool:

    rbd mirror image enable <pool-name>/<image-name>

Example

To enable mirroring for the image2 image in the data pool:

$ rbd mirror image enable data/image2

Disabling Image Mirroring

To disable mirroring for a specific image:

rbd mirror image disable <pool-name>/<image-name>

Example

To disable mirroring of the image2 image in the data pool:

$ rbd mirror image disable data/image2

Image Promotion and Demotion

To demote an image to non-primary:

rbd mirror image demote <pool-name>/<image-name>

Example

To demote the image2 image in the data pool:

$ rbd mirror image demote data/image2

To promote an image to primary:

rbd mirror image promote <pool-name>/<image-name>

Example

To promote the image2 image in the data pool:

$ rbd mirror image promote data/image2

See Section 4.6, “Recovering from a Disaster” for details.

Use the --force option to force promote a non-primary image:

$ rbd mirror image promote --force data/image2

Use forced promotion when the demotion cannot be propagated to the peer Ceph cluster, for example because of cluster failure or communication outage. See Failover After a Non-Orderly Shutdown for details.

Note

Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion.
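
To check whether a non-primary image is still syncing before you consider a forced promotion, inspect its mirroring status; a state such as up+syncing (rather than up+replaying) indicates that the initial synchronization has not finished. The pool, image, and cluster names here are illustrative:

$ rbd mirror image status data/image2 --cluster remote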

Image Resynchronization

To request a resynchronization to the primary image:

rbd mirror image resync <pool-name>/<image-name>

Example

To request resynchronization of the image2 image in the data pool:

$ rbd mirror image resync data/image2

In case of an inconsistent state between the two peer clusters, the rbd-mirror daemon does not attempt to mirror the image that is causing the inconsistency. For details on fixing this issue, see Section 4.6, “Recovering from a Disaster”.

Getting Mirroring Status for a Single Image

To get status of a mirrored image:

rbd mirror image status <pool-name>/<image-name>

Example

To get the status of the image2 image in the data pool:

$ rbd mirror image status data/image2
image2:
  global_id:   2c928338-4a86-458b-9381-e68158da8970
  state:       up+replaying
  description: replaying, master_position=[object_number=6, tag_tid=2,
entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
entry_tid=29598], entries_behind_master=0
  last_update: 2016-04-28 18:47:39

4.4. Configuring One-Way Mirroring

One-way mirroring means that a primary image in one cluster is replicated to a secondary cluster. In the secondary or remote cluster, the replicated image is non-primary; that is, block device clients cannot write to it.

Note

One-way mirroring is appropriate for maintaining a crash-consistent copy of an image. One-way mirroring may not be appropriate for all situations, such as using the secondary image for automatic failover and failback with OpenStack, because the cluster cannot fail back when using one-way mirroring. In those scenarios, use two-way mirroring. See Section 4.5, “Configuring Two-Way Mirroring” for details.

The following procedures assume:

  • Two Ceph clusters for replicating block device images one way; that is, replicating primary images from a cluster named local to a second cluster named remote, as used in this procedure. The clusters have corresponding configuration files located in the /etc/ceph/ directory - local.conf and remote.conf. For information on installing the Ceph Storage Cluster, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu. If you have two Ceph clusters with the same name, usually the default ceph name, see Configuring Mirroring Between Clusters With The Same Name for additional required steps. A quick status check using these configuration files is shown after this list.
  • One block device client is connected to the local cluster - client.local. For information on installing Ceph clients, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
  • The data pool is created on both clusters. See the Pools chapter in the Storage Strategies for Red Hat Ceph Storage 2 guide for details.
  • The data pool on the local cluster contains images you want to mirror (in the procedures below named image1 and image2) and journaling is enabled on the images. See Enabling Journaling for details.
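
Before starting, you can run a quick sanity check that both cluster configuration files are usable by querying cluster status with the --cluster option. This assumes the host has both configuration files and appropriate keyrings in the /etc/ceph/ directory:

$ ceph -s --cluster local
$ ceph -s --cluster remote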

There are two ways to configure block device mirroring: pool mirroring and image mirroring.

Configuring Pool Mirroring

  1. Ensure that all images within the data pool have exclusive lock and journaling enabled. See Section 4.1, “Enabling Journaling” for details.
  2. On the monitor node of the remote cluster, install the rbd-mirror package. The package is provided by the Red Hat Ceph Storage 2 Tools repository.

    # yum install rbd-mirror
    Note

    The rbd-mirror daemon can run on any host in the cluster. It does not have to be a monitor or OSD host. However, run only one rbd-mirror daemon per secondary or remote cluster.

  3. On both clusters, specify the cluster name by adding the CLUSTER option to the /etc/sysconfig/ceph file, setting CLUSTER=local on the local cluster and CLUSTER=remote on the remote cluster:

    CLUSTER=local
    CLUSTER=remote
  4. On both clusters, create users with permissions to access the data pool and output their keyrings to a <cluster-name>.client.<user-name>.keyring file.

    1. On the monitor host in the local cluster, create the client.local user and output the keyring to the local.client.local.keyring file:

      # ceph auth get-or-create client.local mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow pool data rwx' -o /etc/ceph/local.client.local.keyring --cluster local
    2. On the monitor host in the remote cluster, create the client.remote user and output the keyring to the remote.client.remote.keyring file:

      # ceph auth get-or-create client.remote mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow pool data rwx' -o /etc/ceph/remote.client.remote.keyring --cluster remote
  5. Copy the Ceph configuration file and the newly created keyring from the monitor host in the local cluster to the remote cluster and to the client hosts in the remote cluster:

    # scp /etc/ceph/local.conf <user>@<remote_mon-host-name>:/etc/ceph/
    # scp /etc/ceph/local.client.local.keyring <user>@<remote_mon-host-name>:/etc/ceph/
    
    # scp /etc/ceph/local.conf <user>@<remote_client-host-name>:/etc/ceph/
    # scp /etc/ceph/local.client.local.keyring <user>@<remote_client-host-name>:/etc/ceph/
  6. On the monitor host of the remote cluster, enable and start the rbd-mirror daemon:

    systemctl enable ceph-rbd-mirror.target
    systemctl enable ceph-rbd-mirror@<client-id>
    systemctl start ceph-rbd-mirror@<client-id>

    Where <client-id> is the Ceph Storage cluster user that the rbd-mirror daemon will use. The user must have the appropriate cephx access to the cluster. For detailed information, see the User Management chapter in the Administration Guide for Red Hat Ceph Storage 2.

    For example:

    # systemctl enable ceph-rbd-mirror.target
    # systemctl enable ceph-rbd-mirror@remote
    # systemctl start ceph-rbd-mirror@remote
  7. Enable pool mirroring of the data pool residing on the local and the remote cluster:

    $ rbd mirror pool enable data pool --cluster local
    $ rbd mirror pool enable data pool --cluster remote

    And ensure that mirroring has been successfully enabled:

    $ rbd mirror pool info data --cluster local
    $ rbd mirror pool info data --cluster remote
  8. Add the local cluster as a peer of the remote cluster:

    $ rbd mirror pool peer add data client.local@local --cluster remote

    And ensure that the peer was successfully added:

    $ rbd mirror pool info data --cluster remote
    Mode: pool
    Peers:
      UUID                                 NAME   CLIENT
      87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local

Configuring Image Mirroring

  1. Ensure that the select images to be mirrored within the data pool have exclusive lock and journaling enabled. See Section 4.1, “Enabling Journaling” for details.
  2. Follow steps 2 through 5 in the Configuring Pool Mirroring procedure.
  3. Enable image mirroring of the data pool on the local cluster:

    $ rbd --cluster local mirror pool enable data image

    And ensure that mirroring has been successfully enabled:

    $ rbd mirror pool info data --cluster local
  4. Add the local cluster as a peer of the remote cluster:

    $ rbd --cluster remote mirror pool peer add data client.local@local

    And ensure that the peer was successfully added:

    $ rbd --cluster remote mirror pool info data
    Mode: pool
    Peers:
      UUID                                 NAME   CLIENT
      87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local
  5. On the local cluster, explicitly enable image mirroring of the image1 and image2 images:

    $ rbd --cluster local mirror image enable data/image1
    Mirroring enabled
    $ rbd --cluster local mirror image enable data/image2
    Mirroring enabled

    And ensure that mirroring has been successfully enabled:

    $ rbd mirror image status data/image1 --cluster local
    image1:
      global_id:   2c928338-4a86-458b-9381-e68158da8970
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:47:39
    
    $ rbd mirror image status data/image1 --cluster remote
    image1:
      global_id:   2c928338-4a86-458b-9381-e68158da8970
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:47:39
    $ rbd mirror image status data/image2 --cluster local
    image2:
      global_id:   4c818438-4e86-e58b-2382-f61658dc8932
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:48:05
    
    $ rbd mirror image status data/image2 --cluster remote
    image2:
      global_id:   4c818438-4e86-e58b-2382-f61658dc8932
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:48:05

4.5. Configuring Two-Way Mirroring

The following procedures assume the same prerequisites as Section 4.4, “Configuring One-Way Mirroring”: two Ceph clusters named local and remote with configuration files in the /etc/ceph/ directory, the data pool created on both clusters, and journaling enabled on the images to be mirrored.

Configuring Pool Mirroring

  1. On both client hosts, install the rbd-mirror package. The package is provided by the Red Hat Ceph Storage 2 Tools repository.

    # yum install rbd-mirror
  2. On both client hosts, specify the cluster name by adding the CLUSTER option to the /etc/sysconfig/ceph file, setting CLUSTER=local on the local cluster's host and CLUSTER=remote on the remote cluster's host:

    CLUSTER=local
    CLUSTER=remote
  3. On both clusters, create users with permissions to access the data pool and output their keyrings to a <cluster-name>.client.<user-name>.keyring file.

    1. On the monitor host in the local cluster, create the client.local user and output the keyring to the local.client.local.keyring file:

      # ceph auth get-or-create client.local mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow pool data rwx' -o /etc/ceph/local.client.local.keyring --cluster local
    2. On the monitor host in the remote cluster, create the client.remote user and output the keyring to the remote.client.remote.keyring file:

      # ceph auth get-or-create client.remote mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow pool data rwx' -o /etc/ceph/remote.client.remote.keyring --cluster remote
  4. Copy the Ceph configuration files and the newly created keyrings between the peer clusters.

    1. Copy local.conf and local.client.local.keyring from the monitor host in the local cluster to the monitor host in the remote cluster and to the client hosts in the remote cluster:

      # scp /etc/ceph/local.conf <user>@<remote_mon-host-name>:/etc/ceph/
      # scp /etc/ceph/local.client.local.keyring <user>@<remote_mon-host-name>:/etc/ceph/
      
      # scp /etc/ceph/local.conf <user>@<remote_client-host-name>:/etc/ceph/
      # scp /etc/ceph/local.client.local.keyring <user>@<remote_client-host-name>:/etc/ceph/
    2. Copy remote.conf and remote.client.remote.keyring from the monitor host in the remote cluster to the monitor host in the local cluster and to the client hosts in the local cluster:

      # scp /etc/ceph/remote.conf <user>@<local_mon-host-name>:/etc/ceph/
      # scp /etc/ceph/remote.client.remote.keyring <user>@<local-mon-host-name>:/etc/ceph/
      
      # scp /etc/ceph/remote.conf <user>@<local_client-host-name>:/etc/ceph/
      # scp /etc/ceph/remote.client.remote.keyring <user>@<local_client-host-name>:/etc/ceph/
  5. If the Monitor and Mirroring daemons are not colocated on the same node, then copy local.client.local.keyring and local.conf from the monitor host in the local cluster to the mirroring hosts in the local and remote clusters:

    # scp /etc/ceph/local.client.local.keyring <user>@<local-mirroring-host-name>:/etc/ceph/
    
    # scp /etc/ceph/local.conf <user>@<local-mirroring-host-name>:/etc/ceph/
    
    # scp /etc/ceph/local.client.local.keyring <user>@<remote-mirroring-host-name>:/etc/ceph/
    
    # scp /etc/ceph/local.conf <user>@<remote-mirroring-host-name>:/etc/ceph/

    And copy remote.client.remote.keyring and remote.conf from the monitor host in the remote cluster to the mirroring hosts in the remote and local clusters:

    # scp /etc/ceph/remote.client.remote.keyring <user>@<remote-mirroring-host-name>:/etc/ceph/
    
    # scp /etc/ceph/remote.conf <user>@<remote-mirroring-host-name>:/etc/ceph/
    
    # scp /etc/ceph/remote.client.remote.keyring <user>@<local-mirroring-host-name>:/etc/ceph/
    
    # scp /etc/ceph/remote.conf <user>@<local-mirroring-host-name>:/etc/ceph/
  6. On both client hosts, enable and start the rbd-mirror daemon:

    systemctl enable ceph-rbd-mirror.target
    systemctl enable ceph-rbd-mirror@<client-id>
    systemctl start ceph-rbd-mirror@<client-id>

    Where <client-id> is a unique client ID for use by the rbd-mirror daemon. The client must have the appropriate cephx access to the cluster. For detailed information, see the User Management chapter in the Administration Guide for Red Hat Ceph Storage 2.

    For example:

    # systemctl enable ceph-rbd-mirror.target
    # systemctl enable ceph-rbd-mirror@local
    # systemctl start ceph-rbd-mirror@local
  7. Enable pool mirroring of the data pool on both clusters:

    $ rbd mirror pool enable data pool --cluster local
    $ rbd mirror pool enable data pool --cluster remote

    And ensure that mirroring has been successfully enabled:

    $ rbd mirror pool status data
    health: OK
    images: 1 total
  8. Add the clusters as peers:

    $ rbd mirror pool peer add data client.remote@remote --cluster local
    $ rbd mirror pool peer add data client.local@local --cluster remote

    And ensure that the peers were successfully added:

    $ rbd mirror pool info data --cluster local
    Mode: pool
    Peers:
      UUID                                 NAME  CLIENT
      de32f0e3-1319-49d3-87f9-1fc076c83946 remote client.remote
    
    $ rbd mirror pool info data --cluster remote
    Mode: pool
    Peers:
      UUID                                 NAME   CLIENT
      87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local

Configuring Image Mirroring

  1. Follow steps 1 through 5 in the Configuring Pool Mirroring procedure.
  2. Enable image mirroring of the data pool on both clusters:

    $ rbd --cluster local mirror pool enable data image
    $ rbd --cluster remote mirror pool enable data image

    And ensure that mirroring has been successfully enabled:

    $ rbd mirror pool status data
    health: OK
    images: 2 total
  3. Add the clusters as peers:

    $ rbd --cluster local mirror pool peer add data client.remote@remote
    $ rbd --cluster remote mirror pool peer add data client.local@local

    And ensure that the peers were successfully added:

    $ rbd --cluster local mirror pool info data
    Mode: pool
    Peers:
      UUID                                 NAME  CLIENT
      de32f0e3-1319-49d3-87f9-1fc076c83946 remote client.remote
    
    $ rbd --cluster remote mirror pool info data
    Mode: pool
    Peers:
      UUID                                 NAME   CLIENT
      87ea0826-8ffe-48fb-b2e8-9ca81b012771 local client.local
  4. On the local cluster, explicitly enable image mirroring of the image1 and image2 images:

    $ rbd --cluster local mirror image enable data/image1
    Mirroring enabled
    $ rbd --cluster local mirror image enable data/image2
    Mirroring enabled

    And ensure that mirroring has been successfully enabled:

    $ rbd mirror image status data/image1 --cluster local
    image1:
      global_id:   2c928338-4a86-458b-9381-e68158da8970
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:47:39
    
    $ rbd mirror image status data/image1 --cluster remote
    image1:
      global_id:   2c928338-4a86-458b-9381-e68158da8970
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:47:39
    $ rbd mirror image status data/image2 --cluster local
    image2:
      global_id:   4c818438-4e86-e58b-2382-f61658dc8932
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:48:05
    
    $ rbd mirror image status data/image2 --cluster remote
    image2:
      global_id:   4c818438-4e86-e58b-2382-f61658dc8932
      state:       up+replaying
      description: replaying, master_position=[object_number=6, tag_tid=2,
    entry_tid=22598], mirror_position=[object_number=6, tag_tid=2,
    entry_tid=29598], entries_behind_master=0
      last_update: 2016-04-28 18:48:05

Configuring Mirroring Between Clusters With The Same Name

Sometimes administrators create clusters using the same cluster name, usually the default ceph name. For example, Red Hat Ceph Storage 2.2 and earlier releases support OSD encryption, but the dm-crypt function expects a cluster named ceph. When both clusters have the same name, you currently must perform three additional steps to configure rbd-mirror:

  1. Change the name of the cluster in the /etc/sysconfig/ceph file on the rbd-mirror node on cluster A. For example, CLUSTER=master. On Ubuntu, change the cluster name in the /etc/default/ceph file.
  2. Create a symlink to the ceph.conf file. For example:

    # ln -s ceph.conf master.conf  #only on the mirror node on cluster A.
  3. Use the symlink files in each rbd-mirror setup. For example, after creating the symlink, both of the following commands:

    # ceph -s
    # ceph -s --cluster master

    refer to the same cluster.

4.6. Recovering from a Disaster

The following procedures show how to fail over to the mirrored data on the secondary cluster (named remote) after the primary cluster (named local) terminates, and how to fail back. The shutdown can be either orderly or non-orderly.

If the shutdown is non-orderly, the failback procedure requires resynchronizing the image.

The procedures assume that you have successfully configured either one-way mirroring (see Section 4.4, “Configuring One-Way Mirroring”) or two-way mirroring (see Section 4.5, “Configuring Two-Way Mirroring”).

Failover After an Orderly Shutdown

  1. Stop all clients that use the primary image. This step depends on what clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 10.
  2. Demote the primary image located on the local cluster. The following command demotes the image named image1 in the pool named stack:

    $ rbd mirror image demote stack/image1 --cluster=local

    See Section 4.3, “Image Configuration” for details.

  3. Promote the non-primary image located on the remote cluster:

    $ rbd mirror image promote stack/image1 --cluster=remote

    See Section 4.3, “Image Configuration” for details.

  4. Resume access to the peer image. This step depends on what clients use the image.

Failover After a Non-Orderly Shutdown

  1. Verify that the primary cluster is down.
  2. Stop all clients that use the primary image. This step depends on what clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 10.
  3. Promote the non-primary image located on the remote cluster. Use the --force option, because the demotion cannot be propagated to the local cluster:

    $ rbd mirror image promote --force stack/image1 --cluster remote

    See Section 4.3, “Image Configuration” for details

  4. Resume access to the peer image. This step depends on what clients use the image.

Failback

When the formerly primary cluster recovers, fail back to it.

  1. If there was a non-orderly shutdown, demote the old primary image on the local cluster. The following command demotes the image named image1 in the pool named stack on the local cluster:

    $ rbd mirror image demote stack/image1 --cluster local
  2. Resynchronize the image only if there was a non-orderly shutdown. The following command resynchronizes the image named image1 in the pool named stack:

    $ rbd mirror image resync stack/image1 --cluster local

    See Section 4.3, “Image Configuration” for details.

  3. Before proceeding further, ensure that resynchronization is complete and the image is in the up+replaying state. The following command checks the status of the image named image1 in the pool named stack:

    $ rbd mirror image status stack/image1 --cluster local
  4. Demote the secondary image located on the remote cluster. The following command demotes the image named image1 in the pool named stack:

    $ rbd mirror image demote stack/image1 --cluster=remote

    See Section 4.3, “Image Configuration” for details.

  5. Promote the formerly primary image located on the local cluster:

    $ rbd mirror image promote stack/image1 --cluster=local

    See Section 4.3, “Image Configuration” for details.

4.7. Updating Instances with Mirroring

When updating a cluster that uses Ceph Block Device mirroring with an asynchronous update, follow the installation instructions for the update. Then, restart the Ceph Block Device instances.
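
For example, assuming the Ceph Block Device instances in question are the rbd-mirror daemons configured earlier in this chapter, a restart on each site might look like the following, where the client ID local is illustrative:

# systemctl restart ceph-rbd-mirror@local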

Note

There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images followed by the instance pointing to the mirrored pool.