Chapter 4. Block Device Mirroring

RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph clusters. Mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones and flattening.

Mirroring uses mandatory exclusive locks and the RBD journaling feature to record all modifications to an image in the order in which they occur. This ensures that a crash-consistent mirror of an image is available. Before an image can be mirrored to a peer cluster, you must enable journaling. See Section 4.1, “Enabling Journaling” for details.

Because it is the images stored in the primary and secondary pools associated with the block device that get mirrored, the CRUSH hierarchies for the primary and secondary pools should have the same storage capacity and performance characteristics. Additionally, the network connection between the primary and secondary sites should have sufficient bandwidth so that mirroring does not lag too far behind.

Important

The CRUSH hierarchies supporting the primary and secondary pools that mirror block device images must have the same capacity and performance characteristics, and the network connection between the sites must have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MiB/s average write throughput to images in the primary cluster, the network connection to the secondary site must support N * X throughput, plus a safety factor of Y%, to mirror N images.
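
For example, with hypothetical values of X = 10 MiB/s average write throughput per image, N = 4 mirrored images, and a safety factor of Y = 30%, the network connection to the secondary site would need to sustain roughly 4 * 10 * 1.3 = 52 MiB/s.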

Mirroring is primarily intended for disaster recovery. Depending on which type of mirroring you use, see either Recovering from a disaster with one-way mirroring or Recovering from a disaster with two-way mirroring for details.

The rbd-mirror Daemon

The rbd-mirror daemon is responsible for synchronizing images from one Ceph cluster to another.

Depending on the type of replication, rbd-mirror runs either on a single cluster or on all clusters that participate in mirroring:

  • One-way Replication

    • When data is mirrored from a primary cluster to a secondary cluster that serves as a backup, rbd-mirror runs ONLY on the secondary cluster. RBD mirroring may have multiple secondary sites.
  • Two-way Replication

    • Two-way replication adds an rbd-mirror daemon on the primary cluster so images can be demoted on it and promoted on the secondary cluster. Changes can then be made to the images on the secondary cluster and they will be replicated in the reverse direction, from secondary to primary. Both clusters must have rbd-mirror running to allow promoting and demoting images on either cluster. Currently, two-way replication is only supported between two sites.

The rbd-mirror package provides rbd-mirror.

Important

In two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring.

Warning

Only run a single rbd-mirror daemon per Ceph cluster.

Mirroring Modes

Mirroring is configured on a per-pool basis within peer clusters. Ceph supports two modes, depending on which images in a pool are mirrored:

Pool Mode
All images in a pool with the journaling feature enabled are mirrored. See Configuring Pool Mirroring for details.
Image Mode
Only a specific subset of images within a pool is mirrored and you must enable mirroring for each image separately. See Configuring Image Mirroring for details.

Image States

Whether or not an image can be modified depends on its state:

  • Images in the primary state can be modified
  • Images in the non-primary state cannot be modified

Images are automatically promoted to primary when mirroring is first enabled on an image. The promotion can happen implicitly, by enabling mirroring in pool mode, or explicitly, by enabling mirroring of a specific image. See Mirroring Modes for details.

It is possible to demote primary images and promote non-primary images. See Section 4.3, “Image Configuration” for details.

4.1. Enabling Journaling

You can enable the RBD journaling feature:

  • when an image is created
  • dynamically on already existing images
Important

Journaling depends on the exclusive-lock feature, which must also be enabled. See the following steps.

To enable journaling when creating an image, use the --image-feature option:

rbd create <image-name> --size <megabytes> --pool <pool-name> --image-feature <feature>

For example:

# rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling

To enable journaling on previously created images, use the rbd feature enable command:

rbd feature enable <pool-name>/<image-name> <feature-name>

For example:

# rbd feature enable data/image1 exclusive-lock
# rbd feature enable data/image1 journaling

To enable journaling on all new images by default, add the following setting to the Ceph configuration file:

rbd default features = 125
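
The value 125 is the sum of the individual feature bit values: layering (1), exclusive-lock (4), object-map (8), fast-diff (16), deep-flatten (32), and journaling (64); that is, 1 + 4 + 8 + 16 + 32 + 64 = 125.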

4.2. Pool Configuration

This section shows how to do the following tasks:

  • Enabling Mirroring on a Pool
  • Disabling Mirroring on a Pool
  • Adding a Cluster Peer
  • Viewing Information about Peers
  • Removing a Cluster Peer
  • Getting Mirroring Status for a Pool

Execute the following commands on both peer clusters.

Enabling Mirroring on a Pool

To enable mirroring on a pool:

rbd mirror pool enable <pool-name> <mode>

Examples

To enable mirroring of the whole pool named data:

# rbd mirror pool enable data pool

To enable image mode mirroring on the pool named data:

# rbd mirror pool enable data image

See Mirroring Modes for details.

Disabling Mirroring on a Pool

To disable mirroring on a pool:

rbd mirror pool disable <pool-name>

Example

To disable mirroring of a pool named data:

# rbd mirror pool disable data

Before disabling mirroring, remove the peer clusters. See Removing a Cluster Peer below for details.

Note

When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode. See Image Configuration for details.

Adding a Cluster Peer

In order for the rbd-mirror daemon to discover its peer cluster, you must register the peer to the pool:

rbd --cluster <cluster-name> mirror pool peer add <pool-name> <peer-client-name>@<peer-cluster-name> -n <client-name>

Example

To add the site-a cluster as a peer to the site-b cluster, run the following command from the client node in the site-b cluster:

# rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b

Viewing Information about Peers

To view information about the peers:

rbd mirror pool info <pool-name>

Example

# rbd mirror pool info data
Mode: pool
Peers:
  UUID                                 NAME   CLIENT
  7e90b4ce-e36d-4f07-8cbc-42050896825d site-a client.site-a

Removing a Cluster Peer

To remove a mirroring peer cluster:

rbd mirror pool peer remove <pool-name> <peer-uuid>

Specify the pool name and the peer Universally Unique Identifier (UUID). To view the peer UUID, use the rbd mirror pool info command.

Example

# rbd mirror pool peer remove data 7e90b4ce-e36d-4f07-8cbc-42050896825d

Getting Mirroring Status for a Pool

To get the mirroring pool summary:

rbd mirror pool status <pool-name>

Example

To get the status of the data pool:

# rbd mirror pool status data
health: OK
images: 1 total

To output status details for every mirroring image in a pool, use the --verbose option.
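
For example, to include per-image status details for the data pool:

# rbd mirror pool status data --verbose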

4.3. Image Configuration

This section shows how to do the following tasks:

  • Enabling Image Mirroring
  • Disabling Image Mirroring
  • Image Promotion and Demotion
  • Image Resynchronization
  • Getting Mirroring Status for a Single Image

Execute the following commands on a single cluster only.

Enabling Image Mirroring

To enable mirroring of a specific image:

  1. Enable mirroring of the whole pool in image mode on both peer clusters. See Section 4.2, “Pool Configuration” for details.
  2. Then explicitly enable mirroring for a specific image within the pool:

    rbd mirror image enable <pool-name>/<image-name>

Example

To enable mirroring for the image2 image in the data pool:

# rbd mirror image enable data/image2

Disabling Image Mirroring

To disable mirroring for a specific image:

rbd mirror image disable <pool-name>/<image-name>

Example

To disable mirroring of the image2 image in the data pool:

# rbd mirror image disable data/image2

Image Promotion and Demotion

To demote an image to non-primary:

rbd mirror image demote <pool-name>/<image-name>

Example

To demote the image2 image in the data pool:

# rbd mirror image demote data/image2

To promote an image to primary:

rbd mirror image promote <pool-name>/<image-name>

Example

To promote the image2 image in the data pool:

# rbd mirror image promote data/image2

Depending on which type of mirroring you use, see either Recovering from a disaster with one-way mirroring or Recovering from a disaster with two-way mirroring, for details.

Use the --force option to force promote a non-primary image:

# rbd mirror image promote --force data/image2

Use forced promotion when the demotion cannot be propagated to the peer Ceph cluster, for example because of cluster failure or communication outage. See Failover After a Non-Orderly Shutdown for details.

Note

Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion.

Image Resynchronization

To request a resynchronization to the primary image:

rbd mirror image resync <pool-name>/<image-name>

Example

To request resynchronization of the image2 image in the data pool:

# rbd mirror image resync data/image2

In case of an inconsistent state between the two peer clusters, the rbd-mirror daemon does not attempt to mirror the image that is causing the inconsistency. For details on fixing this issue, see either Recovering from a disaster with one-way mirroring or Recovering from a disaster with two-way mirroring, depending on which type of mirroring you use.

Getting Mirroring Status for a Single Image

To get the status of a mirrored image:

rbd mirror image status <pool-name>/<image-name>

Example

To get the status of the image2 image in the data pool:

# rbd mirror image status data/image2
image2:
  global_id:   703c4082-100d-44be-a54a-52e6052435a5
  state:       up+replaying
  description: replaying, master_position=[object_number=0, tag_tid=3, entry_tid=0], mirror_position=[object_number=0, tag_tid=3, entry_tid=0], entries_behind_master=0
  last_update: 2019-04-23 13:39:15

4.4. Configuring One-Way Mirroring

One-way mirroring implies that a primary image in one cluster gets replicated in a secondary cluster. In the secondary cluster, the replicated image is non-primary; that is, block device clients cannot write to the image.

Note

One-way mirroring supports multiple secondary sites. To configure one-way mirroring on multiple secondary sites, repeat the following procedures on each secondary cluster.

Note

One-way mirroring is appropriate for maintaining a crash-consistent copy of an image. One-way mirroring may not be appropriate for all situations, such as using the secondary image for automatic failover and failback with OpenStack, because the cluster cannot fail back when using one-way mirroring. In those scenarios, use two-way mirroring. See Section 4.5, “Configuring Two-Way Mirroring” for details.

The following procedures assume:

  • You have two clusters and you want to replicate images from a primary cluster to a secondary cluster. For the purposes of this procedure, the cluster with the primary images is referred to as the site-a cluster, and the cluster the images are replicated to as the site-b cluster. For information on installing a Ceph Storage Cluster, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
  • The site-b cluster has a client node attached to it where the rbd-mirror daemon will run. This daemon will connect to the site-a cluster to sync images to the site-b cluster. For information on installing Ceph clients, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
  • A pool with the same name is created on both clusters. In the examples below the pool is named data; a minimal creation sketch follows this list. See the Pools chapter in the Storage Strategies Guide for Red Hat Ceph Storage 3 for details.
  • The pool contains images you want to mirror and journaling is enabled on them. In the examples below, the images are named image1 and image2. See Enabling Journaling for details.
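
The following is a minimal sketch of creating and initializing such a pool on each cluster; the placement-group count of 128 is only an illustrative assumption and should be sized for your cluster:

# ceph osd pool create data 128
# rbd pool init data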

There are two ways to configure block device mirroring:

  • Pool Mirroring: To mirror all images within a pool, follow Configuring Pool Mirroring immediately below.
  • Image Mirroring: To mirror select images within a pool, follow Configuring Image Mirroring below.

Configuring Pool Mirroring

  1. Ensure that all images within the data pool have the exclusive-lock and journaling features enabled. See Section 4.1, “Enabling Journaling” for details.
  2. On the client node of the site-b cluster, install the rbd-mirror package. The package is provided by the Red Hat Ceph Storage 3 Tools repository.

    Red Hat Enterprise Linux

    # yum install rbd-mirror

    Ubuntu

    $ sudo apt-get install rbd-mirror
  3. On the client node of the site-b cluster, specify the cluster name by adding the CLUSTER option to the appropriate file. On Red Hat Enterprise Linux, update the /etc/sysconfig/ceph file, and on Ubuntu, update the /etc/default/ceph file accordingly:

    CLUSTER=site-b
  4. On both clusters, create users with permissions to access the data pool and output their keyrings to a <cluster-name>.client.<user-name>.keyring file.

    1. On the monitor host in the site-a cluster, create the client.site-a user and output the keyring to the site-a.client.site-a.keyring file:

      # ceph auth get-or-create client.site-a mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/site-a.client.site-a.keyring
    2. On the monitor host in the site-b cluster, create the client.site-b user and output the keyring to the site-b.client.site-b.keyring file:

      # ceph auth get-or-create client.site-b mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/site-b.client.site-b.keyring
  5. Copy the Ceph configuration file and the newly created RBD keyring file from the site-a monitor node to the site-b monitor and client nodes:

    # scp /etc/ceph/ceph.conf <user>@<site-b_mon-host-name>:/etc/ceph/site-a.conf
    # scp /etc/ceph/site-a.client.site-a.keyring <user>@<site-b_mon-host-name>:/etc/ceph/
    
    # scp /etc/ceph/ceph.conf <user>@<site-b_client-host-name>:/etc/ceph/site-a.conf
    # scp /etc/ceph/site-a.client.site-a.keyring <user>@<site-b_client-host-name>:/etc/ceph/
    Note

    The scp commands that transfer the Ceph configuration file from the site-a monitor node to the site-b monitor and client nodes rename the file to site-a.conf. The keyring file name stays the same.

  6. Create a symbolic link named site-b.conf pointing to ceph.conf on the site-b cluster client node:

    # cd /etc/ceph
    # ln -s ceph.conf site-b.conf
  7. Enable and start the rbd-mirror daemon on the site-b client node:

    systemctl enable ceph-rbd-mirror.target
    systemctl enable ceph-rbd-mirror@<client-id>
    systemctl start ceph-rbd-mirror@<client-id>

    Change <client-id> to the Ceph Storage cluster user that the rbd-mirror daemon will use. The user must have the appropriate cephx access to the cluster. For detailed information, see the User Management chapter in the Administration Guide for Red Hat Ceph Storage 3.

    Based on the preceding examples using site-b, run the following commands:

    # systemctl enable ceph-rbd-mirror.target
    # systemctl enable ceph-rbd-mirror@site-b
    # systemctl start ceph-rbd-mirror@site-b
  8. Enable pool mirroring of the data pool residing on the site-a cluster by running the following command on a monitor node in the site-a cluster:

    # rbd mirror pool enable data pool

    And ensure that mirroring has been successfully enabled:

    # rbd mirror pool info data
    Mode: pool
    Peers: none
  9. Add the site-a cluster as a peer of the site-b cluster by running the following command from the client node in the site-b cluster:

    # rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b

    And ensure that the peer was successfully added:

    # rbd mirror pool info data
    Mode: pool
    Peers:
      UUID                                 NAME   CLIENT
      7e90b4ce-e36d-4f07-8cbc-42050896825d site-a client.site-a
  10. After some time, check the status of the image1 and image2 images. If they are in state up+replaying, mirroring is functioning properly. Run the following commands from a monitor node in the site-b cluster:

    # rbd mirror image status data/image1
    image1:
      global_id:   7d486c3f-d5a1-4bee-ae53-6c4f1e0c8eac
      state:       up+replaying
      description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[object_number=3, tag_tid=1, entry_tid=3], entries_behind_master=0
      last_update: 2019-04-22 13:19:27
    # rbd mirror image status data/image2
    image2:
      global_id:   703c4082-100d-44be-a54a-52e6052435a5
      state:       up+replaying
      description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[], entries_behind_master=3
      last_update: 2019-04-22 13:19:19

Configuring Image Mirroring

  1. Ensure that the selected images to be mirrored within the data pool have the exclusive-lock and journaling features enabled. See Section 4.1, “Enabling Journaling” for details.
  2. Follow steps 2 - 7 in the Configuring Pool Mirroring procedure.
  3. From a monitor node on the site-a cluster, enable image mirroring of the data pool:

    # rbd mirror pool enable data image

    And ensure that mirroring has been successfully enabled:

    # rbd mirror pool info data
    Mode: image
    Peers: none
  4. From the client node on the site-b cluster, add the site-a cluster as a peer:

    # rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b

    And ensure that the peer was successfully added:

    # rbd mirror pool info data
    Mode: image
    Peers:
      UUID                                 NAME   CLIENT
      9c1da891-b9f4-4644-adee-6268fe398bf1 site-a client.site-a
  5. From a monitor node on the site-a cluster, explicitly enable image mirroring of the image1 and image2 images:

    # rbd mirror image enable data/image1
    Mirroring enabled
    # rbd mirror image enable data/image2
    Mirroring enabled
  6. After some time, check the status of the image1 and image2 images. If they are in state up+replaying, mirroring is functioning properly. Run the following commands from a monitor node in the site-b cluster:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+replaying
      description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[object_number=3, tag_tid=1, entry_tid=3], entries_behind_master=0
      last_update: 2019-04-12 17:24:04
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+replaying
      description: replaying, master_position=[object_number=3, tag_tid=1, entry_tid=3], mirror_position=[object_number=3, tag_tid=1, entry_tid=3], entries_behind_master=0
      last_update: 2019-04-12 17:23:51

4.5. Configuring Two-Way Mirroring

Two-way mirroring allows you to replicate images in either direction between two clusters. It does not allow you to write changes to the same image from either cluster and have the changes propagate back and forth. Instead, an image is promoted or demoted on a cluster to change where it is writable from and where it syncs to.

The following procedures assume that:

  • You have two clusters and you want to be able to replicate images between them in either direction. In the examples below, the clusters are referred to as the site-a and site-b clusters. For information on installing a Ceph Storage Cluster, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
  • Both clusters have a client node attached to them where the rbd-mirror daemon will run. The daemon on the site-b cluster will connect to the site-a cluster to sync images to site-b, and the daemon on the site-a cluster will connect to the site-b cluster to sync images to site-a. For information on installing Ceph clients, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
  • A pool with the same name is created on both clusters. In the examples below the pool is named data. See the Pools chapter in the Storage Strategies Guide for Red Hat Ceph Storage 3 for details.
  • The pool contains images you want to mirror and journaling is enabled on them. In the examples below, the images are named image1 and image2. See Enabling Journaling for details.

There are two ways to configure block device mirroring:

  • Pool Mirroring: To mirror all images within a pool, follow Configuring Pool Mirroring immediately below.
  • Image Mirroring: To mirror select images within a pool, follow Configuring Image Mirroring below.

Configuring Pool Mirroring

  1. Ensure that all images within the data pool have the exclusive-lock and journaling features enabled. See Section 4.1, “Enabling Journaling” for details.
  2. Set up one-way mirroring by following steps 2 - 7 in the Configuring Pool Mirroring section of Configuring One-Way Mirroring.
  3. On the client node of the site-a cluster, install the rbd-mirror package. The package is provided by the Red Hat Ceph Storage 3 Tools repository.

    Red Hat Enterprise Linux

    # yum install rbd-mirror

    Ubuntu

    $ sudo apt-get install rbd-mirror
  4. On the client node of the site-a cluster, specify the cluster name by adding the CLUSTER option to the appropriate file. On Red Hat Enterprise Linux, update the /etc/sysconfig/ceph file, and on Ubuntu, update the /etc/default/ceph file accordingly:

    CLUSTER=site-a
  5. Copy the site-b Ceph configuration file and RBD keyring file from the site-b monitor to the site-a monitor and client nodes:

    # scp /etc/ceph/ceph.conf <user>@<site-a_mon-host-name>:/etc/ceph/site-b.conf
    # scp /etc/ceph/site-b.client.site-b.keyring <user>@<site-a_mon-host-name>:/etc/ceph/
    # scp /etc/ceph/ceph.conf <user>@<site-a_client-host-name>:/etc/ceph/site-b.conf
    # scp /etc/ceph/site-b.client.site-b.keyring <user>@<site-a_client-host-name>:/etc/ceph/
    Note

    The scp commands that transfer the Ceph configuration file from the site-b monitor node to the site-a monitor and client nodes rename the file to site-b.conf. The keyring file name stays the same.

  6. Copy the site-a RBD keyring file from the site-a monitor node to the site-a client node:

    # scp /etc/ceph/site-a.client.site-a.keyring <user>@<site-a_client-host-name>:/etc/ceph/
  7. Create a symbolic link named site-a.conf pointing to ceph.conf on the site-a cluster client node:

    # cd /etc/ceph
    # ln -s ceph.conf site-a.conf
  8. Enable and start the rbd-mirror daemon on the site-a client node:

    systemctl enable ceph-rbd-mirror.target
    systemctl enable ceph-rbd-mirror@<client-id>
    systemctl start ceph-rbd-mirror@<client-id>

    Where <client-id> is the Ceph Storage cluster user that the rbd-mirror daemon will use. The user must have the appropriate cephx access to the cluster. For detailed information, see the User Management chapter in the Administration Guide for Red Hat Ceph Storage 3.

    Based on the preceding examples using site-a, run the following commands:

    # systemctl enable ceph-rbd-mirror.target
    # systemctl enable ceph-rbd-mirror@site-a
    # systemctl start ceph-rbd-mirror@site-a
  9. Enable pool mirroring of the data pool residing on the site-b cluster by running the following command on a monitor node in the site-b cluster:

    # rbd mirror pool enable data pool

    And ensure that mirroring has been successfully enabled:

    # rbd mirror pool info data
    Mode: pool
    Peers: none
  10. Add the site-b cluster as a peer of the site-a cluster by running the following command from the client node in the site-a cluster:

    # rbd --cluster site-a mirror pool peer add data client.site-b@site-b -n client.site-a

    And ensure that the peer was successfully added:

    # rbd mirror pool info data
    Mode: pool
    Peers:
      UUID                                 NAME   CLIENT
      dc97bd3f-869f-48a5-9f21-ff31aafba733 site-b client.site-b
  11. Check the mirroring status from the client node on the site-a cluster.

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-16 15:45:31
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-16 15:55:33

    The images should be in state up+stopped. Here, up means the rbd-mirror daemon is running and stopped means the image is not a target for replication from another cluster. This is because the images are primary on this cluster.

    Note

    Previously, when setting up one-way mirroring, the images were configured to replicate to site-b. That was achieved by installing rbd-mirror on the site-b client node so it could "pull" updates from site-a to site-b. At this point the site-a cluster is ready to be mirrored to, but the images are not in a state that requires it. Mirroring in the other direction will start if the images on site-a are demoted and the images on site-b are promoted. For information on how to promote and demote images, see Image Configuration.

Configuring Image Mirroring

  1. Set up one-way mirroring if it is not already set up.

    1. Follow steps 2 - 7 in the Configuring Pool Mirroring section of Configuring One-Way Mirroring.
    2. Follow steps 3 - 5 in the Configuring Image Mirroring section of Configuring One-Way Mirroring.
  2. Follow steps 3 - 7 in the Configuring Pool Mirroring section of Configuring Two-Way Mirroring. This section is immediately above.
  3. Add the site-b cluster as a peer of the site-a cluster by running the following command from the client node in the site-a cluster:

    # rbd --cluster site-a mirror pool peer add data client.site-b@site-b -n client.site-a

    And ensure that the peer was successfully added:

    # rbd mirror pool info data
    Mode: image
    Peers:
      UUID                                 NAME   CLIENT
      dc97bd3f-869f-48a5-9f21-ff31aafba733 site-b client.site-b
  4. Check the mirroring status from the client node on the site-a cluster.

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-16 15:45:31
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-16 15:55:33

    The images should be in state up+stopped. Here, up means the rbd-mirror daemon is running and stopped means the image is not a target for replication from another cluster. This is because the images are primary on this cluster.

    Note

    Previously, when setting up one-way mirroring, the images were configured to replicate to site-b. That was achieved by installing rbd-mirror on the site-b client node so it could "pull" updates from site-a to site-b. At this point the site-a cluster is ready to be mirrored to, but the images are not in a state that requires it. Mirroring in the other direction will start if the images on site-a are demoted and the images on site-b are promoted. For information on how to promote and demote images, see Image Configuration.

4.6. Delayed Replication

Whether you are using one- or two-way replication, you can delay replication between RADOS Block Device (RBD) mirroring images. You might want to implement delayed replication to allow a window of time in which an unwanted change to the primary image can be reverted before it is replicated to the secondary image.

To implement delayed replication, the rbd-mirror daemon within the destination cluster should set the rbd mirroring replay delay = <minimum delay in seconds> configuration setting. This setting can either be applied globally within the ceph.conf file used by the rbd-mirror daemons, or on an individual image basis.
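
For example, a hypothetical global entry imposing a 10 minute (600 second) minimum delay might look like the following in the ceph.conf file read by the rbd-mirror daemons; the [client] section shown here is an assumption, and the section matching your rbd-mirror daemon's client ID can be used instead:

[client]
rbd mirroring replay delay = 600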

To use delayed replication for a specific image, run the following rbd CLI command on the primary image:

rbd image-meta set <image-spec> conf_rbd_mirroring_replay_delay <minimum delay in seconds>

For example, to set a 10 minute minimum replication delay on image vm-1 in the pool vms:

# rbd image-meta set vms/vm-1 conf_rbd_mirroring_replay_delay 600

4.7. Recovering from a disaster with one-way mirroring

To recover from a disaster when using one-way mirroring, use the following procedures. They show how to fail over to the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly.

In the examples below, the primary cluster is known as the site-a cluster, and the secondary cluster is known as the site-b cluster. Additionally, the clusters both have a data pool with two images, image1 and image2.

Important

One-way mirroring supports multiple secondary sites. If you are using additional secondary clusters, choose one of the secondary clusters to fail over to. Synchronize from the same cluster during failback.

Prerequisites

  • At least two running clusters.
  • Pool mirroring or image mirroring configured with one-way mirroring.

Failover After an Orderly Shutdown

  1. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 13.
  2. Demote the primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster:

    # rbd mirror image demote data/image1
    # rbd mirror image demote data/image2
  3. Promote the non-primary images located on the site-b cluster by running the following commands on a monitor node in the site-b cluster:

    # rbd mirror image promote data/image1
    # rbd mirror image promote data/image2
  4. After some time, check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopped and the description should say primary:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-17 13:18:36
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-17 13:18:36

Failover After a Non-Orderly Shutdown

  1. Verify that the primary cluster is down.
  2. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 10.
  3. Promote the non-primary images from a monitor node in the site-b cluster. Use the --force option, because the demotion cannot be propagated to the site-a cluster:

    # rbd mirror image promote --force data/image1
    # rbd mirror image promote --force data/image2
  4. Check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopping_replay and the description should say force promoted:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopping_replay
      description: force promoted
      last_update: 2019-04-17 13:25:06
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopping_replay
      description: force promoted
      last_update: 2019-04-17 13:25:06

Prepare for failback

When the formerly primary cluster recovers, fail back to it.

If the two clusters were originally configured only for one-way mirroring, then in order to fail back, the formerly primary cluster must also be configured for mirroring, so that images can be replicated in the opposite direction.

  1. On the client node of the site-a cluster, install the rbd-mirror package. The package is provided by the Red Hat Ceph Storage 3 Tools repository.

    Red Hat Enterprise Linux

    # yum install rbd-mirror

    Ubuntu

    $ sudo apt-get install rbd-mirror
  2. On the client node of the site-a cluster, specify the cluster name by adding the CLUSTER option to the appropriate file. On Red Hat Enterprise Linux, update the /etc/sysconfig/ceph file, and on Ubuntu, update the /etc/default/ceph file accordingly:

    CLUSTER=site-a
  3. Copy the site-b Ceph configuration file and RBD keyring file from the site-b monitor to the site-a monitor and client nodes:

    # scp /etc/ceph/ceph.conf <user>@<site-a_mon-host-name>:/etc/ceph/site-b.conf
    # scp /etc/ceph/site-b.client.site-b.keyring <user>@<site-a_mon-host-name>:/etc/ceph/
    # scp /etc/ceph/ceph.conf <user>@<site-a_client-host-name>:/etc/ceph/site-b.conf
    # scp /etc/ceph/site-b.client.site-b.keyring <user>@<site-a_client-host-name>:/etc/ceph/
    Note

    The scp commands that transfer the Ceph configuration file from the site-b monitor node to the site-a monitor and client nodes rename the file to site-b.conf. The keyring file name stays the same.

  4. Copy the site-a RBD keyring file from the site-a monitor node to the site-a client node:

    # scp /etc/ceph/site-a.client.site-a.keyring <user>@<site-a_client-host-name>:/etc/ceph/
  5. Enable and start the rbd-mirror daemon on the site-a client node:

    systemctl enable ceph-rbd-mirror.target
    systemctl enable ceph-rbd-mirror@<client-id>
    systemctl start ceph-rbd-mirror@<client-id>

    Change <client-id> to the Ceph Storage cluster user that the rbd-mirror daemon will use. The user must have the appropriate cephx access to the cluster. For detailed information, see the User Management chapter in the Administration Guide for Red Hat Ceph Storage 3.

    Based on the preceding examples using site-a, the commands would be:

    # systemctl enable ceph-rbd-mirror.target
    # systemctl enable ceph-rbd-mirror@site-a
    # systemctl start ceph-rbd-mirror@site-a
  6. From the client node on the site-a cluster, add the site-b cluster as a peer:

    # rbd --cluster site-a mirror pool peer add data client.site-b@site-b -n client.site-a

    If you are using multiple secondary clusters, only the secondary cluster chosen to fail over to, and fail back from, must be added.

  7. From a monitor node in the site-a cluster, verify the site-b cluster was successfully added as a peer:

    # rbd mirror pool info -p data
    Mode: image
    Peers:
      UUID                                 NAME   CLIENT
      d2ae0594-a43b-4c67-a167-a36c646e8643 site-b client.site-b

Failback

When the formerly primary cluster recovers, fail back to it.

  1. From a monitor node on the site-a cluster, determine if the images are still primary:

    # rbd info data/image1
    # rbd info data/image2

    In the output from the commands, look for mirroring primary: true or mirroring primary: false to determine the state.

  2. Demote any images that are listed as primary by running a command like the following from a monitor node in the site-a cluster:

    # rbd mirror image demote data/image1
  3. Resynchronize the images ONLY if there was a non-orderly shutdown. Run the following commands on a monitor node in the site-a cluster to resynchronize the images from site-b to site-a:

    # rbd mirror image resync data/image1
    Flagged image for resync from primary
    # rbd mirror image resync data/image2
    Flagged image for resync from primary
    1. After some time, ensure resynchronization of the images is complete by verifying they are in the up+replaying state. Check their state by running the following commands on a monitor node in the site-a cluster:

      # rbd mirror image status data/image1
      # rbd mirror image status data/image2
  4. Demote the images on the site-b cluster by running the following commands on a monitor node in the site-b cluster:

    # rbd mirror image demote data/image1
    # rbd mirror image demote data/image2
    Note

    If there are multiple secondary clusters, this only needs to be done from the secondary cluster where it was promoted.

  5. Promote the formerly primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster:

    # rbd mirror image promote data/image1
    # rbd mirror image promote data/image2
  6. Check the status of the images from a monitor node in the site-a cluster. They should show a status of up+stopped and the description should say local image is primary:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 11:14:51
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 11:14:51

Remove two-way mirroring

In the Prepare for failback section above, functions for two-way mirroring were configured to enable synchronization from the site-b cluster to the site-a cluster. After failback is complete, these functions can be disabled.

  1. Remove the site-b cluster as a peer from the site-a cluster:

    rbd mirror pool peer remove <pool-name> <peer-client-name>@<peer-cluster-name> --cluster <cluster-name>

    For example:

    # rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a
  2. Stop and disable the rbd-mirror daemon on the site-a client:

    systemctl stop ceph-rbd-mirror@<client-id>
    systemctl disable ceph-rbd-mirror@<client-id>
    systemctl disable ceph-rbd-mirror.target

    For example:

    # systemctl stop ceph-rbd-mirror@site-a
    # systemctl disable ceph-rbd-mirror@site-a
    # systemctl disable ceph-rbd-mirror.target

4.8. Recovering from a disaster with two-way mirroring

To recover from a disaster when using two-way mirroring, use the following procedures. They show how to fail over to the mirrored data on the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly.

In the examples below, the primary cluster is known as the site-a cluster, and the secondary cluster is known as the site-b cluster. Additionally, the clusters both have a data pool with two images, image1 and image2.

Prerequisites

  • At least two running clusters.
  • Pool mirroring or image mirroring configured with two-way mirroring.

Failover After an Orderly Shutdown

  1. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 10.
  2. Demote the primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster:

    # rbd mirror image demote data/image1
    # rbd mirror image demote data/image2
  3. Promote the non-primary images located on the site-b cluster by running the following commands on a monitor node in the site-b cluster:

    # rbd mirror image promote data/image1
    # rbd mirror image promote data/image2
  4. After some time, check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopped and be listed as primary:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-17 16:04:37
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-17 16:04:37
  5. Resume access to the images. This step depends on which clients use the image.

Failover After a Non-Orderly Shutdown

  1. Verify that the primary cluster is down.
  2. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. See the Block Storage and Volumes chapter in the Storage Guide for Red Hat OpenStack Platform 10.
  3. Promote the non-primary images from a monitor node in the site-b cluster. Use the --force option, because the demotion cannot be propagated to the site-a cluster:

    # rbd mirror image promote --force data/image1
    # rbd mirror image promote --force data/image2
  4. Check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopping_replay and the description should say force promoted:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopping_replay
      description: force promoted
      last_update: 2019-04-17 13:25:06
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopping_replay
      description: force promoted
      last_update: 2019-04-17 13:25:06

Failback

When the formerly primary cluster recovers, fail back to it.

  1. Check the status of the images from a monitor node in the site-b cluster again. They should show a state of up+stopped and the description should say local image is primary:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 17:37:48
    # rbd mirror image status data/image2
    image2:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 17:38:18
  2. From a monitor node on the site-a cluster, determine if the images are still primary:

    # rbd info data/image1
    # rbd info data/image2

    In the output from the commands, look for mirroring primary: true or mirroring primary: false to determine the state.

  3. Demote any images that are listed as primary by running a command like the following from a monitor node in the site-a cluster:

    # rbd mirror image demote data/image1
  4. Resynchronize the images ONLY if there was a non-orderly shutdown. Run the following commands on a monitor node in the site-a cluster to resynchronize the images from site-b to site-a:

    # rbd mirror image resync data/image1
    Flagged image for resync from primary
    # rbd mirror image resync data/image2
    Flagged image for resync from primary
  5. After some time, ensure resynchronization of the images is complete by verifying they are in the up+replaying state. Check their state by running the following commands on a monitor node in the site-a cluster:

    # rbd mirror image status data/image1
    # rbd mirror image status data/image2
  6. Demote the images on the site-b cluster by running the following commands on a monitor node in the site-b cluster:

    # rbd mirror image demote data/image1
    # rbd mirror image demote data/image2
    Note

    If there are multiple secondary clusters, this only needs to be done from the secondary cluster where it was promoted.

  7. Promote the formerly primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster:

    # rbd mirror image promote data/image1
    # rbd mirror image promote data/image2
  8. Check the status of the images from a monitor node in the site-a cluster. They should show a status of up+stopped and the description should say local image is primary:

    # rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 11:14:51
    # rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 11:14:51

4.9. Updating Instances with Mirroring

When updating a cluster that uses Ceph Block Device mirroring with an asynchronous update, follow the installation instructions for the update. Then, restart the Ceph Block Device instances.

Note

There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images, followed by the instance pointing to the mirrored pool.
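
As a sketch, assuming the instances in question are the rbd-mirror daemons configured earlier in this chapter, restarting one instance might look like the following; substitute the client ID used by your daemon:

# systemctl restart ceph-rbd-mirror@site-a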