10.3. Preparing to Deploy Geo-replication

This section provides an overview of geo-replication deployment scenarios, lists the prerequisites, and describes how to set up the environment for a geo-replication session.

10.3.1. Exploring Geo-replication Deployment Scenarios

Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet. This section illustrates the most common deployment scenarios for geo-replication, including the following:
  • Geo-replication over LAN
  • Geo-replication over WAN
  • Geo-replication over the Internet
  • Multi-site cascading geo-replication

10.3.2. Geo-replication Deployment Overview

Deploying geo-replication involves the following steps:
  1. Verify that your environment matches the minimum system requirements. See Section 10.3.3, “Prerequisites”.
  2. Determine the appropriate deployment scenario. See Section 10.3.1, “Exploring Geo-replication Deployment Scenarios”.
  3. Start geo-replication on the master and slave systems. See Section 10.4, “Starting Geo-replication”.

10.3.3. Prerequisites

The following are prerequisites for deploying geo-replication:
  • The master and slave volumes must be Red Hat Storage instances.
  • A slave node must not be a peer of any of the nodes in the master trusted storage pool.
  • Password-less SSH access is required between one node of the master volume (the node from which the geo-replication create command will be executed) and one node of the slave volume (the node whose IP address or host name will be specified as the slave when running the geo-replication create command).
    Create the public and private keys using ssh-keygen (without a passphrase) on the master node:
    # ssh-keygen
    Copy the public key to the slave node using the following command:
    # ssh-copy-id root@slave_node_IPaddress/Hostname
    If you are setting up a non-root geo-replication session, copy the public key to the respective user's location instead.

    Note

    - Password-less SSH access is required from the master node to the slave node; it is not required from the slave node to the master node.
    - The ssh-copy-id command does not work if the SSH authorized_keys file is configured in a custom location. In that case, you must copy the contents of the .ssh/id_rsa.pub file from the master node and paste it into the authorized_keys file in the custom location on the slave node, as shown in the sketch after this note.
    A password-less SSH connection is also required for gsyncd between every node in the master volume and every node in the slave volume. The gluster system:: execute gsec_create command creates secret-pem files on all the nodes in the master, which are used to implement the password-less SSH connection. The push-pem option of the geo-replication create command pushes these keys to all the nodes in the slave.
    For more information on the gluster system:: execute gsec_create and push-pem commands, see Section 10.3.4, “Setting Up your Environment for Geo-replication Session”.
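    If the authorized_keys file is in a custom location on the slave node, the following is a minimal sketch of copying the key manually. The path /etc/custom-ssh/authorized_keys is only an illustrative example; substitute the location configured on your slave node.
    Display the public key on the master node and copy its contents:
    # cat /root/.ssh/id_rsa.pub
    Append the copied key to the authorized_keys file in the custom location on the slave node:
    # cat >> /etc/custom-ssh/authorized_keys
    Paste the key, press Enter, and then press Ctrl+D to save.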

10.3.4. Setting Up your Environment for Geo-replication Session

Time Synchronization

Before configuring the geo-replication environment, ensure that the time on all the servers is synchronized.
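For example, you can keep the clocks synchronized with the Network Time Protocol. The following is a minimal sketch assuming a Red Hat Enterprise Linux 7 based node with the ntp package installed; use the equivalent service commands, or chrony, if your environment differs.
# systemctl enable ntpd
# systemctl start ntpd
# ntpstat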

Creating Geo-replication Sessions

  1. To create a common pem pub file, run the following command on the master node where the password-less SSH connection is configured:
    # gluster system:: execute gsec_create
  2. Create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem [force]
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol create push-pem

    Note

    There must be password-less SSH access between the node from which this command is run and the slave host specified in the command. This command performs the slave verification, which includes checking for a valid slave URL, a valid slave volume, and available space on the slave. If the verification fails, you can use the force option, which ignores the failed verification and creates the geo-replication session.
  3. Verify the status of the created session by running the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
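    For example, continuing with the volume names used in the create example above:
    # gluster volume geo-replication master-vol example.com::slave-vol status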

10.3.5. Setting Up your Environment for a Secure Geo-replication Slave

Geo-replication supports access to Red Hat Storage slaves through SSH using an unprivileged account (a user account with a non-zero UID). This method is recommended because it is more secure and reduces the master's capabilities over the slave to the minimum. This feature relies on mountbroker, an internal service of glusterd that manages the mounts for unprivileged slave accounts. You must perform additional steps to configure glusterd with the appropriate mountbroker access control directives. The following example demonstrates this process:
Perform the following steps on all the Slave nodes to set up an auxiliary glusterFS mount for the unprivileged account:
  1. Create a new group. For example, geogroup.
  2. Create an unprivileged account. For example, geoaccount. Add geoaccount as a member of the geogroup group.
  3. As root, create a new directory with permissions 0711. Ensure that the location where this directory is created is writable only by root but that geoaccount is able to access it. For example, create a mountbroker-root directory at /var/mountbroker-root. Example commands for these steps are shown after Step 5.
  4. Add the following options to the glusterd.vol file, assuming that the name of the slave Red Hat Storage volume is slavevol:
    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup 
    option rpc-auth-allow-insecure on
    See Section 2.4, “Storage Concepts” for information on the volume file of a Red Hat Storage volume.
    If you are unable to locate the glusterd.vol file in the /etc/glusterfs/ directory, create a vol file containing both the default configuration and the above options, and save it at /etc/glusterfs/.
    The following is a sample glusterd.vol file along with the default options:
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option transport.socket.keepalive-time 10
        option transport.socket.keepalive-interval 2
        option transport.socket.read-fail-log off
        option rpc-auth-allow-insecure on
        
        option mountbroker-root /var/mountbroker-root 
        option mountbroker-geo-replication.geoaccount slavevol
        option geo-replication-log-group geogroup
    end-volume
    • If you have multiple slave volumes on Slave, repeat Step 2 for each of them and add the following options to the vol file:
      option mountbroker-geo-replication.geoaccount2 slavevol2
      option mountbroker-geo-replication.geoaccount3 slavevol3
    • You can add multiple slave volumes within the same account (geoaccount) by providing a comma-separated list (without spaces) as the argument of mountbroker-geo-replication.geoaccount. You can also have multiple options of the form mountbroker-geo-replication.*. It is recommended to use one service account per master machine. For example, if there are multiple slave volumes on Slave for the master machines Master1, Master2, and Master3, create a dedicated service user on Slave for each of them by repeating Step 2 (for example, geoaccount1, geoaccount2, and geoaccount3), and then add the following corresponding options to the volfile:
      option mountbroker-geo-replication.geoaccount1 slavevol11,slavevol12,slavevol13
      option mountbroker-geo-replication.geoaccount2 slavevol21,slavevol22
      option mountbroker-geo-replication.geoaccount3 slavevol31
  5. Restart the glusterd service on all the Slave nodes.
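    The following is a minimal sketch of the commands for Steps 1 to 3 and Step 5, using the example names geogroup, geoaccount, and /var/mountbroker-root. Adjust the names to your environment, and use service glusterd restart instead of systemctl on Red Hat Enterprise Linux 6 based nodes.
    # groupadd geogroup
    # useradd -G geogroup geoaccount
    # mkdir /var/mountbroker-root
    # chmod 0711 /var/mountbroker-root
    # systemctl restart glusterd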
    After you set up the auxiliary glusterFS mount for the unprivileged account on all the Slave nodes, perform the following steps to set up a non-root geo-replication session:
  6. Set up password-less SSH from one of the master nodes to the unprivileged user on one of the slave nodes, for example, to geoaccount. A sketch of these commands is shown at the end of this section.
  7. Create a common pem pub file by running the following command on the master node where the password-less SSH connection is configured to the user on the slave node:
    # gluster system:: execute gsec_create
  8. Create a geo-replication relationship between the master and the slave for that user by running the following command on the master node:
    For example,
    # gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol create push-pem
    If you have multiple slave volumes and/or multiple accounts, create a geo-replication session with that particular user and volume.
    For example,
    # gluster volume geo-replication MASTERVOL geoaccount2@SLAVENODE::slavevol2 create push-pem
  9. On the slave node that was used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root, with the user name, master volume name, and slave volume name as the arguments.
    For example,
     # /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount MASTERVOL SLAVEVOL_NAME 
  10. Start geo-replication with the slave user by running the following command on the master node:
    For example,
    # gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol start
  11. Verify the status of the geo-replication session by running the following command on the master node:
    # gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol status
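As referenced in Step 6, the following is a minimal sketch of setting up password-less SSH from a master node to the unprivileged user on the slave node. The user name geoaccount is an example; substitute the account you created on the Slave.
    # ssh-keygen
    # ssh-copy-id geoaccount@slave_node_IPaddress/Hostname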