10.3. Preparing to Deploy Geo-replication
This section provides an overview of geo-replication deployment scenarios, lists prerequisites, and describes how to set up the environment for a geo-replication session.
10.3.1. Exploring Geo-replication Deployment Scenarios
Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet. This section illustrates the most common deployment scenarios for geo-replication, including the following:
- Geo-replication over LAN
- Geo-replication over WAN
- Geo-replication over the Internet
- Multi-site cascading geo-replication
10.3.2. Geo-replication Deployment Overview
Deploying geo-replication involves the following steps:
- Verify that your environment matches the minimum system requirements. See Section 10.3.3, “Prerequisites”.
- Determine the appropriate deployment scenario. See Section 10.3.1, “Exploring Geo-replication Deployment Scenarios”.
- Start geo-replication on the master and slave systems. See Section 10.4, “Starting Geo-replication”.
10.3.3. Prerequisites
The following are prerequisites for deploying geo-replication:
- The master and slave volumes must be Red Hat Storage instances.
- The slave node must not be a peer of any of the nodes of the Master trusted storage pool.
- Password-less SSH access is required between one node of the master volume (the node from which the geo-replication create command will be executed) and one node of the slave volume (the node whose IP address or hostname will be mentioned in the slave name when running the geo-replication create command).
Create the public and private keys using ssh-keygen (without a passphrase) on the master node:
# ssh-keygen
Copy the public key to the slave node using the following command:
# ssh-copy-id root@slave_node_IPaddress/Hostname
If you are setting up a non-root geo-replication session, copy the public key to the respective user location.
- Password-less SSH access is required from the master node to the slave node, whereas password-less SSH access is not required from the slave node to the master node.
- The ssh-copy-id command does not work if the SSH authorized_keys file is configured in a custom location. In that case, you must copy the contents of the .ssh/id_rsa.pub file from the Master and paste it into the authorized_keys file in the custom location on the Slave node (see the sketch after this list).
- A password-less SSH connection is also required for gsyncd between every node in the master and every node in the slave. The gluster system:: execute gsec_create command creates secret-pem files on all the nodes in the master, and these files are used to implement the password-less SSH connection. The push-pem option in the geo-replication create command pushes these keys to all the nodes in the slave. For more information on the gluster system:: execute gsec_create and push-pem commands, see Section 10.3.4, “Setting Up your Environment for Geo-replication Session”.
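If the authorized_keys file is in a custom location, one way to append the key is the following (a sketch; /custom/path/authorized_keys is a placeholder for the actual path configured on the Slave node):
# cat ~/.ssh/id_rsa.pub | ssh root@slave_node_IPaddress/Hostname "cat >> /custom/path/authorized_keys"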
10.3.4. Setting Up your Environment for Geo-replication Session
Before configuring the geo-replication environment, ensure that the time on all the servers is synchronized.
- The time must be uniform across all the bricks of a geo-replicated master volume. It is recommended to set up an NTP (Network Time Protocol) service to keep the bricks' clocks synchronized and to avoid out-of-sync effects.
For example, in a replicated volume where brick1 of the master has the time 12:20 and brick2 of the master has the time 12:10, a 10 minute lag, changes on brick2 during this period may go unnoticed while synchronizing files with the Slave.
For more information on configuring NTP, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Migration_Planning_Guide/sect-Migration_Guide-Networking-NTP.html.
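For example, on Red Hat Enterprise Linux 6 based servers, NTP can typically be installed and enabled as follows (a sketch; your environment may use a different time service):
# yum install ntp
# service ntpd start
# chkconfig ntpd on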
Creating Geo-replication Sessions
- To create a common pem pub file, run the following command on the master node where the password-less SSH connection is configured:
# gluster system:: execute gsec_create
- Create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem [force]
For example:
# gluster volume geo-replication master-vol example.com::slave-vol create push-pem
Note
There must be password-less SSH access between the node from which this command is run and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, a valid slave volume, and available space on the slave. If the verification fails, you can use the force option, which ignores the failed verification and creates the geo-replication session.
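For example, to create the session even if verification fails (using the volumes from the earlier example):
# gluster volume geo-replication master-vol example.com::slave-vol create push-pem force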
- Verify the status of the created session by running the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
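For example, for the session created above:
# gluster volume geo-replication master-vol example.com::slave-vol status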
10.3.5. Setting Up your Environment for a Secure Geo-replication Slave
Geo-replication supports access to Red Hat Storage slaves through SSH using an unprivileged account (a user account with a non-zero UID). This method is recommended because it is more secure and reduces the master's capabilities over the slave to the minimum. This feature relies on mountbroker, an internal service of glusterd which manages the mounts for unprivileged slave accounts. You must perform additional steps to configure glusterd with the appropriate mountbroker access control directives. The following example demonstrates this process:
Perform the following steps on all the Slave nodes to set up an auxiliary glusterFS mount for the unprivileged account:
- Create a new group. For example, geogroup.
- Create an unprivileged account. For example, geoaccount, as a member of geogroup.
- As root, create a new directory with permissions 0711. Ensure that the location where this directory is created is writable only by root but that geoaccount is able to access it. For example, create a mountbroker-root directory at /var/mountbroker-root.
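For example, the preceding three steps can typically be performed with standard commands (a sketch using the example names; adjust to your environment):
# groupadd geogroup
# useradd -G geogroup geoaccount
# mkdir /var/mountbroker-root
# chmod 0711 /var/mountbroker-root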
- Add the following options to the glusterd.vol file, assuming the name of the slave Red Hat Storage volume is slavevol:
option mountbroker-root /var/mountbroker-root
option mountbroker-geo-replication.geoaccount slavevol
option geo-replication-log-group geogroup
option rpc-auth-allow-insecure on
See Section 2.4, “Storage Concepts” for information on the volume file of a Red Hat Storage volume.
If you are unable to locate the /etc/glusterfs/ directory, create a vol file containing both the default configuration and the above options, and save it at /etc/glusterfs/.
The following is a sample glusterd.vol file along with the default options:
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option rpc-auth-allow-insecure on
    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup
end-volume
- If you have multiple slave volumes on the Slave, repeat Step 2 for each of them and add the following options to the vol file:
option mountbroker-geo-replication.geoaccount2 slavevol2
option mountbroker-geo-replication.geoaccount3 slavevol3
- You can add multiple slave volumes within the same account (geoaccount) by providing a comma-separated list (without spaces) as the argument of mountbroker-geo-replication.geoaccount. You can also have multiple options of the form mountbroker-geo-replication.*. It is recommended to use one service account per Master machine. For example, if there are multiple slave volumes on the Slave for the master machines Master1, Master2, and Master3, create a dedicated service user on the Slave for each of them by repeating Step 2 (for example, geoaccount1, geoaccount2, and geoaccount3), and then add the following corresponding options to the volfile:
option mountbroker-geo-replication.geoaccount1 slavevol11,slavevol12,slavevol13
option mountbroker-geo-replication.geoaccount2 slavevol21,slavevol22
option mountbroker-geo-replication.geoaccount3 slavevol31
- Restart the glusterd service on all the Slave nodes.
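For example, on Red Hat Storage servers the service can typically be restarted as follows (the exact command may vary by platform):
# service glusterd restart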
After you set up an auxiliary glusterFS mount for the unprivileged account on all the Slave nodes, perform the following steps to set up a non-root geo-replication session:
- Set up password-less SSH from one of the master nodes to the user on one of the slave nodes. For example, to geoaccount.
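For example, using ssh-copy-id as shown earlier (slave_node_IPaddress/Hostname is a placeholder):
# ssh-copy-id geoaccount@slave_node_IPaddress/Hostname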
- Create a common pem pub file by running the following command on the master node where the password-less SSH connection is configured to the user on the slave node:
# gluster system:: execute gsec_create
- Create a geo-replication relationship between the master and the slave for the user by running the following command on the master node. For example:
# gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol create push-pem
If you have multiple slave volumes and/or multiple accounts, create a geo-replication session with that particular user and volume. For example:
# gluster volume geo-replication MASTERVOL geoaccount2@SLAVENODE::slavevol2 create push-pem
- On the slave node that is used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root, with the user name, master volume name, and slave volume name as the arguments. For example:
# /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount MASTERVOL SLAVEVOL_NAME
- Start geo-replication with the slave user by running the following command on the master node. For example:
# gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol start
- Verify the status of the geo-replication session by running the following command on the master node:
# gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol status