14.5. Starting Geo-replication on a Newly Added Brick or Node
14.5.1. Starting Geo-replication for a New Brick or New Node
If a geo-replication session is running, and a new node is added to the trusted storage pool or a brick is added to the volume from a newly added node in the trusted storage pool, then you must perform the following steps to start the geo-replication daemon on the new node:
- Run the following command on the master node where passwordless SSH connection is configured, in order to create a common `pem pub` file:
# gluster system:: execute gsec_create
- Create the geo-replication session using the following command. The `force` option is required to perform the necessary `pem-file` setup on the slave nodes.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
For example:
# gluster volume geo-replication Volume1 example.com::slave-vol create push-pem force
Note: There must be passwordless SSH access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
- After successfully setting up the shared storage volume, when a new node is added to the cluster, the shared storage is not mounted automatically on this node, nor is the `/etc/fstab` entry for the shared storage added on this node. To make use of shared storage on this node, execute the following commands:
# mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage
# cp /etc/fstab /var/run/gluster/fstab.tmp
# echo "<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> /etc/fstab
For more information on setting up the shared storage volume, see Section 10.8, “Setting up Shared Storage Volume”.
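The fstab portion of the step above can be sketched safely against a scratch copy, so it can be tried without touching the real `/etc/fstab`. The IP address and file names here are placeholders, not values from your cluster:

```shell
#!/bin/sh
# Sketch of the shared-storage fstab step using a scratch file.
# NODE_IP is a hypothetical local-node address; FSTAB stands in for /etc/fstab.
NODE_IP="192.0.2.10"
FSTAB="./fstab.demo"

# Seed the scratch file with a typical existing entry
printf '/dev/sda1 / ext4 defaults 0 1\n' > "$FSTAB"

# Back up before editing, as the documented step does with fstab.tmp
cp "$FSTAB" "${FSTAB}.tmp"

# Append the shared-storage mount entry
echo "$NODE_IP:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0" >> "$FSTAB"

# Show the entry that was added
grep gluster_shared_storage "$FSTAB"
```

On a real node the same append targets `/etc/fstab` itself, after the backup copy, so the mount persists across reboots.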
- Configure the meta-volume for geo-replication:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
For example:
# gluster volume geo-replication Volume1 example.com::slave-vol config use_meta_volume true
For more information on configuring the meta-volume, see Section 14.3.5, “Configuring a Meta-Volume”.
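To confirm the setting took effect, the session configuration can be read back; running `config` with no value lists the session's current options. A minimal sketch, reusing the example volume and host names from above:

```shell
# List the session's current configuration and check the meta-volume option
# (Volume1 and example.com::slave-vol follow the example above)
gluster volume geo-replication Volume1 example.com::slave-vol config | grep use_meta_volume
```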
- If a node is added on the slave, stop the geo-replication session using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Start the geo-replication session between the slave and master forcefully, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
- Verify the status of the created session, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
14.5.2. Starting Geo-replication for a New Brick on an Existing Node
When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required.
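Although no manual steps are required, the automatic restart can be verified. A sketch, assuming the example volume and session names used earlier in this section (the brick path and hostname are illustrative):

```shell
# Add a brick on an existing node (hostname and brick path are placeholders)
gluster volume add-brick Volume1 node1.example.com:/rhgs/brick2

# The geo-replication daemon on that node restarts automatically; the new
# brick should now be listed in the session status output
gluster volume geo-replication Volume1 example.com::slave-vol status
```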