11.4. Restoring Data from the Slave

You can restore data from the slave to the master volume whenever the master volume becomes faulty for reasons such as hardware failure.
The example in this section assumes that you are using the Master Volume (Volume1) with the following configuration:
machine1# gluster volume info
Volume Name: Volume1
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: machine1:/export/dir16
Brick2: machine2:/export/dir16
Options Reconfigured:
geo-replication.indexing: on
Data is synced from the master volume (Volume1) to the slave directory (example.com:/data/remote_dir). To view the status of this geo-replication session, run the following command on the master:
# gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status

MASTER    SLAVE                                STATUS
______    _________________________________    ____________
Volume1   root@example.com:/data/remote_dir    OK
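If you want to check the session state non-interactively, for example from a cron job, you can parse the status output. The following is a minimal sketch, assuming the three-column output shown above (the exact columns can differ between GlusterFS versions), that prints a warning when the session is not OK:

#!/bin/bash
# Minimal sketch: warn if the geo-replication session for Volume1 is not OK.
# Assumes the status keyword is the last field of the session line, as in the
# three-column output shown above.
STATUS=$(gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status | awk '/^Volume1/ {print $NF}')
if [ "$STATUS" != "OK" ]; then
    echo "geo-replication session for Volume1 is in state: ${STATUS:-unknown}" >&2
fi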
Before Failure
Assume that the master volume had 100 files and was mounted at /mnt/gluster on one of the client machines (client). Run the following command on the client machine to count the files:
client# ls /mnt/gluster | wc -l
100
The slave directory (example.com:/data/remote_dir) will have the same data as the master volume, which can be verified by running the following command on the slave:
example.com# ls /data/remote_dir/ | wc -l
100
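To confirm from a single host that the master mount point and the slave directory contain the same number of files, you can compare the two counts directly. The following is a minimal sketch run on the client, assuming passwordless SSH access from the client to example.com:

# Compare file counts on the master mount point and the slave directory.
MASTER_COUNT=$(ls /mnt/gluster | wc -l)
SLAVE_COUNT=$(ssh example.com 'ls /data/remote_dir/ | wc -l')
echo "master: $MASTER_COUNT  slave: $SLAVE_COUNT"
if [ "$MASTER_COUNT" -eq "$SLAVE_COUNT" ]; then
    echo "counts match"
else
    echo "counts differ"
fi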
After Failure
If one of the bricks (machine2) fails, the status of the geo-replication session changes from "OK" to "Faulty". To view the status of this geo-replication session, run the following command on the master:
# gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status

MASTER    SLAVE                              STATUS
______    ______________________________     ____________
Volume1   root@example.com:/data/remote_dir  Faulty
Because machine2 has failed, you can now see a discrepancy in the number of files between the master and the slave. Some files are missing from the master volume, but they are still available on the slave, as shown below.
Run the following command on the client:
client# ls /mnt/gluster | wc -l
52
Run the following command on the slave (example.com):
example.com# ls /data/remote_dir/ | wc -l
100
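Before restoring, it can be helpful to see exactly which files exist on the slave but are missing from the master mount point. The following is a minimal sketch run on the client, assuming passwordless SSH access from the client to example.com:

# List files that are present on the slave but missing from the master mount.
ssh example.com 'ls /data/remote_dir/' | sort > /tmp/slave_files
ls /mnt/gluster | sort > /tmp/master_files
comm -23 /tmp/slave_files /tmp/master_files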
To restore data from the slave machine, perform the following steps. A consolidated sketch of the procedure follows this list.
  1. Stop all of the master's geo-replication sessions using the following command:
    # gluster volume geo-replication MASTER SLAVE stop
    For example:
    machine1# gluster volume geo-replication Volume1 example.com:/data/remote_dir stop
    
    Stopping geo-replication session between Volume1 &
    example.com:/data/remote_dir has been successful

    Note

    Repeat the gluster volume geo-replication MASTER SLAVE stop command for all active geo-replication sessions of the master volume.
  2. Replace the faulty brick in the master by using the following command:
    # gluster volume replace-brick VOLNAME BRICK NEW-BRICK start
    For example:
    machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 start
    Replace-brick started successfully
  3. Commit the migration of data using the following command:
    # gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit force
    For example:
    machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 commit force
    Replace-brick commit successful
  4. Verify the migration of the brick by viewing the volume information using the following command:
    # gluster volume info VOLNAME
    For example:
    machine1# gluster volume info
    Volume Name: Volume1
    Type: Distribute
    Status: Started
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: machine1:/export/dir16
    Brick2: machine3:/export/dir16
    Options Reconfigured:
    geo-replication.indexing: on
  5. Run the rsync command manually to sync data from the slave to the master volume's client (mount point).
    For example:
    example.com# rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster
    Verify that the data is synced using the following commands.
    On the master volume's client, run the following command:
    client# ls /mnt/gluster | wc -l
    100
    On the slave, run the following command:
    example.com# ls /data/remote_dir/ | wc -l
    100
    The master volume and the slave directory are now in sync.
  6. Restart the geo-replication session from the master to the slave using the following command:
    # gluster volume geo-replication MASTER SLAVE start
    For example:
    machine1# gluster volume geo-replication Volume1 example.com:/data/remote_dir start
    Starting geo-replication session between Volume1 &
    example.com:/data/remote_dir has been successful
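
For reference, the commands from steps 1 through 6 can be collected into a single recovery script. The following sketch uses the example hostnames, brick paths, and slave directory from this section, and assumes that the script runs on machine1 with SSH access to example.com (the rsync command in step 5 is otherwise run directly on the slave):

#!/bin/bash
# Consolidated sketch of the recovery procedure above for the example volume.
set -e

# 1. Stop the geo-replication session (repeat for every active session).
gluster volume geo-replication Volume1 example.com:/data/remote_dir stop

# 2-3. Replace the faulty brick and commit the migration.
gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 start
gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 commit force

# 4. Verify that the new brick appears in the volume information.
gluster volume info Volume1

# 5. Sync data from the slave back to the master volume's client mount point.
ssh example.com "rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster"

# 6. Restart the geo-replication session.
gluster volume geo-replication Volume1 example.com:/data/remote_dir start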