- 11.1. Replicated Volumes vs Geo-replication
- 11.2. Preparing to Deploy Geo-replication
- 11.3. Starting Geo-replication
- 11.4. Restoring Data from the Slave
- 11.5. Triggering Geo-replication Failover and Failback
- 11.6. Best Practices
- 11.7. Troubleshooting Geo-replication
- 11.7.1. Locating Log Files
- 11.7.2. Rotating Geo-replication Logs
- 11.7.3. Synchronization is not complete
- 11.7.4. Issues in Data Synchronization
- 11.7.5. Geo-replication status displays Faulty very often
- 11.7.6. Intermediate Master goes to Faulty State
- 11.7.7. Remote gsyncd Not Found
Geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
Geo-replication uses a master–slave model, in which replication and mirroring occur between the following partners:
- Master – a Red Hat Storage volume
- Slave – can be one of the following types:
  - A local directory, which can be represented as a file URL such as `file:///path/to/dir`. You can also use the shortened form, for example, `/path/to/dir`.
  - A Red Hat Storage volume. The slave volume can be either a local volume such as `gluster://localhost:volname` (shortened form: `:volname`) or a volume served by a different host such as `gluster://host:volname` (shortened form: `host:volname`).
Note
Both of the above types can be accessed remotely over an SSH tunnel. To use SSH, add an SSH prefix to either a file URL or a glusterFS-type URL, for example, `ssh://root@remote-host:/path/to/dir` (shortened form: `root@remote-host:/path/to/dir`) or `ssh://root@remote-host:gluster://localhost:volname` (shortened form: `root@remote-host::volname`). The sketch below shows these slave forms in use.
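As a minimal sketch, assuming a master volume named Volume1, a remote slave host example.com, and a slave volume named slavevol (all hypothetical names), the slave URL forms described above might be passed to the geo-replication commands like this:

```
# Slave is a plain directory on a remote host, reached over SSH
# (shortened form of ssh://root@example.com:/data/remote_dir)
gluster volume geo-replication Volume1 root@example.com:/data/remote_dir start

# Slave is a Red Hat Storage volume on a remote host, reached over SSH
# (shortened form of ssh://root@example.com:gluster://localhost:slavevol)
gluster volume geo-replication Volume1 root@example.com::slavevol start

# Check the status of the session
gluster volume geo-replication Volume1 root@example.com::slavevol status
```

The shortened forms are interchangeable with the full URLs; the full `ssh://` and `gluster://` prefixes simply make the transport and slave type explicit.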
This section introduces Geo-replication, illustrates the various deployment scenarios, and explains how to configure the system to provide replication and mirroring in your environment.
The following table lists the differences between replicated volumes and geo-replication:
| Replicated Volumes | Geo-replication |
|---|---|
| Mirrors data across bricks within one trusted storage pool | Mirrors data across geographically distributed trusted storage pools |
| Provides high availability | Ensures data is backed up for disaster recovery |
| Synchronous replication (each and every file operation is sent across all the bricks) | Asynchronous replication (checks periodically for changes in files and syncs them on detecting differences) |
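As a rough illustration of the distinction, and assuming hypothetical host and volume names (server1, server2, repvol, remote-site.example.com, slavevol), a replicated volume is created across bricks within a single trusted storage pool, while geo-replication mirrors an existing volume to a remote site:

```
# Replicated volume: synchronous mirroring across bricks in one trusted storage pool
gluster volume create repvol replica 2 server1:/rhs/brick1 server2:/rhs/brick1
gluster volume start repvol

# Geo-replication: asynchronous mirroring of the volume to a geographically remote slave
gluster volume geo-replication repvol root@remote-site.example.com::slavevol start
```

The two mechanisms complement each other: replicated volumes keep the data highly available within a site, and geo-replication keeps a disaster-recovery copy at another site.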