35.2. Configure Cross-Datacenter Replication
35.2.1. Configure Cross-Datacenter Replication (Remote Client-Server Mode)
Procedure 35.1. Set Up Cross-Datacenter Replication
Set Up RELAY
Add the following configuration to the standalone.xml file to set up RELAY:

<subsystem xmlns="urn:infinispan:server:jgroups:8.0">
    <channels default="cluster">
        <channel name="cluster"/>
        <channel name="xsite" stack="tcp"/>
    </channels>
    <stacks default="udp">
        <stack name="udp">
            <transport type="UDP" socket-binding="jgroups-udp"/>
            <...other protocols...>
            <relay site="LON">
                <remote-site name="NYC" channel="xsite"/>
                <remote-site name="SFO" channel="xsite"/>
            </relay>
        </stack>
    </stacks>
</subsystem>

The RELAY protocol creates an additional stack (running parallel to the existing UDP stack) to communicate with the remote site. If a TCP-based stack is used for the local cluster, two TCP-based stack configurations are required: one for local communication and one to connect to the remote site, as shown in the sketch below. For an illustration, see Section 35.1, “Cross-Datacenter Replication Operations”.
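As a minimal sketch of that two-stack setup, the following shows a local TCP stack alongside a second TCP stack reserved for cross-site traffic. The stack name xsite-tcp and the socket binding jgroups-tcp-xsite are hypothetical placeholders and must match socket bindings actually defined in your server configuration:

<subsystem xmlns="urn:infinispan:server:jgroups:8.0">
    <channels default="cluster">
        <channel name="cluster"/>
        <!-- The xsite channel runs on its own TCP stack -->
        <channel name="xsite" stack="xsite-tcp"/>
    </channels>
    <stacks default="local-tcp">
        <!-- Stack for communication inside the local site -->
        <stack name="local-tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <!-- Other protocols here -->
            <relay site="LON">
                <remote-site name="NYC" channel="xsite"/>
                <remote-site name="SFO" channel="xsite"/>
            </relay>
        </stack>
        <!-- Second TCP stack used only to reach the remote sites -->
        <stack name="xsite-tcp">
            <transport type="TCP" socket-binding="jgroups-tcp-xsite"/>
            <!-- Other protocols here -->
        </stack>
    </stacks>
</subsystem>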
Set Up Sites

Use the following configuration in the standalone.xml file to set up sites for each distributed cache in the cluster:

<distributed-cache name="namedCache">
    <!-- Additional configuration elements here -->
    <backups>
        <backup site="{FIRSTSITENAME}" strategy="{SYNC/ASYNC}"/>
        <backup site="{SECONDSITENAME}" strategy="{SYNC/ASYNC}"/>
    </backups>
</distributed-cache>
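For example, with the site names used in this chapter, a cache that backs up synchronously to NYC and asynchronously to SFO might look as follows. This is a sketch only; choose the strategy per site based on your latency and consistency requirements:

<distributed-cache name="namedCache">
    <backups>
        <backup site="NYC" strategy="SYNC"/>
        <backup site="SFO" strategy="ASYNC"/>
    </backups>
</distributed-cache>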
Configure Local Site Transport

Add the name of the local site in the transport element to configure transport:

<transport executor="infinispan-transport" lock-timeout="60000"
           cluster="LON" stack="udp"/>
An example of this configuration is available at $JDG_SERVER/docs/examples/configs/clustered-xsite.xml.
35.2.2. Configure Cross-Datacenter Replication (Library Mode)
35.2.2.1. Configure Cross-Datacenter Replication Declaratively
The relay.RELAY2 protocol creates an additional stack (running parallel to the existing TCP stack) to communicate with the remote site. If a TCP-based stack is used for the local cluster, two TCP-based stack configurations are required: one for local communication and one to connect to the remote site.
Procedure 35.2. Setting Up Cross-Datacenter Replication
Configure the Local Site
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:8.0 http://www.infinispan.org/schemas/infinispan-config-8.0.xsd"
    xmlns="urn:infinispan:config:8.0">
    <jgroups>
        <stack-file name="udp" path="jgroups-with-relay.xml"/>
    </jgroups>
    <cache-container default-cache="default">
        <transport cluster="infinispan-cluster" lock-timeout="50000"
                   stack="udp" node-name="node1" machine="machine1"
                   rack="rack1" site="LON"/>
        <local-cache name="default">
            <backups>
                <backup site="NYC" strategy="SYNC" failure-policy="IGNORE" timeout="12003"/>
                <backup site="SFO" strategy="ASYNC"/>
            </backups>
        </local-cache>
        <!-- Additional configuration information here -->
    </cache-container>
</infinispan>

- Add the site attribute to the transport element to define the local site (in this example, the local site is named LON).
- Cross-site replication requires a non-default JGroups configuration. Define the jgroups element and a custom stack-file, passing in the name of the file to be referenced and the location of this custom configuration. In this example, the JGroups configuration file is named jgroups-with-relay.xml.
- Configure the cache in site LON to back up to the sites NYC and SFO.

Configure the Backup Caches

Configure the backup caches (a per-site configuration sketch follows this procedure):

- Configure the cache in site NYC to receive backup data from LON:

  <local-cache name="backupNYC">
      <backups/>
      <backup-for remote-cache="default" remote-site="LON"/>
  </local-cache>

- Configure the cache in site SFO to receive backup data from LON:

  <local-cache name="backupSFO">
      <backups/>
      <backup-for remote-cache="default" remote-site="LON"/>
  </local-cache>
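Each site runs from its own configuration file. As a sketch reusing the names above, the configuration for the NYC site would set site="NYC" in its transport element and define the receiving cache; its copy of jgroups-with-relay.xml would likewise set site="NYC" in the relay.RELAY2 element:

<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:8.0 http://www.infinispan.org/schemas/infinispan-config-8.0.xsd"
    xmlns="urn:infinispan:config:8.0">
    <jgroups>
        <stack-file name="udp" path="jgroups-with-relay.xml"/>
    </jgroups>
    <cache-container default-cache="default">
        <!-- This node belongs to the NYC site -->
        <transport cluster="infinispan-cluster" stack="udp" site="NYC"/>
        <!-- Receives backup data from the default cache in LON -->
        <local-cache name="backupNYC">
            <backups/>
            <backup-for remote-cache="default" remote-site="LON"/>
        </local-cache>
    </cache-container>
</infinispan>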
Add the Contents of the Configuration File
By default, Red Hat JBoss Data Grid includes JGroups configuration files such as default-configs/default-jgroups-tcp.xml and default-configs/default-jgroups-udp.xml in the infinispan-embedded-{VERSION}.jar package.

Copy the JGroups configuration to a new file (in this example, it is named jgroups-with-relay.xml) and add the provided configuration information to this file. Note that the relay.RELAY2 protocol configuration must be the last protocol in the configuration stack.

<config>
    ...
    <relay.RELAY2 site="LON" config="relay.xml"
                  relay_multicasts="false"/>
</config>
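To illustrate the required ordering, a trimmed, hypothetical UDP-based jgroups-with-relay.xml might look like the following. Protocol attributes are omitted for brevity; a real file copied from default-jgroups-udp.xml contains additional protocols and tuned attributes:

<config xmlns="urn:org:jgroups">
    <UDP/>
    <PING/>
    <MERGE3/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <FRAG2/>
    <!-- relay.RELAY2 must be the last protocol in the stack -->
    <relay.RELAY2 site="LON" config="relay.xml"
                  relay_multicasts="false"/>
</config>

Configure the relay.xml File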
Set up the relay.RELAY2 configuration in the relay.xml file. This file describes the global cluster configuration:

<RelayConfiguration>
    <sites>
        <site name="LON" id="0">
            <bridges>
                <bridge config="jgroups-global.xml" name="global"/>
            </bridges>
        </site>
        <site name="NYC" id="1">
            <bridges>
                <bridge config="jgroups-global.xml" name="global"/>
            </bridges>
        </site>
        <site name="SFO" id="2">
            <bridges>
                <bridge config="jgroups-global.xml" name="global"/>
            </bridges>
        </site>
    </sites>
</RelayConfiguration>
Configure the Global Cluster

The file jgroups-global.xml referenced in relay.xml contains another JGroups configuration which is used for the global cluster: communication between sites.

The global cluster configuration is usually TCP-based and uses the TCPPING protocol (instead of PING or MPING) to discover members. Copy the contents of default-configs/default-jgroups-tcp.xml into jgroups-global.xml and add the following configuration to configure TCPPING:

<config>
    <TCP bind_port="7800" ... />
    <TCPPING initial_hosts="lon.hostname[7800],nyc.hostname[7800],sfo.hostname[7800]"
             ergonomics="false"/>
    <!-- Rest of the protocols -->
</config>

Replace the hostnames (or IP addresses) in TCPPING.initial_hosts with those used for your site masters. The ports (7800 in this example) must match the TCP.bind_port.

For more information about the TCPPING protocol, see Section 30.2.1.3, “Using the TCPPing Protocol”.
