Chapter 3. Configuring Data Grid for Cross-Site Replication

To configure Data Grid to replicate data across sites, you first set up cluster transport so that Data Grid clusters can discover each other and site masters can communicate. You then add backup locations to cache definitions in your Data Grid configuration.

3.1. Configuring Cluster Transport for Cross-Site Replication

Add JGroups RELAY2 to your transport layer so that Data Grid clusters can communicate with backup locations.


  1. Open infinispan.xml for editing.
  2. Add the RELAY2 protocol to a JGroups stack, for example:

       <stack name="xsite" extends="udp">
          <relay.RELAY2 site="LON" xmlns="urn:org:jgroups" max_site_masters="1000"/>
          <remote-sites default-stack="tcp">
             <remote-site name="LON"/>
             <remote-site name="NYC"/>
          </remote-sites>
       </stack>
  3. Configure Data Grid cluster transport to use the stack, as in the following example:

    <cache-container name="default" statistics="true">
      <transport cluster="${}" stack="xsite"/>
    </cache-container>
  4. Save and close infinispan.xml.
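
With both changes in place, an embedded node reads the transport configuration when you construct the cache manager. The following is a minimal sketch, assuming infinispan.xml is on the classpath or working directory; the printed message is illustrative:

```java
import org.infinispan.manager.DefaultCacheManager;

public class StartNode {
    public static void main(String[] args) throws Exception {
        // Parses infinispan.xml and starts cluster transport,
        // including the RELAY2 stack declared in the JGroups configuration.
        try (DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml")) {
            System.out.println("Started node in cluster: " + cacheManager.getClusterName());
        }
    }
}
```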

3.1.1. JGroups RELAY2 Stacks

Data Grid clusters use JGroups RELAY2 for inter-cluster discovery and communication.

   <stack name="xsite" 1
          extends="udp"> 2
      <relay.RELAY2 xmlns="urn:org:jgroups" 3
                    site="LON" 4
                    max_site_masters="1000"/> 5
      <remote-sites default-stack="tcp"> 6
         <remote-site name="LON"/> 7
         <remote-site name="NYC"/>
      </remote-sites>
   </stack>
  1. Defines a stack named "xsite" that declares which protocols to use for your Data Grid cluster transport.
  2. Uses the default JGroups UDP stack for intra-cluster traffic.
  3. Adds RELAY2 to the stack for inter-cluster transport.
  4. Names the local site. Data Grid replicates data in caches from this site to backup locations.
  5. Configures a maximum of 1000 site masters for the local cluster. For optimal performance with backup requests, set max_site_masters to a value greater than or equal to the number of nodes in the Data Grid cluster.
  6. Specifies all site names and uses the default JGroups TCP stack for inter-cluster transport.
  7. Names each remote site as a backup location.

3.1.2. Custom JGroups RELAY2 Stacks

   <stack name="relay-global" extends="tcp"> 1
         <MPING stack.combine="REMOVE"/>
         <TCPPING initial_hosts="[7800]" stack.combine="INSERT_AFTER" stack.position="TCP"/>
   </stack>
   <stack name="xsite" extends="udp">
      <relay.RELAY2 site="LON" xmlns="urn:org:jgroups"
                    max_site_masters="10"/> 2
      <remote-sites default-stack="relay-global">
         <remote-site name="LON"/>
         <remote-site name="NYC"/>
      </remote-sites>
   </stack>
  1. Adds a custom RELAY2 stack that extends the TCP stack and uses TCPPING instead of MPING for discovery.
  2. Sets the maximum number of site masters and optionally specifies additional RELAY2 properties. See the JGroups RELAY2 documentation.

You can also reference externally defined JGroups stack files as follows:

<stack-file name="relay-global" path="jgroups-relay.xml"/>

In the preceding configuration, jgroups-relay.xml provides a JGroups stack such as the following:

<config xmlns="urn:org:jgroups">

    <!-- Use TCP for inter-cluster transport. -->
    <TCP bind_addr=""/>

    <!-- Use TCPPING for inter-cluster discovery. -->
    <TCPPING timeout="3000"/>

    <!-- Provide other JGroups stack configuration as required. -->

</config>

3.2. Adding Backup Locations to Caches

Specify the names of remote sites so Data Grid can back up data to those locations.


  1. Add the backups element to your cache definition.
  2. Specify the name of each remote site with the backup element.

    As an example, in the LON configuration, specify NYC as the remote site.

  3. Repeat the preceding steps so that each site is a backup for all other sites. For example, you cannot add LON as a backup for NYC without also adding NYC as a backup for LON.

Cache configurations can be different across sites and use different backup strategies. Data Grid replicates data based on cache names.

Example "customers" configuration in LON

<replicated-cache name="customers">
  <backups>
    <backup site="NYC" strategy="ASYNC"/>
  </backups>
</replicated-cache>

Example "customers" configuration in NYC

<distributed-cache name="customers">
  <backups>
    <backup site="LON" strategy="SYNC"/>
  </backups>
</distributed-cache>
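
If you configure caches programmatically rather than declaratively, the embedded ConfigurationBuilder exposes the same backup settings. The following sketch mirrors the LON "customers" example above and assumes the org.infinispan.configuration.cache API:

```java
import org.infinispan.configuration.cache.BackupConfiguration.BackupStrategy;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class CustomersCacheConfig {
    public static void main(String[] args) {
        // Replicated "customers" cache in LON that backs up
        // asynchronously to the NYC site.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering().cacheMode(CacheMode.REPL_SYNC)
               .sites().addBackup()
                   .site("NYC")
                   .strategy(BackupStrategy.ASYNC);
        Configuration configuration = builder.build();
    }
}
```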

3.3. Backing Up to Caches with Different Names

By default, Data Grid replicates data between caches that have the same name.


  • Use backup-for to replicate data from a remote site into a cache with a different name on the local site.

For example, the following configuration backs up the "customers" cache on LON to the "eu-customers" cache on NYC.

<distributed-cache name="eu-customers">
  <backups>
    <backup site="LON" strategy="SYNC"/>
  </backups>
  <backup-for remote-cache="customers" remote-site="LON"/>
</distributed-cache>
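
The backup-for relationship can also be expressed programmatically. As a sketch, assuming the embedded ConfigurationBuilder API, the NYC "eu-customers" cache would be defined like this:

```java
import org.infinispan.configuration.cache.BackupConfiguration.BackupStrategy;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class EuCustomersCacheConfig {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Distributed cache that backs up to LON.
        builder.clustering().cacheMode(CacheMode.DIST_SYNC)
               .sites().addBackup()
                   .site("LON")
                   .strategy(BackupStrategy.SYNC);
        // Receives data from the "customers" cache on LON.
        builder.clustering().sites().backupFor()
               .remoteCache("customers").remoteSite("LON");
        Configuration configuration = builder.build();
    }
}
```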

3.4. Verifying Cross-Site Views

After you configure Data Grid for cross-site replication, you should verify that Data Grid clusters successfully form cross-site views.


  • Check log messages for ISPN000439: Received new x-site view messages.

For example, if the Data Grid cluster in LON has formed a cross-site view with the Data Grid cluster in NYC, it logs messages such as the following:

INFO  [org.infinispan.XSITE] (jgroups-5,${server.hostname}) ISPN000439: Received new x-site view: [NYC]
INFO  [org.infinispan.XSITE] (jgroups-7,${server.hostname}) ISPN000439: Received new x-site view: [NYC, LON]

3.5. Configuring Hot Rod Clients for Cross-Site Replication

Configure Hot Rod clients to use Data Grid clusters at different sites, as in the following hotrod-client.properties example:

# Servers at the active site
infinispan.client.hotrod.server_list = LON_host1:11222,LON_host2:11222,LON_host3:11222

# Servers at the backup site
infinispan.client.hotrod.cluster.NYC = NYC_hostA:11222,NYC_hostB:11222,NYC_hostC:11222,NYC_hostD:11222


You can define the same configuration programmatically:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServers("LON_host1:11222;LON_host2:11222;LON_host3:11222");
builder.addCluster("NYC").addClusterNode("NYC_hostA", 11222);


Use the following methods to switch Hot Rod clients to the default cluster or to a cluster at a different site:

  • RemoteCacheManager.switchToDefaultCluster()
  • RemoteCacheManager.switchToCluster(${})
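
Putting the pieces together, a client might be wired up as follows. This is a sketch that reuses the placeholder host names from the properties example above:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CrossSiteClient {
    public static void main(String[] args) {
        // Servers at the active site.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServers("LON_host1:11222;LON_host2:11222;LON_host3:11222");
        // Named cluster for the backup site.
        builder.addCluster("NYC")
               .addClusterNode("NYC_hostA", 11222)
               .addClusterNode("NYC_hostB", 11222);

        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            // Manually switch to the backup cluster, then back to the default.
            cacheManager.switchToCluster("NYC");
            cacheManager.switchToDefaultCluster();
        }
    }
}
```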