Chapter 11. Run Red Hat JBoss Data Grid in Library Mode (Multi-Node Setup)
11.1. Sharing JGroups Channels
Red Hat JBoss Data Grid offers an easy-to-use form of clustering, using JGroups as the network transport. As a result, JGroups manages the initial operations required to form a cluster for JBoss Data Grid.
All caches created from a single CacheManager share the same JGroups channel by default. This JGroups channel is used to multiplex replication and distribution messages.
In the following example, all three caches use the same JGroups channel:
Shared JGroups Channel
EmbeddedCacheManager cm = $LOCATION
Cache<Object, Object> cache1 = cm.getCache("replSyncCache");
Cache<Object, Object> cache2 = cm.getCache("replAsyncCache");
Cache<Object, Object> cache3 = cm.getCache("invalidationSyncCache");
Substitute $LOCATION with the CacheManager’s location.
An example of a clustered setup is found in the Hello World Quickstart.
11.2. Configure the Cluster
11.2.1. Configuring the Cluster
Use the following steps to add and configure your cluster:
Configure the Cluster
- Add the default configuration for a new cluster.
- Customize the default cluster configuration according to the requirements of your network. This is done declaratively (using XML) or programmatically.
- Configure the replicated or distributed data grid.
11.2.2. Add the Default Cluster Configuration
Add a cluster configuration to ensure that Red Hat JBoss Data Grid is aware that a cluster exists and is defined. The following is a default configuration that serves this purpose:
Default Configuration
new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.REPL_SYNC)
    .build()
Use new GlobalConfigurationBuilder().clusteredDefault() to quickly create a preconfigured, cluster-aware GlobalConfiguration. This configuration can also be customized.
11.2.3. Customize the Default Cluster Configuration
Depending on the network requirements, you may need to customize your JGroups configuration.
Programmatic Configuration:
Use the following GlobalConfiguration code to specify the name of the file to use for JGroups configuration:
manager = new DefaultCacheManager(
    new GlobalConfigurationBuilder()
        .clusteredDefault()
        .transport().addProperty("configurationFile", "default-configs/default-jgroups-tcp.xml")
        .build(),
    new ConfigurationBuilder()
        .clustering().cacheMode(CacheMode.REPL_SYNC)
        .build());
Replace default-configs/default-jgroups-tcp.xml with the desired file name.
To bind JGroups solely to your loopback interface (to avoid any configured firewalls), use the system property -Djgroups.bind_addr="127.0.0.1". This is particularly useful to test a cluster where all nodes are on a single machine.
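The same loopback binding can be applied from code by setting the system property before the transport starts, rather than on the command line. This is a minimal sketch; the class name LoopbackBinding is illustrative, and in a real application the DefaultCacheManager would be created after the property is set.

```java
public class LoopbackBinding {
    public static void main(String[] args) {
        // Equivalent to passing -Djgroups.bind_addr="127.0.0.1" on the command line.
        // JGroups reads this property when the transport starts, so it must be
        // set before the cache manager (and its channel) is created.
        System.setProperty("jgroups.bind_addr", "127.0.0.1");
        System.out.println(System.getProperty("jgroups.bind_addr")); // prints "127.0.0.1"
        // A DefaultCacheManager created after this point binds to loopback.
    }
}
```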
Declarative Configuration:
Use the following XML snippet in the infinispan.xml
file to configure the JGroups properties to use Red Hat JBoss Data Grid’s XML configuration:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:8.5 http://www.infinispan.org/schemas/infinispan-config-8.5.xsd"
            xmlns="urn:infinispan:config:8.5">
    <jgroups>
        <stack-file name="jgroupsStack"
                    path="${infinispan.jgroups.config:default-configs/default-jgroups-udp.xml}"/>
    </jgroups>
    <cache-container name="default" default-cache="localCache" statistics="true">
        <transport stack="jgroupsStack" lock-timeout="600000" cluster="default"/>
        <serialization></serialization>
        <jmx>
            <property name="enabled">true</property>
        </jmx>
        <!-- this is here so we don't ever mistakenly use the default cache in a test (local cache will give superb results) -->
        <local-cache name="localCache"/>
    </cache-container>
</infinispan>
11.2.4. Configure the Replicated Data Grid
Red Hat JBoss Data Grid’s replicated mode ensures that every entry is replicated on every node in the data grid.
This mode protects against data loss due to node failures and offers excellent data availability. These benefits come at the cost of limiting the storage capacity to the amount of storage available on the node with the least memory.
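The capacity limit described above can be illustrated with a simple calculation (this is an illustration, not part of the product API): because every node holds every entry, the cluster’s usable capacity is that of the smallest node.

```java
import java.util.stream.LongStream;

public class ReplicatedCapacity {
    // In replicated mode every node stores every entry, so usable capacity
    // is bounded by the node with the least memory.
    static long usableCapacityMb(long... nodeCapacitiesMb) {
        return LongStream.of(nodeCapacitiesMb).min().orElse(0);
    }

    public static void main(String[] args) {
        // Three nodes with 2048, 1024, and 4096 MB of cache memory:
        System.out.println(usableCapacityMb(2048, 1024, 4096)); // prints 1024
    }
}
```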
Programmatic Configuration:
Use the following code snippet to programmatically configure the cache for replication mode (either synchronous or asynchronous):
private static EmbeddedCacheManager createCacheManagerProgramatically() {
    return new DefaultCacheManager(
        new GlobalConfigurationBuilder()
            .clusteredDefault()
            .transport().addProperty("configurationFile", "default-configs/default-jgroups-tcp.xml")
            .build(),
        new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.REPL_SYNC)
            .build()
    );
}
Declarative Configuration:
Edit the infinispan.xml file to include the following XML code to declaratively configure the cache for replication mode (either synchronous or asynchronous):
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:8.5 http://www.infinispan.org/schemas/infinispan-config-8.5.xsd"
            xmlns="urn:infinispan:config:8.5">
    <jgroups>
        <stack-file name="jgroupsStack"
                    path="${infinispan.jgroups.config:default-configs/default-jgroups-udp.xml}"/>
    </jgroups>
    <cache-container name="default" default-cache="localCache" statistics="true">
        <transport stack="jgroupsStack" lock-timeout="600000" cluster="default"/>
        <serialization></serialization>
        <jmx>
            <property name="enabled">true</property>
        </jmx>
        <local-cache name="localCache"/>
        <replicated-cache name="replCache" mode="SYNC" remote-timeout="60000" statistics="true">
            <locking acquire-timeout="3000" concurrency-level="1000"/>
        </replicated-cache>
    </cache-container>
</infinispan>
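The replicated cache above uses synchronous replication (mode="SYNC"). As a sketch of the asynchronous variant mentioned earlier, only the mode attribute changes; the cache name replCacheAsync below is illustrative, not part of the original configuration.

```xml
<!-- Hypothetical asynchronous variant of the replicated cache above -->
<replicated-cache name="replCacheAsync" mode="ASYNC" statistics="true">
    <locking acquire-timeout="3000" concurrency-level="1000"/>
</replicated-cache>
```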
Use the following code to initialize and return a DefaultCacheManager with the XML configuration file:
private static EmbeddedCacheManager createCacheManagerFromXml() throws IOException {
    return new DefaultCacheManager("infinispan.xml");
}
JBoss EAP includes its own underlying JMX implementation. This can cause a collision when using the sample code with JBoss EAP, displaying an error such as org.infinispan.jmx.JmxDomainConflictException: Domain already registered org.infinispan.
To avoid this, configure the GlobalConfiguration as follows:
GlobalConfiguration glob = new GlobalConfigurationBuilder()
    .clusteredDefault()
    .globalJmxStatistics()
        .allowDuplicateDomains(true)
        .enable()
    .build();
11.2.5. Configure the Distributed Data Grid
Red Hat JBoss Data Grid’s distributed mode ensures that each entry is stored on a subset of the total nodes in the data grid. The number of nodes in the subset is controlled by the numOwners parameter, which sets how many owners each entry has.
Distributed mode offers increased storage capacity but also results in increased access times and less durability (protection against node failures). Adjust the numOwners value to set the desired trade-off between space, durability, and availability. Durability is further improved by JBoss Data Grid’s topology-aware consistent hash, which locates entry owners across a variety of data centers, racks, and nodes.
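The space trade-off can be made concrete with a rough estimate (an illustration, not part of the product API): each entry is stored numOwners times, so usable capacity is approximately the total cluster memory divided by numOwners.

```java
public class CapacityEstimate {
    // Rough estimate for distributed mode:
    // usable capacity ~= (nodes * perNodeCapacity) / numOwners,
    // since every entry is stored on numOwners nodes.
    static long usableCapacityMb(int nodes, long perNodeCapacityMb, int numOwners) {
        return (nodes * perNodeCapacityMb) / numOwners;
    }

    public static void main(String[] args) {
        // Four nodes with 1024 MB each and numOwners=2:
        System.out.println(usableCapacityMb(4, 1024, 2)); // prints 2048
        // Raising numOwners to 4 (full replication) halves that again:
        System.out.println(usableCapacityMb(4, 1024, 4)); // prints 1024
    }
}
```

Compare this with replicated mode, where capacity is fixed at a single node’s worth regardless of cluster size.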
Programmatic Configuration:
Programmatically configure the cache for distributed mode (either synchronous or asynchronous) as follows:
new ConfigurationBuilder()
    .clustering()
        .cacheMode(CacheMode.DIST_SYNC)
        .hash().numOwners(2)
    .build()
Declarative Configuration:
Edit the infinispan.xml file to include the following XML code to declaratively configure the cache for distributed mode (either synchronous or asynchronous):
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:8.5 http://www.infinispan.org/schemas/infinispan-config-8.5.xsd"
            xmlns="urn:infinispan:config:8.5">
    <jgroups>
        <stack-file name="jgroupsStack"
                    path="${infinispan.jgroups.config:default-configs/default-jgroups-udp.xml}"/>
    </jgroups>
    <cache-container name="default" default-cache="localCache" statistics="true">
        <transport stack="jgroupsStack" lock-timeout="600000" cluster="default"/>
        <serialization></serialization>
        <jmx>
            <property name="enabled">true</property>
        </jmx>
        <local-cache name="localCache"/>
        <distributed-cache name="distCache" mode="SYNC" remote-timeout="60000" statistics="true"
                           l1-lifespan="-1" owners="2" segments="512">
            <locking acquire-timeout="3000" concurrency-level="1000"/>
            <state-transfer timeout="60000"/>
        </distributed-cache>
    </cache-container>
</infinispan>