Chapter 33. Configure JGroups

33.1. About JGroups

JGroups is the underlying group communication library used to connect Red Hat JBoss Data Grid instances. For a full list of JGroups protocols supported in JBoss Data Grid, see Supported JGroups Protocols.

33.2. Configure Red Hat JBoss Data Grid Interface Binding (Remote Client-Server Mode)

33.2.1. Interfaces

Red Hat JBoss Data Grid allows users to specify an interface type rather than a specific IP address, which may not be known in advance.

Note

If you use JBoss Data Grid as a containerized image with OpenShift, you must set environment variables to change the XML configuration during pod creation. See the Data Grid for OpenShift Guide.

  • link-local: Uses a link-local 169.254.x.x address. This is suitable for traffic within a single network segment.

    <interfaces>
        <interface name="link-local">
            <link-local-address/>
        </interface>
        <!-- Additional configuration elements here -->
    </interfaces>
  • site-local: Uses a private IP address, for example 192.168.x.x. This prevents extra bandwidth charges from GoGrid and similar providers.

    <interfaces>
        <interface name="site-local">
            <site-local-address/>
        </interface>
        <!-- Additional configuration elements here -->
    </interfaces>
  • global: Picks a public IP address. This should be avoided for replication traffic.

    <interfaces>
        <interface name="global">
            <any-address/>
        </interface>
        <!-- Additional configuration elements here -->
    </interfaces>
  • non-loopback: Uses the first address found on an active interface that is not a 127.x.x.x address.

    <interfaces>
        <interface name="non-loopback">
            <not>
                <loopback/>
            </not>
        </interface>
    </interfaces>
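Where a fixed address is preferred over an interface type, an interface can also be bound to an explicit IP address. The following is a minimal sketch; the interface name and the system property shown are illustrative, not defaults from this chapter:

```xml
<interfaces>
    <interface name="public">
        <!-- Binds to the address given by the jboss.bind.address system
             property, falling back to 127.0.0.1 if it is not set -->
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>
```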

33.2.2. Binding Sockets

33.2.2.1. Binding Sockets

Socket bindings provide a method of associating a name with networking details, such as an interface, a port, a multicast-address, or other details. Sockets may be bound to the interface either individually or using a socket binding group.

33.2.2.2. Binding a Single Socket Example

The following example uses a JGroups interface socket binding to bind an individual socket with the socket-binding element.

Socket Binding

<socket-binding name="jgroups-udp" interface="site-local"/>
<!-- Additional configuration attributes, such as port, may be specified on this element -->

33.2.2.3. Binding a Group of Sockets Example

The following example uses JGroups interface socket bindings to bind a group of sockets, using the socket-binding-group element:

Bind a Group

<socket-binding-group name="ha-sockets" default-interface="global">
	<!-- Additional configuration elements here -->
	<socket-binding name="jgroups-tcp" port="7600"/>
	<socket-binding name="jgroups-tcp-fd" port="57600"/>
	<!-- Additional configuration elements here -->
</socket-binding-group>

The two sample socket bindings in the example are bound to the same default-interface (global), therefore the interface attribute does not need to be specified.
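A binding inside the group can also override the group default. The following sketch (the interface choice here is illustrative) binds one socket to a different interface while the other inherits the default:

```xml
<socket-binding-group name="ha-sockets" default-interface="global">
    <!-- Overrides the group default and binds to the site-local interface -->
    <socket-binding name="jgroups-tcp" port="7600" interface="site-local"/>
    <!-- Inherits the default-interface (global) -->
    <socket-binding name="jgroups-tcp-fd" port="57600"/>
</socket-binding-group>
```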

33.2.3. Configure JGroups Socket Binding

Each JGroups stack, configured in the JGroups subsystem, uses a specific socket binding. Set up the socket binding as follows:

JGroups UDP Socket Binding Configuration

The following example uses UDP to automatically form the cluster. In this example the jgroups-udp socket binding is defined for the transport:

<subsystem xmlns="urn:jboss:domain:jgroups:3.0" default-stack="udp">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp">
            <!-- Additional configuration elements here -->
        </transport>
        <protocol type="PING"/>
        <protocol type="MERGE3"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
        <protocol type="FD_ALL"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST3"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="UFC"/>
        <protocol type="MFC"/>
        <protocol type="FRAG3"/>
    </stack>
</subsystem>

JGroups TCP Socket Binding Configuration

The following example uses TCP to establish direct communication between two cluster nodes. In the example below node1 is located at 192.168.1.2:7600, and node2 is located at 192.168.1.3:7600. The port in use is defined by the jgroups-tcp socket binding.

<subsystem xmlns="urn:infinispan:server:jgroups:8.0" default-stack="tcp">
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <protocol type="TCPPING">
            <property name="initial_hosts">192.168.1.2[7600],192.168.1.3[7600]</property>
            <property name="num_initial_members">2</property>
            <property name="port_range">0</property>
            <property name="timeout">2000</property>
        </protocol>
        <protocol type="MERGE3"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD_ALL"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2">
            <property name="use_mcast_xmit">false</property>
        </protocol>
        <protocol type="UNICAST3"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="MFC"/>
        <protocol type="FRAG3"/>
    </stack>
</subsystem>

The decision between UDP and TCP must be made for each environment. By default JGroups uses UDP, as it allows for dynamic detection of clustered members and scales better in larger clusters due to a smaller network footprint. In addition, when using UDP only one packet per cluster is required, as multicast packets are received by all subscribers to the multicast address. However, in environments where multicast traffic is prohibited, or where UDP traffic cannot reach the remote cluster nodes, such as when cluster members are on separate VLANs, TCP can be used to create a cluster.

Important

When using UDP as the JGroups transport, the socket binding must specify the regular (unicast) port, multicast address, and multicast port.
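For example, a UDP socket binding supplying all three values might look like the following sketch; the port and multicast values shown are illustrative, not defaults:

```xml
<!-- Unicast port, multicast address, and multicast port for the UDP transport -->
<socket-binding name="jgroups-udp" port="55200"
                multicast-address="${jboss.default.multicast.address:230.0.0.4}"
                multicast-port="45688"/>
```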

33.3. Configure JGroups (Library Mode)

33.3.1. Configure JGroups for Clustered Modes

Red Hat JBoss Data Grid must have an appropriate JGroups configuration in order to operate in clustered mode.

JGroups XML Configuration

<infinispan xmlns="urn:infinispan:config:8.5">
    <jgroups>
        <stack-file name="jgroupsStack" path="/path/to/jgroups/xml/jgroups.xml"/>
    </jgroups>
    <cache-container name="default" default-cache="default">
        <transport stack="jgroupsStack" lock-timeout="600000" cluster="default" />
    </cache-container>
</infinispan>

JBoss Data Grid first searches for jgroups.xml on the classpath; if no instance is found on the classpath, it then treats the value as an absolute path name.

33.3.2. JGroups Transport Protocols

33.3.2.1. JGroups Transport Protocols

A transport protocol is the protocol at the bottom of a protocol stack. Transport protocols are responsible for sending messages to and receiving messages from the network.

Red Hat JBoss Data Grid ships with both UDP and TCP transport protocols.

33.3.2.2. The UDP Transport Protocol

UDP is a transport protocol that uses:

  • IP multicasting to send messages to all members of a cluster.
  • UDP datagrams for unicast messages, which are sent to a single member.

When the UDP transport is started, it opens a unicast socket and a multicast socket. The unicast socket is used to send and receive unicast messages, while the multicast socket sends and receives multicast messages. The physical address of the channel is the same as the address and port number of the unicast socket.

UDP is the default transport protocol used in JBoss Data Grid as it allows for dynamic discovery of additional cluster nodes, and scales well with cluster sizes.

33.3.2.3. The TCP Transport Protocol

TCP/IP is a replacement transport for UDP in situations where IP multicast cannot be used, such as operations over a WAN where routers may discard IP multicast packets.

TCP is a transport protocol used to send unicast and multicast messages.

  • When sending multicast messages, TCP sends multiple unicast messages.

As IP multicasting cannot be used to discover initial members, another mechanism must be used to find initial membership.

33.3.2.4. Using the TCPPing Protocol

Some networks only allow TCP to be used. The pre-configured default-configs/default-jgroups-tcp.xml includes the MPING protocol, which uses UDP multicast for discovery. When UDP multicast is not available, the MPING protocol has to be replaced with a different mechanism. The recommended alternative is the TCPPING protocol, whose configuration contains a static list of IP addresses that are contacted for node discovery.

Configure the JGroups Subsystem to Use TCPPING

<TCP bind_port="7800" />
<TCPPING initial_hosts="${jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}"
         port_range="1" />
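In Library mode the same discovery setup lives in a stand-alone JGroups file. The following is a minimal jgroups.xml sketch, assuming the protocol set shown in the TCP stack example earlier in this chapter:

```xml
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7800"/>
    <!-- Static discovery: contact the listed hosts instead of using multicast -->
    <TCPPING initial_hosts="${jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}"
             port_range="1"/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2 use_mcast_xmit="false"/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <MFC/>
    <FRAG3/>
</config>
```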

33.3.3. Pre-Configured JGroups Files

33.3.3.1. Pre-Configured JGroups Files

Red Hat JBoss Data Grid ships with pre-configured JGroups files in infinispan-embedded.jar, available on the classpath by default.

These JGroups configuration files contain default settings that you should adjust and tune to achieve optimal network performance.

To use a JGroups configuration file, replace jgroups.xml with one of the following:

  • default-configs/default-jgroups-udp.xml
  • default-configs/default-jgroups-tcp.xml
  • default-configs/default-jgroups-ec2.xml
  • default-configs/default-jgroups-google.xml
  • default-configs/default-jgroups-kubernetes.xml
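For example, the Library mode configuration shown earlier in this chapter can point its stack-file at the bundled UDP file; the stack name here is illustrative:

```xml
<infinispan xmlns="urn:infinispan:config:8.5">
    <jgroups>
        <!-- Resolved from infinispan-embedded.jar on the classpath -->
        <stack-file name="udpStack" path="default-configs/default-jgroups-udp.xml"/>
    </jgroups>
    <cache-container name="default" default-cache="default">
        <transport stack="udpStack"/>
    </cache-container>
</infinispan>
```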

33.3.3.2. default-jgroups-udp.xml

  • uses UDP for transport and UDP multicast for discovery, allowing for dynamic discovery of nodes.
  • is suitable for larger clusters because it requires less resources than TCP.
  • is recommended for clusters that send the same information to all nodes, such as when using Invalidation or Replication modes.

Add the following system properties to the JVM at startup to configure these settings:

Table 33.1. default-jgroups-udp.xml System Properties

System Property | Description | Default | Required?
----------------|-------------|---------|----------
jgroups.udp.mcast_addr | IP address to use for multicast (both for communications and discovery). Must be a valid Class D IP address, suitable for IP multicast. | 228.6.7.8 | No
jgroups.udp.mcast_port | Port to use for the multicast socket. | 46655 | No
jgroups.ip_ttl | Specifies the time-to-live (TTL) for IP multicast packets; the number of network hops a packet is allowed to make before it is dropped. | 2 | No
jgroups.thread_pool.min_threads | Minimum thread pool size for the thread pool. | 0 | No
jgroups.thread_pool.max_threads | Maximum thread pool size for the thread pool. | 200 | No
jgroups.join_timeout | The maximum number of milliseconds to wait for a new node JOIN request to succeed. | 5000 | No

33.3.3.3. default-jgroups-tcp.xml

  • uses TCP for transport and UDP multicast for discovery.
  • is typically used where multicast UDP is not an option.
  • is recommended for point-to-point (unicast) communication, such as when using Distribution mode because it has a more refined method of flow control.

Add the following system properties to the JVM at startup to configure these settings:

Table 33.2. default-jgroups-tcp.xml System Properties

System Property | Description | Default | Required?
----------------|-------------|---------|----------
jgroups.tcp.address | IP address to use for the TCP transport. | 127.0.0.1 | Not required, but should be changed for clustering servers located on different systems
jgroups.tcp.port | Port to use for the TCP socket. | 7800 | No
jgroups.thread_pool.min_threads | Minimum thread pool size for the thread pool. | 0 | No
jgroups.thread_pool.max_threads | Maximum thread pool size for the thread pool. | 200 | No
jgroups.mping.mcast_addr | IP address to use for multicast (for discovery). Must be a valid Class D IP address, suitable for IP multicast. | 228.2.4.6 | No
jgroups.mping.mcast_port | Port to use for the multicast socket. | 43366 | No
jgroups.udp.ip_ttl | Specifies the time-to-live (TTL) for IP multicast packets; the number of network hops a packet is allowed to make before it is dropped. | 2 | No
jgroups.join_timeout | The maximum number of milliseconds to wait for a new node JOIN request to succeed. | 5000 | No

33.3.3.4. default-jgroups-ec2.xml

  • uses TCP for transport and S3_PING for discovery.
  • is suitable on EC2 nodes where UDP multicast is not available.

Add the following system properties to the JVM at startup to configure these settings:

Table 33.3. default-jgroups-ec2.xml System Properties

System Property | Description | Default | Required?
----------------|-------------|---------|----------
jgroups.tcp.address | IP address to use for the TCP transport. | 127.0.0.1 | Not required, but should be changed for clustering servers located on different EC2 nodes
jgroups.tcp.port | Port to use for the TCP socket. | 7800 | No
jgroups.thread_pool.min_threads | Minimum thread pool size for the thread pool. | 0 | No
jgroups.thread_pool.max_threads | Maximum thread pool size for the thread pool. | 200 | No
jgroups.s3.access_key | The Amazon S3 access key used to access an S3 bucket. | (none) | Yes
jgroups.s3.secret_access_key | The Amazon S3 secret key used to access an S3 bucket. | (none) | Yes
jgroups.s3.bucket | Name of the Amazon S3 bucket to use. Must be unique and must already exist. | (none) | Yes
jgroups.s3.pre_signed_delete_url | The pre-signed URL to be used for the DELETE operation. | (none) | Yes
jgroups.s3.pre_signed_put_url | The pre-signed URL to be used for the PUT operation. | (none) | Yes
jgroups.s3.prefix | If set, S3_PING searches for a bucket with a name that starts with the prefix value. | (none) | No
jgroups.join_timeout | The maximum number of milliseconds to wait for a new node JOIN request to succeed. | 5000 | No

33.3.3.5. default-jgroups-google.xml

  • uses TCP for transport and GOOGLE_PING for discovery.
  • is suitable on Google Compute Engine nodes where UDP multicast is not available.

Add the following system properties to the JVM at startup to configure these settings:

Table 33.4. default-jgroups-google.xml System Properties

System Property | Description | Default | Required?
----------------|-------------|---------|----------
jgroups.tcp.address | IP address to use for the TCP transport. | 127.0.0.1 | Not required, but should be changed for clustering systems located on different nodes
jgroups.tcp.port | Port to use for the TCP socket. | 7800 | No
jgroups.thread_pool.min_threads | Minimum thread pool size for the thread pool. | 0 | No
jgroups.thread_pool.max_threads | Maximum thread pool size for the thread pool. | 200 | No
jgroups.google.access_key | The Google Compute Engine user's access key used to access the bucket. | (none) | Yes
jgroups.google.secret_access_key | The Google Compute Engine user's secret access key used to access the bucket. | (none) | Yes
jgroups.google.bucket | Name of the Google Compute Engine bucket to use. Must be unique and must already exist. | (none) | Yes
jgroups.join_timeout | The maximum number of milliseconds to wait for a new node JOIN request to succeed. | 5000 | No

33.3.3.6. default-jgroups-kubernetes.xml

  • uses TCP for transport and KUBE_PING for discovery.
  • is suitable for Kubernetes and OpenShift nodes where UDP multicast is not available.

Add the following system properties to the JVM at startup to configure these settings:

Table 33.5. default-jgroups-kubernetes.xml System Properties

System Property | Description | Default | Required?
----------------|-------------|---------|----------
jgroups.tcp.address | IP address to use for the TCP transport. | eth0 | Not required, but should be changed for clustering systems located on different nodes
jgroups.tcp.port | Port to use for the TCP socket. | 7800 | No

33.4. Test Multicast Using JGroups

33.4.1. Test Multicast Using JGroups

Learn how to ensure that the system has correctly configured multicasting within the cluster.

33.4.2. Testing With Different Red Hat JBoss Data Grid Versions

The following table details which Red Hat JBoss Data Grid versions are compatible with this multicast test:

Note

${infinispan.version} corresponds to the version of Infinispan included in the specific release of JBoss Data Grid. This appears in an x.y.z format, with the major version, minor version, and revision being included.

Table 33.6. Testing with Different JBoss Data Grid Versions

Version | Test Case | Details

JBoss Data Grid 7.2.0

Available

The location of the test classes depends on the distribution:

  • For library mode, they are inside the infinispan-embedded-${infinispan.version}.Final-redhat-# JAR file
  • For Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 7.1.0

Available

The location of the test classes depends on the distribution:

  • For library mode, they are inside the infinispan-embedded-${infinispan.version}.Final-redhat-# JAR file
  • For Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 7.0.0

Available

The location of the test classes depends on the distribution:

  • For library mode, they are inside the infinispan-embedded-${infinispan.version}.Final-redhat-# JAR file
  • For Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.6.0

Available

The location of the test classes depends on the distribution:

  • For library mode, they are inside the infinispan-embedded-${infinispan.version}.Final-redhat-# JAR file
  • For Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.5.1

Available

The location of the test classes depends on the distribution:

  • For library mode, they are inside the infinispan-embedded-${infinispan.version}.Final-redhat-# JAR file
  • For Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.5.0

Available

The location of the test classes depends on the distribution:

  • For library mode, they are inside the infinispan-embedded-${infinispan.version}.Final-redhat-# JAR file
  • For Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.4.0

Available

The location of the test classes depends on the distribution:

  • For library mode, they are inside the infinispan-embedded-${infinispan.version}.Final-redhat-# JAR file
  • For Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.3.0

Available

The location of the test classes depends on the distribution:

  • In Library mode, they are in the JGroups JAR file in the lib directory.
  • In Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.2.1

Available

The location of the test classes depends on the distribution:

  • In Library mode, they are in the JGroups JAR file in the lib directory.
  • In Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.2.0

Available

The location of the test classes depends on the distribution:

  • In Library mode, they are in the JGroups JAR file in the lib directory.
  • In Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/system/layers/base/org/jgroups/main/ directory.

JBoss Data Grid 6.1.0

Available

The location of the test classes depends on the distribution:

  • In Library mode, they are in the JGroups JAR file in the lib directory.
  • In Remote Client-Server mode, they are in the JGroups JAR file in the ${JDG_HOME}/modules/org/jgroups/main/ directory.

JBoss Data Grid 6.0.1

Not Available

This version of JBoss Data Grid is based on JBoss Enterprise Application Platform 6.0, which does not include the test classes used for this test.

JBoss Data Grid 6.0.0

Not Available

This version of JBoss Data Grid is based on JBoss Enterprise Application Platform 6.0, which does not include the test classes used for this test.

33.4.3. Testing Multicast Using JGroups

The following procedure details the steps to test multicast using JGroups if you are using Red Hat JBoss Data Grid:

Prerequisites

Ensure that the following prerequisites are met before starting the testing procedure.

  1. Set the bind_addr value to the appropriate IP address for the instance.
  2. For added accuracy, set the mcast_addr and port values to match the values used for cluster communication.
  3. Start two command line terminal windows. In the first terminal, navigate to the location of the JGroups JAR file for one of the two nodes; in the second terminal, navigate to the same location for the second node.

Test Multicast Using JGroups

  1. Run the Multicast Receiver on Node One

    Run the following command on the command line terminal for the first node (replace jgroups.jar with the infinispan-embedded.jar for Library mode):

    java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 230.1.2.3 -port 5555 -bind_addr $YOUR_BIND_ADDRESS
  2. Run the Multicast Sender on Node Two

    Run the following command on the command line terminal for the second node (replace jgroups.jar with the infinispan-embedded.jar for Library mode):

    java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 230.1.2.3 -port 5555 -bind_addr $YOUR_BIND_ADDRESS
  3. Transmit Information Packets

    Enter information in the terminal for node two (the node sending packets) and press Enter to send the information.

  4. View Received Information Packets

    View the information received on the node one instance. The information entered in the previous step should appear here.

  5. Confirm Information Transfer

    Repeat steps 3 and 4 to confirm all transmitted information is received without dropped packets.

  6. Repeat Test for Other Instances

    Repeat steps 1 to 4 for each combination of sender and receiver. Repeating the test identifies other instances that are incorrectly configured.

Result

All information packets transmitted from the sender node must appear on the receiver node. If the sent information does not appear as expected, multicast is incorrectly configured in the operating system or the network.