Administration and Configuration Guide
For use with Red Hat JBoss Data Grid 7.0
Abstract
Chapter 1. Setting up Red Hat JBoss Data Grid
1.1. Prerequisites
1.2. Steps to Set up Red Hat JBoss Data Grid
Procedure 1.1. Set Up JBoss Data Grid
Set Up the Cache Manager
The first step in a JBoss Data Grid configuration is a cache manager. Cache managers can retrieve cache instances and create cache instances quickly and easily using previously specified configuration templates. For details about setting up a cache manager, refer to the Cache Manager section in the JBoss Data Grid Getting Started Guide.
Set Up JVM Memory Management
An important step in configuring your JBoss Data Grid is to set up memory management for your Java Virtual Machine (JVM). JBoss Data Grid offers features such as eviction and expiration to help manage the JVM memory.
Set Up Eviction
Use eviction to specify the logic used to remove entries from the in-memory cache implementation based on how often they are used. JBoss Data Grid offers different eviction strategies for finer control over entry eviction in your data grid. Eviction strategies and instructions to configure them are available in Chapter 2, Set Up Eviction.
Set Up Expiration
To set upper limits to an entry's time in the cache, attach expiration information to each entry. Use expiration to set up the maximum period an entry is allowed to remain in the cache and how long the retrieved entry can remain idle before being removed from the cache. For details, see Chapter 3, Set Up Expiration.
Monitor Your Cache
JBoss Data Grid uses logging via JBoss Logging to help users monitor their caches.
Set Up Logging
It is not mandatory to set up logging for your JBoss Data Grid, but it is highly recommended. JBoss Data Grid uses JBoss Logging, which allows the user to easily set up automated logging for operations in the data grid. Logs can subsequently be used to troubleshoot errors and identify the cause of an unexpected failure. For details, see Chapter 4, Set Up Logging.
Set Up Cache Modes
Cache modes are used to specify whether a cache is local (a simple, in-memory cache) or clustered (replicating state changes over a small subset of nodes). Additionally, if a cache is clustered, either replication, distribution, or invalidation mode must be applied to determine how the changes propagate across the subset of nodes. For details, see Part III, “Set Up Cache Modes”.
Set Up Locking for the Cache
When replication or distribution is in effect, copies of entries are accessible across multiple nodes. As a result, copies of the data can be accessed or modified concurrently by different threads. To maintain consistency for all copies across nodes, configure locking. For details, see Part VI, “Set Up Locking for the Cache” and Chapter 17, Set Up Isolation Levels.
Set Up and Configure a Cache Store
JBoss Data Grid offers the passivation feature (or cache writing strategies if passivation is turned off) to temporarily store entries removed from memory in a persistent, external cache store. To set up passivation or a cache writing strategy, you must first set up a cache store.
Set Up a Cache Store
The cache store serves as a connection to the persistent store. Cache stores are primarily used to fetch entries from the persistent store and to push changes back to the persistent store. For details, see Part VII, “Set Up and Configure a Cache Store”.
Set Up Passivation
Passivation stores entries evicted from memory in a cache store. This feature allows entries to remain available despite not being present in memory and prevents potentially expensive write operations to the persistent cache. For details, see Part VIII, “Set Up Passivation”.
Set Up a Cache Writing Strategy
If passivation is disabled, every attempt to write to the cache results in writing to the cache store. This is the default Write-Through cache writing strategy. Set the cache writing strategy to determine whether these cache store writes occur synchronously or asynchronously. For details, see Part IX, “Set Up Cache Writing”
Monitor Caches and Cache Managers
JBoss Data Grid includes three primary tools to monitor the cache and cache managers once the data grid is up and running.
Set Up JMX
JMX is the standard statistics and management tool used for JBoss Data Grid. Depending on the use case, JMX can be configured at a cache level, a cache manager level, or both. For details, see Chapter 22, Set Up Java Management Extensions (JMX).
Access the Administration Console
Red Hat JBoss Data Grid 7.0.0 introduces an Administration Console, allowing for web-based monitoring and management of caches and cache managers. For usage details, refer to Section 24.3.1, “Red Hat JBoss Data Grid Administration Console Getting Started”.
Set Up Red Hat JBoss Operations Network (JON)
Red Hat JBoss Operations Network (JON) is the second monitoring solution available for JBoss Data Grid. JBoss Operations Network (JON) offers a graphical interface to monitor runtime parameters and statistics for caches and cache managers. For details, see Chapter 23, Set Up JBoss Operations Network (JON).
Note
The JON plugin has been deprecated in JBoss Data Grid 7.0 and is expected to be removed in a subsequent version.
Introduce Topology Information
Optionally, introduce topology information to your data grid to specify where specific types of information or objects in your data grid are located. Server hinting is one of the ways to introduce topology information in JBoss Data Grid.
Set Up Server Hinting
When set up, server hinting provides high availability by ensuring that the original and backup copies of data are not stored on the same physical server, rack or data center. This is optional in cases such as a replicated cache, where all data is backed up on all servers, racks and data centers. For details, see Chapter 34, High Availability Using Server Hinting
Part I. Set Up JVM Memory Management
Chapter 2. Set Up Eviction
2.1. About Eviction
2.2. Eviction Strategies
Table 2.1. Eviction Strategies
Strategy Name | Operations | Details |
---|---|---|
EvictionStrategy.NONE | No eviction occurs. | This is the default eviction strategy in Red Hat JBoss Data Grid. |
EvictionStrategy.LRU | Least Recently Used eviction strategy. This strategy evicts entries that have not been used for the longest period. This ensures that entries that are reused periodically remain in memory. | |
EvictionStrategy.UNORDERED | Unordered eviction strategy. This strategy evicts entries without any ordered algorithm and may therefore evict entries that are required later. However, this strategy saves resources because no algorithm-related calculations are required before eviction. | This strategy is recommended for testing purposes and not for a real-world implementation. |
EvictionStrategy.LIRS | Low Inter-Reference Recency Set eviction strategy. | LIRS is an eviction algorithm that suits a large variety of production use cases. |
2.2.1. LRU Eviction Algorithm Limitations
The LRU eviction algorithm has the following limitations:
- Single-use access entries are not replaced in time.
- Entries that are accessed first are unnecessarily replaced.
2.3. Using Eviction
If an empty <eviction /> element is used to enable eviction without any strategy or maximum entries settings, the following default values are used:
- Strategy: If no eviction strategy is specified, EvictionStrategy.NONE is assumed as a default.
- size: If no value is specified, the size value is set to -1, which allows unlimited entries.
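These defaults can be made explicit in configuration. The following is a minimal sketch of the equivalent forms; the cache name is illustrative, not from the original text:

```xml
<local-cache name="exampleCache">
    <!-- Empty element: eviction enabled with default settings -->
    <eviction />
    <!-- Equivalent explicit form: no strategy, unlimited entries -->
    <!-- <eviction strategy="NONE" size="-1"/> -->
</local-cache>
```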
2.3.1. Initialize Eviction
Initialize eviction by setting the size attribute's value to a number greater than zero. Adjust the value set for size to discover the optimal value for your configuration. It is important to remember that if too large a value is set for size, Red Hat JBoss Data Grid can run out of memory.
Procedure 2.1. Initialize Eviction
Add the Eviction Tag
Add the <eviction> tag to your project's <cache> tags as follows:
<eviction />
Set the Eviction Strategy
Set the strategy value to set the eviction strategy employed. Possible values are LRU, UNORDERED, and LIRS (or NONE if no eviction is required). The following is an example of this step:
<eviction strategy="LRU" />
Set the Maximum Size to use for Eviction
Set the maximum number of entries allowed in memory by defining the size attribute. The default value is -1 for unlimited entries. The following demonstrates this step:
<eviction strategy="LRU" size="200" />
Eviction is configured for the target cache.
2.3.2. Eviction Configuration Examples
<eviction strategy="LRU" size="2000"/>
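For context, the eviction element above sits inside a cache definition. A hedged sketch, with the cache name chosen for illustration:

```xml
<local-cache name="evictionExampleCache">
    <!-- Keep at most 2000 entries in memory, evicting the least recently used -->
    <eviction strategy="LRU" size="2000"/>
</local-cache>
```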
2.3.3. Utilizing Memory Based Eviction
Only keys and values that are stored as primitives, primitive wrappers (such as java.lang.Integer), java.lang.String instances, or an Array of these values may be used with memory based eviction. To use custom classes, store-as-binary must be enabled on the cache, or the data from the custom class may be serialized, storing it in a byte array.
Memory based eviction is only supported with the LRU eviction strategy.
This eviction method may be used by defining MEMORY as the eviction type, as seen in the following example:
<local-cache name="local">
   <eviction size="10000000000" strategy="LRU" type="MEMORY"/>
</local-cache>
2.3.4. Eviction and Passivation
Chapter 3. Set Up Expiration
3.1. About Expiration
Expiration allows you to attach one or both of the following values to an entry:
- A lifespan value.
- A maximum idle time value.
An entry created without a lifespan or max-idle value is immortal; it remains in the cache until it is explicitly removed or evicted. Entries with a lifespan or max-idle defined are mortal, as they will eventually be removed from the cache once one of these conditions is met.
- Expiration removes entries based on the period they have been in memory. Expiration only removes entries when the life span period concludes or when an entry has been idle longer than the specified idle time.
- Eviction removes entries based on how recently (and often) they are used. Eviction only removes entries when too many entries are present in memory. If a cache store has been configured, evicted entries are persisted in the cache store.
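Because eviction and expiration address different concerns, they are often configured together in the same cache. The following sketch is illustrative (cache name and values are assumptions): it bounds the cache to 1000 in-memory entries while also expiring entries after one minute of life or 30 seconds of idleness:

```xml
<local-cache name="boundedCache">
    <!-- Eviction: removes entries when the in-memory count exceeds 1000 -->
    <eviction strategy="LRU" size="1000"/>
    <!-- Expiration: removes entries after 60s of life or 30s of idleness -->
    <expiration lifespan="60000" max-idle="30000"/>
</local-cache>
```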
3.2. Expiration Operations
A life span (lifespan) or maximum idle time (max-idle) defined for an individual key/value pair overrides the cache-wide default for the entry in question.
3.3. Eviction and Expiration Comparison
The life span (lifespan) and idle time (max-idle) values are replicated alongside each cache entry.
3.4. Cache Entry Expiration Behavior
An expired entry is removed when either of the following occurs:
- An entry is passivated/overflowed to disk and is discovered to have expired.
- The expiration maintenance thread discovers that an entry it has found is expired.
3.5. Configure Expiration
Procedure 3.1. Configure Expiration
Add the Expiration Tag
Add the <expiration> tag to your project's <cache> tags as follows:
<expiration />
Set the Expiration Lifespan
Set the lifespan value to set the period of time (in milliseconds) an entry can remain in memory. The following is an example of this step:
<expiration lifespan="1000" />
Set the Maximum Idle Time
Set the time (in milliseconds) that entries are allowed to remain idle (unused), after which they are removed. The default value is -1 for unlimited time.
<expiration lifespan="1000" max-idle="1000" />
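Putting the steps together, the expiration element sits inside the cache definition. A minimal sketch, with an illustrative cache name:

```xml
<local-cache name="expiringCache">
    <!-- Entries expire after 1s of life or 1s of idleness -->
    <expiration lifespan="1000" max-idle="1000"/>
</local-cache>
```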
3.6. Troubleshooting Expiration
Calls to put() can be passed a life span value as a parameter. This value defines the interval after which the entry must expire. In cases where eviction is not configured and the life span interval expires, it can appear as if Red Hat JBoss Data Grid has not removed the entry. For example, when viewing JMX statistics, such as the number of entries, you may see an out of date count, or the persistent store associated with JBoss Data Grid may still contain this entry. Behind the scenes, JBoss Data Grid has marked it as an expired entry, but has not removed it. Removal of such entries happens as follows:
- An entry is passivated/overflowed to disk and is discovered to have expired.
- The expiration maintenance thread discovers that an entry it has found is expired.
Calling get() or containsKey() for the expired entry causes JBoss Data Grid to return a null value. The expired entry is later removed by the expiration thread.
Part II. Monitor Your Cache
Chapter 4. Set Up Logging
4.1. About Logging
4.2. Supported Application Logging Frameworks
- JBoss Logging, which is included with Red Hat JBoss Data Grid 7.
4.2.1. About JBoss Logging
4.2.2. JBoss Logging Features
- Provides an innovative, easy to use typed logger.
- Full support for internationalization and localization. Translators work with message bundles in properties files while developers can work with interfaces and annotations.
- Build-time tooling to generate typed loggers for production, and runtime generation of typed loggers for development.
4.3. Boot Logging
4.3.1. Configure Boot Logging
Use the logging.properties file to configure the boot log. This file is a standard Java properties file and can be edited in a text editor. Each line in the file has the format of property=value.
The logging.properties file is available in the $JDG_HOME/standalone/configuration folder.
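As a sketch of the property=value format, a boot log configuration might contain lines like the following. Handler and property names follow JBoss LogManager conventions; treat the exact names and values as illustrative assumptions, not the shipped defaults:

```properties
# Root logger level and handlers
logger.level=INFO
logger.handlers=FILE

# A file handler that writes the boot log
handler.FILE=org.jboss.logmanager.handlers.FileHandler
handler.FILE.level=ALL
handler.FILE.properties=autoFlush,fileName
handler.FILE.autoFlush=true
handler.FILE.fileName=${org.jboss.boot.log.file:boot.log}
```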
4.3.2. Default Log File Locations
Table 4.1. Default Log File Locations
Log File | Location | Description |
---|---|---|
boot.log | $JDG_HOME/standalone/log/ | The Server Boot Log. Contains log messages related to the start up of the server. By default this file is prepended to the server.log. This file may be created independently of the server.log by defining the org.jboss.boot.log property in logging.properties. |
server.log | $JDG_HOME/standalone/log/ | The Server Log. Contains all log messages once the server has launched. |
4.4. Logging Attributes
4.4.1. About Log Levels
The log levels, in ascending order of severity, are: TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. A handler or category configured with the log level WARN will only record messages of the levels WARN, ERROR, and FATAL.
4.4.2. Supported Log Levels
Table 4.2. Supported Log Levels
Log Level | Value | Description |
---|---|---|
FINEST | 300 | - |
FINER | 400 | - |
TRACE | 400 | Used for messages that provide detailed information about the running state of an application. TRACE level log messages are captured when the server runs with the TRACE level enabled. |
DEBUG | 500 | Used for messages that indicate the progress of individual requests or activities of an application. DEBUG level log messages are captured when the server runs with the DEBUG level enabled. |
FINE | 500 | - |
CONFIG | 700 | - |
INFO | 800 | Used for messages that indicate the overall progress of the application. Used for application start up, shut down and other major lifecycle events. |
WARN | 900 | Used to indicate a situation that is not in error but is not considered ideal. Indicates circumstances that can lead to errors in the future. |
WARNING | 900 | - |
ERROR | 1000 | Used to indicate an error that has occurred that could prevent the current activity or request from completing but will not prevent the application from running. |
SEVERE | 1000 | - |
FATAL | 1100 | Used to indicate events that could cause critical service failure and application shutdown and possibly cause JBoss Data Grid to shut down. |
4.4.3. About Log Categories
For example, setting the WARNING log level results in log messages with values of 900, 1000, and 1100 being captured.
4.4.4. About the Root Logger
By default, log messages captured by the root logger are written to server.log. This file is sometimes referred to as the server log.
4.4.5. About Log Handlers
The following types of log handlers are available:
- Console
- File
- Periodic
- Size
- Async
- Custom
4.4.6. Log Handler Types
Table 4.3. Log Handler Types
Log Handler Type | Description | Use Case |
---|---|---|
Console | Console log handlers write log messages to either the host operating system’s standard out (stdout ) or standard error (stderr ) stream. These messages are displayed when JBoss Data Grid is run from a command line prompt. | The Console log handler is preferred when JBoss Data Grid is administered using the command line. In such a case, the messages from a Console log handler are not saved unless the operating system is configured to capture the standard out or standard error stream. |
File | File log handlers are the simplest log handlers. Their primary use is to write log messages to a specified file. | File log handlers are most useful if the requirement is to store all log entries according to the time in one place. |
Periodic | Periodic file handlers write log messages to a named file until a specified period of time has elapsed. Once the time period has elapsed, the specified time stamp is appended to the file name. The handler then continues to write into the newly created log file with the original name. | The Periodic file handler can be used to accumulate log messages on a weekly, daily, hourly or other basis depending on the requirements of the environment. |
Size | Size log handlers write log messages to a named file until the file reaches a specified size. When the file reaches a specified size, it is renamed with a numeric prefix and the handler continues to write into a newly created log file with the original name. Each size log handler must specify the maximum number of files to be kept in this fashion. | The Size handler is best suited to an environment where the log file size must be consistent. |
Async | Async log handlers are wrapper log handlers that provide asynchronous behavior for one or more other log handlers. These are useful for log handlers that have high latency or other performance problems such as writing a log file to a network file system. | The Async log handlers are best suited to an environment where high latency is a problem or when writing to a network file system. |
Custom | Custom log handlers enable you to configure new types of log handlers that have been implemented. A custom handler must be implemented as a Java class that extends java.util.logging.Handler and be contained in a module. | Custom log handlers create customized log handler types and are recommended for advanced users. |
4.4.7. Selecting Log Handlers
- The Console log handler is preferred when JBoss Data Grid is administered using the command line. In such a case, errors and log messages appear on the console window and are not saved unless separately configured to do so.
- The File log handler is used to direct log entries into a specified file. This simplicity is useful if the requirement is to store all log entries according to the time in one place.
- The Periodic log handler is similar to the File handler but creates files according to the specified period. As an example, this handler can be used to accumulate log messages on a weekly, daily, hourly or other basis depending on the requirements of the environment.
- The Size log handler also writes log messages to a specified file, but only while the log file size is within a specified limit. Once the file size reaches the specified limit, log files are written to a new log file. This handler is best suited to an environment where the log file size must be consistent.
- The Async log handler is a wrapper that forces other log handlers to operate asynchronously. This is best suited to an environment where high latency is a problem or when writing to a network file system.
- The Custom log handler creates new, customized types of log handlers. This is an advanced log handler.
4.4.8. About Log Formatters
Log formatters define the appearance of log messages produced by a handler. The format syntax is based on the java.util.Formatter class.
4.5. Logging Sample Configurations
4.5.1. Logging Sample Configuration Location
Logging sample configurations are placed in standalone.xml or clustered.xml for standalone instances, or in domain.xml for managed domain instances.
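Within those files, the logging subsystem is declared as a child element of the server profile. A skeletal sketch (contents elided):

```xml
<subsystem xmlns="urn:jboss:domain:logging:3.0">
    <!-- log handlers are declared here -->
    <!-- log categories (logger elements) are declared here -->
    <!-- the root-logger is declared here -->
</subsystem>
```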
4.5.2. Sample XML Configuration for the Root Logger
Procedure 4.1. Configure the Root Logger
Set the level Property
The level property sets the maximum level of log message that the root logger records.
<subsystem xmlns="urn:jboss:domain:logging:3.0">
   <root-logger>
      <level name="INFO"/>
List the handlers
handlers is a list of log handlers that are used by the root logger.
<subsystem xmlns="urn:jboss:domain:logging:3.0">
   <root-logger>
      <level name="INFO"/>
      <handlers>
         <handler name="CONSOLE"/>
         <handler name="FILE"/>
      </handlers>
   </root-logger>
</subsystem>
4.5.3. Sample XML Configuration for a Log Category
Procedure 4.2. Configure a Log Category
<subsystem xmlns="urn:jboss:domain:logging:3.0">
   <logger category="com.company.accounts.rec" use-parent-handlers="true">
      <level name="WARN"/>
      <handlers>
         <handler name="accounts-rec"/>
      </handlers>
   </logger>
</subsystem>
- Use the category property to specify the log category from which log messages will be captured.
- The use-parent-handlers attribute is set to "true" by default. When set to "true", this category will use the log handlers of the root logger in addition to any other assigned handlers.
- Use the level property to set the maximum level of log message that the log category records.
- The handlers element contains a list of log handlers.
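To keep a category's messages out of the root logger's handlers, use-parent-handlers can be set to "false". A variant sketch using the same illustrative category and handler names:

```xml
<logger category="com.company.accounts.rec" use-parent-handlers="false">
    <level name="WARN"/>
    <handlers>
        <!-- Only this handler receives the category's messages -->
        <handler name="accounts-rec"/>
    </handlers>
</logger>
```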
4.5.4. Sample XML Configuration for a Console Log Handler
Procedure 4.3. Configure the Console Log Handler
<subsystem xmlns="urn:jboss:domain:logging:3.0">
   <console-handler name="CONSOLE" autoflush="true">
      <level name="INFO"/>
      <encoding value="UTF-8"/>
      <target value="System.out"/>
      <filter-spec value="not(match(&quot;JBAS.*&quot;))"/>
      <formatter>
         <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
      </formatter>
   </console-handler>
</subsystem>
Add the Log Handler Identifier Information
The name property sets the unique identifier for this log handler. When autoflush is set to "true", the log messages will be sent to the handler's target immediately upon request.
Set the level Property
The level property sets the maximum level of log messages recorded.
Set the encoding Output
Use encoding to set the character encoding scheme to be used for the output.
Define the target Value
The target property defines the system output stream where the output of the log handler goes. This can be System.err for the system error stream, or System.out for the standard out stream.
Define the filter-spec Property
The filter-spec property is an expression value that defines a filter. The example provided defines a filter that does not match a pattern: not(match("JBAS.*")).
Specify the formatter
Use formatter to list the log formatter used by the log handler.
4.5.5. Sample XML Configuration for a File Log Handler
Procedure 4.4. Configure the File Log Handler
<file-handler name="accounts-rec-trail" autoflush="true">
   <level name="INFO"/>
   <encoding value="UTF-8"/>
   <file relative-to="jboss.server.log.dir" path="accounts-rec-trail.log"/>
   <formatter>
      <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
   </formatter>
   <append value="true"/>
</file-handler>
Add the File Log Handler Identifier Information
The name property sets the unique identifier for this log handler. When autoflush is set to "true", the log messages will be sent to the handler's target immediately upon request.
Set the level Property
The level property sets the maximum level of log message that the handler records.
Set the encoding Output
Use encoding to set the character encoding scheme to be used for the output.
Set the file Object
The file object represents the file where the output of this log handler is written to. It has two configuration properties: relative-to and path. The relative-to property is the directory where the log file is written to. JBoss Enterprise Application Platform 6 file path variables can be specified here. The jboss.server.log.dir variable points to the log/ directory of the server. The path property is the name of the file where the log messages will be written. It is a relative path name that is appended to the value of the relative-to property to determine the complete path.
Specify the formatter
Use formatter to list the log formatter used by the log handler.
Set the append Property
When the append property is set to "true", all messages written by this handler will be appended to an existing file. If set to "false", a new file will be created each time the application server launches. Changes to append require a server reboot to take effect.
4.5.6. Sample XML Configuration for a Periodic Log Handler
Procedure 4.5. Configure the Periodic Log Handler
<periodic-rotating-file-handler name="FILE" autoflush="true">
   <level name="INFO"/>
   <encoding value="UTF-8"/>
   <formatter>
      <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
   </formatter>
   <file relative-to="jboss.server.log.dir" path="server.log"/>
   <suffix value=".yyyy-MM-dd"/>
   <append value="true"/>
</periodic-rotating-file-handler>
Add the Periodic Log Handler Identifier Information
The name property sets the unique identifier for this log handler. When autoflush is set to "true", the log messages will be sent to the handler's target immediately upon request.
Set the level Property
The level property sets the maximum level of log message that the handler records.
Set the encoding Output
Use encoding to set the character encoding scheme to be used for the output.
Specify the formatter
Use formatter to list the log formatter used by the log handler.
Set the file Object
The file object represents the file where the output of this log handler is written to. It has two configuration properties: relative-to and path. The relative-to property is the directory where the log file is written to. JBoss Enterprise Application Platform 6 file path variables can be specified here. The jboss.server.log.dir variable points to the log/ directory of the server. The path property is the name of the file where the log messages will be written. It is a relative path name that is appended to the value of the relative-to property to determine the complete path.
Set the suffix Value
The suffix is appended to the filename of the rotated logs and is used to determine the frequency of rotation. The format of the suffix is a dot (.) followed by a date string, which is parsable by the java.text.SimpleDateFormat class. The log is rotated on the basis of the smallest time unit defined by the suffix. For example, yyyy-MM-dd will result in daily log rotation. See http://docs.oracle.com/javase/6/docs/api/index.html?java/text/SimpleDateFormat.html
Set the append Property
When the append property is set to "true", all messages written by this handler will be appended to an existing file. If set to "false", a new file will be created each time the application server launches. Changes to append require a server reboot to take effect.
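Since the rotation frequency follows the smallest time unit in the suffix, adding an hour field yields hourly rotation. A hedged sketch (the handler name and file path are illustrative):

```xml
<periodic-rotating-file-handler name="HOURLY_FILE" autoflush="true">
    <level name="INFO"/>
    <file relative-to="jboss.server.log.dir" path="hourly.log"/>
    <!-- Smallest time unit is HH, so the log rotates every hour -->
    <suffix value=".yyyy-MM-dd-HH"/>
    <append value="true"/>
</periodic-rotating-file-handler>
```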
4.5.7. Sample XML Configuration for a Size Log Handler
Procedure 4.6. Configure the Size Log Handler
<size-rotating-file-handler name="accounts_debug" autoflush="false">
   <level name="DEBUG"/>
   <encoding value="UTF-8"/>
   <file relative-to="jboss.server.log.dir" path="accounts-debug.log"/>
   <rotate-size value="500k"/>
   <max-backup-index value="5"/>
   <formatter>
      <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
   </formatter>
   <append value="true"/>
</size-rotating-file-handler>
Add the Size Log Handler Identifier Information
The name property sets the unique identifier for this log handler. When autoflush is set to "true", the log messages will be sent to the handler's target immediately upon request.
Set the level Property
The level property sets the maximum level of log message that the handler records.
Set the encoding Output
Use encoding to set the character encoding scheme to be used for the output.
Set the file Object
The file object represents the file where the output of this log handler is written to. It has two configuration properties: relative-to and path. The relative-to property is the directory where the log file is written to. JBoss Enterprise Application Platform 6 file path variables can be specified here. The jboss.server.log.dir variable points to the log/ directory of the server. The path property is the name of the file where the log messages will be written. It is a relative path name that is appended to the value of the relative-to property to determine the complete path.
Specify the rotate-size Value
The rotate-size value is the maximum size that the log file can reach before it is rotated. A single character appended to the number indicates the size units: b for bytes, k for kilobytes, m for megabytes, g for gigabytes. For example: 50m for 50 megabytes.
Set the max-backup-index Number
The max-backup-index is the maximum number of rotated logs that are kept. When this number is reached, the oldest log is reused.
Specify the formatter
Use formatter to list the log formatter used by the log handler.
Set the append Property
When the append property is set to "true", all messages written by this handler will be appended to an existing file. If set to "false", a new file will be created each time the application server launches. Changes to append require a server reboot to take effect.
4.5.8. Sample XML Configuration for an Async Log Handler
Procedure 4.7. Configure the Async Log Handler
<async-handler name="Async_NFS_handlers">
   <level name="INFO"/>
   <queue-length value="512"/>
   <overflow-action value="block"/>
   <subhandlers>
      <handler name="FILE"/>
      <handler name="accounts-record"/>
   </subhandlers>
</async-handler>
- The name property sets the unique identifier for this log handler.
- The level property sets the maximum level of log message that the handler records.
- The queue-length defines the maximum number of log messages that will be held by this handler while waiting for sub-handlers to respond.
- The overflow-action defines how this handler responds when its queue length is exceeded. This can be set to BLOCK or DISCARD. BLOCK makes the logging application wait until there is available space in the queue; this is the same behavior as a non-async log handler. DISCARD allows the logging application to continue but the log message is deleted.
- The subhandlers list is the list of log handlers to which this async handler passes its log messages.
Part III. Set Up Cache Modes
Chapter 5. Cache Modes
- Local mode is the only non-clustered cache mode offered in JBoss Data Grid. In local mode, JBoss Data Grid operates as a simple single-node in-memory data cache. Local mode is most effective when scalability and failover are not required and provides high performance in comparison with clustered modes.
- Clustered mode replicates state changes to a subset of nodes. The subset size should be sufficient for fault tolerance purposes, but not large enough to hinder scalability. Before attempting to use clustered mode, it is important to first configure JGroups for a clustered configuration. For details about configuring JGroups, see Section 30.2, “Configure JGroups (Library Mode)”
5.1. About Cache Containers
The cache-container element acts as a parent of one or more (local or clustered) caches. To add clustered caches to the container, transport must be defined.
Procedure 5.1. How to Configure the Cache Container
<subsystem xmlns="urn:infinispan:server:core:8.3" default-cache-container="local">
   <cache-container name="local" default-cache="default" statistics="true" start="EAGER">
      <local-cache name="default" start="EAGER" statistics="false">
         <!-- Additional configuration information here -->
      </local-cache>
   </cache-container>
</subsystem>
Configure the Cache Container
The cache-container element specifies information about the cache container using the following parameters:
- The name parameter defines the name of the cache container.
- The default-cache parameter defines the name of the default cache used with the cache container.
- The statistics attribute is optional and is true by default. Statistics are useful in monitoring JBoss Data Grid via JMX or JBoss Operations Network; however, they adversely affect performance. Disable this attribute by setting it to false if it is not required.
- The start parameter indicates when the cache container starts, i.e. whether it will start lazily when requested or "eagerly" when the server starts up. Valid values for this parameter are EAGER and LAZY.
Configure Per-cache Statistics
If statistics are enabled at the container level, per-cache statistics can be selectively disabled for caches that do not require monitoring by setting the statistics attribute to false.
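As noted above, adding clustered caches requires a transport definition in the container. The following is a hedged sketch, not a definitive configuration: the container name, cache name, and lock-timeout value are illustrative assumptions:

```xml
<cache-container name="clustered" default-cache="default" statistics="true">
    <!-- transport must be defined before clustered caches can be added -->
    <transport lock-timeout="60000"/>
    <replicated-cache name="default" mode="SYNC">
        <!-- Additional configuration information here -->
    </replicated-cache>
</cache-container>
```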
5.2. Local Mode
In local mode, JBoss Data Grid still provides features such as:
- Write-through and write-behind caching to persist data.
- Entry eviction to prevent the Java Virtual Machine (JVM) running out of memory.
- Support for entries that expire after a defined period.
The cache interface extends ConcurrentMap, resulting in a simple migration process from a map to JBoss Data Grid.
5.2.1. Configure Local Mode
A local cache is configured using the local-cache element.
Procedure 5.2. The local-cache Element
<cache-container name="local" default-cache="default" statistics="true">
   <local-cache name="default" start="EAGER" batching="false" statistics="true">
      <!-- Additional configuration information here -->
   </local-cache>
</cache-container>
The local-cache element specifies information about the local cache used with the cache container using the following parameters:
- The name parameter specifies the name of the local cache to use.
- The start parameter indicates when the cache starts, i.e. whether it will start lazily when requested or eagerly when the server starts up. Valid values for this parameter are EAGER and LAZY.
- The batching parameter specifies whether batching is enabled for the local cache.
- If statistics are enabled at the container level, per-cache statistics can be selectively disabled for caches that do not require monitoring by setting the statistics attribute to false.
Alternatively, create a DefaultCacheManager with the "no-argument" constructor. Both of these methods create a local default cache.
If a cache container does not define a <transport/> element, it can only contain local caches. The container used in the example can only contain local caches as it does not have a <transport/> element.
The cache interface extends ConcurrentMap and is compatible with multiple cache systems.
5.3. Clustered Modes
- Replication Mode replicates any entry that is added across all cache instances in the cluster.
- Invalidation Mode does not share any data, but signals remote caches to initiate the removal of invalid entries.
- Distribution Mode stores each entry on a subset of nodes instead of on all nodes in the cluster.
5.3.1. Asynchronous and Synchronous Operations
5.3.2. About Asynchronous Communications
In Red Hat JBoss Data Grid, the local, distributed, and replicated modes are represented by the local-cache, distributed-cache, and replicated-cache elements respectively. Each of these elements contains a mode property, the value of which can be set to SYNC for synchronous or ASYNC for asynchronous communications.
Example 5.1. Asynchronous Communications Example Configuration
<replicated-cache name="default" start="EAGER" mode="ASYNC" batching="false" statistics="true">
  <!-- Additional configuration information here -->
</replicated-cache>
Note
5.3.3. Cache Mode Troubleshooting
5.3.3.1. Invalid Data in ReadExternal
If invalid data is passed to readExternal, it can be because when using Cache.putAsync(), starting serialization can cause your object to be modified, corrupting the data stream passed to readExternal. This can be resolved by synchronizing access to the object.
5.3.3.2. Cluster Physical Address Retrieval
The physical address can be retrieved using an instance method call, for example: AdvancedCache.getRpcManager().getTransport().getPhysicalAddresses().
Chapter 6. Set Up Distribution Mode
6.1. About Distribution Mode
6.2. Distribution Mode's Consistent Hash Algorithm
The number of hash space segments is configured using numSegments and cannot be changed without restarting the cluster. The mapping of keys to segments is also fixed: a key maps to the same segment, regardless of how the topology of the cluster changes.
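The fixed key-to-segment mapping can be sketched as follows. This is an illustrative simplification, not Infinispan's actual consistent hash implementation: the segment depends only on the key and the fixed segment count, so topology changes never move a key to a different segment; only the assignment of segments to nodes changes.

```java
public class SegmentMapping {
    // A key's segment depends only on its hash and the fixed segment count,
    // so cluster topology changes never move a key to a different segment.
    static int segmentOf(Object key, int numSegments) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(key.hashCode(), numSegments);
    }

    public static void main(String[] args) {
        int numSegments = 20; // fixed at startup via the segments attribute
        System.out.println("key1 -> segment " + segmentOf("key1", numSegments));
    }
}
```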
6.3. Locating Entries in Distribution Mode
A PUT operation can result in as many remote calls as specified by the owners parameter, while a GET operation executed on any node in the cluster results in a single remote call. In the background, the GET operation results in the same number of remote calls as a PUT operation (specifically the value of the owners parameter), but these occur in parallel and the returned entry is passed to the caller as soon as one returns.
6.4. Return Values in Distribution Mode
6.5. Configure Distribution Mode
Procedure 6.1. The distributed-cache Element
<cache-container name="clustered" default-cache="default" statistics="true">
  <!-- Additional configuration information here -->
  <distributed-cache name="default" mode="SYNC" segments="20" start="EAGER" owners="2" statistics="true">
    <!-- Additional configuration information here -->
  </distributed-cache>
</cache-container>
The distributed-cache element configures settings for the distributed cache using the following parameters:
- The name parameter provides a unique identifier for the cache.
- The mode parameter sets the clustered cache mode. Valid values are SYNC (synchronous) and ASYNC (asynchronous).
- The (optional) segments parameter specifies the number of hash space segments per cluster. The recommended value for this parameter is ten multiplied by the cluster size, and the default value is 20.
- The start parameter specifies whether the cache starts when the server starts up or when it is requested or deployed.
- The owners parameter indicates the number of nodes that will contain the hash segment.
- If statistics are enabled at the container level, per-cache statistics can be selectively disabled for caches that do not require monitoring by setting the statistics attribute to false.
Important
6.6. Synchronous and Asynchronous Distribution
Example 6.1. Communication Mode example
Consider a cluster with three nodes A, B, and C, and a key K that maps to nodes A and B. Perform an operation on node C that requires a return value, for example Cache.remove(K). To execute successfully, the operation must first synchronously forward the call to both nodes A and B, and then wait for a result returned from either node A or B. If asynchronous communication were used, the usefulness of the returned values cannot be guaranteed, despite the operation behaving as expected.
Chapter 7. Set Up Replication Mode
7.1. About Replication Mode
7.2. Optimized Replication Mode Usage
7.3. Configure Replication Mode
Procedure 7.1. The replicated-cache Element
<cache-container name="clustered" default-cache="default" statistics="true">
  <!-- Additional configuration information here -->
  <replicated-cache name="default" mode="SYNC" start="EAGER" statistics="true">
    <!-- Additional configuration information here -->
  </replicated-cache>
</cache-container>
Important
The replicated-cache element configures settings for the replicated cache using the following parameters:
- The name parameter provides a unique identifier for the cache.
- The mode parameter sets the clustered cache mode. Valid values are SYNC (synchronous) and ASYNC (asynchronous).
- The start parameter specifies whether the cache starts when the server starts up or when it is requested or deployed.
- If statistics are enabled at the container level, per-cache statistics can be selectively disabled for caches that do not require monitoring by setting the statistics attribute to false.
For details about the cache-container and locking elements, see the appropriate chapter.
7.4. Synchronous and Asynchronous Replication
- Synchronous replication blocks a thread or caller (for example on a put() operation) until the modifications are replicated across all nodes in the cluster. By waiting for acknowledgments, synchronous replication ensures that all replications are successfully applied before the operation is concluded.
- Asynchronous replication operates significantly faster than synchronous replication because it does not need to wait for responses from nodes. Asynchronous replication performs the replication in the background and the call returns immediately. Errors that occur during asynchronous replication are written to a log. As a result, a transaction can be successfully completed despite the fact that replication of the transaction may not have succeeded on all the cache instances in the cluster.
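The difference between the two modes can be sketched with CompletableFuture. This is an illustrative model, not Infinispan's transport: synchronous replication waits for every acknowledgment before returning, while asynchronous replication returns before any arrive.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ReplicationModes {
    // Synchronous replication: block until every node acknowledges, then
    // report how many acknowledgments were collected.
    static int replicateSync(List<CompletableFuture<Void>> acks) {
        CompletableFuture.allOf(acks.toArray(new CompletableFuture[0])).join();
        return acks.size();
    }

    // Asynchronous replication: fire the updates and return immediately;
    // failures would only surface later, in a log.
    static void replicateAsync(List<CompletableFuture<Void>> acks) {
        // no waiting: the call returns before the acknowledgments complete
    }

    public static void main(String[] args) {
        List<CompletableFuture<Void>> acks = List.of(
            CompletableFuture.completedFuture((Void) null),
            CompletableFuture.completedFuture((Void) null));
        System.out.println("acked by " + replicateSync(acks) + " nodes");
        replicateAsync(acks); // returns immediately regardless of ack state
    }
}
```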
7.4.1. Troubleshooting Asynchronous Replication Behavior
- Disable state transfer and use a ClusteredCacheLoader to lazily look up remote state as and when needed.
- Enable state transfer and REPL_SYNC. Use the Asynchronous API (for example, cache.putAsync(k, v)) to activate 'fire-and-forget' capabilities.
- Enable state transfer and REPL_ASYNC. All RPCs end up becoming synchronous, but client threads will not be held up if a replication queue is enabled (which is recommended for asynchronous mode).
7.5. The Replication Queue
- Previously set intervals.
- The queue size exceeding a specified number of elements.
- A combination of previously set intervals and the queue size exceeding a specified number of elements.
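The flush triggers above can be modeled with a small sketch. This illustrates the queue-size trigger only; a scheduler would additionally call flush() at the configured queue-flush-interval. It is an illustration, not the actual Infinispan implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplicationQueueSketch {
    // Updates accumulate and are flushed either when a timer fires
    // (queue-flush-interval) or when the queue grows past queue-size.
    private final List<String> queue = new ArrayList<>();
    private final int maxSize;
    private int flushes = 0;

    ReplicationQueueSketch(int maxSize) {
        this.maxSize = maxSize;
    }

    void add(String update) {
        queue.add(update);
        if (queue.size() >= maxSize) {
            flush(); // size trigger; a scheduler would also call flush() periodically
        }
    }

    void flush() {
        if (!queue.isEmpty()) {
            queue.clear(); // replicate the whole batch in one network call
            flushes++;
        }
    }

    int flushCount() {
        return flushes;
    }

    public static void main(String[] args) {
        ReplicationQueueSketch q = new ReplicationQueueSketch(3);
        q.add("a");
        q.add("b");
        q.add("c"); // third add crosses the threshold and triggers a flush
        System.out.println("flushes: " + q.flushCount());
    }
}
```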
7.5.1. Replication Queue Usage
- Disable asynchronous marshalling.
- Set the max-threads count value to 1 for the executor attribute of the transport element. The executor is only available in Library Mode, and is therefore defined in its configuration file as follows:
<transport executor="infinispan-transport"/>
- Set the flush interval (queue-flush-interval, value is in milliseconds) and the queue size (queue-size) as follows:
Example 7.1. Replication Queue in Asynchronous Mode
<replicated-cache name="asyncCache" start="EAGER" mode="ASYNC" batching="false" indexing="NONE" statistics="true" queue-size="1000" queue-flush-interval="500">
  <!-- Additional configuration information here -->
</replicated-cache>
7.6. About Replication Guarantees
7.7. Replication Traffic on Internal Networks
Some cloud service providers charge less for traffic over internal IP addresses than for traffic over public IP addresses, or do not charge at all for internal network traffic (for example, GoGrid). To take advantage of lower rates, you can configure Red Hat JBoss Data Grid to transfer replication traffic using the internal network. With such a configuration, it is difficult to know the internal IP address you are assigned. JBoss Data Grid uses JGroups interfaces to solve this problem.
Chapter 8. Set Up Invalidation Mode
8.1. About Invalidation Mode
8.2. Configure Invalidation Mode
Procedure 8.1. The invalidation-cache Element
<cache-container name="local" default-cache="default" statistics="true">
  <invalidation-cache name="default" mode="ASYNC" start="EAGER" statistics="true">
    <!-- Additional configuration information here -->
  </invalidation-cache>
</cache-container>
The invalidation-cache element configures settings for the invalidation cache using the following parameters:
- The name parameter provides a unique identifier for the cache.
- The mode parameter sets the clustered cache mode. Valid values are SYNC (synchronous) and ASYNC (asynchronous).
- The start parameter specifies whether the cache starts when the server starts up or when it is requested or deployed.
- If statistics are enabled at the container level, per-cache statistics can be selectively disabled for caches that do not require monitoring by setting the statistics attribute to false.
Important
For details about the cache-container, locking, and transaction elements, see the appropriate chapter.
8.3. Synchronous/Asynchronous Invalidation
- Synchronous invalidation blocks the thread until all caches in the cluster have received invalidation messages and evicted the obsolete data.
- Asynchronous invalidation operates in a fire-and-forget mode that allows invalidation messages to be broadcast without blocking a thread to wait for responses.
8.4. The L1 Cache and Invalidation
Chapter 9. State Transfer
In distribution mode, a new node receives only part of the data, so that the cluster keeps owners copies of each key in the cache (as determined through consistent hashing). In invalidation mode the initial state transfer is similar to replication mode, the only difference being that the nodes are not guaranteed to have the same state. When a node leaves, a replicated mode or invalidation mode cache does not perform any state transfer. A distributed cache needs to make additional copies of the keys that were stored on the leaving nodes, again to keep owners copies of each key.
If state transfer is disabled, a ClusterLoader must be configured, otherwise a node will become the owner or backup owner of a key without the data being loaded into its cache. In addition, if state transfer is disabled in distributed mode then a key will occasionally have fewer than owners owners.
9.1. Non-Blocking State Transfer
- Minimize the interval(s) where the entire cluster cannot respond to requests because of a state transfer in progress.
- Minimize the interval(s) where an existing member stops responding to requests because of a state transfer in progress.
- Allow state transfer to occur with a drop in the performance of the cluster. However, the drop in performance during the state transfer does not throw any exception, and allows processes to continue.
- Allow a GET operation to successfully retrieve a key from another node without returning a null value during a progressive state transfer.
- The blocking protocol queues the transaction delivery during the state transfer.
- State transfer control messages (such as CacheTopologyControlCommand) are sent according to the total order information.
9.2. Suppress State Transfer via JMX
If a cache is started while rebalancing is suppressed, the getCache() call will time out after stateTransfer.timeout expires unless rebalancing is re-enabled or stateTransfer.awaitInitialTransfer is set to false.
9.3. The rebalancingEnabled Attribute
State transfer can be suppressed using the rebalancingEnabled JMX attribute, and requires no specific configuration. The rebalancingEnabled attribute can be modified for the entire cluster from the LocalTopologyManager JMX Mbean on any node. This attribute is true by default, and is configurable programmatically.
<state-transfer await-initial-transfer="false"/>
Part IV. Enabling APIs
Chapter 10. Enabling APIs Declaratively
10.1. Batching API
Batching may be enabled on a per-cache basis by defining a transaction mode of BATCH. The following example demonstrates this:
<local-cache>
  <transaction mode="BATCH"/>
</local-cache>
10.2. Grouping API
The grouping API may be enabled on a per-cache basis by adding the groups element, as seen in the following example:
<distributed-cache>
  <groups enabled="true"/>
</distributed-cache>
Assuming a custom Grouper exists, it may be defined by passing in the class name as seen below:
<distributed-cache>
  <groups enabled="true">
    <grouper class="com.acme.KXGrouper" />
  </groups>
</distributed-cache>
10.3. Externalizable API
An Externalizer is a class that can:
- Marshall a given object type to a byte array.
- Unmarshall the contents of a byte array into an instance of the object type.
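The two responsibilities can be sketched with plain java.io streams. This is a simplified stand-in for an Infinispan externalizer; the real interface declares additional methods (for example, the type classes it handles), which are omitted here, and the Book fields (title, pages) are purely illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class BookExternalizerSketch {
    // Marshall: object fields -> byte array
    static byte[] marshall(String title, int pages) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(title);
            out.writeInt(pages);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    // Unmarshall: byte array -> object fields (title shown here)
    static String unmarshallTitle(byte[] data) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            return in.readUTF();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = marshall("Infinispan Guide", 320);
        System.out.println(unmarshallTitle(data));
    }
}
```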
10.3.1. Register the Advanced Externalizer (Declaratively)
Procedure 10.1. Register the Advanced Externalizer
<infinispan>
  <cache-container>
    <serialization>
      <advanced-externalizer class="Book$BookExternalizer" />
    </serialization>
  </cache-container>
</infinispan>
- Add the serialization element to the cache-container element.
- Add the advanced-externalizer element, defining the custom Externalizer with the class attribute. Replace the Book$BookExternalizer value as required.
10.3.2. Custom Externalizer ID Values
Table 10.1. Reserved Externalizer ID Ranges
ID Range | Reserved For |
---|---|
1000-1099 | The Infinispan Tree Module |
1100-1199 | Red Hat JBoss Data Grid Server modules |
1200-1299 | Hibernate Infinispan Second Level Cache |
1300-1399 | JBoss Data Grid Lucene Directory |
1400-1499 | Hibernate OGM |
1500-1599 | Hibernate Search |
1600-1699 | Infinispan Query Module |
1700-1799 | Infinispan Remote Query Module |
1800-1849 | JBoss Data Grid Scripting Module |
1850-1899 | JBoss Data Grid Server Event Logger Module |
1900-1999 | JBoss Data Grid Remote Store |
10.3.2.1. Customize the Externalizer ID (Declaratively)
Procedure 10.2. Customizing the Externalizer ID (Declaratively)
<infinispan>
  <cache-container>
    <serialization>
      <advanced-externalizer id="123" class="Book$BookExternalizer"/>
    </serialization>
  </cache-container>
</infinispan>
- Add the serialization element to the cache-container element.
- Add the advanced-externalizer element to add information about the new advanced externalizer.
- Define the externalizer ID using the id attribute. Ensure that the selected ID is not from the range of IDs reserved for other modules.
- Define the externalizer class using the class attribute. Replace the Book$BookExternalizer value as required.
Chapter 11. Set Up and Configure the Infinispan Query API
11.1. Set Up Infinispan Query
11.1.1. Infinispan Query Dependencies in Library Mode
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-embedded-query</artifactId>
  <version>${infinispan.version}</version>
</dependency>
Non-Maven users must install the infinispan-embedded-query.jar and infinispan-embedded.jar files from the JBoss Data Grid distribution.
Warning
The Infinispan query API directly exposes the Hibernate Search and Lucene APIs and cannot be embedded within the infinispan-embedded-query.jar file. Do not include other versions of Hibernate Search and Lucene in the same deployment as infinispan-embedded-query. This action will cause classpath conflicts and result in unexpected behavior.
11.2. Indexing Modes
11.2.1. Managing Indexes
- Each node can maintain an individual copy of the global index.
- The index can be shared across all nodes.
If each node maintains an individual copy of the index, set indexLocalOnly to false; each node then indexes the writes it receives from other nodes so that all copies of the index stay current. If the index is shared, set indexLocalOnly to true; only the node where the write originates is then required to update the shared index.
Indexing also requires a directory provider, which is used to store the index. The index can be stored, for example, in-memory, on the filesystem, or in a distributed cache.
11.2.2. Managing the Index in Local Mode
The indexLocalOnly option is meaningless in local mode.
11.2.3. Managing the Index in Replicated Mode
If each node maintains an individual copy of the index, set indexLocalOnly to false, so that each node will apply the required updates it receives from other nodes in addition to the updates started locally. If the index is shared, indexLocalOnly must be set to true so that each node will only apply the changes originated locally. While there is no risk of having an out-of-sync index, this causes contention on the node used for updating the index.
Figure 11.1. Replicated Cache Querying
11.2.4. Managing the Index in Distribution Mode
In distribution mode, the index is shared and each node applies only the changes that originate locally, with indexLocalOnly set to true.
Figure 11.2. Querying with a Shared Index
11.2.5. Managing the Index in Invalidation Mode
11.3. Directory Providers
- RAM Directory Provider
- Filesystem Directory Provider
- Infinispan Directory Provider
11.3.1. RAM Directory Provider
- maintain its own index.
- use Lucene's in-memory or filesystem-based index directory.
<local-cache name="indexesInMemory">
  <indexing index="LOCAL">
    <property name="default.directory_provider">ram</property>
  </indexing>
</local-cache>
11.3.2. Filesystem Directory Provider
Example 11.1. Disk-based Index Store
<local-cache name="indexesInInfinispan">
  <indexing index="ALL">
    <property name="default.directory_provider">filesystem</property>
    <property name="default.indexBase">/tmp/ispn_index</property>
  </indexing>
</local-cache>
11.3.3. Infinispan Directory Provider
The Infinispan directory provider is based on the infinispan-directory module.
Note
Red Hat supports infinispan-directory in the context of the Querying feature, not as a standalone feature.
The infinispan-directory module allows Lucene to store indexes within the distributed data grid. This allows the indexes to be distributed, stored in-memory, and optionally written to disk using the cache store for durability.
Important
The default.exclusive_index_use property should remain set to its default value of true, as this provides major performance increases; however, if external applications access the same index in use by Infinispan this property must be set to false. The default value is recommended for the majority of applications and use cases due to the performance increases, so only change this if absolutely necessary.
The InfinispanIndexManager provides a default back end that sends all updates to the master node, which later applies the updates to the index. If the master node fails, updates can be lost, leaving the cache and the index out of sync. Non-default back ends are not supported.
Example 11.2. Enable Shared Indexes
<local-cache name="indexesInInfinispan">
  <indexing index="ALL">
    <property name="default.directory_provider">infinispan</property>
    <property name="default.indexmanager">org.infinispan.query.indexmanager.InfinispanIndexManager</property>
  </indexing>
</local-cache>
11.4. Configure Indexing
11.4.1. Configure the Index in Remote Client-Server Mode
- NONE
- LOCAL = indexLocalOnly="true"
- ALL = indexLocalOnly="false"
Example 11.3. Configuration in Remote Client-Server Mode
<indexing index="LOCAL">
  <property name="default.directory_provider">ram</property>
  <!-- Additional configuration information here -->
</indexing>
By default the Lucene caches will be created as local caches; however, with this configuration the Lucene search results are not shared between nodes in the cluster. To prevent this, define the caches required by Lucene in a clustered mode, as seen in the following configuration snippet:
Example 11.4. Configuring the Lucene cache in Remote Client-Server Mode
<cache-container name="clustered" default-cache="repltestcache">
  [...]
  <replicated-cache name="LuceneIndexesMetadata" mode="SYNC">
    <transaction mode="NONE"/>
    <indexing index="NONE"/>
  </replicated-cache>
  <distributed-cache name="LuceneIndexesData" mode="SYNC">
    <transaction mode="NONE"/>
    <indexing index="NONE"/>
  </distributed-cache>
  <replicated-cache name="LuceneIndexesLocking" mode="SYNC">
    <transaction mode="NONE"/>
    <indexing index="NONE"/>
  </replicated-cache>
  [...]
</cache-container>
11.4.2. Rebuilding the Index
- The definition of what is indexed in the types has changed.
- A parameter affecting how the index is defined, such as the Analyser, changes.
- The index is destroyed or corrupted, possibly due to a system administration error.
To rebuild the index, obtain a reference to the MassIndexer and start it as follows:
SearchManager searchManager = Search.getSearchManager(cache);
searchManager.getMassIndexer().start();
11.5. Tuning the Index
11.5.1. Near-Realtime Index Manager
<property name="default.indexmanager">near-real-time</property>
11.5.2. Tuning Infinispan Directory
- Data cache
- Metadata cache
- Locking cache
Example 11.5. Tuning the Infinispan Directory
<distributed-cache name="indexedCache" mode="SYNC" owners="2">
  <indexing index="LOCAL">
    <property name="default.indexmanager">org.infinispan.query.indexmanager.InfinispanIndexManager</property>
    <property name="default.metadata_cachename">lucene_metadata_repl</property>
    <property name="default.data_cachename">lucene_data_dist</property>
    <property name="default.locking_cachename">lucene_locking_repl</property>
  </indexing>
</distributed-cache>
<replicated-cache name="lucene_metadata_repl" mode="SYNC" />
<distributed-cache name="lucene_data_dist" mode="SYNC" owners="2" />
<replicated-cache name="lucene_locking_repl" mode="SYNC" />
11.5.3. Per-Index Configuration
Indexing properties in the examples above use the default. prefix for each property. To specify a different configuration for each index, replace default with the index name. By default, this is the full class name of the indexed object; however, you can override the index name in the @Indexed annotation.
Part V. Remote Client-Server Mode Interfaces
- The Asynchronous API (can only be used in conjunction with the Hot Rod Client in Remote Client-Server Mode)
- The REST Interface
- The Memcached Interface
- The Hot Rod Interface
- The RemoteCache API
Chapter 12. The REST Interface
Important
To use the REST endpoint without authentication or encryption, remove the authentication and encryption parameters from the connector.
12.1. The REST Interface Connector
12.1.1. Configure REST Connectors
Use the following procedure to configure the rest-connector element in Red Hat JBoss Data Grid's Remote Client-Server mode.
Procedure 12.1. Configuring REST Connectors for Remote Client-Server Mode
<subsystem xmlns="urn:infinispan:server:endpoint:8.0">
  <rest-connector cache-container="local" context-path="${CONTEXT_PATH}"/>
</subsystem>
The rest-connector element specifies the configuration information for the REST connector.
- The cache-container parameter names the cache container used by the REST connector. This is a mandatory parameter.
- The context-path parameter specifies the context path for the REST connector. The default value for this parameter is an empty string (""). This is an optional parameter.
- The security-domain parameter specifies that the specified domain, declared in the security subsystem, should be used to authenticate access to the REST endpoint. This is an optional parameter. If this parameter is omitted, no authentication is performed.
- The auth-method parameter specifies the method used to retrieve credentials for the end point. The default value for this parameter is BASIC. Supported alternate values include BASIC, DIGEST, and CLIENT-CERT. This is an optional parameter.
- The security-mode parameter specifies whether authentication is required only for write operations (such as PUT, POST and DELETE) or for read operations (such as GET and HEAD) as well. Valid values for this parameter are WRITE for authenticating write operations only, or READ_WRITE to authenticate read and write operations. The default value for this parameter is READ_WRITE.
Chapter 13. The Memcached Interface
13.1. About Memcached Servers
- Standalone, where each server acts independently without communication with any other memcached servers.
- Clustered, where servers replicate and distribute data to other memcached servers.
13.2. Memcached Statistics
Table 13.1. Memcached Statistics
Statistic | Data Type | Details |
---|---|---|
uptime | 32-bit unsigned integer. | Contains the time (in seconds) that the memcached instance has been available and running. |
time | 32-bit unsigned integer. | Contains the current time. |
version | String | Contains the current version. |
curr_items | 32-bit unsigned integer. | Contains the number of items currently stored by the instance. |
total_items | 32-bit unsigned integer. | Contains the total number of items stored by the instance during its lifetime. |
cmd_get | 64-bit unsigned integer | Contains the total number of get operation requests (requests to retrieve data). |
cmd_set | 64-bit unsigned integer | Contains the total number of set operation requests (requests to store data). |
get_hits | 64-bit unsigned integer | Contains the number of keys that are present from the keys requested. |
get_misses | 64-bit unsigned integer | Contains the number of keys that were not found from the keys requested. |
delete_hits | 64-bit unsigned integer | Contains the number of keys to be deleted that were located and successfully deleted. |
delete_misses | 64-bit unsigned integer | Contains the number of keys to be deleted that were not located and therefore could not be deleted. |
incr_hits | 64-bit unsigned integer | Contains the number of keys to be incremented that were located and successfully incremented |
incr_misses | 64-bit unsigned integer | Contains the number of keys to be incremented that were not located and therefore could not be incremented. |
decr_hits | 64-bit unsigned integer | Contains the number of keys to be decremented that were located and successfully decremented. |
decr_misses | 64-bit unsigned integer | Contains the number of keys to be decremented that were not located and therefore could not be decremented. |
cas_hits | 64-bit unsigned integer | Contains the number of keys to be compared and swapped that were found and successfully compared and swapped. |
cas_misses | 64-bit unsigned integer | Contains the number of keys to be compared and swapped that were not found and therefore not compared and swapped. |
cas_badval | 64-bit unsigned integer | Contains the number of keys where a compare and swap occurred but the original value did not match the supplied value. |
evictions | 64-bit unsigned integer | Contains the number of eviction calls performed. |
bytes_read | 64-bit unsigned integer | Contains the total number of bytes read by the server from the network. |
bytes_written | 64-bit unsigned integer | Contains the total number of bytes written by the server to the network. |
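A memcached stats response is a sequence of STAT <name> <value> lines terminated by END. The following minimal parser for one such line is an illustrative sketch, not a full memcached client:

```java
public class MemcachedStatLine {
    // Parses one line of a memcached "stats" response, e.g. "STAT uptime 500",
    // into a {name, value} pair. Real responses end with an "END" line.
    static String[] parse(String line) {
        String[] parts = line.trim().split(" ", 3);
        if (parts.length != 3 || !parts[0].equals("STAT")) {
            throw new IllegalArgumentException("not a STAT line: " + line);
        }
        return new String[] { parts[1], parts[2] };
    }

    public static void main(String[] args) {
        String[] stat = parse("STAT uptime 500");
        System.out.println(stat[0] + " = " + stat[1]);
    }
}
```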
13.3. The Memcached Interface Connector
The following example configuration enables a memcached server using the memcached socket binding, and exposes the memcachedCache cache declared in the local container, using defaults for all other settings.
<memcached-connector socket-binding="memcached" cache-container="local"/>
13.3.1. Configure Memcached Connectors
Use the following procedure to configure the memcached connector within the connectors element in Red Hat JBoss Data Grid's Remote Client-Server Mode.
Procedure 13.1. Configuring the Memcached Connector in Remote Client-Server Mode
The memcached-connector element defines the configuration elements for use with memcached.
<subsystem xmlns="urn:infinispan:server:endpoint:8.0">
  <memcached-connector socket-binding="memcached"
                       cache-container="local"
                       worker-threads="${VALUE}"
                       idle-timeout="${VALUE}"
                       tcp-nodelay="${TRUE/FALSE}"
                       send-buffer-size="${VALUE}"
                       receive-buffer-size="${VALUE}" />
</subsystem>
- The socket-binding parameter specifies the socket binding port used by the memcached connector. This is a mandatory parameter.
- The cache-container parameter names the cache container used by the memcached connector. This is a mandatory parameter.
- The worker-threads parameter specifies the number of worker threads available for the memcached connector. The default value for this parameter is 160. This is an optional parameter.
- The idle-timeout parameter specifies the time (in milliseconds) the connector can remain idle before the connection times out. The default value for this parameter is -1, which means that no timeout period is set. This is an optional parameter.
- The tcp-nodelay parameter specifies whether TCP packets will be delayed and sent out in batches. Valid values for this parameter are true and false. The default value for this parameter is true. This is an optional parameter.
- The send-buffer-size parameter indicates the size of the send buffer for the memcached connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter.
- The receive-buffer-size parameter indicates the size of the receive buffer for the memcached connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter.
Chapter 14. The Hot Rod Interface
14.1. About Hot Rod
14.2. The Benefits of Using Hot Rod over Memcached
- Memcached
- The memcached protocol causes the server endpoint to use the memcached text wire protocol. The memcached wire protocol has the benefit of being commonly used, and is available for almost any platform. All of JBoss Data Grid's functions, including clustering, state sharing for scalability, and high availability, are available when using memcached.
However, the memcached protocol lacks dynamicity, resulting in the need to manually update the list of server nodes on your clients in the event one of the nodes in a cluster fails. Also, memcached clients are not aware of the location of the data in the cluster. This means that they will request data from a non-owner node, incurring the penalty of an additional request from that node to the actual owner, before being able to return the data to the client. This is where the Hot Rod protocol is able to provide greater performance than memcached.
- Hot Rod
- JBoss Data Grid's Hot Rod protocol is a binary wire protocol that offers all the capabilities of memcached, while also providing better scaling, durability, and elasticity.
The Hot Rod protocol does not need the hostnames and ports of each node in the remote cache, whereas memcached requires these parameters to be specified. Hot Rod clients automatically detect changes in the topology of clustered Hot Rod servers; when new nodes join or leave the cluster, clients update their Hot Rod server topology view. Consequently, Hot Rod provides ease of configuration and maintenance, with the advantage of dynamic load balancing and failover.
Additionally, the Hot Rod wire protocol uses smart routing when connecting to a distributed cache. This involves sharing a consistent hash algorithm between the server nodes and clients, resulting in faster reading and writing capabilities than memcached.
Warning
cacheManager.getCache
method.
14.3. Hot Rod Hash Functions
14.4. The Hot Rod Interface Connector
The following enables a Hot Rod server using the hotrod socket binding.
<hotrod-connector socket-binding="hotrod" cache-container="local" />
The connector creates a supporting topology cache with default settings. To tune these settings, add the <topology-state-transfer /> child element to the connector as follows:
<hotrod-connector socket-binding="hotrod" cache-container="local">
  <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000" />
</hotrod-connector>
Note
14.4.1. Configure Hot Rod Connectors
The hotrod-connector and topology-state-transfer elements must be configured based on the following procedure.
Procedure 14.1. Configuring Hot Rod Connectors for Remote Client-Server Mode
<subsystem xmlns="urn:infinispan:server:endpoint:8.0">
  <hotrod-connector socket-binding="hotrod"
                    cache-container="local"
                    worker-threads="${VALUE}"
                    idle-timeout="${VALUE}"
                    tcp-nodelay="${TRUE/FALSE}"
                    send-buffer-size="${VALUE}"
                    receive-buffer-size="${VALUE}" >
    <topology-state-transfer lock-timeout="${MILLISECONDS}"
                             replication-timeout="${MILLISECONDS}"
                             external-host="${HOSTNAME}"
                             external-port="${PORT}"
                             lazy-retrieval="${TRUE/FALSE}" />
  </hotrod-connector>
</subsystem>
- The
hotrod-connector
element defines the configuration elements for use with Hot Rod.- The
socket-binding
parameter specifies the socket binding port used by the Hot Rod connector. This is a mandatory parameter. - The
cache-container
parameter names the cache container used by the Hot Rod connector. This is a mandatory parameter. - The
worker-threads
parameter specifies the number of worker threads available for the Hot Rod connector. The default value for this parameter is 160
. This is an optional parameter. - The
idle-timeout
parameter specifies the time (in milliseconds) the connector can remain idle before the connection times out. The default value for this parameter is -1
, which means that no timeout period is set. This is an optional parameter. - The
tcp-nodelay
parameter specifies whether TCP packets will be delayed and sent out in batches. Valid values for this parameter are true and false. The default value for this parameter is true. This is an optional parameter. - The
send-buffer-size
parameter indicates the size of the send buffer for the Hot Rod connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter. - The
receive-buffer-size
parameter indicates the size of the receive buffer for the Hot Rod connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter.
- The
topology-state-transfer
element specifies the topology state transfer configurations for the Hot Rod connector. This element can only occur once within a hotrod-connector
element.- The
lock-timeout
parameter specifies the time (in milliseconds) after which the operation attempting to obtain a lock times out. The default value for this parameter is 10
seconds. This is an optional parameter. - The
replication-timeout
parameter specifies the time (in milliseconds) after which the replication operation times out. The default value for this parameter is 10
seconds. This is an optional parameter. - The
external-host
parameter specifies the hostname sent by the Hot Rod server to clients listed in the topology information. The default value for this parameter is the host address. This is an optional parameter. - The
external-port
parameter specifies the port sent by the Hot Rod server to clients listed in the topology information. The default value for this parameter is the configured port. This is an optional parameter. - The
lazy-retrieval
parameter indicates whether the Hot Rod connector will carry out retrieval operations lazily. The default value for this parameter is true
. This is an optional parameter.
Part VI. Set Up Locking for the Cache
Chapter 15. Locking
15.1. Configure Locking (Remote Client-Server Mode)
locking
element within the cache tags (for example, invalidation-cache
, distributed-cache
, replicated-cache
or local-cache
).
Note
READ_COMMITTED
. If the isolation
attribute is included to explicitly specify an isolation mode, it is ignored, a warning is logged, and the default value is used instead.
Procedure 15.1. Configure Locking (Remote Client-Server Mode)
<distributed-cache>
   <locking acquire-timeout="30000"
            concurrency-level="1000"
            striping="false" />
   <!-- Additional configuration here -->
</distributed-cache>
- The
acquire-timeout
parameter specifies the number of milliseconds after which lock acquisition will time out. - The
concurrency-level
parameter defines the number of lock stripes used by the LockManager. - The
striping
parameter specifies whether lock striping will be used for the local cache.
15.2. Configure Locking (Library Mode)
locking
element and its parameters are set within the default
element; for each named cache, it occurs within the local-cache
element. The following is an example of this configuration:
Procedure 15.2. Configure Locking (Library Mode)
<local-cache name="default">
   <locking concurrency-level="${VALUE}"
            isolation="${LEVEL}"
            acquire-timeout="${TIME}"
            striping="${TRUE/FALSE}"
            write-skew="${TRUE/FALSE}" />
</local-cache>
- The
concurrency-level
parameter specifies the concurrency level for the lock container. Set this value according to the number of concurrent threads interacting with the data grid. - The
isolation
parameter specifies the cache's isolation level. Valid isolation levels are READ_COMMITTED and REPEATABLE_READ. For details about isolation levels, see Section 17.1, “About Isolation Levels”. - The
acquire-timeout
parameter specifies time (in milliseconds) after which a lock acquisition attempt times out. - The
striping
parameter specifies whether a pool of shared locks is maintained for all entries that require locks. If set to FALSE, locks are created for each entry in the cache. For details, see Section 16.1, “About Lock Striping”. - The
write-skew
parameter is only valid if the isolation is set to REPEATABLE_READ. If this parameter is set to FALSE, a disparity between a working entry and the underlying entry at write time results in the working entry overwriting the underlying entry. If the parameter is set to TRUE, such conflicts (namely write skews) throw an exception. The write-skew parameter can only be used with OPTIMISTIC transactions and requires entry versioning to be enabled, with the SIMPLE versioning scheme.
15.3. Locking Types
15.3.1. About Optimistic Locking
With write-skew enabled, transactions in optimistic locking mode roll back if one or more conflicting modifications are made to the data before the transaction completes.
15.3.2. About Pessimistic Locking
15.3.3. Pessimistic Locking Types
- Explicit Pessimistic Locking, which uses the JBoss Data Grid Lock API to allow cache users to explicitly lock cache keys for the duration of a transaction. The Lock call attempts to obtain locks on specified cache keys across all nodes in a cluster. This attempt either fails or succeeds for all specified cache keys. All locks are released during the commit or rollback phase.
- Implicit Pessimistic Locking ensures that cache keys are locked in the background as they are accessed for modification operations. Using Implicit Pessimistic Locking causes JBoss Data Grid to check and ensure that cache keys are locked locally for each modification operation. Discovering unlocked cache keys causes JBoss Data Grid to request a cluster-wide lock to acquire a lock on the unlocked cache key.
15.3.4. Explicit Pessimistic Locking Example
Procedure 15.3. Transaction with Explicit Pessimistic Locking
tx.begin()
cache.lock(K)
cache.put(K,V5)
tx.commit()
- When the line
cache.lock(K)
executes, a cluster-wide lock is acquired onK
. - When the line
cache.put(K,V5)
executes, it guarantees success. - When the line
tx.commit()
executes, the locks held for this process are released.
15.3.5. Implicit Pessimistic Locking Example
Procedure 15.4. Transaction with Implicit Pessimistic locking
tx.begin()
cache.put(K,V)
cache.put(K2,V2)
cache.put(K,V5)
tx.commit()
- When the line
cache.put(K,V)
executes, a cluster-wide lock is acquired onK
. - When the line
cache.put(K2,V2)
executes, a cluster-wide lock is acquired onK2
. - When the line
cache.put(K,V5)
executes, the lock acquisition is non-operational because a cluster-wide lock for K has been previously acquired. The put
operation will still occur. - When the line
tx.commit()
executes, all locks held for this transaction are released.
15.3.6. Configure Locking Mode (Remote Client-Server Mode)
transaction
element as follows:
<transaction locking="{OPTIMISTIC/PESSIMISTIC}" />
15.3.7. Configure Locking Mode (Library Mode)
transaction
element as follows:
<transaction transaction-manager-lookup="{TransactionManagerLookupClass}"
             mode="{NONE, BATCH, NON_XA, NON_DURABLE_XA, FULL_XA}"
             locking="{OPTIMISTIC,PESSIMISTIC}">
</transaction>
locking
value to OPTIMISTIC
or PESSIMISTIC
to configure the locking mode used for the transactional cache.
15.4. Locking Operations
15.4.1. About the LockManager
LockManager
component is responsible for locking an entry before a write process initiates. The LockManager
uses a LockContainer
to locate, hold and create locks. There are two types of LockContainers
that JBoss Data Grid uses internally, and the choice between them depends on the useLockStriping
setting. The first type offers support for lock striping while the second type supports one lock per entry.
See Also:
15.4.2. About Lock Acquisition
15.4.3. About Concurrency Levels
ConcurrentHashMap
based collections, such as those internal to DataContainers
.
Chapter 16. Set Up Lock Striping
16.1. About Lock Striping
16.2. Configure Lock Striping (Remote Client-Server Mode)
striping
element to true
.
Example 16.1. Lock Striping (Remote Client-Server Mode)
<locking acquire-timeout="20000" concurrency-level="500" striping="true" />
Note
READ_COMMITTED
. If the isolation
attribute is included to explicitly specify an isolation mode, it is ignored, a warning is logged, and the default value is used instead.
locking
element uses the following attributes:
- The
acquire-timeout
attribute specifies the maximum time to attempt a lock acquisition. The default value for this attribute is 10000
milliseconds. - The
concurrency-level
attribute specifies the concurrency level for lock containers. Adjust this value according to the number of concurrent threads interacting with JBoss Data Grid. The default value for this attribute is 32
. - The
striping
attribute specifies whether a shared pool of locks is maintained for all entries that require locking (true
). If set to false, a lock is created for each entry. Lock striping controls the memory footprint but can reduce concurrency in the system. The default value for this attribute is false
.
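The trade-off that the striping attribute controls can be sketched with a plain lock-stripe pool: a fixed number of locks is shared by all keys, so memory stays bounded at the cost of occasional contention between unrelated keys. This is a simplified illustration, not JBoss Data Grid's internal LockContainer.

```java
import java.util.concurrent.locks.ReentrantLock;

// Simplified lock striping: a fixed pool of locks shared by all keys.
// Memory is bounded by the stripe count, but distinct keys that hash to
// the same stripe contend for the same lock.
class StripedLocks {
    private final ReentrantLock[] stripes;

    StripedLocks(int concurrencyLevel) {
        stripes = new ReentrantLock[concurrencyLevel];
        for (int i = 0; i < concurrencyLevel; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    // Spread the key's hash and pick a stripe; the same key always maps
    // to the same lock.
    int stripeIndex(Object key) {
        int h = key.hashCode();
        h ^= (h >>> 16);
        return (h & Integer.MAX_VALUE) % stripes.length;
    }

    ReentrantLock lockFor(Object key) {
        return stripes[stripeIndex(key)];
    }
}
```

With striping disabled, the equivalent sketch would allocate one lock per entry, which removes cross-key contention but grows memory with the number of locked entries.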
16.3. Configure Lock Striping (Library Mode)
striping
parameter as demonstrated in the following procedure.
Procedure 16.1. Configure Lock Striping (Library Mode)
<local-cache>
   <locking concurrency-level="${VALUE}"
            isolation="${LEVEL}"
            acquire-timeout="${TIME}"
            striping="${TRUE/FALSE}"
            write-skew="${TRUE/FALSE}" />
</local-cache>
- The
concurrency-level
is used to specify the size of the shared lock collection used when lock striping is enabled. - The
isolation
parameter specifies the cache's isolation level. Valid isolation levels are READ_COMMITTED and REPEATABLE_READ
. - The
acquire-timeout
parameter specifies time (in milliseconds) after which a lock acquisition attempt times out. - The
striping
parameter specifies whether a pool of shared locks is maintained for all entries that require locks. If set to FALSE, locks are created for each entry in the cache. If set to TRUE
, lock striping is enabled and shared locks are used as required from the pool. - The
write-skew
check determines whether a modification to the entry from a different transaction should roll back the transaction. Setting write-skew to true requires the isolation level to be set to REPEATABLE_READ. The default values for write-skew and the isolation level are FALSE and READ_COMMITTED respectively. The write-skew parameter can only be used with OPTIMISTIC transactions and requires entry versioning to be enabled, with the SIMPLE versioning scheme.
Chapter 17. Set Up Isolation Levels
17.1. About Isolation Levels
READ_COMMITTED
and REPEATABLE_READ
are the two isolation modes offered in Red Hat JBoss Data Grid.
- READ_COMMITTED. This isolation level is applicable to a wide variety of requirements. This is the default value in Remote Client-Server and Library modes.
- REPEATABLE_READ.
Important
The only valid value for locks in Remote Client-Server mode is the default READ_COMMITTED value. The value explicitly specified with the isolation value is ignored. If the locking element is not present in the configuration, the default isolation value is READ_COMMITTED.
- See Section 16.2, “Configure Lock Striping (Remote Client-Server Mode)” for a Remote Client-Server mode configuration sample.
- See Section 16.3, “Configure Lock Striping (Library Mode)” for a Library mode configuration sample.
17.2. About READ_COMMITTED
READ_COMMITTED
is one of two isolation modes available in Red Hat JBoss Data Grid.
READ_COMMITTED
mode, write operations are made to copies of data rather than the data itself. A write operation blocks other writes to the same data; however, writes do not block read operations. As a result, both READ_COMMITTED
and REPEATABLE_READ
modes permit read operations at any time, regardless of when write operations occur.
In READ_COMMITTED mode, multiple reads of the same key within a transaction can return different results, because write operations in other transactions can modify data between the reads. This phenomenon is known as a non-repeatable read and is avoided in REPEATABLE_READ
mode.
17.3. About REPEATABLE_READ
REPEATABLE_READ
is one of two isolation modes available in Red Hat JBoss Data Grid.
REPEATABLE_READ
does not allow write operations while read operations are in progress, nor does it allow read operations when write operations occur. This prevents the "non-repeatable read" phenomenon, which occurs when a single transaction has two read operations on the same row but the retrieved values differ (possibly due to a write operation modifying the value between the two read operations).
REPEATABLE_READ
isolation mode preserves the value of an entry before a modification occurs. As a result, the "non-repeatable read" phenomenon is avoided because a second read operation on the same entry retrieves the preserved value rather than the new modified value. Consequently, the two values retrieved by the two read operations in a single transaction will always match, even if a write operation occurs in a different transaction between the two reads.
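The behavior described above can be sketched with a toy transaction that snapshots the first value it reads per key. This is a conceptual illustration only, not the JBoss Data Grid MVCC implementation; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of REPEATABLE_READ: a transaction preserves the first value it
// reads for a key, so later reads in the same transaction return the same
// value even if another transaction commits a change in between.
class RepeatableReadTx {
    private final Map<String, String> committed;          // shared committed data
    private final Map<String, String> snapshot = new HashMap<>();

    RepeatableReadTx(Map<String, String> committed) {
        this.committed = committed;
    }

    String read(String key) {
        // The first read records the committed value; later reads reuse it.
        return snapshot.computeIfAbsent(key, committed::get);
    }
}
```

Under READ_COMMITTED there is no per-transaction snapshot, so the second read would observe the newly committed value instead.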
Part VII. Set Up and Configure a Cache Store
Chapter 18. Cache Stores
Note
shared
is set to false
), on node join, stale entries which might have been removed from the cluster might still be present in the stores and can reappear.
18.1. Cache Loaders and Cache Writers
org.infinispan.persistence.spi
:
CacheLoader
CacheWriter
AdvancedCacheLoader
AdvancedCacheWriter
CacheLoader
and CacheWriter
provide basic methods for reading and writing to a store. CacheLoader
retrieves data from a data store when the required data is not present in the cache, and CacheWriter
is used to enforce entry passivation and activation on eviction in a cache.
AdvancedCacheLoader
and AdvancedCacheWriter
provide operations to manipulate the underlying storage in bulk: parallel iteration and purging of expired entries, clear and size.
org.infinispan.persistence.file.SingleFileStore
is a good starting point to write your own store implementation.
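As a rough sketch of the loader/writer split described above (an illustration only, not the actual org.infinispan.persistence.spi interfaces), a store implementation pairs a read path with a write path, plus bulk operations such as clear and size:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the cache loader/writer split: the loader supplies an
// entry when it is missing from the cache, and the writer persists entries
// (for example on passivation). Backed by an in-memory map for illustration
// where a real store would use a file or database.
class SketchStore {
    private final Map<String, String> backing = new ConcurrentHashMap<>();

    // CacheLoader-style read path.
    String load(String key) {
        return backing.get(key);
    }

    // CacheWriter-style write path.
    void write(String key, String value) {
        backing.put(key, value);
    }

    void delete(String key) {
        backing.remove(key);
    }

    // AdvancedCacheWriter-style bulk operations.
    void clear() {
        backing.clear();
    }

    int size() {
        return backing.size();
    }
}
```

A real implementation additionally serializes entry metadata (expiration, versioning) alongside the value, which is what the advanced interfaces iterate over when purging expired entries.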
Note
CacheLoader
, extended by CacheStore
), which is also still available.
18.2. Cache Store Configuration
18.2.1. Configuring the Cache Store
ignoreModifications
element has been set to "true"
for a specific cache store.
18.2.2. Configure the Cache Store using XML (Library Mode)
<persistence passivation="false">
   <file-store shared="false"
               preload="true"
               fetch-state="true"
               purge-startup="false"
               singleton="true"
               location="${java.io.tmpdir}">
      <write-behind enabled="true"
                    flush-lock-timeout="15000"
                    thread-pool-size="5" />
   </file-store>
</persistence>
18.2.3. About SKIP_CACHE_LOAD Flag
SKIP_CACHE_LOAD
flag.
18.2.4. About the SKIP_CACHE_STORE Flag
If the SKIP_CACHE_STORE flag is used, the cache store is not considered for the specified cache operations. This flag is useful to place an entry in the cache without adding it to the configured cache store, or to determine whether an entry is present in a cache without retrieving it from the associated cache store.
18.2.5. About the SKIP_SHARED_CACHE_STORE Flag
If the SKIP_SHARED_CACHE_STORE flag is enabled, shared cache stores are not considered for the specified cache operations. This flag is useful to place an entry in the cache without adding it to the shared cache store, or to determine whether an entry is present in a cache without retrieving it from the shared cache store.
18.3. Shared Cache Stores
18.3.1. Invalidation Mode and Shared Cache Stores
- Compared to replication messages, which contain the updated data, invalidation messages are much smaller and result in reduced network traffic.
- The remaining cluster caches look up modified data from the shared cache store lazily and only when required to do so, resulting in further reduced network traffic.
18.3.2. The Cache Store and Cache Passivation
18.3.3. Application Cachestore Registration
18.4. Connection Factories
ConnectionFactory
implementation to obtain a database connection. This process is also known as connection management or pooling.
ConnectionFactoryClass
configuration attribute. JBoss Data Grid includes the following ConnectionFactory
implementations:
- ManagedConnectionFactory
- SimpleConnectionFactory
- PooledConnectionFactory
18.4.1. About ManagedConnectionFactory
ManagedConnectionFactory
is a connection factory that is ideal for use within managed environments such as application servers. This connection factory can explore a configured location in the JNDI tree and delegate connection management to the DataSource
.
18.4.2. About SimpleConnectionFactory
SimpleConnectionFactory
is a connection factory that creates database connections on a per invocation basis. This connection factory is not designed for use in a production environment.
18.4.3. About PooledConnectionFactory
PooledConnectionFactory
is a connection factory based on C3P0, and is typically recommended for standalone deployments as opposed to deployments utilizing a servlet container, such as JBoss EAP. This connection factory functions by allowing the user to define a set of parameters which may be used for all DataSource
instances generated by the factory.
Chapter 19. Cache Store Implementations
Note
shared
is set to false
), on node join, stale entries which might have been removed from the cluster might still be present in the stores and can reappear.
19.1. Cache Store Comparison
- The Single File Cache Store is a local file cache store. It persists data locally for each node of the clustered cache. The Single File Cache Store provides superior read and write performance, but keeps keys in memory which limits its use when persisting large data sets at each node. See Section 19.4, “Single File Cache Store” for details.
- The LevelDB file cache store is a local file cache store which provides high read and write performance. It does not have the limitation of Single File Cache Store of keeping keys in memory. See Section 19.5, “LevelDB Cache Store” for details.
- The JDBC cache store is a cache store that may be shared, if required. When using it, all nodes of a clustered cache persist to a single database or a local JDBC database for every node in the cluster. The shared cache store lacks the scalability and performance of a local cache store such as the LevelDB cache store, but it provides a single location for persisted data. The JDBC cache store persists entries as binary blobs, which are not readable outside JBoss Data Grid. See Section 19.6, “JDBC Based Cache Stores” for details.
- The JPA Cache Store (supported in Library mode only) is a shared cache store like JDBC cache store, but preserves schema information when persisting to the database. Therefore, the persisted entries can be read outside JBoss Data Grid. See Section 19.8, “JPA Cache Store” for details.
19.2. Cache Store Configuration Details (Library Mode)
- The
passivation
parameter affects the way in which Red Hat JBoss Data Grid interacts with stores. When an object is evicted from the in-memory cache, passivation writes it to a secondary data store, such as a file system or a database. Valid values for this parameter are true and false, but passivation is set to false by default.
- The
shared
parameter indicates that the cache store is shared by different cache instances, for example, where all instances in a cluster use the same JDBC settings to talk to the same remote, shared database. shared is false by default. When set to true, it prevents duplicate data being written to the cache store by different cache instances. For the LevelDB cache stores, this parameter must be excluded from the configuration, or set to false
because sharing this cache store is not supported. - The
preload
parameter is set to false by default. When set to true, the data stored in the cache store is preloaded into memory when the cache starts. This makes data in the cache store available immediately after startup and avoids delays in cache operations caused by loading data lazily. Preloaded data is only stored locally on the node; there is no replication or distribution of the preloaded data. Red Hat JBoss Data Grid only preloads up to the maximum number of entries configured in eviction. - The
fetch-state
parameter determines whether to fetch the persistent state of a cache and apply it to the local cache store when joining the cluster. If the cache store is shared, fetching the persistent state is ignored, as caches access the same cache store. A configuration exception is thrown when starting the cache service if more than one cache store has this property set to true. The fetch-state property is false
by default. - In order to speed up lookups, the single file cache store keeps an index of keys and their corresponding position in the file. To avoid this index resulting in memory consumption problems, this cache store can be bounded by a maximum number of entries that it stores, defined by the
max-entries
parameter. If this limit is exceeded, entries are removed permanently using the LRU algorithm both from the in-memory index and the underlying file-based cache store. The default value is -1
, allowing unlimited entries. - The
singleton
parameter enables a singleton store cache store. SingletonStore is a delegating cache store used when only one instance in a cluster can interact with the underlying store; however, the singleton parameter is not recommended for file-store. The default value is false
. - The
purge
parameter controls whether the cache store is purged when it starts up.
location
configuration element sets a location on disk where the store can write.
The write-behind
element contains parameters that configure various aspects of the cache store.
- The
thread-pool-size
parameter specifies the number of threads that concurrently apply modifications to the store. The default value for this parameter is 1
. - The
flush-lock-timeout
parameter specifies the time to acquire the lock that guards the state to be flushed to the cache store periodically. The default value for this parameter is 1
. - The
modification-queue-size
parameter specifies the size of the modification queue for the asynchronous store. If updates are made at a rate that is faster than the underlying cache store can process this queue, then the asynchronous store behaves like a synchronous store for that period, blocking until the queue can accept more elements. The default value for this parameter is 1024
elements. - The
shutdown-timeout
parameter specifies the maximum amount of time that can be taken to stop the cache store. The default value for this parameter is 25000
milliseconds.
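The write-behind parameters above can be pictured as a bounded modification queue drained by background store threads. The following is a simplified sketch under that model; the real asynchronous store in JBoss Data Grid additionally coalesces modifications and handles failures, and the class name here is illustrative.

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a write-behind store: put() enqueues a modification and returns
// immediately; a background worker drains the queue into the backing store.
// When the bounded queue is full, put() blocks, mirroring the documented
// fall-back to synchronous behavior under sustained load.
class WriteBehindStore {
    private final BlockingQueue<String[]> queue;            // modification queue
    private final Map<String, String> store = new ConcurrentHashMap<>();

    WriteBehindStore(int modificationQueueSize) {
        queue = new ArrayBlockingQueue<>(modificationQueueSize);
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String[] mod = queue.take();            // wait for work
                    store.put(mod[0], mod[1]);              // apply to the store
                }
            } catch (InterruptedException e) {
                // worker shut down
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Asynchronous write: blocks only when the queue is full.
    void put(String key, String value) throws InterruptedException {
        queue.put(new String[] {key, value});
    }

    String get(String key) {
        return store.get(key);
    }
}
```

The modification-queue-size parameter corresponds to the queue capacity here, and thread-pool-size corresponds to the number of worker threads draining it.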
- The
cache
attribute specifies the name of the remote cache to which it intends to connect in the remote Infinispan cluster. The default cache will be used if the remote cache name is unspecified. - The
fetch-state
attribute, when set to true, ensures that the persistent state is fetched when the remote cache joins the cluster. If multiple cache stores are chained, only one cache store can have this property set to true. The default for this value is false
. - The
shared
attribute is set to true when multiple cache instances share a cache store, which prevents multiple cache instances writing the same modification individually. The default for this attribute is false
. - The
preload
attribute ensures that the cache store data is pre-loaded into memory and is immediately accessible after starting up. The disadvantage of setting this to true is that the start up time increases. The default value for this attribute is false
. - The
singleton
parameter enables the SingletonStore delegating cache store, used in situations when only one instance in a cluster should interact with the underlying store. The default value is false
. - The
purge
attribute ensures that the cache store is purged during the start up process. The default value for this attribute is false
. - The
tcp-no-delay
attribute triggers the TCP NODELAY stack. The default value for this attribute is true
. - The
ping-on-start
attribute sends a ping request to a back-end server to fetch the cluster topology. The default value for this attribute is true
. - The
key-size-estimate
attribute provides an estimation of the key size. The default value for this attribute is 64
. - The
value-size-estimate
attribute specifies the size of the byte buffers when serializing and deserializing values. The default value for this attribute is 512
. - The
force-return-values
attribute sets whether FORCE_RETURN_VALUE is enabled for all calls. The default value for this attribute is false
.
Create a remote-server
element within the remote-store
element to define the server information.
- The
host
attribute configures the host address. - The
port
attribute configures the port used by the Remote Cache Store. This defaults to 11222
.
- The
max-active
parameter indicates the maximum number of active connections for each server at a time. The default value for this attribute is -1
which indicates an infinite number of active connections. - The
max-idle
parameter indicates the maximum number of idle connections for each server at a time. The default value for this attribute is -1
which indicates an infinite number of idle connections. - The
max-total
parameter indicates the maximum number of persistent connections within the combined set of servers. The default setting for this attribute is -1
which indicates an infinite number of connections. - The
min-idle-time
parameter sets a target value for the minimum number of idle connections (per server) that should always be available. If this parameter is set to a positive number and timeBetweenEvictionRunsMillis > 0, each time the idle connection eviction thread runs, it tries to create enough idle instances so that there are minIdle idle instances available for each server. The default setting for this parameter is 1
. - The
eviction-interval
parameter indicates how long the eviction thread should sleep before "runs" of examining idle connections. When non-positive, no eviction thread will be launched. The default setting for this parameter is 120000
milliseconds, or 2 minutes. - The
min-evictable-idle-time
parameter specifies the minimum amount of time that an connection may sit idle in the pool before it is eligible for eviction due to idle time. When non-positive, no connection will be dropped from the pool due to idle time alone. This setting has no effect unlesstimeBetweenEvictionRunsMillis
> 0. The default setting for this parameter is1800000
, or (30 minutes). - The
test-idle
parameter indicates whether idle connections should be validated by sending a TCP packet to the server during idle connection eviction runs. Connections that fail to validate are dropped from the pool. This setting has no effect unless timeBetweenEvictionRunsMillis > 0. The default setting for this parameter is true
.
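The eviction-related pool parameters above can be sketched as a periodic pass over a server's idle connections: connections idle longer than the evictable threshold are dropped, but a minimum number of idle connections is kept. This is a simplified model of the pooling behavior described above; the class and field names are illustrative.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of one idle-connection eviction run: connections idle longer than
// minEvictableIdleTime are dropped, but at least minIdle idle connections
// are kept. Connection is a stand-in for a pooled database connection.
class IdleEviction {
    static class Connection {
        final long lastUsedMillis;
        Connection(long lastUsedMillis) { this.lastUsedMillis = lastUsedMillis; }
    }

    // One eviction pass over a single server's idle connections.
    static List<Connection> evict(List<Connection> idle,
                                  long nowMillis,
                                  long minEvictableIdleTime,
                                  int minIdle) {
        List<Connection> kept = new ArrayList<>(idle);
        Iterator<Connection> it = kept.iterator();
        while (it.hasNext() && kept.size() > minIdle) {
            Connection c = it.next();
            if (nowMillis - c.lastUsedMillis >= minEvictableIdleTime) {
                it.remove();   // idle too long and still above the minIdle floor
            }
        }
        return kept;
    }
}
```

In the real pool this pass runs every eviction-interval milliseconds, which corresponds to how often evict() would be invoked here.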
- The
relative-to
parameter specifies the base directory in which to store the cache state. - The
path
parameter specifies the location within therelative-to
parameter to store the cache state. - The
shared
parameter specifies whether the cache store is shared. The only supported value for this parameter in the LevelDB cache store is false
. - The
preload
parameter specifies whether the cache store will be pre-loaded. Valid values are true and false
. - The
block-size
parameter defines the block size of the cache store. - The
singleton
parameter enables the SingletonStore delegating cache store, used in situations when only one instance in a cluster should interact with the underlying store. The default value is false
. - The
cache-size
parameter defines the cache size of the cache store. - The
clear-threshold
parameter defines the cache clear threshold of the cache store.
- The
persistence-unit
attribute specifies the name of the JPA cache store. - The
entity-class
attribute specifies the fully qualified class name of the JPA entity used to store the cache entry value. - The
batch-size
(optional) attribute specifies the batch size for cache store streaming. The default value for this attribute is 100
. - The
store-metadata
(optional) attribute specifies whether the cache store keeps the metadata (for example expiration and versioning information) with the entries. The default value for this attribute is true
. - The
singleton
parameter enables the SingletonStore delegating cache store, used in situations when only one instance in a cluster should interact with the underlying store. The default value is false
.
- The
fetch-state
parameter determines whether the persistent state is fetched when joining a cluster. Set this to true if using replication and invalidation in a clustered environment. Additionally, if multiple cache stores are chained, only one cache store can have this property enabled. If a shared cache store is used, the cache does not allow a persistent state transfer despite this property being set to true. The fetch-state parameter is false
by default. - The
singleton
parameter enables the SingletonStore delegating cache store, used in situations when only one instance in a cluster should interact with the underlying store. The default value is false
. - The
purge
parameter specifies whether the cache store is purged when initially started. - The
key-to-string-mapper
parameter specifies the class name used to map keys to strings for the database tables.
- The
connection-url
parameter specifies the JDBC driver-specific connection URL. - The
username
parameter contains the username used to connect via the connection-url
. - The
password
parameter contains the password to use when connecting via the connection-url.
- The
driver
parameter specifies the class name of the driver used to connect to the database.
- The
prefix
attribute defines the string prepended to the name of the target cache when composing the name of the cache bucket table.
drop-on-exit
parameter specifies whether the database tables are dropped upon shutdown. - The
create-on-start
parameter specifies whether the database tables are created by the store on startup. - The
fetch-size
parameter specifies the size to use when querying from this table. Use this parameter to avoid heap memory exhaustion when the query is large. - The
batch-size
parameter specifies the batch size used when modifying this table.
- The
name
parameter specifies the name of the column used. - The
type
parameter specifies the type of the column used.
- The
class
parameter specifies the class name of the cache store implementation. - The
preload
parameter specifies whether to load entries into the cache during start up. Valid values for this parameter are true and false
. - The
shared
parameter specifies whether the cache store is shared. This is used when multiple cache instances share a cache store. Valid values for this parameter are true and false
.
A property may be defined inside of a cache store, with the entry between the property tags being the stored value. For instance, in the below example a value of 1
is defined for minOccurs
.
<property name="minOccurs">1</property>
- The
name
attribute specifies the name of the property.
19.3. Cache Store Configuration Details (Remote Client-Server Mode)
- The
name
parameter of thelocal-cache
attribute is used to specify a name for the cache. - The
statistics
parameter specifies whether statistics are enabled at the container level. Enable or disable statistics on a per-cache basis by setting the statistics attribute to false
.
- The name parameter of the file-store element is used to specify a name for the file store.
- The passivation parameter determines whether entries in the cache are passivated (true) or whether the cache store retains a copy of the contents in memory (false).
- The purge parameter specifies whether the cache store is purged when it is started. Valid values for this parameter are true and false.
- The shared parameter is used when multiple cache instances share a cache store, and can be set to prevent multiple cache instances from writing the same modification multiple times. Valid values for this parameter are true and false. However, the shared parameter is not recommended for the LevelDB cache store because this cache store cannot be shared.
- The relative-to property is the directory where the file-store stores the data. It is used to define a named path.
- The path property is the name of the file where the data is stored. It is a relative path name that is appended to the value of the relative-to property to determine the complete path.
- The max-entries parameter provides the maximum number of entries allowed. The default value is -1, for unlimited entries.
- The fetch-state parameter, when set to true, fetches the persistent state when joining a cluster. If multiple cache stores are chained, only one of them can have this property enabled. Persistent state transfer with a shared cache store does not make sense, as the same persistent store that provides the data would just end up receiving it. Therefore, if a shared cache store is used, the cache does not allow a persistent state transfer even if a cache store has this property set to true. It is recommended to set this property to true only in a clustered environment. The default value for this parameter is false.
- The preload parameter, when set to true, loads the data stored in the cache store into memory when the cache starts. However, setting this parameter to true affects performance, as the startup time is increased. The default value for this parameter is false.
- The singleton parameter enables a singleton store cache store. SingletonStore is a delegating cache store used when only one instance in a cluster can interact with the underlying store; however, the singleton parameter is not recommended for file-store. The default value is false.
- The class parameter specifies the class name of the cache store implementation.
- The name parameter specifies the name of the property.
- The value parameter specifies the value assigned to the property.
- The cache parameter defines the name of the remote cache. If left undefined, the default cache is used instead.
- The socket-timeout parameter sets the SO_TIMEOUT value (in milliseconds) that applies to connections to remote Hot Rod servers. A timeout value of 0 indicates an infinite timeout. The default value is 60,000 ms, or one minute.
- The tcp-no-delay parameter sets whether TCP_NODELAY applies to socket connections to remote Hot Rod servers.
- The hotrod-wrapping parameter sets whether a wrapper is required for Hot Rod on the remote store.
- The singleton parameter enables the SingletonStore delegating cache store, used in situations when only one instance in a cluster should interact with the underlying store. The default value is false.
- The outbound-socket-binding parameter sets the outbound socket binding for the remote server.
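As a plain-Java illustration of what the socket-timeout and tcp-no-delay attributes control, the sketch below sets the corresponding standard options on a java.net.Socket. This is not JBoss Data Grid API, only the underlying TCP options the attributes map onto:

```java
import java.net.Socket;
import java.net.SocketException;

public class SocketOptionsDemo {
    public static void main(String[] args) throws SocketException {
        Socket socket = new Socket(); // unconnected socket, for illustration only
        // SO_TIMEOUT: a blocking read waits at most this many milliseconds;
        // 0 means an infinite timeout, matching the socket-timeout attribute.
        socket.setSoTimeout(60000);
        // TCP_NODELAY disables Nagle's algorithm so small writes are sent
        // immediately, matching the tcp-no-delay attribute.
        socket.setTcpNoDelay(true);
        System.out.println(socket.getSoTimeout());
        System.out.println(socket.getTcpNoDelay());
    }
}
```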
- The datasource parameter defines the JNDI name of the datasource.
- The passivation parameter determines whether entries in the cache are passivated (true) or whether the cache store retains a copy of the contents in memory (false).
- The preload parameter specifies whether to load entries into the cache during startup. Valid values for this parameter are true and false.
- The purge parameter specifies whether the cache store is purged when it is started. Valid values for this parameter are true and false.
- The shared parameter is used when multiple cache instances share a cache store, and can be set to prevent multiple cache instances from writing the same modification multiple times. Valid values for this parameter are true and false.
- The singleton parameter enables a singleton store cache store. SingletonStore is a delegating cache store used when only one instance in a cluster can interact with the underlying store.
- The prefix parameter specifies a prefix string for the database table name.
- The name parameter specifies the name of the database column.
- The type parameter specifies the type of the database column.
- The relative-to parameter specifies the base directory in which to store the cache state. This value defaults to jboss.server.data.dir.
- The path parameter defines where, within the directory specified in the relative-to parameter, the cache state is stored. If undefined, the path defaults to the cache container name.
- The passivation parameter specifies whether passivation is enabled for the LevelDB cache store. Valid values are true and false.
- The singleton parameter enables the SingletonStore delegating cache store, used in situations when only one instance in a cluster should interact with the underlying store. The default value is false.
- The purge parameter specifies whether the cache store is purged when it starts up. Valid values are true and false.
19.4. Single File Cache Store
SingleFileCacheStore is a simple file system based implementation and a replacement for the older file system based cache store, FileCacheStore. SingleFileCacheStore stores all key/value pairs and their corresponding metadata in a single file. To speed up data location, it also keeps all keys and the positions of their values and metadata in memory. As a result, using the single file cache store slightly increases the memory required, depending on the key size and the number of keys stored. For this reason, SingleFileCacheStore is not recommended for use cases where the keys are very large.
SingleFileCacheStore can be used in a limited capacity in production environments. It cannot be used on shared file systems (such as NFS and Windows shares) due to a lack of proper file locking, which can result in data corruption. Furthermore, file systems are not inherently transactional, so file writes can fail during the commit phase if the cache is used in a transactional context.
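The in-memory index described above can be pictured as a map from each key to the position of its value in the single file. The following is a hypothetical sketch of that idea (not the actual SingleFileCacheStore code); it shows why memory use grows with the number and size of keys:

```java
import java.util.HashMap;
import java.util.Map;

public class SingleFileIndexSketch {
    // Hypothetical record of where an entry's serialized value lives in the file.
    static final class FileEntry {
        final long offset;
        final int size;
        FileEntry(long offset, int size) { this.offset = offset; this.size = size; }
    }

    public static void main(String[] args) {
        // Every key is held in memory alongside its file position, so the
        // index grows with the number of keys and with the key size itself.
        Map<String, FileEntry> index = new HashMap<>();
        index.put("user:1", new FileEntry(0L, 128));
        index.put("user:2", new FileEntry(128L, 256));

        // A lookup consults the in-memory index first; a single file read
        // at the recorded offset would then fetch the value.
        FileEntry entry = index.get("user:2");
        System.out.println(entry.offset); // prints "128"
    }
}
```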
19.4.1. Single File Store Configuration (Remote Client-Server Mode)
<local-cache name="default" statistics="true">
   <file-store name="myFileStore"
               passivation="true"
               purge="true"
               relative-to="{PATH}"
               path="{DIRECTORY}"
               max-entries="10000"
               fetch-state="true"
               preload="false" />
</local-cache>
19.4.2. Single File Store Configuration (Library Mode)
<local-cache name="writeThroughToFile">
   <persistence passivation="false">
      <file-store fetch-state="true"
                  purge="false"
                  shared="false"
                  preload="false"
                  location="/tmp/Another-FileCacheStore-Location"
                  max-entries="100">
         <write-behind enabled="true"
                       thread-pool-size="500"
                       flush-lock-timeout="1"
                       modification-queue-size="1024"
                       shutdown-timeout="25000"/>
      </file-store>
   </persistence>
</local-cache>
19.4.3. Upgrade JBoss Data Grid Cache Stores
19.5. LevelDB Cache Store
19.5.1. Configuring LevelDB Cache Store (Remote Client-Server Mode)
Procedure 19.1. To configure LevelDB Cache Store:
- Add the following elements to a cache definition in standalone.xml to configure the database:
  <leveldb-store path="/path/to/leveldb/data"
                 passivation="false"
                 purge="false">
     <leveldb-expiration path="/path/to/leveldb/expires/data"/>
     <implementation type="JNI"/>
  </leveldb-store>
Note
Directories will be automatically created if they do not exist.
19.5.2. LevelDB Cache Store Sample XML Configuration (Library Mode)
<local-cache name="vehicleCache">
   <persistence passivation="false">
      <leveldb-store xmlns="urn:infinispan:config:store:leveldb:8.0"
                     relative-to="/path/to/leveldb/data"
                     shared="false"
                     preload="true"/>
   </persistence>
</local-cache>
19.5.3. Configure a LevelDB Cache Store Using JBoss Operations Network
Procedure 19.2.
- Ensure that Red Hat JBoss Operations Network 3.2 or higher is installed and started.
- Install the Red Hat JBoss Data Grid Plugin Pack for JBoss Operations Network 3.2.0.
- Ensure that JBoss Data Grid is installed and started.
- Import JBoss Data Grid server into the inventory.
- Configure the JBoss Data Grid connection settings.
- Create a new LevelDB cache store as follows:
Figure 19.1. Create a new LevelDB Cache Store
- Right-click the
default
cache. - In the menu, mouse over the Create Child option.
- In the submenu, click LevelDB Store.
- Name the new LevelDB cache store as follows:
Figure 19.2. Name the new LevelDB Cache Store
- In the Resource Create Wizard that appears, add a name for the new LevelDB Cache Store.
- Click Next to continue.
- Configure the LevelDB Cache Store settings as follows:
Figure 19.3. Configure the LevelDB Cache Store Settings
- Use the options in the configuration window to configure a new LevelDB cache store.
- Click Finish to complete the configuration.
- Schedule a restart operation as follows:
Figure 19.4. Schedule a Restart Operation
- In the screen's left panel, expand the JBossAS7 Standalone Servers entry, if it is not currently expanded.
- Click JDG (0.0.0.0:9990) from the expanded menu items.
- In the screen's right panel, details about the selected server display. Click the Operations tab.
- In the Operation drop-down box, select the Restart operation.
- Select the radio button for the Now entry.
- Click Schedule to restart the server immediately.
- Discover the new LevelDB cache store as follows:
Figure 19.5. Discover the New LevelDB Cache Store
- In the screen's left panel, select each of the following items in the specified order to expand them: JBossAS7 Standalone Servers → JDG (0.0.0.0:9990) → infinispan → Cache Containers → local → Caches → default → LevelDB Stores
- Click the name of your new LevelDB Cache Store to view its configuration information in the right panel.
19.6. JDBC Based Cache Stores
JBoss Data Grid offers the following JDBC based cache stores:
- JdbcBinaryStore
- JdbcStringBasedStore
- JdbcMixedStore
19.6.1. JdbcBinaryStores
JdbcBinaryStore supports all key types. It stores all keys with the same hash value (as returned by the hashCode method on the key) in the same table row/blob. The hash value common to the included keys is set as the primary key for the table row/blob. As a result of this hash-based layout, JdbcBinaryStore offers excellent flexibility, but at the cost of concurrency and throughput.
For example, if three keys (k1, k2, and k3) have the same hash code, they are stored in the same table row. If three different threads attempt to concurrently update k1, k2, and k3, they must do so sequentially, because all three keys share the same row and therefore cannot be updated simultaneously.
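The row-sharing behaviour hinges on ordinary Java hash codes. For instance, the distinct strings "Aa" and "BB" are a well-known hashCode() collision in Java, so as cache keys they would land in the same bucket row:

```java
public class HashCollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" are distinct strings with identical hash codes (2112),
        // so a bucket-per-hash store would place both in the same table row.
        System.out.println("Aa".hashCode() == "BB".hashCode()); // prints "true"
    }
}
```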
19.6.1.1. JdbcBinaryStore Configuration (Remote Client-Server Mode)
The following is a sample JdbcBinaryStore configuration using Red Hat JBoss Data Grid's Remote Client-Server mode with passivation enabled:
<local-cache name="customCache">
   <!-- Additional configuration elements here -->
   <binary-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS"
                            passivation="${true/false}"
                            preload="${true/false}"
                            purge="${true/false}">
      <binary-keyed-table prefix="JDG">
         <id-column name="id" type="${id.column.type}"/>
         <data-column name="datum" type="${data.column.type}"/>
         <timestamp-column name="version" type="${timestamp.column.type}"/>
      </binary-keyed-table>
   </binary-keyed-jdbc-store>
</local-cache>
19.6.1.2. JdbcBinaryStore Configuration (Library Mode)
The following is a sample configuration for JdbcBinaryStore in Library mode:
<infinispan
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="urn:infinispan:config:8.3 http://www.infinispan.org/schemas/infinispan-config-8.3.xsd
                       urn:infinispan:config:store:jdbc:8.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-8.0.xsd"
   xmlns="urn:infinispan:config:8.3">
   <!-- Additional configuration elements here -->
   <persistence>
      <binary-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:8.0"
                               fetch-state="false"
                               purge="false">
         <connection-pool connection-url="jdbc:h2:mem:infinispan_binary_based;DB_CLOSE_DELAY=-1"
                          username="sa"
                          driver="org.h2.Driver"/>
         <binary-keyed-table drop-on-exit="true"
                             create-on-start="true"
                             prefix="ISPN_BUCKET_TABLE">
            <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
            <data-column name="DATA_COLUMN" type="BINARY"/>
            <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
         </binary-keyed-table>
      </binary-keyed-jdbc-store>
   </persistence>
</infinispan>
19.6.2. JdbcStringBasedStores
JdbcStringBasedStore stores each entry in its own row in the table, instead of grouping multiple entries into each row, resulting in increased throughput under concurrent load. It also uses a (pluggable) bijection that maps each key to a String object. The key-to-string-mapper interface defines the bijection. JBoss Data Grid includes a default implementation, DefaultTwoWayKey2StringMapper, that handles primitive types.
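The bijection can be sketched with a simplified stand-in for the mapper contract (the interface below is illustrative, not the actual JBoss Data Grid interface): every key type needs a reversible mapping to a String so that each entry can safely occupy its own table row.

```java
public class KeyMapperSketch {
    // Simplified stand-in for a two-way key-to-string mapping contract.
    interface TwoWayKeyMapper<K> {
        String toStringKey(K key);
        K fromStringKey(String s);
    }

    // A bijection for Integer keys: each key maps to exactly one string
    // and back, with no two keys sharing the same string form.
    static class IntegerKeyMapper implements TwoWayKeyMapper<Integer> {
        public String toStringKey(Integer key) { return Integer.toString(key); }
        public Integer fromStringKey(String s) { return Integer.valueOf(s); }
    }

    public static void main(String[] args) {
        IntegerKeyMapper mapper = new IntegerKeyMapper();
        String column = mapper.toStringKey(42);
        System.out.println(column);                       // prints "42"
        System.out.println(mapper.fromStringKey(column)); // prints "42"
    }
}
```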
19.6.2.1. JdbcStringBasedStore Configuration (Remote Client-Server Mode)
The following is a sample JdbcStringBasedStore configuration for Red Hat JBoss Data Grid's Remote Client-Server mode:
<local-cache name="customCache">
   <!-- Additional configuration elements here -->
   <string-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS"
                            passivation="true"
                            preload="false"
                            purge="false"
                            shared="false"
                            singleton="true">
      <string-keyed-table prefix="JDG">
         <id-column name="id" type="${id.column.type}"/>
         <data-column name="datum" type="${data.column.type}"/>
         <timestamp-column name="version" type="${timestamp.column.type}"/>
      </string-keyed-table>
   </string-keyed-jdbc-store>
</local-cache>
19.6.2.2. JdbcStringBasedStore Configuration (Library Mode)
The following is a sample configuration for JdbcStringBasedStore in Library mode:
<infinispan
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="urn:infinispan:config:8.3 http://www.infinispan.org/schemas/infinispan-config-8.3.xsd
                       urn:infinispan:config:store:jdbc:8.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-8.0.xsd"
   xmlns="urn:infinispan:config:8.3">
   <!-- Additional configuration elements here -->
   <persistence>
      <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:8.0"
                               fetch-state="false"
                               purge="false"
                               key-to-string-mapper="org.infinispan.persistence.keymappers.DefaultTwoWayKey2StringMapper">
         <data-source jndi-url="java:jboss/datasources/JdbcDS"/>
         <string-keyed-table drop-on-exit="true"
                             create-on-start="true"
                             prefix="ISPN_STRING_TABLE">
            <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
            <data-column name="DATA_COLUMN" type="BINARY"/>
            <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
         </string-keyed-table>
      </string-keyed-jdbc-store>
   </persistence>
</infinispan>
19.6.2.3. JdbcStringBasedStore Multiple Node Configuration (Remote Client-Server Mode)
The following is a sample JdbcStringBasedStore configuration for Red Hat JBoss Data Grid's Remote Client-Server mode. This configuration is used when multiple nodes must be used.
<subsystem xmlns="urn:infinispan:server:core:8.3" default-cache-container="default">
   <cache-container> <!-- Additional configuration information here -->
      <!-- Additional configuration elements here -->
      <replicated-cache>
         <!-- Additional configuration elements here -->
         <string-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS"
                                  fetch-state="true"
                                  passivation="false"
                                  preload="false"
                                  purge="false"
                                  shared="false"
                                  singleton="true">
            <string-keyed-table prefix="JDG">
               <id-column name="id" type="${id.column.type}"/>
               <data-column name="datum" type="${data.column.type}"/>
               <timestamp-column name="version" type="${timestamp.column.type}"/>
            </string-keyed-table>
         </string-keyed-jdbc-store>
      </replicated-cache>
   </cache-container>
</subsystem>
19.6.3. JdbcMixedStores
JdbcMixedStore is a hybrid implementation that delegates keys, based on their type, to either the JdbcBinaryStore or the JdbcStringBasedStore.
19.6.3.1. JdbcMixedStore Configuration (Remote Client-Server Mode)
The following is a sample JdbcMixedStore configuration for Red Hat JBoss Data Grid's Remote Client-Server mode:
<local-cache name="customCache">
   <mixed-keyed-jdbc-store datasource="java:jboss/datasources/JdbcDS"
                           passivation="true"
                           preload="false"
                           purge="false">
      <binary-keyed-table prefix="MIX_BKT2">
         <id-column name="id" type="${id.column.type}"/>
         <data-column name="datum" type="${data.column.type}"/>
         <timestamp-column name="version" type="${timestamp.column.type}"/>
      </binary-keyed-table>
      <string-keyed-table prefix="MIX_STR2">
         <id-column name="id" type="${id.column.type}"/>
         <data-column name="datum" type="${data.column.type}"/>
         <timestamp-column name="version" type="${timestamp.column.type}"/>
      </string-keyed-table>
   </mixed-keyed-jdbc-store>
</local-cache>
19.6.3.2. JdbcMixedStore Configuration (Library Mode)
The following is a sample configuration for JdbcMixedStore in Library mode:
<infinispan
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="urn:infinispan:config:8.3 http://www.infinispan.org/schemas/infinispan-config-8.3.xsd
                       urn:infinispan:config:store:jdbc:8.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-8.0.xsd"
   xmlns="urn:infinispan:config:8.3">
   <!-- Additional configuration elements here -->
   <persistence>
      <mixed-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:8.0"
                              fetch-state="false"
                              purge="false"
                              key-to-string-mapper="org.infinispan.persistence.keymappers.DefaultTwoWayKey2StringMapper">
         <connection-pool connection-url="jdbc:h2:mem:infinispan_binary_based;DB_CLOSE_DELAY=-1"
                          username="sa"
                          driver="org.h2.Driver"/>
         <binary-keyed-table drop-on-exit="true"
                             create-on-start="true"
                             prefix="ISPN_BUCKET_TABLE_BINARY">
            <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
            <data-column name="DATA_COLUMN" type="BINARY"/>
            <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
         </binary-keyed-table>
         <string-keyed-table drop-on-exit="true"
                             create-on-start="true"
                             prefix="ISPN_BUCKET_TABLE_STRING">
            <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
            <data-column name="DATA_COLUMN" type="BINARY"/>
            <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
         </string-keyed-table>
      </mixed-keyed-jdbc-store>
   </persistence>
</infinispan>
19.6.4. Cache Store Troubleshooting
19.6.4.1. IOExceptions with JdbcStringBasedStore
An IOException with JdbcStringBasedStore indicates that the data column type is set to VARCHAR, CLOB, or something similar, instead of the correct type, BLOB or VARBINARY. Despite its name, JdbcStringBasedStore only requires that the keys are strings; the values can be any data type, so they must be stored in a binary column.
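Values end up in the binary column as serialized bytes. The following plain-Java sketch of that round trip uses standard Java serialization (not necessarily the marshaller JBoss Data Grid actually uses) to show why the column type must be BLOB or VARBINARY rather than a character type:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class BinaryColumnDemo {
    // Serialize any value to the byte[] that would be written to a BLOB/VARBINARY column.
    static byte[] toBytes(Serializable value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] column = toBytes("some cache value");
        // The serialized form contains non-character bytes (a stream header,
        // type tags), which a VARCHAR or CLOB column would mangle.
        System.out.println(fromBytes(column)); // prints "some cache value"
    }
}
```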
19.7. The Remote Cache Store
RemoteCacheStore is an implementation of the cache loader that stores data in a remote Red Hat JBoss Data Grid cluster. RemoteCacheStore uses the Hot Rod client-server architecture to communicate with the remote cluster.
Hot Rod provides load balancing and fault tolerance for communication between the RemoteCacheStore and the cluster.
19.7.1. Remote Cache Store Configuration (Remote Client-Server Mode)
<remote-store cache="default"
              socket-timeout="60000"
              tcp-no-delay="true"
              hotrod-wrapping="true">
   <remote-server outbound-socket-binding="remote-store-hotrod-server"/>
</remote-store>
19.7.2. Remote Cache Store Configuration (Library Mode)
<persistence passivation="false">
   <remote-store xmlns="urn:infinispan:config:remote:8.3"
                 cache="default"
                 fetch-state="false"
                 shared="true"
                 preload="false"
                 purge="false"
                 tcp-no-delay="true"
                 ping-on-start="true"
                 key-size-estimate="62"
                 value-size-estimate="512"
                 force-return-values="false">
      <remote-server host="127.0.0.1" port="1971"/>
      <connection-pool max-active="99" max-idle="97" max-total="98"/>
   </remote-store>
</persistence>
19.7.3. Define the Outbound Socket for the Remote Cache Store
The outbound socket binding for the remote cache store is defined using the outbound-socket-binding element in a standalone.xml file. An example of this configuration in the standalone.xml file is as follows:
Example 19.1. Define the Outbound Socket
<server>
   <!-- Additional configuration elements here -->
   <socket-binding-group name="standard-sockets"
                         default-interface="public"
                         port-offset="${jboss.socket.binding.port-offset:0}">
      <!-- Additional configuration elements here -->
      <outbound-socket-binding name="remote-store-hotrod-server">
         <remote-destination host="remote-host" port="11222"/>
      </outbound-socket-binding>
   </socket-binding-group>
</server>
19.8. JPA Cache Store
Important
19.8.1. JPA Cache Store Sample XML Configuration (Library Mode)
The following is a sample JPA cache store configuration in an infinispan.xml file:
<local-cache name="users">
   <!-- Insert additional configuration elements here -->
   <persistence passivation="false">
      <jpa-store xmlns="urn:infinispan:config:store:jpa:8.0"
                 shared="true"
                 preload="true"
                 persistence-unit="MyPersistenceUnit"
                 entity-class="org.infinispan.loaders.jpa.entity.User"/>
   </persistence>
</local-cache>
19.8.2. Storing Metadata in the Database
When storeMetadata is set to true (the default value), meta information about the entries, such as expiration, creation, and modification timestamps, as well as versioning, is stored in the database. JBoss Data Grid stores the metadata in an additional table named __ispn_metadata__ because the entity table has a fixed layout that cannot accommodate the metadata.
Procedure 19.3. Configure persistence.xml for Metadata Entities
- Using Hibernate as the JPA implementation allows automatic creation of these tables using the hibernate.hbm2ddl.auto property in persistence.xml, as follows:
  <property name="hibernate.hbm2ddl.auto" value="update"/>
- Declare the metadata entity class to the JPA provider by adding the following to persistence.xml:
  <class>org.infinispan.persistence.jpa.impl.MetadataEntity</class>
To disable metadata storage, set the storeMetadata attribute to false in the JPA store configuration.
19.8.3. Deploying JPA Cache Stores in Various Containers
<dependency>
   <groupId>org.infinispan</groupId>
   <artifactId>infinispan-cachestore-jpa</artifactId>
   <version>8.3.0.Final-redhat-1</version>
</dependency>
Procedure 19.4. Deploy JPA Cache Stores in JBoss EAP 6.3.x and earlier
- To add dependencies from the JBoss Data Grid modules to the application's classpath, provide the JBoss EAP deployer a list of dependencies in one of the following ways:
- Add a dependency configuration to the MANIFEST.MF file:
  Manifest-Version: 1.0
  Dependencies: org.infinispan:jdg-7.0 services, org.infinispan.persistence.jpa:jdg-7.0 services
- Add a dependency configuration to the jboss-deployment-structure.xml file:
  <jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
     <deployment>
        <dependencies>
           <module name="org.infinispan.persistence.jpa" slot="jdg-7.0" services="export"/>
           <module name="org.infinispan" slot="jdg-7.0" services="export"/>
        </dependencies>
     </deployment>
  </jboss-deployment-structure>
Procedure 19.5. Deploy JPA Cache Stores in JBoss EAP 6.4 and later
- Add the following property in persistence.xml:
  <persistence-unit>
     [...]
     <properties>
        <property name="jboss.as.jpa.providerModule" value="application"/>
     </properties>
  </persistence-unit>
- Add the following dependencies to the jboss-deployment-structure.xml:
  <jboss-deployment-structure>
     <deployment>
        <dependencies>
           <module name="org.infinispan" slot="jdg-7.0"/>
           <module name="org.jgroups" slot="jdg-7.0"/>
           <module name="org.infinispan.persistence.jpa" slot="jdg-7.0" services="export"/>
           <module name="org.hibernate"/>
        </dependencies>
     </deployment>
  </jboss-deployment-structure>
- If any additional dependencies, such as additional JDG modules, are in use, add these to the dependencies section in jboss-deployment-structure.xml.
Important
19.9. Cassandra Cache Store
The required keyspace and entry table can be created automatically by enabling the auto-create-keyspace parameter in the cache store configuration. A sample keyspace creation is demonstrated below:
CREATE KEYSPACE IF NOT EXISTS Infinispan WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};
CREATE TABLE Infinispan.InfinispanEntries (key blob PRIMARY KEY, value blob, metadata blob);
19.9.1. Enabling the Cassandra Cache Store
Library Mode - The infinispan-cachestore-cassandra-8.3.0.final-redhat-1-deployable.jar is included in the jboss-datagrid-${jdg-version}-library/ directory, and may be added to any projects that are using the Cassandra Cache Store.
Remote Client-Server Mode - The Cassandra Cache Store is prepackaged in the modules/ directory of the server, and may be used by default with no additional configuration necessary.
JBoss Data Grid modules for JBoss EAP - The Cassandra Cache Store is included in the distributed modules, and may be added by using org.infinispan.persistence.cassandra as the module name.
19.9.2. Cassandra Cache Store Sample XML Configuration (Remote Client-Server Mode)
In Remote Client-Server mode, the Cassandra Cache Store is configured by specifying the store class org.infinispan.persistence.cassandra.CassandraStore and defining the properties individually within the store.
<local-cache name="cassandracache" start="EAGER">
   <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
   <transaction mode="NONE"/>
   <store name="cassstore1"
          class="org.infinispan.persistence.cassandra.CassandraStore"
          shared="true"
          passivation="false">
      <property name="autoCreateKeyspace">true</property>
      <property name="keyspace">store1</property>
      <property name="entryTable">entries1</property>
      <property name="consistencyLevel">LOCAL_ONE</property>
      <property name="serialConsistencyLevel">SERIAL</property>
      <property name="servers">127.0.0.1[9042],127.0.0.1[9041]</property>
      <property name="connectionPool.heartbeatIntervalSeconds">30</property>
      <property name="connectionPool.idleTimeoutSeconds">120</property>
      <property name="connectionPool.poolTimeoutMillis">5</property>
   </store>
</local-cache>
19.9.3. Cassandra Cache Store Sample XML Configuration (Library Mode)
- Option 1: Using the same method discussed for Remote Client-Server Mode, found in Section 19.9.2, “Cassandra Cache Store Sample XML Configuration (Remote Client-Server Mode)”.
- Option 2: Using the cassandra-store schema. The following snippet shows an example configuration defining a Cassandra Cache Store:
  <cache-container default-cache="cassandracache">
     <local-cache name="cassandracache">
        <persistence passivation="false">
           <cassandra-store xmlns="urn:infinispan:config:store:cassandra:8.2"
                            auto-create-keyspace="true"
                            keyspace="Infinispan"
                            entry-table="InfinispanEntries"
                            shared="true">
              <cassandra-server host="127.0.0.1" port="9042"/>
              <connection-pool heartbeat-interval-seconds="30"
                               idle-timeout-seconds="120"
                               pool-timeout-millis="5"/>
           </cassandra-store>
        </persistence>
     </local-cache>
  </cache-container>
19.9.4. Cassandra Configuration Parameters
One or more cassandra-server elements may be specified in the configuration. Each of the elements has the following properties:
Table 19.1. Cassandra Server Configuration Parameters
Parameter Name | Description | Default Value |
---|---|---|
host | The hostname or IP address of a Cassandra server. | 127.0.0.1 |
port | The port on which the server is listening. | 9042 |
Table 19.2. Cassandra Configuration Parameter
Parameter Name | Description | Default Value |
---|---|---|
auto-create-keyspace | Determines whether the keyspace and entry table should be automatically created on startup. | true |
keyspace | Name of the keyspace to use. | Infinispan |
entry-table | Name of the table storing entries. | InfinispanEntries |
consistency-level | Consistency level to use for the queries. | LOCAL_ONE |
serial-consistency-level | Serial consistency level to use for the queries. | SERIAL |
A connection-pool element may also be defined with the following parameters:
Table 19.3. Connection Pool Configuration Parameters
Parameter Name | Description | Default Value |
---|---|---|
pool-timeout-millis | Time that the driver blocks when no connection from hosts pool is available. After this timeout, the driver will try the next host. | 5 |
heartbeat-interval-seconds | Application-side heartbeat to avoid the connections being dropped when no activity is happening. Set to 0 to disable. | 30 |
idle-timeout-seconds | Timeout before an idle connection is removed. | 120 |
19.10. Custom Cache Stores
Custom cache stores are created by implementing one or more of the following interfaces:
- CacheLoader
- CacheWriter
- AdvancedCacheLoader
- AdvancedCacheWriter
- ExternalStore
- AdvancedLoadWriteStore
Note
If AdvancedCacheWriter is not implemented, the expired entries cannot be purged or cleared using the given writer.
Note
If AdvancedCacheLoader is not implemented, the entries stored in the given loader will not be used for preloading.
This section uses SingleFileStore as an example. To view the SingleFileStore example code, download the JBoss Data Grid source code. Use the following procedure to download the SingleFileStore example code from the Customer Portal:
Procedure 19.6. Download JBoss Data Grid Source Code
- To access the Red Hat Customer Portal, navigate to https://access.redhat.com/home in a browser.
- Click Downloads.
- In the section labeled JBoss Development and Management, click Red Hat JBoss Data Grid.
- Enter the relevant credentials in the Red Hat Login and Password fields and click Log In.
- From the list of downloadable files, locate Red Hat JBoss Data Grid 7 Source Code and click Download. Save and unpack it in a desired location.
- Locate the SingleFileStore source code by navigating through jboss-datagrid-7.0.0-sources/infinispan-8.3.0.Final-redhat-1-src/core/src/main/java/org/infinispan/persistence/file/SingleFileStore.java.
19.10.1. Custom Cache Store Maven Archetype
Procedure 19.7. Generate a Maven Archetype
- Ensure the JBoss Data Grid Maven repository has been installed by following the instructions in the Red Hat JBoss Data Grid Getting Started Guide.
- Open a command prompt and execute the following command to generate an archetype in the current directory:
mvn -Dmaven.repo.local="path/to/unzipped/jboss-datagrid-7.0.0-maven-repository/"
    archetype:generate
    -DarchetypeGroupId=org.infinispan
    -DarchetypeArtifactId=custom-cache-store-archetype
    -DarchetypeVersion=8.3.0.Final-redhat-1
Note
The above command has been broken into multiple lines for readability; however, when executed this command and all arguments must be on a single line.
19.10.2. Custom Cache Store Configuration (Remote Client-Server Mode)
Example 19.2. Custom Cache Store Configuration
<distributed-cache name="cacheStore"
                   mode="SYNC"
                   segments="20"
                   owners="2"
                   remote-timeout="30000">
   <store class="my.package.CustomCacheStore">
      <property name="customStoreProperty">10</property>
   </store>
</distributed-cache>
19.10.2.1. Option 1: Add Custom Cache Store using deployments (Remote Client-Server Mode)
Procedure 19.8. Deploy Custom Cache Store .jar file to JDG server using deployments
- Add the following Java service loader file META-INF/services/org.infinispan.persistence.spi.AdvancedLoadWriteStore to the module, and add a reference to the Custom Cache Store class within it, as seen below:
  my.package.CustomCacheStore
- Copy the jar to the $JDG_HOME/standalone/deployments/ directory.
- If the .jar file is available to the server, the following message will be displayed in the logs:
  JBAS010287: Registering Deployed Cache Store service for store 'my.package.CustomCacheStore'
- In the infinispan-core subsystem, add an entry for the cache inside a cache-container, specifying the class that overrides one of the interfaces from Section 19.10, “Custom Cache Stores”:
  <subsystem xmlns="urn:infinispan:server:core:8.3">
     [...]
     <distributed-cache name="cacheStore" mode="SYNC" segments="20" owners="2" remote-timeout="30000">
        <store class="my.package.CustomCacheStore">
           <!-- If custom properties are included these may be specified as below -->
           <property name="customStoreProperty">10</property>
        </store>
     </distributed-cache>
     [...]
  </subsystem>
19.10.2.2. Option 2: Add Custom Cache Store using the CLI (Remote Client-Server Mode)
Procedure 19.9. Deploying Custom Cache Store .jar file to JDG server using the CLI
- Connect to the JDG server by running the below command:
[$JDG_HOME] $ bin/cli.sh --connect --controller=$IP:$PORT
- Deploy the .jar file by executing the following command:
deploy /path/to/artifact.jar
19.10.2.3. Option 3: Add Custom Cache Store using JON (Remote Client-Server Mode)
Procedure 19.10. Deploying Custom Cache Store .jar file to JDG server using JBoss Operation Network
- Log into JON.
- Navigate to Bundles along the upper bar.
- Click the New button and choose the Recipe radio button.
- Insert a deployment bundle file content that references the store, similar to the following example:
<?xml version="1.0"?>
<project name="cc-bundle" default="main" xmlns:rhq="antlib:org.rhq.bundle">
   <rhq:bundle name="Mongo DB Custom Cache Store" version="1.0" description="Custom Cache Store">
      <rhq:deployment-unit name="JDG" compliance="full">
         <rhq:file name="custom-store.jar"/>
      </rhq:deployment-unit>
   </rhq:bundle>
   <target name="main"/>
</project>
- Proceed with the Next button to the Bundle Groups configuration wizard page, and proceed with the Next button once again.
- Locate the custom cache store .jar file using the file uploader and Upload the file.
- Proceed with the Next button to the Summary configuration wizard page. Proceed with the Finish button in order to finish the bundle configuration.
- Navigate back to the Bundles tab along the upper bar.
- Select the newly created bundle and click the Deploy button.
- Enter a Destination Name and choose the proper Resource Group; this group should only consist of JDG servers.
- Choose Install Directory from the Base Location radio box group.
- Enter /standalone/deployments in the Deployment Directory text field below.
- Proceed with the wizard using the default options.
- Validate the deployment using the following command on the server's host:
find $JDG_HOME -name "custom-store.jar"
- Confirm the bundle has been installed in $JDG_HOME/standalone/deployments.
19.10.3. Custom Cache Store Configuration (Library Mode)
Example 19.3. Custom Cache Store Configuration
<persistence>
  <store class="org.infinispan.custom.CustomCacheStore"
         preload="true"
         shared="true">
    <properties>
      <property name="customStoreProperty" value="10" />
    </properties>
  </store>
</persistence>
Part VIII. Set Up Passivation
Chapter 20. Activation and Passivation Modes
20.1. Passivation Mode Benefits
20.2. Configure Passivation
In Remote Client-Server mode, add the passivation parameter to the cache store element to toggle passivation for it:
Example 20.1. Toggle Passivation in Remote Client-Server Mode
<local-cache name="customCache">
  <!-- Additional configuration elements for local-cache here -->
  <file-store passivation="true">
    <!-- Additional configuration elements for file-store here -->
  </file-store>
</local-cache>
In Library mode, add the passivation parameter to the persistence element to toggle passivation:
Example 20.2. Toggle Passivation in Library Mode
<persistence passivation="true">
  <!-- Additional configuration elements here -->
</persistence>
20.3. Eviction and Passivation
20.3.1. Eviction and Passivation Usage
When the eviction thread evicts an entry from memory with passivation enabled:
- A notification regarding the passivated entry is emitted to the cache listeners.
- The evicted entry is stored.
20.3.2. Eviction Example when Passivation is Disabled
Table 20.1. Eviction when Passivation is Disabled
Step | Key in Memory | Key on Disk |
---|---|---|
Insert keyOne | keyOne | keyOne |
Insert keyTwo | keyOne, keyTwo | keyOne, keyTwo |
Eviction thread runs, evicts keyOne | keyTwo | keyOne, keyTwo |
Read keyOne | keyOne, keyTwo | keyOne, keyTwo |
Eviction thread runs, evicts keyTwo | keyOne | keyOne, keyTwo |
Remove keyTwo | keyOne | keyOne |
20.3.3. Eviction Example when Passivation is Enabled
Table 20.2. Eviction when Passivation is Enabled
Step | Key in Memory | Key on Disk |
---|---|---|
Insert keyOne | keyOne | (none) |
Insert keyTwo | keyOne, keyTwo | (none) |
Eviction thread runs, evicts keyOne | keyTwo | keyOne |
Read keyOne | keyOne, keyTwo | (none) |
Eviction thread runs, evicts keyTwo | keyOne | keyTwo |
Remove keyTwo | keyOne | (none) |
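The behavior in the two tables above can be sketched with a small model in plain Java. This is an illustrative sketch, not the JBoss Data Grid API; the StoreModel class and its methods are hypothetical, modeling memory and the disk store as two maps:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model of the eviction/passivation interplay shown in
// Tables 20.1 and 20.2. Not the JBoss Data Grid API.
class StoreModel {
    final Map<String, String> memory = new LinkedHashMap<>();
    final Map<String, String> disk = new LinkedHashMap<>();
    final boolean passivation;

    StoreModel(boolean passivation) { this.passivation = passivation; }

    // Without passivation the store is written through on insert;
    // with passivation the entry stays in memory only.
    void insert(String key, String value) {
        memory.put(key, value);
        if (!passivation) disk.put(key, value);
    }

    // Eviction removes the entry from memory; with passivation enabled
    // the evicted entry is written to disk at this point.
    void evict(String key) {
        String value = memory.remove(key);
        if (passivation && value != null) disk.put(key, value);
    }

    // A read brings the entry back into memory; with passivation this
    // "activation" also removes it from disk.
    String read(String key) {
        String value = memory.get(key);
        if (value == null) {
            value = passivation ? disk.remove(key) : disk.get(key);
            if (value != null) memory.put(key, value);
        }
        return value;
    }

    // Remove deletes the entry from both memory and disk.
    void remove(String key) {
        memory.remove(key);
        disk.remove(key);
    }
}
```

Replaying the steps of Table 20.2 against this model (insert both keys, evict keyOne, read keyOne, evict keyTwo, remove keyTwo) reproduces the memory and disk columns shown above.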
Part IX. Set Up Cache Writing
Chapter 21. Cache Writing Modes
- Write-Through (Synchronous)
- Write-Behind (Asynchronous)
21.1. Write-Through Caching
When write-through caching is used, a write to the cache (for example, a Cache.put() invocation) does not return until JBoss Data Grid has located and updated the underlying cache store. This feature allows updates to the cache store to be concluded within the client thread boundaries.
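This behavior can be sketched in plain Java. The WriteThroughCache class below is a hypothetical model, not the JBoss Data Grid API; its point is that the caller's own thread writes to the store before put() returns, so the cache and store never diverge:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative write-through sketch (not the JBoss Data Grid API):
// the client thread updates the backing store and the in-memory map
// before put() returns.
class WriteThroughCache<K, V> {
    private final Map<K, V> memory = new ConcurrentHashMap<>();
    private final Map<K, V> store;   // stands in for the cache store

    WriteThroughCache(Map<K, V> store) { this.store = store; }

    V put(K key, V value) {
        // The store write happens inside the client thread; put() does
        // not return until the store has been updated.
        store.put(key, value);
        return memory.put(key, value);
    }

    V get(K key) { return memory.get(key); }
}
```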
21.1.1. Write-Through Caching Benefits and Disadvantages
The primary advantage of the Write-Through mode is that the cache and cache store are updated simultaneously, which ensures that the cache store remains consistent with the cache contents.
Because the cache store is updated in the same thread as the cache entry, cache operations that occur concurrently with cache store accesses and updates may be slower.
21.1.2. Write-Through Caching Configuration (Library Mode)
Procedure 21.1. Configure a Write-Through Local File Cache Store
<local-cache name="persistentCache">
  <persistence>
    <file-store fetch-state="true"
                purge="false"
                shared="false"
                location="${java.io.tmpdir}"/>
  </persistence>
</local-cache>
- The name parameter specifies the name of the local-cache to use.
- The fetch-state parameter determines whether the persistent state is fetched when joining a cluster. Set this to true if using replication and invalidation in a clustered environment. Additionally, if multiple cache stores are chained, only one cache store can have this property enabled. If a shared cache store is used, the cache does not allow a persistent state transfer despite this property being set to true. The fetch-state parameter is false by default.
- The purge parameter specifies whether the cache store is purged when it is initially started.
- The shared parameter is used when multiple cache instances share a cache store, and is defined at the cache store level. This parameter can be set to prevent multiple cache instances writing the same modification multiple times. Valid values for this parameter are true and false.
21.2. Write-Behind Caching
21.2.1. About Unscheduled Write-Behind Strategy
21.2.2. Unscheduled Write-Behind Strategy Configuration (Remote Client-Server Mode)
To enable the write-behind strategy in Remote Client-Server mode, add the write-behind element to the target cache store configuration as follows:
Procedure 21.2. The write-behind Element
<file-store passivation="false"
            path="${PATH}"
            purge="true"
            shared="false">
  <write-behind modification-queue-size="1024"
                shutdown-timeout="25000"
                flush-lock-timeout="15000"
                thread-pool-size="5" />
</file-store>
The write-behind element uses the following configuration parameters:
- The modification-queue-size parameter sets the modification queue size for the asynchronous store. If updates occur faster than the cache store can process the queue, the asynchronous store behaves like a synchronous store. The store behavior remains synchronous and blocks elements until the queue is able to accept them, after which the store behavior becomes asynchronous again.
- The shutdown-timeout parameter specifies the time in milliseconds after which the cache store is shut down. When the store is stopped some modifications may still need to be applied. Setting a large timeout value will reduce the chance of data loss. The default value for this parameter is 25000.
- The flush-lock-timeout parameter specifies the time (in milliseconds) to acquire the lock that guards the state to be periodically flushed. The default value for this parameter is 15000.
- The thread-pool-size parameter specifies the size of the thread pool. The threads in this thread pool apply modifications to the cache store. The default value for this parameter is 5.
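The modification-queue-size behavior described above can be sketched in plain Java. This is a hypothetical model, not the JBoss Data Grid implementation: a bounded queue absorbs writes, a background thread applies them to the store, and the producer blocks only when the queue is full, which is the "behaves like a synchronous store" case:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a write-behind modification queue.
// Not the JBoss Data Grid implementation.
class WriteBehindStore implements AutoCloseable {
    private final BlockingQueue<Map.Entry<String, String>> queue;
    final Map<String, String> store = new ConcurrentHashMap<>();
    private final Thread writer;
    private volatile boolean running = true;

    WriteBehindStore(int modificationQueueSize) {
        queue = new ArrayBlockingQueue<>(modificationQueueSize);
        writer = new Thread(() -> {
            try {
                // Drain the queue in the background, like the async writer.
                while (running || !queue.isEmpty()) {
                    Map.Entry<String, String> e = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (e != null) store.put(e.getKey(), e.getValue());
                }
            } catch (InterruptedException ignored) { }
        });
        writer.start();
    }

    // Returns immediately while the queue has room; blocks (synchronous
    // behavior) only when the bounded queue is full.
    void write(String key, String value) throws InterruptedException {
        queue.put(Map.entry(key, value));
    }

    // Rough analogue of shutdown-timeout: wait for the writer to drain.
    @Override public void close() throws InterruptedException {
        running = false;
        writer.join(25_000);
    }
}
```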
21.2.3. Unscheduled Write-Behind Strategy Configuration (Library Mode)
To enable the write-behind strategy in Library mode, add the async element to the store configuration as follows:
Procedure 21.3. The async Element
<persistence>
  <singleFile location="${LOCATION}">
    <async enabled="true"
           modificationQueueSize="1024"
           shutdownTimeout="25000"
           flushLockTimeout="15000"
           threadPoolSize="5"/>
  </singleFile>
</persistence>
The async element uses the following configuration parameters:
- The modificationQueueSize parameter sets the modification queue size for the asynchronous store. If updates occur faster than the cache store can process the queue, the asynchronous store behaves like a synchronous store. The store behavior remains synchronous and blocks elements until the queue is able to accept them, after which the store behavior becomes asynchronous again.
- The shutdownTimeout parameter specifies the time in milliseconds after which the cache store is shut down. This provides time for the asynchronous writer to flush data to the store when a cache is shut down. The default value for this parameter is 25000.
- The flushLockTimeout parameter specifies the time (in milliseconds) to acquire the lock that guards the state to be periodically flushed. The default value for this parameter is 15000.
- The threadPoolSize parameter specifies the number of threads that concurrently apply modifications to the store. The default value for this parameter is 5.
Part X. Monitor Caches and Cache Managers
Chapter 22. Set Up Java Management Extensions (JMX)
22.1. About Java Management Extensions (JMX)
Java Management Extensions (JMX) is a Java-based technology that provides tools to manage and monitor applications, devices, system objects, and service-oriented networks. Each of these objects is managed and monitored using MBeans.
22.2. Using JMX with Red Hat JBoss Data Grid
22.3. JMX Statistic Levels
- At the cache level, where management information is generated by individual cache instances.
- At the CacheManager level, where the CacheManager is the entity that governs all cache instances created from it. As a result, management information is generated for all of these cache instances rather than for individual caches.
22.4. Enable JMX for Cache Instances
Add the following snippet within either the <default> element for the default cache instance, or under the target <local-cache> element for a specific cache:
<jmxStatistics enabled="true"/>
22.5. Enable JMX for CacheManagers
At the CacheManager level, JMX statistics can be enabled either declaratively or programmatically, as follows.
Add the following in the <global> element to enable JMX declaratively at the CacheManager level:
<globalJmxStatistics enabled="true"/>
22.6. Disabling the CacheStore via JMX When Using Rolling Upgrades
Disable the CacheStore via JMX when using rolling upgrades by invoking the disconnectSource operation on the RollingUpgradeManager MBean.
22.7. Multiple JMX Domains
Multiple JMX domains are used when multiple CacheManager instances exist on a single virtual machine, or if the names of cache instances in different CacheManagers clash.
To resolve this, name each CacheManager in a manner that allows it to be easily identified and used by monitoring tools such as JMX and JBoss Operations Network.
Add the following snippet to the relevant CacheManager configuration:
<globalJmxStatistics enabled="true" cacheManagerName="Hibernate2LC"/>
22.8. MBeans
An MBean represents a manageable resource such as a service, component, device, or an application.
JBoss Data Grid provides MBeans that monitor and manage multiple aspects. For example, MBeans that provide statistics on the transport layer are provided. If a JBoss Data Grid server is configured with JMX statistics, an MBean that provides information such as the hostname, port, bytes read, bytes written and the number of worker threads exists at the following location:
jboss.infinispan:type=Server,name=<Memcached|Hotrod>,component=Transport
MBeans are available under two JMX domains:
- jboss.as - these MBeans are created by the server subsystem.
- jboss.infinispan - these MBeans are symmetric to those created by embedded mode.
MBeans under jboss.infinispan should be used for Red Hat JBoss Data Grid, as the ones under jboss.as are for Red Hat JBoss Enterprise Application Platform.
22.8.1. Understanding MBeans
Depending on which JMX statistics are enabled, the following MBeans are available:
- If Cache Manager-level JMX statistics are enabled, an MBean named jboss.infinispan:type=CacheManager,name="DefaultCacheManager" exists, with properties specified by the Cache Manager MBean.
- If cache-level JMX statistics are enabled, multiple MBeans display depending on the configuration in use. For example, if a write-behind cache store is configured, an MBean that exposes properties that belong to the cache store component is displayed. All cache-level MBeans use the same format:
jboss.infinispan:type=Cache,name="<name-of-cache>(<cache-mode>)",manager="<name-of-cache-manager>",component=<component-name>
In this format:
  - Specify the default name for the cache using the cache-container element's default-cache attribute.
  - The cache-mode is replaced by the cache mode of the cache. The lower-case version of the possible enumeration values represents the cache mode.
  - The component-name is replaced by one of the JMX component names from the JMX reference documentation.
For example, the MBean for a default cache configured for synchronous distribution would be named as follows:
jboss.infinispan:type=Cache,name="default(dist_sync)", manager="default",component=CacheStore
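These names are standard JMX ObjectNames, so they can be built and inspected with the JDK's javax.management classes. The helper below is an illustrative sketch, not part of JBoss Data Grid; the parameter values in the usage are simply the ones from the example above:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Builds a cache-level MBean name in the format described above.
// Illustrative helper only; not a JBoss Data Grid class.
public class CacheObjectName {
    public static ObjectName cacheMBean(String cache, String cacheMode,
                                        String manager, String component)
            throws MalformedObjectNameException {
        // The name and manager values are quoted, as in the documented format.
        return new ObjectName(String.format(
                "jboss.infinispan:type=Cache,name=\"%s(%s)\",manager=\"%s\",component=%s",
                cache, cacheMode, manager, component));
    }
}
```

Constructing the ObjectName also validates the name syntax, which is a convenient way to catch typos before querying an MBean server.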
22.8.2. Registering MBeans in Non-Default MBean Servers
To register MBeans in a non-default MBean server, provide an implementation of the MBeanServerLookup interface whose getMBeanServer() method returns the desired (non-default) MBeanServer.
Add the following snippet:
<globalJmxStatistics enabled="true" mBeanServerLookup="com.acme.MyMBeanServerLookup"/>
Chapter 23. Set Up JBoss Operations Network (JON)
23.1. About JBoss Operations Network (JON)
Important
A JBoss Operations Network installation with patch Update 04 or higher is required. For information on upgrading the JBoss Operations Network, see the Upgrading JBoss ON section in the JBoss Operations Network Installation Guide.
23.2. Download JBoss Operations Network (JON)
23.2.1. Prerequisites for Installing JBoss Operations Network (JON)
- A Linux, Windows, or Mac OS X operating system, and an x86_64, i686, or ia64 processor.
- Java 6 or higher is required to run both the JBoss Operations Network Server and the JBoss Operations Network Agent.
- Synchronized clocks on JBoss Operations Network Servers and Agents.
- An external database must be installed.
23.2.2. Download JBoss Operations Network
Procedure 23.1. Download JBoss Operations Network
- To access the Red Hat Customer Portal, navigate to https://access.redhat.com/home in a browser.
- Click Downloads.
- In the section labeled JBoss Development and Management, click Red Hat JBoss Data Grid.
- Enter the relevant credentials in the Red Hat Login and Password fields and click Log In.
- Select the appropriate version in the Version drop down menu list.
- Click the Download button next to the desired download file.
23.2.3. Remote JMX Port Values
23.2.4. Download JBoss Operations Network (JON) Plugin
Procedure 23.2. Download Installation Files
- Open http://access.redhat.com in a web browser.
- Click Downloads in the menu across the top of the page.
- Click Red Hat JBoss Operations Network in the list under JBoss Development and Management.
- Enter your login information. You are taken to the Software Downloads page.
Download the JBoss Operations Network Plugin
If you intend to use the JBoss Operations Network plugin for JBoss Data Grid, select JBoss ON for Data Grid from either the Product drop-down box, or the menu on the left.
- Click the Red Hat JBoss Operations Network VERSION Base Distribution Download button.
- Repeat the steps to download the Data Grid Management Plugin Pack for JBoss ON VERSION.
23.3. JBoss Operations Network Server Installation
23.4. JBoss Operations Network Agent
The JBoss Operations Network Agent can be started using an init.d script in a UNIX environment.
23.5. JBoss Operations Network for Remote Client-Server Mode
In Remote Client-Server mode, JBoss Operations Network (JON) is used to:
- initiate and perform installation and configuration operations.
- monitor resources and their metrics.
23.5.1. Installing the JBoss Operations Network Plug-in (Remote Client-Server Mode)
Install the plug-ins
- Copy the JBoss Data Grid server rhq plug-in to $JON_SERVER_HOME/plugins.
- Copy the JBoss Enterprise Application Platform plug-in to $JON_SERVER_HOME/plugins.
The server will automatically discover plug-ins here and deploy them. The plug-ins will be removed from the plug-ins directory after successful deployment.
Obtain plug-ins
Obtain all available plug-ins from the JBoss Operations Network server. To do this, type the following into the agent's console:
plugins update
List installed plug-ins
Ensure the JBoss Enterprise Application Platform plug-in and the JBoss Data Grid server rhq plug-in are installed correctly using the following:
plugins info
23.6. JBoss Operations Network Remote-Client Server Plugin
23.6.1. JBoss Operations Network Plugin Metrics
Table 23.1. JBoss Operations Network Traits for the Cache Container (Cache Manager)
Trait Name | Display Name | Description |
---|---|---|
cache-manager-status | Cache Container Status | The current runtime status of a cache container. |
cluster-name | Cluster Name | The name of the cluster. |
members | Cluster Members | The names of the members of the cluster. |
coordinator-address | Coordinator Address | The coordinator node's address. |
local-address | Local Address | The local node's address. |
version | Version | The cache manager version. |
defined-cache-names | Defined Cache Names | The caches that have been defined for this manager. |
Table 23.2. JBoss Operations Network Metrics for the Cache Container (Cache Manager)
Metric Name | Display Name | Description |
---|---|---|
cluster-size | Cluster Size | How many members are in the cluster. |
defined-cache-count | Defined Cache Count | How many caches have been defined for this manager. |
running-cache-count | Running Cache Count | How many caches are running under this manager. |
created-cache-count | Created Cache Count | How many caches have actually been created under this manager. |
Table 23.3. JBoss Operations Network Traits for the Cache
Trait Name | Display Name | Description |
---|---|---|
cache-status | Cache Status | The current runtime status of a cache. |
cache-name | Cache Name | The current name of the cache. |
version | Version | The cache version. |
Table 23.4. JBoss Operations Network Metrics for the Cache
Metric Name | Display Name | Description |
---|---|---|
cache-status | Cache Status | The current runtime status of a cache. |
number-of-locks-available | [LockManager] Number of locks available | The number of exclusive locks that are currently available. |
concurrency-level | [LockManager] Concurrency level | The LockManager's configured concurrency level. |
average-read-time | [Statistics] Average read time | Average number of milliseconds required for a read operation on the cache to complete. |
hit-ratio | [Statistics] Hit ratio | The result (in percentage) when the number of hits (successful attempts) is divided by the total number of attempts. |
elapsed-time | [Statistics] Seconds since cache started | The number of seconds since the cache started. |
read-write-ratio | [Statistics] Read/write ratio | The read/write ratio (in percentage) for the cache. |
average-write-time | [Statistics] Average write time | Average number of milliseconds a write operation on a cache requires to complete. |
hits | [Statistics] Number of cache hits | Number of cache hits. |
evictions | [Statistics] Number of cache evictions | Number of cache eviction operations. |
remove-misses | [Statistics] Number of cache removal misses | Number of cache removals where the key was not found. |
time-since-reset | [Statistics] Seconds since cache statistics were reset | Number of seconds since the last cache statistics reset. |
number-of-entries | [Statistics] Number of current cache entries | Number of entries currently in the cache. |
stores | [Statistics] Number of cache puts | Number of cache put operations. |
remove-hits | [Statistics] Number of cache removal hits | Number of cache removal operation hits. |
misses | [Statistics] Number of cache misses | Number of cache misses. |
success-ratio | [RpcManager] Successful replication ratio | Successful replications as a ratio of total replications in numeric double format. |
replication-count | [RpcManager] Number of successful replications | Number of successful replications. |
replication-failures | [RpcManager] Number of failed replications | Number of failed replications. |
average-replication-time | [RpcManager] Average time spent in the transport layer | The average time (in milliseconds) spent in the transport layer. |
commits | [Transactions] Commits | Number of transaction commits performed since the last reset. |
prepares | [Transactions] Prepares | Number of transaction prepares performed since the last reset. |
rollbacks | [Transactions] Rollbacks | Number of transaction rollbacks performed since the last reset. |
invalidations | [Invalidation] Number of invalidations | Number of invalidations. |
passivations | [Passivation] Number of cache passivations | Number of passivation events. |
activations | [Activation] Number of cache entries activated | Number of activation events. |
cache-loader-loads | [Activation] Number of cache store loads | Number of entries loaded from the cache store. |
cache-loader-misses | [Activation] Number of cache store misses | Number of entries that did not exist in the cache store. |
cache-loader-stores | [CacheStore] Number of cache store stores | Number of entries stored in the cache stores. |
Note
The metrics provided by the JBoss Operations Network (JON) plugin for Red Hat JBoss Data Grid are for REST and Hot Rod endpoints only. For the REST protocol, the data must be taken from the Web subsystem metrics. For details about each of these endpoints, see the Getting Started Guide.
Table 23.5. JBoss Operations Network Metrics for the Connectors
Metric Name | Display Name | Description |
---|---|---|
bytesRead | Bytes Read | Number of bytes read. |
bytesWritten | Bytes Written | Number of bytes written. |
23.6.2. JBoss Operations Network Plugin Operations
Table 23.6. JBoss ON Plugin Operations for the Cache
Operation Name | Description |
---|---|
Start Cache | Starts the cache. |
Stop Cache | Stops the cache. |
Clear Cache | Clears the cache contents. |
Reset Statistics | Resets statistics gathered by the cache. |
Reset Activation Statistics | Resets activation statistics gathered by the cache. |
Reset Invalidation Statistics | Resets invalidations statistics gathered by the cache. |
Reset Passivation Statistics | Resets passivation statistics gathered by the cache. |
Reset Rpc Statistics | Resets replication statistics gathered by the cache. |
Remove Cache | Removes the given cache from the cache-container. |
Record Known Global Keyset | Records the global known keyset to a well-known key for retrieval by the upgrade process. |
Synchronize Data | Synchronizes data from the old cluster to this one using the specified migrator. |
Disconnect Source | Disconnects the target cluster from the source cluster according to the specified migrator. |
The cache backups used for these operations are configured using cross-datacenter replication. In the JBoss Operations Network (JON) User Interface, each cache backup is the child of a cache. For more information about cross-datacenter replication, see Chapter 35, Set Up Cross-Datacenter Replication.
Table 23.7. JBoss Operations Network Plugin Operations for the Cache Backups
Operation Name | Description |
---|---|
status | Display the site status. |
bring-site-online | Brings the site online. |
take-site-offline | Takes the site offline. |
Red Hat JBoss Data Grid does not support using Transactions in Remote Client-Server mode. As a result, none of the endpoints can use transactions.
23.6.3. JBoss Operations Network Plugin Attributes
Table 23.8. JBoss ON Plugin Attributes for the Cache (Transport)
Attribute Name | Type | Description |
---|---|---|
cluster | string | The name of the group communication cluster. |
executor | string | The executor used for the transport. |
lock-timeout | long | The timeout period for locks on the transport. The default value is 240000 . |
machine | string | A machine identifier for the transport. |
rack | string | A rack identifier for the transport. |
site | string | A site identifier for the transport. |
stack | string | The JGroups stack used for the transport. |
23.6.4. Create a New Cache Using JBoss Operations Network (JON)
Procedure 23.3. Creating a new cache in Remote Client-Server mode
- Log into the JBoss Operations Network Console.
- From the JBoss Operations Network console, click Inventory.
- Select Servers from the Resources list on the left of the console.
- Select the specific Red Hat JBoss Data Grid server from the servers list.
- Below the server name, click infinispan and then Cache Containers.
- Select the desired cache container that will be the parent for the newly created cache.
- Right-click the selected cache container. For example, clustered.
- In the context menu, navigate to Create Child and select Cache.
- Create a new cache in the resource create wizard.
- Enter the new cache name and click Next.
- Set the cache attributes in the Deployment Options and click Finish.
23.7. JBoss Operations Network for Library Mode
In Library mode, JBoss Operations Network (JON) is used to:
- initiate and perform installation and configuration operations.
- monitor resources and their metrics.
23.7.1. Installing the JBoss Operations Network Plug-in (Library Mode)
Procedure 23.4. Install JBoss Operations Network Library Mode Plug-in
Open the JBoss Operations Network Console
- From the JBoss Operations Network console, select Administration.
- Select Agent Plugins from the Configuration options on the left side of the console.
Figure 23.1. JBoss Operations Network Console for JBoss Data Grid
Upload the Library Mode Plug-in
- Click Browse and locate the InfinispanPlugin on your local file system.
- Click Upload to add the plug-in to the JBoss Operations Network Server.
Figure 23.2. Upload the InfinispanPlugin.
Scan for Updates
- Once the file has successfully uploaded, click Scan For Updates at the bottom of the screen.
- The InfinispanPlugin will now appear in the list of installed plug-ins.
Figure 23.3. Scan for Updated Plug-ins.
23.7.2. Monitoring Of JBoss Data Grid Instances in Library Mode
23.7.2.1. Prerequisites
- A correctly configured instance of JBoss Operations Network (JON) 3.2.0 with patch Update 02 or higher.
- A running instance of the JON Agent on the server where the application will run. For more information, see Section 23.4, “JBoss Operations Network Agent”.
- An operational instance of the RHQ agent with a full JDK. In particular, ensure that the agent has access to the tools.jar file from the JDK. In the JON agent's environment file (bin/rhq-env.sh), set the value of the RHQ_AGENT_JAVA_HOME property to point to a full JDK home.
- The RHQ agent must have been initiated using the same user as the JBoss Enterprise Application Platform instance. For example, running the JON agent as a user with root privileges and the JBoss Enterprise Application Platform process under a different user does not work as expected and must be avoided.
- An installed JON plugin for JBoss Data Grid Library Mode. For more information, see Section 23.7.1, “Installing the JBoss Operations Network Plug-in (Library Mode)”.
- The Generic JMX plugin from JBoss Operations Network 3.2.0 with patch Update 02 or higher in use.
- A custom application using Red Hat JBoss Data Grid's Library mode with JMX statistics enabled for Library mode caches, so that statistics and monitoring work. For details on how to enable JMX statistics for cache instances, see Section 22.4, “Enable JMX for Cache Instances”; to enable JMX for cache managers, see Section 22.5, “Enable JMX for CacheManagers”.
- The Java Virtual Machine (JVM) must be configured to expose the JMX MBean Server. For the Oracle/Sun JDK, see http://docs.oracle.com/javase/1.5.0/docs/guide/management/agent.html
- A correctly added and configured management user for JBoss Enterprise Application Platform.
23.7.2.2. Manually Adding JBoss Data Grid Instances in Library Mode
Procedure 23.5. Add JBoss Data Grid Instances in Library Mode
Import the Platform
- Navigate to the Inventory and select Discovery Queue from the Resources list on the left of the console.
- Select the platform on which the application is running and click Import at the bottom of the screen.
Figure 23.4. Import the Platform from the Discovery Queue.
Access the Servers on the Platform
- The jdg Platform now appears in the Platforms list.
- Click on the Platform to access the servers that are running on it.
Figure 23.5. Open the jdg Platform to view the list of servers.
Import the JMX Server
- From the Inventory tab, select Child Resources.
- Click the Import button at the bottom of the screen and select the JMX Server option from the list.
Figure 23.6. Import the JMX Server
Enable JDK Connection Settings
- In the Resource Import Wizard window, specify JDK 5 from the list of Connection Settings Template options.
Figure 23.7. Select the JDK 5 Template.
Modify the Connector Address
- In the Deployment Options menu, modify the supplied Connector Address with the hostname and JMX port of the process containing the Infinispan Library.
- Enter the JMX connector address of the new JBoss Data Grid instance you want to monitor. For example:
Connector Address: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7997/jmxrmi
Note
The connector address varies depending on the host and the JMX port assigned to the new instance. In this case, instances require the following system properties at start up:
-Dcom.sun.management.jmxremote.port=7997
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
- Specify the Principal and Credentials information if required.
- Click Finish.
Figure 23.8. Modify the values in the Deployment Options screen.
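As a side note, the connector address entered in the wizard is a standard JMXServiceURL. Constructing one with the JDK validates its syntax without opening any connection; the helper class below is an illustrative sketch, not part of JBoss Data Grid or JON:

```java
import java.net.MalformedURLException;
import javax.management.remote.JMXServiceURL;

// Parses a JMX connector address such as the one used in the wizard.
// Constructing a JMXServiceURL validates the syntax only; no connection
// is opened. Illustrative helper, not a JBoss Data Grid class.
public class ConnectorAddressCheck {
    public static JMXServiceURL parse(String address) throws MalformedURLException {
        return new JMXServiceURL(address);
    }
}
```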
View Cache Statistics and Operations
- Click Refresh to refresh the list of servers.
- The JMX Servers tree in the panel on the left side of the screen contains the Infinispan Cache Managers node, which contains the available cache managers. The available cache managers contain the available caches.
- Select a cache from the available caches to view metrics.
- Select the Monitoring tab.
- The Tables view shows statistics and metrics.
- The Operations tab provides access to the various operations that can be performed on the services.
Figure 23.9. Metrics and operational data relayed through JMX is now available in the JBoss Operations Network console.
23.7.2.3. Monitor Custom Applications Using Library Mode Deployed On JBoss Enterprise Application Platform
23.7.2.3.1. Monitor an Application Deployed in Standalone Mode
Procedure 23.6. Monitor an Application Deployed in Standalone Mode
Start the JBoss Enterprise Application Platform Instance
Start the JBoss Enterprise Application Platform instance as follows:
- Enter the following command at the command line, or change the standalone configuration file (/bin/standalone.conf) respectively:
JAVA_OPTS="$JAVA_OPTS -Dorg.rhq.resourceKey=MyEAP"
- Start the JBoss Enterprise Application Platform instance in standalone mode as follows:
$JBOSS_HOME/bin/standalone.sh
Deploy the Red Hat JBoss Data Grid Application
Deploy the WAR file that contains the JBoss Data Grid Library mode application with globalJmxStatistics and jmxStatistics enabled.
Run JBoss Operations Network (JON) Discovery
Run the discovery --full command in the JBoss Operations Network (JON) agent.
Locate the Application Server Process
In the JBoss Operations Network (JON) web interface, the JBoss Enterprise Application Platform process is listed as a JMX server.
Import the Process Into Inventory
Import the process into the JBoss Operations Network (JON) inventory.
Optional: Run Discovery Again
If required, run the discovery --full command again to discover the new resources.
The JBoss Data Grid Library mode application is now deployed in JBoss Enterprise Application Platform's standalone mode and can be monitored using the JBoss Operations Network (JON).
23.7.2.3.2. Monitor an Application Deployed in Domain Mode
Procedure 23.7. Monitor an Application Deployed in Domain Mode
Edit the Host Configuration
Edit the domain/configuration/host.xml file to replace the server element with the following configuration:
<servers>
  <server name="server-one" group="main-server-group">
    <jvm name="default">
      <jvm-options>
        <option value="-Dorg.rhq.resourceKey=EAP1"/>
      </jvm-options>
    </jvm>
  </server>
  <server name="server-two" group="main-server-group" auto-start="true">
    <socket-bindings port-offset="150"/>
    <jvm name="default">
      <jvm-options>
        <option value="-Dorg.rhq.resourceKey=EAP2"/>
      </jvm-options>
    </jvm>
  </server>
</servers>
Start JBoss Enterprise Application Platform 6
Start JBoss Enterprise Application Platform 6 in domain mode:
$JBOSS_HOME/bin/domain.sh
Deploy the Red Hat JBoss Data Grid Application
Deploy the WAR file that contains the JBoss Data Grid Library mode application with globalJmxStatistics and jmxStatistics enabled.
Run Discovery in JBoss Operations Network (JON)
If required, run the discovery --full command for the JBoss Operations Network (JON) agent to discover the new resources.
The JBoss Data Grid Library mode application is now deployed in JBoss Enterprise Application Platform's domain mode and can be monitored using the JBoss Operations Network (JON).
23.8. JBoss Operations Network Plug-in Quickstart
23.9. Other Management Tools and Operations
23.9.1. Accessing Data via URLs
The put() and post() methods place data in the cache, and the URL used determines the cache name and key(s) used. The data is the value placed into the cache, and is placed in the body of the request.
The GET and HEAD methods are used for data retrieval, while other headers control cache settings and behavior.
Note
23.9.2. Limitations of Map Methods
The map methods size(), values(), keySet(), and entrySet() can be used with certain limitations in Red Hat JBoss Data Grid, as they are unreliable. These methods do not acquire locks (global or local), and concurrent modifications, additions, and removals are not considered in these calls.
In JBoss Data Grid 7.0 the map methods size(), values(), keySet(), and entrySet() include entries in the cache loader by default. The cache loader in use determines the performance of these commands; for instance, when using a database these methods run a complete scan of the table where data is stored, which may result in slower processing. To avoid loading entries from the cache loader, and any potential performance hit, use Cache.getAdvancedCache().withFlags(Flag.SKIP_CACHE_LOAD) before executing the desired method.
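The effect of the flag can be pictured with a toy model. The class below is a sketch written for this guide, not the Infinispan API: an in-memory map backed by a "cache loader" map, where size(true) mimics the SKIP_CACHE_LOAD semantics of counting only resident entries.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a loader-backed cache (NOT the Infinispan API).
// size(false) counts entries in memory plus entries held only by the
// loader; size(true) mirrors Flag.SKIP_CACHE_LOAD by counting only
// what is resident in memory.
class LoaderBackedCache {
    private final Map<String, String> inMemory = new HashMap<>();
    private final Map<String, String> loaderStore = new HashMap<>();

    void putInMemory(String k, String v) { inMemory.put(k, v); }
    void putInStoreOnly(String k, String v) { loaderStore.put(k, v); }

    int size(boolean skipCacheLoad) {
        if (skipCacheLoad) {
            return inMemory.size();
        }
        // Union of resident and passivated entries, counted once each.
        Map<String, String> union = new HashMap<>(loaderStore);
        union.putAll(inMemory);
        return union.size();
    }

    public static void main(String[] args) {
        LoaderBackedCache cache = new LoaderBackedCache();
        cache.putInMemory("a", "1");
        cache.putInStoreOnly("b", "2");        // passivated: only in the store
        System.out.println(cache.size(false)); // 2: loader entries included
        System.out.println(cache.size(true));  // 1: SKIP_CACHE_LOAD semantics
    }
}
```

In the real API the flag applies per invocation, so a flagged size() call does not affect later unflagged calls on the same cache.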
In JBoss Data Grid 7.0 the Cache.size() method provides a count of all elements in both this cache and the cache loader across the entire cluster. When using a loader or remote entries, only a subset of entries is held in memory at any given time to prevent possible memory issues, so loading all entries may be slow.
The size() method is affected by the flags org.infinispan.context.Flag#CACHE_MODE_LOCAL, which forces it to return the number of entries present on the local node, and org.infinispan.context.Flag#SKIP_CACHE_LOAD, which ignores any passivated entries. Either of these flags may be used to increase the performance of this method, at the cost of not returning a count of all elements across the entire cluster.
In JBoss Data Grid 7.0 the Hot Rod protocol contains a dedicated SIZE operation, which clients use to calculate the size of all entries.
Part XI. Red Hat JBoss Data Grid Web Administration
Chapter 24. Red Hat JBoss Data Grid Administration Console
24.1. About JBoss Data Grid Administration Console
24.2. Red Hat JBoss Data Grid Administration Console Prerequisites
- Java 8
- JBoss Data Grid server installed and running in domain mode.
24.3. Red Hat JBoss Data Grid Administration Console Getting Started
24.3.1. Red Hat JBoss Data Grid Administration Console Getting Started
24.3.2. Downloading and Installing JBoss Data Grid Server
- Download the Red Hat JBoss Data Grid server from the Red Hat Customer Portal.
- Install JBoss Data Grid by unzipping the downloaded package in a preferred directory of your system.
Note
24.3.3. Adding Management User
Procedure 24.1. Adding a Management User
- Run the add-user script within the bin folder as follows:
./add-user.sh
- Select the option for the type of user to be added. For a management user, select option a.
- Set the username and password as per the listed recommendations.
- Enter the name of the group or groups in which the user has to be added. Leave blank for no group.
- Confirm if you need the user to be used for Apache Spark process connection.
Note
See the Download and Install JBoss Data Grid section in the Red Hat JBoss Data Grid Getting Started Guide for download and installation details.
Note
Before proceeding, make sure $JBOSS_HOME is not set to a different installation. Otherwise, you may get unpredictable results.
The management user is successfully added.
24.3.4. Starting the JBoss Data Grid Server
./domain.sh
24.3.5. Logging into the JBoss Data Grid Administration Console
http://localhost:9990/console/index.html
Figure 24.1. JBoss Data Grid Administration Console Login Screen
24.4. Dashboard View
- Caches
- Clusters
- Status Events
24.4.1. Cache Containers View
Figure 24.2. Cache Containers View
24.4.2. Clusters View
Figure 24.3. Clusters View
24.4.3. Status Events View
Figure 24.4. Status Events View
24.5. Cache Administration
24.5.1. Adding a New Cache
Procedure 24.2. Adding a New Cache
- In the Cache Containers view, click on the name of the cache container.
Figure 24.5. Cache Containers View
- The Caches view is displayed listing all the configured caches. Click Add Cache to add and configure a new cache. The new cache creation window is opened.
Figure 24.6. Add Cache
- Enter the new cache name, select the base configuration template from the drop-down menu and click Next.
Figure 24.7. Cache Properties
- The cache configuration screen is displayed. Enter the cache parameters and click Create.
Figure 24.8. Cache Configuration
- A confirmation screen is displayed. Click Create to create the cache.
Figure 24.9. Cache Confirmation
New cache is added successfully.
24.5.2. Editing Cache Configuration
Procedure 24.3. Editing Cache Configuration
- Log into the JBoss Data Grid Administration Console and click on the cache container name.
Figure 24.10. Cache Containers
- In the Caches view, click on the cache name.
Figure 24.11. Caches View
- The cache statistics and properties page is displayed. On the right hand side, click the Configuration tab.
Figure 24.12. Cache Configuration Button
- The edit cache configuration interface is opened. The editable cache properties are found in the cache properties menu at the left hand side.
Figure 24.13. Editing Cache Configuration Interface
- Select the cache configuration property to be edited from the cache properties menu. To see a description of a cache configuration parameter, hover the cursor over the information icon. The parameter description is presented in the form of a tooltip.
Figure 24.14. Cache configuration parameters
- For example, the General property is selected by default. Edit the required values in the given parameter input field. Scroll down and click Apply changes.
- A confirmation dialog box will appear. Click Update.
Figure 24.15.
- The restart dialogue box appears. Click Restart Now to apply the changes.
Figure 24.16. Restart Dialogue Box
Note
Click Restart Later to continue editing the cache properties.
24.5.3. Cache Statistics and Properties View
Procedure 24.4. Viewing Cache Statistics
- Navigate to the list of caches by clicking on the name of the cache container in the Cache Container view.
- Click on the name of the cache from the list of caches. Optionally you can use the cache filter on the left side to filter caches. The caches can be filtered by a keyword, substring or by selecting the type and the trait.
Figure 24.17. Caches View
- The next page displays the comprehensive cache statistics under the headings: Cache content, Operations performance, and Caching Activity.
Figure 24.18. Cache Statistics
- Additional cache statistics are displayed under the headings: Entries Lifecycle, Cache Loader, and Locking.
Figure 24.19. Cache Statistics
- To view cache properties, click on Configuration at the right hand side.
Figure 24.20. Configuration Button
- The cache properties menu is displayed at the left hand side.
Figure 24.21. Cache Properties Menu
Figure 24.22. General Status Tab
Figure 24.23. Cache Node Labels
24.5.4. Enable and Disable Cache
Procedure 24.5. Disabling a Cache
- Navigate to the caches view by clicking on the name of the cache container in the Cache Container view. Click on the name of the cache to be disabled.
Figure 24.24. Caches View
- The cache statistics will be displayed. On the right hand side of the interface, click on the Actions tab and then click Disable.
Figure 24.25. Cache Disable
- A confirmation dialogue box will appear. Click Disable to disable the cache.
Figure 24.26. Cache Disable Confirmation
- A subsequent dialogue box appears. Click Ok.
Figure 24.27. Confirmation Box
- The selected cache is disabled successfully with a visual indicator Disabled next to the cache name label.
Figure 24.28. Disabled Cache
The cache is disabled successfully.
Procedure 24.6. Enabling a Cache
- To enable a cache, click on the specific disabled cache from the Cache view.
Figure 24.29. Caches View
- On the right hand side of the interface, click on the Actions tab.
Figure 24.30.
- From the Actions tab, click Enable.
Figure 24.31. Actions Menu
- A confirmation dialogue box appears. Click Enable.
Figure 24.32. Confirmation Box
- A subsequent dialogue box appears. Click Ok.
Figure 24.33. Information Box
- The selected cache is enabled successfully with a visual indicator Enabled next to the cache name label.
Figure 24.34. Cache Enabled
24.5.5. Cache Flush and Clear
Flushing a Cache
Procedure 24.7. Flushing a Cache
- In the Cache Containers view, click on the name of the cache container.
- The Caches view is displayed. Click on the cache to be flushed.
Figure 24.35. Caches View
- The cache statistics page is displayed. At the right hand side, click Actions.
Figure 24.36. Actions Button
- From the Actions menu, click Flush.
Figure 24.37. Actions Menu
- A confirmation dialogue box appears. Click Flush.
Figure 24.38. Cache Flush Confirmation Box
- The cache is successfully flushed. Click Ok.
Figure 24.39. Cache Flush Information Box
Clearing a Cache
Procedure 24.8. Clearing a Cache
- In the Cache Containers view, click on the name of the cache container.
- The Caches view is displayed. Click on the cache to be cleared.
Figure 24.40. Caches View
- On the cache statistics page, at the right hand side, click Actions.
Figure 24.41.
- From the Actions menu, click Clear.
Figure 24.42. Clear Button
- A confirmation dialogue box appears. Click Clear.
Figure 24.43. Confirmation Box
- The cache is successfully cleared. Click Ok.
Figure 24.44. Information Box
24.5.6. Server Tasks Execution
24.5.7. Server Tasks
24.5.7.1. New Server Task
Procedure 24.9. Launching a New Server Task
- In the Cache Containers view of the JBoss Data Grid Administration Console, click on the name of the Cache container.
- On the cache view page, click the Task Execution tab.
Figure 24.45. Task Execution
- In the Task Execution tab, click Launch new task.
Figure 24.46. Launch New Task
- Enter the new task properties and click Launch task.
Figure 24.47. Task Properties
New server task is successfully created.
24.5.7.2. Server Tasks View
Figure 24.48. Server Tasks View
Figure 24.49. Task Start/End Time
24.6. Cache Container Configuration
Procedure 24.10. Accessing Cache Container Configuration Settings
- In the Cache Container View, click on the name of the cache container.
Figure 24.50. Cache Container View
- Click the Configuration setting button at the top right hand side of the interface.
Figure 24.51. Configuration
Figure 24.52. Cache Container Configuration
24.6.1. Defining Protocol Buffer Schema
Procedure 24.11. Defining a Protobuf Schema
- Click Add at the right hand side of the Schema tab to launch the create schema window.
- Enter the schema name and the schema in the respective fields and click Create Schema.
Figure 24.53. New Schema
- The protocol buffer schema is added.
Figure 24.54. Protocol Buffer
24.6.2. Transport Setting
Figure 24.55. Transport Setting
Figure 24.56. Restart Confirmation
24.6.3. Defining Thread Pools
Figure 24.57. Async Operations
Figure 24.58. Expiration Values
Figure 24.59. Listener Values
Figure 24.60. Persistence Values
Figure 24.61. Remote Commands
Figure 24.62. Replication Queue Values
Figure 24.63. State Transfer Values
Figure 24.64. Transport Values
24.6.4. Adding New Security Role
Procedure 24.12. Adding a Security Role
- Click on the Security tab. If authorization is not defined for a cache container, click Yes to define.
Figure 24.65. Define Authorization
- Select the Role Mapper from the drop-down menu. Click Add to launch the permissions window.
Figure 24.66. Role Mapper Selection
- In the Permissions window, enter the name of the new role and assign the permissions by checking the required check-boxes. Click Save changes to save the role.
Figure 24.67. Role Permissions
- The new security role is added.
Figure 24.68. New Security Role
24.6.5. Creating Cache Configuration Template
Figure 24.69. Cache Templates View
Procedure 24.13. Creating New Cache Configuration Template
- Click Create new Template on the right hand side of the templates list.
- Enter the cache configuration template name and select the base configuration from the drop-down and click Next.
Figure 24.70. Cache Configuration Template
- Set the cache template attributes for the various cache operations such as Locking, Expiration, Indexing and others.
Figure 24.71. Cache Configuration Template
- After entering the values, click Create to create the Cache Template.
24.7. Cluster Administration
24.7.1. Cluster Nodes View
Figure 24.72. Nodes View
24.7.2. Cluster Nodes Mismatch
Figure 24.73. Cluster Nodes Mismatch
24.7.3. Cluster Rebalancing
Note
Procedure 24.14. Enable and Disable Rebalancing
- From the cache container view, click on the name of the cache container.
- In the caches view, at the right hand side, click on Actions.
Figure 24.74.
- A callout menu is opened. Click Disable Rebalancing.
Figure 24.75.
- A confirmation dialogue box appears. Click Accept.
Figure 24.76.
- Cluster rebalancing is successfully disabled.
Figure 24.77.
- To enable rebalancing, click Actions > Enable Rebalancing.
Figure 24.78.
- A confirmation dialogue box appears. Click Accept.
Figure 24.79.
Figure 24.80.
Procedure 24.15. Enable and Disable Rebalancing
- From the cache container view, click on the name of the cache container.
- In the caches view, click on a specific cache.
- The cache statistics page is displayed. At the right hand side, click Actions.
Figure 24.81.
- From the callout menu, click Disable Rebalance.
Figure 24.82.
- A confirmation dialogue box appears. Click Disable Rebalance.
Figure 24.83.
- The rebalancing for the cache is successfully disabled.
Figure 24.84.
- To enable cache level rebalancing, click Enable rebalance from the Actions menu.
Figure 24.85.
- A confirmation dialogue box appears. Click Enable rebalance.
Figure 24.86.
Figure 24.87.
24.7.4. Cluster Partition Handling
Figure 24.88. Network Partition Warning
24.7.5. Cluster Events
Figure 24.89.
Figure 24.90.
24.7.6. Adding Node
Procedure 24.16. Adding a New Node
- In the Dashboard view, click Cluster tab.
Figure 24.91. Clusters Tab
- Click on the name of the cluster where the new node has to be added.
Figure 24.92. Cluster Selection
- Click Add Node.
Figure 24.93. Add Node Created
- The node configuration window is opened. Enter the node properties in the respective fields and click Create.
Figure 24.94. Node Properties
- The system boots up.
Figure 24.95. System Boot
- The new node is successfully created.
Figure 24.96. New Node
24.7.7. Node Statistics and Properties View
Figure 24.97. Nodes Statistics
24.7.8. Node Performance Metrics View
Figure 24.98. Node Performance Metrics
24.7.9. Disabling a Node
Procedure 24.17. Disabling a Node
- Click on the name of the cluster in the Cluster View of the JBoss Data Grid Administration Console.
- In the Nodes view, click on the node to be disabled.
Figure 24.99. Nodes View
- The Node statistics view is opened. Click on the Actions tab located at the right hand side of the page and then click Stop.
Figure 24.100. Nodes Stop
- A confirmation box appears. Click Stop to remove the node from the cluster.
Figure 24.101. Confirmation Box
24.7.10. Cluster Shutdown and Restart
24.7.10.1. Cluster Shutdown
Procedure 24.18. Shutting Down Cluster
- Navigate to the Clusters view in the JBoss Data Grid Administration console and click on the name of the cluster.
Figure 24.102. Clusters View
- On the Nodes view page, locate the Actions tab to the top right hand side of the interface. Click on Actions tab and then click Stop.
Figure 24.103. Cluster Stop
- A confirmation box will appear. To confirm, click Stop.
Figure 24.104. Confirmation Box
24.7.10.2. Cluster Start
Procedure 24.19. Starting Cluster
- Navigate to the Clusters view in the JBoss Data Grid Administration console and click on the name of the cluster.
- On the Nodes view page, locate the Actions tab to the top right hand side of the interface. Click on Actions tab and then click Start.
Figure 24.105. Cluster Start
- A confirmation box will appear. Click Start to start the cluster.
Part XII. Securing Data in Red Hat JBoss Data Grid
JBoss Data Grid features role-based access control for operations on designated secured caches. Roles can be assigned to users who access your application, with roles mapped to permissions for cache and cache-manager operations. Only authenticated users are able to perform the operations that are authorized for their role.
Node-level security requires new nodes or merging partitions to authenticate before joining a cluster. Only authenticated nodes that are authorized to join the cluster are permitted to do so. This provides data protection by preventing unauthorized servers from storing your data.
JBoss Data Grid increases data security by supporting encrypted communications between the nodes in a cluster by using a user-specified cryptography algorithm, as supported by Java Cryptography Architecture (JCA).
Chapter 25. Red Hat JBoss Data Grid Security: Authorization and Authentication
25.1. Red Hat JBoss Data Grid Security: Authorization and Authentication
Cache authorization in Red Hat JBoss Data Grid is implemented through SecureCache. SecureCache is a simple wrapper around a cache, which checks whether the "current user" has the permissions required to perform an operation. The "current user" is a Subject associated with the AccessControlContext.
Figure 25.1. Roles and Permissions Mapping
25.2. Permissions
Table 25.1. CacheManager Permissions
Permission | Function | Description |
---|---|---|
CONFIGURATION | defineConfiguration | Whether a new cache configuration can be defined. |
LISTEN | addListener | Whether listeners can be registered against a cache manager. |
LIFECYCLE | stop, start | Whether the cache manager can be stopped or started respectively. |
ALL | | A convenience permission which includes all of the above. |
Table 25.2. Cache Permissions
Permission | Function | Description |
---|---|---|
READ | get, contains | Whether entries can be retrieved from the cache. |
WRITE | put, putIfAbsent, replace, remove, evict | Whether data can be written/replaced/removed/evicted from the cache. |
EXEC | distexec, mapreduce | Whether code execution can be run against the cache. |
LISTEN | addListener | Whether listeners can be registered against a cache. |
BULK_READ | keySet, values, entrySet,query | Whether bulk retrieve operations can be executed. |
BULK_WRITE | clear, putAll | Whether bulk write operations can be executed. |
LIFECYCLE | start, stop | Whether a cache can be started / stopped. |
ADMIN | getVersion, addInterceptor*, removeInterceptor, getInterceptorChain, getEvictionManager, getComponentRegistry, getDistributionManager, getAuthorizationManager, evict, getRpcManager, getCacheConfiguration, getCacheManager, getInvocationContextContainer, setAvailability, getDataContainer, getStats, getXAResource | Whether access to the underlying components/internal structures is allowed. |
ALL | | A convenience permission which includes all of the above. |
ALL_READ | | Combines READ and BULK_READ. |
ALL_WRITE | | Combines WRITE and BULK_WRITE. |
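The convenience permissions in the table can be read as simple set unions. The sketch below is local to this guide (the enum and expansion helper are invented for illustration, not Infinispan code), showing how ALL, ALL_READ, and ALL_WRITE expand into concrete permissions:

```java
import java.util.EnumSet;

// Toy model of the cache permission table above; the enum and the
// expansion logic are illustration only, not part of the Infinispan API.
class CachePermissions {
    enum Perm { READ, WRITE, EXEC, LISTEN, BULK_READ, BULK_WRITE, LIFECYCLE, ADMIN }

    // Expands a named permission into the concrete permissions it grants.
    static EnumSet<Perm> expand(String name) {
        switch (name) {
            case "ALL":       return EnumSet.allOf(Perm.class);            // everything
            case "ALL_READ":  return EnumSet.of(Perm.READ, Perm.BULK_READ);
            case "ALL_WRITE": return EnumSet.of(Perm.WRITE, Perm.BULK_WRITE);
            default:          return EnumSet.of(Perm.valueOf(name));
        }
    }

    public static void main(String[] args) {
        // A role granted "ALL_READ ALL_WRITE" covers get and putAll,
        // but not administrative operations such as getStats.
        EnumSet<Perm> supervisor = expand("ALL_READ");
        supervisor.addAll(expand("ALL_WRITE"));
        System.out.println(supervisor.contains(Perm.BULK_WRITE)); // true
        System.out.println(supervisor.contains(Perm.ADMIN));      // false
    }
}
```

This mirrors how the supervisor role in the declarative example later in this chapter combines ALL_READ and ALL_WRITE without gaining ADMIN access.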
Note
25.3. Role Mapping
To convert the Principals in a Subject into a set of roles used for authorization, a suitable PrincipalRoleMapper must be specified in the global configuration. Red Hat JBoss Data Grid ships with three mappers, and also allows you to provide a custom mapper.
Table 25.3. Mappers
Mapper Name | Java | XML | Description |
---|---|---|---|
IdentityRoleMapper | org.infinispan.security.impl.IdentityRoleMapper | <identity-role-mapper /> | Uses the Principal name as the role name. |
CommonNameRoleMapper | org.infinispan.security.impl.CommonRoleMapper | <common-name-role-mapper /> | If the Principal name is a Distinguished Name (DN), this mapper extracts the Common Name (CN) and uses it as a role name. For example the DN cn=managers,ou=people,dc=example,dc=com will be mapped to the role managers . |
ClusterRoleMapper | org.infinispan.security.impl.ClusterRoleMapper | <cluster-role-mapper /> | Uses the ClusterRegistry to store principal to role mappings. This allows the use of the CLI’s GRANT and DENY commands to add/remove roles to a Principal. |
Custom Role Mapper | | <custom-role-mapper class="a.b.c" /> | Supply the fully-qualified class name of an implementation of org.infinispan.security.impl.PrincipalRoleMapper. |
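The common-name mapping described in the table can be sketched as a standalone method. This is illustration only — the real implementation is org.infinispan.security.impl.CommonNameRoleMapper and its contract differs in detail — but it shows the DN-to-CN extraction the table describes:

```java
import java.util.Collections;
import java.util.Set;

// Standalone sketch of the CommonNameRoleMapper behavior from the table:
// if the principal name is a Distinguished Name, extract the CN and use
// it as the role name; otherwise use the principal name unchanged.
class CommonNameMapperSketch {
    static Set<String> principalToRoles(String principalName) {
        String first = principalName.split(",")[0].trim();
        if (first.regionMatches(true, 0, "cn=", 0, 3)) {
            return Collections.singleton(first.substring(3));
        }
        return Collections.singleton(principalName);
    }

    public static void main(String[] args) {
        // The DN from the table maps to the role "managers".
        System.out.println(principalToRoles("cn=managers,ou=people,dc=example,dc=com"));
    }
}
```

A custom mapper registered via <custom-role-mapper class="..."/> would implement the same principal-to-roles conversion against whatever naming scheme your principals use.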
25.4. Configuring Authentication and Role Mapping using Login Modules
When using a login-module for querying roles from LDAP, you must implement your own mapping of Principals to Roles, as custom classes are in use. An example implementation of this conversion is found in the JBoss Data Grid Developer Guide, while a declarative configuration example is below:
Example 25.1. Example of LDAP Login Module Configuration
<security-domain name="ispn-secure" cache-type="default">
  <authentication>
    <login-module code="org.jboss.security.auth.spi.LdapLoginModule" flag="required">
      <module-option name="java.naming.factory.initial" value="com.sun.jndi.ldap.LdapCtxFactory"/>
      <module-option name="java.naming.provider.url" value="ldap://localhost:389"/>
      <module-option name="java.naming.security.authentication" value="simple"/>
      <module-option name="principalDNPrefix" value="uid="/>
      <module-option name="principalDNSuffix" value=",ou=People,dc=infinispan,dc=org"/>
      <module-option name="rolesCtxDN" value="ou=Roles,dc=infinispan,dc=org"/>
      <module-option name="uidAttributeID" value="member"/>
      <module-option name="matchOnUserDN" value="true"/>
      <module-option name="roleAttributeID" value="cn"/>
      <module-option name="roleAttributeIsDN" value="false"/>
      <module-option name="searchScope" value="ONELEVEL_SCOPE"/>
    </login-module>
  </authentication>
</security-domain>
Example 25.2. Example of Login Module Configuration
<security-domain name="krb-admin" cache-type="default">
  <authentication>
    <login-module code="Kerberos" flag="required">
      <module-option name="useKeyTab" value="true"/>
      <module-option name="principal" value="admin@INFINISPAN.ORG"/>
      <module-option name="keyTab" value="${basedir}/keytab/admin.keytab"/>
    </login-module>
  </authentication>
</security-domain>
Important
25.5. Configuring Red Hat JBoss Data Grid for Authorization
The following is an example configuration for authorization at the CacheManager level:
Example 25.3. CacheManager Authorization (Declarative Configuration)
<cache-container name="local" default-cache="default">
  <security>
    <authorization>
      <identity-role-mapper />
      <role name="admin" permissions="ALL"/>
      <role name="reader" permissions="READ"/>
      <role name="writer" permissions="WRITE"/>
      <role name="supervisor" permissions="ALL_READ ALL_WRITE"/>
    </authorization>
  </security>
</cache-container>
This configuration determines:
- whether to use authorization.
- a class which will map principals to a set of roles.
- a set of named roles and the permissions they represent.
Roles may be applied on a cache-per-cache basis, using the roles defined at the cache-container level, as follows:
Example 25.4. Defining Roles
<local-cache name="secured">
  <security>
    <authorization roles="admin reader writer supervisor"/>
  </security>
</local-cache>
Important
Important
SecurityException
.
25.6. Authorization Using a SecurityManager
Red Hat JBoss Data Grid can perform authorization using a SecurityManager for basic cache operations. In Library mode, a SecurityManager may also be used to perform some of the more complex tasks, such as distexec and query, among others.
Enable the SecurityManager in your JVM using one of the following methods:
java -Djava.security.manager ...
System.setSecurityManager(new SecurityManager());
A policy file defines the permissions that the SecurityManager examines when an application performs an action. If the action is allowed by the policy file, then the SecurityManager will permit the action to take place; however, if the action is not allowed by the policy, then the SecurityManager denies that action.
// If the code is signed by "admin", grant it read/write access to all files
grant signedBy "admin" {
    permission java.io.FilePermission "/*", "read,write";
};

// Grant everyone read permissions on specific environment variables:
grant {
    permission java.util.PropertyPermission "java.home", "read";
    permission java.util.PropertyPermission "java.class.path", "read";
    permission java.util.PropertyPermission "java.vendor", "read";
};

// Grant a specific codebase, example.jar, read and write access to "/tmp/*"
grant codeBase "file:///path/to/example.jar" {
    permission java.io.FilePermission "/tmp/*", "read,write";
};
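Whether a grant in the policy covers a requested action is decided by the implies() method of the standard java.security permission classes. The following demonstration uses only the JDK permission classes (no policy file or SecurityManager involved) to show how those checks evaluate:

```java
import java.io.FilePermission;
import java.util.PropertyPermission;

// Demonstrates how granted permissions are matched against requested
// ones via implies(), which is the check the security manager performs.
class PolicyImpliesDemo {
    public static void main(String[] args) {
        PropertyPermission granted = new PropertyPermission("java.home", "read");
        // An exact match on name and action is implied.
        System.out.println(granted.implies(new PropertyPermission("java.home", "read")));  // true
        // "write" was never granted.
        System.out.println(granted.implies(new PropertyPermission("java.home", "write"))); // false

        FilePermission tmp = new FilePermission("/tmp/*", "read,write");
        // "/tmp/*" covers files directly inside /tmp ...
        System.out.println(tmp.implies(new FilePermission("/tmp/data.txt", "read")));      // true
        // ... but not nested subdirectories; "/tmp/-" would be needed for that.
        System.out.println(tmp.implies(new FilePermission("/tmp/sub/data.txt", "read"))); // false
    }
}
```

This is why the choice between "*" and "-" in a FilePermission target matters when writing policy entries like the example above.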
25.7. Security Manager in Java
25.7.1. About the Java Security Manager
The Java Security Manager is a class that manages the external boundary of the Java Virtual Machine (JVM) sandbox, controlling how code executing within the JVM can interact with resources outside the JVM. When the Java Security Manager is activated, the Java API checks with the security manager for approval before executing a wide range of potentially unsafe operations.
25.7.2. About Java Security Manager Policies
A security policy is a set of defined permissions for different classes of code. The Java Security Manager compares actions requested by applications against the security policy. If an action is allowed by the policy, the Security Manager will permit that action to take place. If the action is not allowed by the policy, the Security Manager will deny that action. The security policy can define permissions based on the location of code, on the code's signature, or based on the subject's principals.
java.security.manager
and java.security.policy
.
A security policy's entry consists of the following configuration elements, which are connected to the policytool:
- CodeBase
- The URL location (excluding the host and domain information) where the code originates from. This parameter is optional.
- SignedBy
- The alias used in the keystore to reference the signer whose private key was used to sign the code. This can be a single value or a comma-separated list of values. This parameter is optional. If omitted, presence or lack of a signature has no impact on the Java Security Manager.
- Principals
- A list of principal_type/principal_name pairs, which must be present within the executing thread's principal set. The Principals entry is optional. If it is omitted, it signifies that the principals of the executing thread will have no impact on the Java Security Manager.
- Permissions
- A permission is the access which is granted to the code. Many permissions are provided as part of the Java Enterprise Edition 6 (Java EE 6) specification. This document only covers additional permissions which are provided by JBoss EAP 6.
Important
25.7.3. Write a Java Security Manager Policy
An application called policytool is included with most JDK and JRE distributions for the purpose of creating and editing Java Security Manager security policies. Detailed information about policytool is linked from http://docs.oracle.com/javase/6/docs/technotes/tools/.
Procedure 25.1. Setup a new Java Security Manager Policy
Start policytool.
Start the policytool tool in one of the following ways.
Red Hat Enterprise Linux
From your GUI or a command prompt, run /usr/bin/policytool.
Microsoft Windows Server
Run policytool.exe from your Start menu or from the bin\ directory of your Java installation. The location can vary.
Create a policy.
To create a policy, select Add Policy Entry. Add the parameters you need, then click Done.
Edit an existing policy.
Select the policy from the list of existing policies, and select the Edit Policy Entry button. Edit the parameters as needed.
Delete an existing policy.
Select the policy from the list of existing policies, and select the Remove Policy Entry button.
25.7.4. Run Red Hat JBoss Data Grid Server Within the Java Security Manager
Red Hat JBoss Data Grid Server is started using the standalone.sh script. The following procedure guides you through the steps of configuring your instance to run within a Java Security Manager policy.
Prerequisites
- Before you follow this procedure, you need to write a security policy using the policytool command, which is included with your Java Development Kit (JDK). This procedure assumes that your policy is located at JDG_HOME/bin/server.policy. As an alternative, write the security policy using any text editor and manually save it as JDG_HOME/bin/server.policy.
- The JBoss Data Grid server must be completely stopped before you edit any configuration files.
Procedure 25.2. Configure the Security Manager for JBoss Data Grid Server
Open the configuration file.
Open the configuration file for editing. The location of this file is listed below by OS. Note that this is not the executable file used to start the server, but a configuration file that contains runtime parameters.
- For Linux: JDG_HOME/bin/standalone.conf
- For Windows: JDG_HOME\bin\standalone.conf.bat
Add the Java options to the file.
To ensure the Java options are used, add them to the code block that begins with:

if [ "x$JAVA_OPTS" = "x" ]; then

You can modify the -Djava.security.policy value to specify the exact location of your security policy. It should go onto one line only, with no line break. Using == when setting the -Djava.security.policy property specifies that the security manager will use only the specified policy file. Using = specifies that the security manager will use the specified policy combined with the policy set in the policy.url section of JAVA_HOME/lib/security/java.security.
Important
JBoss Enterprise Application Platform releases from 6.2.2 onwards require that the system property jboss.modules.policy-permissions is set to true.
Example 25.5. standalone.conf

JAVA_OPTS="$JAVA_OPTS -Djava.security.manager -Djava.security.policy==$PWD/server.policy -Djboss.home.dir=$JBOSS_HOME -Djboss.modules.policy-permissions=true"

Example 25.6. standalone.conf.bat

set "JAVA_OPTS=%JAVA_OPTS% -Djava.security.manager -Djava.security.policy==\path\to\server.policy -Djboss.home.dir=%JBOSS_HOME% -Djboss.modules.policy-permissions=true"
Start the server.
Start the server as normal.
25.8. Data Security for Remote Client Server Mode
25.8.1. About Security Realms
- ManagementRealm stores authentication information for the Management API, which provides the functionality for the Management CLI and web-based Management Console. It provides an authentication system for managing JBoss Data Grid Server itself. You could also use the ManagementRealm if your application needed to authenticate with the same business rules you use for the Management API.
- ApplicationRealm stores user, password, and role information for Web Applications and EJBs.
- REALM-users.properties stores usernames and hashed passwords.
- REALM-roles.properties stores user-to-role mappings.
- mgmt-groups.properties stores the user-to-role mapping file for the ManagementRealm.
The properties files are stored in the standalone/configuration/ directories. The files are written simultaneously by the add-user.sh or add-user.bat command. When you run the command, the first decision you make is which realm to add your new user to.
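The hashed passwords in REALM-users.properties follow the HTTP Digest HA1 scheme — hex(MD5("username:realm:password")) — which is why the realm name is part of each stored entry. The sketch below reproduces that format for illustration; treat the exact scheme as an assumption and always create real entries with the add-user script:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of a REALM-users.properties entry: the value stored is
// hex(MD5("username:realm:password")), not the plaintext password,
// so the same password hashes differently in different realms.
class RealmHashSketch {
    static String userPropertyLine(String user, String realm, String password) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(
                    (user + ":" + realm + ":" + password).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));  // lowercase hex, 32 chars total
            }
            return user + "=" + hex;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 unavailable", e);
        }
    }

    public static void main(String[] args) {
        // Same user and password, different realm: different stored hash.
        System.out.println(userPropertyLine("appuser", "ApplicationRealm", "s3cret"));
        System.out.println(userPropertyLine("appuser", "ManagementRealm", "s3cret"));
    }
}
```

One consequence of binding the hash to the realm name is that renaming a realm invalidates every stored entry, which is why the properties files must be regenerated rather than edited by hand.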
25.8.2. Add a New Security Realm
Run the Management CLI.
Start thecli.sh
orcli.bat
command and connect to the server.Create the new security realm itself.
Run the following command to create a new security realm namedMyDomainRealm
on a domain controller or a standalone server./host=master/core-service=management/security-realm=MyDomainRealm:add()
Create the references to the properties file which will store information about the new realm's users.
Run the below command to define the location of the new security realm's properties file; this file contains information regarding the users of this security realm. The following command references a file namedmyfile.properties
in thejboss.server.config.dir
.Note
The newly-created properties file is not managed by the includedadd-user.sh
andadd-user.bat
scripts. It must be managed externally./host=master/core-service=management/security-realm=MyDomainRealm/authentication=properties:add(path="myfile.properties",relative-to="jboss.server.config.dir")
Reload the server
Reload the server so the changes will take effect.:reload
The new security realm is created. When you add users and roles to this new realm, the information will be stored in a separate file from the default security realms. You can manage this new file using your own applications or procedures.
25.8.3. Add a User to a Security Realm

Run the add-user.sh or add-user.bat command.
Open a terminal and change directories to the JDG_HOME/bin/ directory. If you run Red Hat Enterprise Linux or another UNIX-like operating system, run add-user.sh. If you run Microsoft Windows Server, run add-user.bat.

Choose whether to add a Management User or Application User.
For this procedure, type b to add an Application User.

Choose the realm the user will be added to.
By default, the only available realms are the ManagementRealm and ApplicationRealm; however, if a custom realm has been added, its name may be entered instead.

Type the username, password, and roles, when prompted.
Type the desired username, password, and optional roles when prompted. Verify your choice by typing yes, or type no to cancel the changes. The changes are written to each of the properties files for the security realm.
25.8.4. Configuring Security Realms Declaratively
A security realm contains an authentication and an authorization section.
Example 25.7. Configuring Security Realms Declaratively
<security-realms>
  <security-realm name="ManagementRealm">
    <authentication>
      <local default-user="$local" skip-group-loading="true"/>
      <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/>
    </authentication>
    <authorization map-groups-to-roles="false">
      <properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"/>
    </authorization>
  </security-realm>
  <security-realm name="ApplicationRealm">
    <authentication>
      <local default-user="$local" allowed-users="*" skip-group-loading="true"/>
      <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
    </authentication>
    <authorization>
      <properties path="application-roles.properties" relative-to="jboss.server.config.dir"/>
    </authorization>
  </security-realm>
</security-realms>
The server-identities parameter can also be used to specify certificates.
25.8.5. Loading Roles from LDAP for Authorization (Remote Client-Server Mode)
A user entity may map the groups it belongs to through memberOf attributes; a group entity may map which users belong to it through uniqueMember attributes; or both mappings may be maintained by the LDAP server.
By default, the user name search performed during authentication is reused during authorization when the force attribute is set to false. When force is true, the search is performed again during authorization (while loading groups). This is typically done when different servers perform authentication and authorization.
<authorization>
  <ldap connection="...">
    <!-- OPTIONAL -->
    <username-to-dn force="true">
      <!-- Only one of the following. -->
      <username-is-dn />
      <username-filter base-dn="..." recursive="..." user-dn-attribute="..." attribute="..." />
      <advanced-filter base-dn="..." recursive="..." user-dn-attribute="..." filter="..." />
    </username-to-dn>
    <group-search group-name="..." iterative="..." group-dn-attribute="..." group-name-attribute="...">
      <!-- One of the following -->
      <group-to-principal base-dn="..." recursive="..." search-by="...">
        <membership-filter principal-attribute="..." />
      </group-to-principal>
      <principal-to-group group-attribute="..." />
    </group-search>
  </ldap>
</authorization>
Important
The force attribute is required, even when set to the default value of false.
username-to-dn
The username-to-dn element specifies how to map the user name to the distinguished name of the user's entry in the LDAP directory. This element is only required when both of the following are true:
- The authentication and authorization steps are against different LDAP servers.
- The group search uses the distinguished name.
- 1:1 username-to-dn
- This specifies that the user name entered by the remote user is the user's distinguished name.
<username-to-dn force="false">
  <username-is-dn />
</username-to-dn>
This defines a 1:1 mapping and there is no additional configuration.
- username-filter
- The next option is very similar to the simple option described above for the authentication step. A specified attribute is searched for a match against the supplied user name.
<username-to-dn force="true">
  <username-filter base-dn="dc=people,dc=harold,dc=example,dc=com" recursive="false" attribute="sn" user-dn-attribute="dn" />
</username-to-dn>
The attributes that can be set here are:
- base-dn: The distinguished name of the context where the search begins.
- recursive: Whether the search extends to sub-contexts. Defaults to false.
- attribute: The attribute of the user's entry to match against the supplied user name. Defaults to uid.
- user-dn-attribute: The attribute to read to obtain the user's distinguished name. Defaults to dn.
- advanced-filter
- The final option is to specify an advanced filter; as in the authentication section, this is an opportunity to use a custom filter to locate the user's distinguished name.
<username-to-dn force="true">
  <advanced-filter base-dn="dc=people,dc=harold,dc=example,dc=com" recursive="false" filter="sAMAccountName={0}" user-dn-attribute="dn" />
</username-to-dn>
For the attributes that match those in the username-filter example, the meaning and default values are the same. There is one new attribute:
- filter: Custom filter used to search for a user's entry, where the user name will be substituted in the {0} placeholder.
Important
The XML must remain valid after the filter is defined, so if any special characters such as & are used, ensure the proper escaped form is used: for example, &amp; for the & character.
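The placeholder substitution and the escaping rule above can be sketched as follows (Python used purely for illustration; the filter template and user name are hypothetical, not values from this guide):

```python
from xml.sax.saxutils import escape

# An LDAP search filter template as it might appear in advanced-filter;
# the server substitutes the supplied user name into the {0} placeholder.
filter_template = "(&(sAMAccountName={0})(objectClass=user))"
ldap_filter = filter_template.replace("{0}", "jsmith")

# The raw filter contains '&', which is illegal inside an XML attribute;
# escape() yields the form that must appear in the configuration file.
xml_safe = escape(filter_template)

print(ldap_filter)  # (&(sAMAccountName=jsmith)(objectClass=user))
print(xml_safe)     # the same template with & written as &amp;
```

This is only a sketch of the substitution and escaping semantics; the server performs both steps internally.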
The Group Search
Example 25.8. Principal to Group - LDIF example.
This example shows a user TestUserOne who is a member of GroupOne; GroupOne is in turn a member of GroupFive. The group membership is shown by the use of a memberOf attribute, which is set to the distinguished name of the group of which the user (or group) is a member.
A user may have multiple memberOf attributes set, one for each group of which the user is directly a member.
dn: uid=TestUserOne,ou=users,dc=principal-to-group,dc=example,dc=org
objectClass: extensibleObject
objectClass: top
objectClass: groupMember
objectClass: inetOrgPerson
objectClass: uidObject
objectClass: person
objectClass: organizationalPerson
cn: Test User One
sn: Test User One
uid: TestUserOne
distinguishedName: uid=TestUserOne,ou=users,dc=principal-to-group,dc=example,dc=org
memberOf: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org
memberOf: uid=Slashy/Group,ou=groups,dc=principal-to-group,dc=example,dc=org
userPassword:: e1NTSEF9WFpURzhLVjc4WVZBQUJNbEI3Ym96UVAva0RTNlFNWUpLOTdTMUE9PQ==

dn: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org
objectClass: extensibleObject
objectClass: top
objectClass: groupMember
objectClass: group
objectClass: uidObject
uid: GroupOne
distinguishedName: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org
memberOf: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org

dn: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org
objectClass: extensibleObject
objectClass: top
objectClass: groupMember
objectClass: group
objectClass: uidObject
uid: GroupFive
distinguishedName: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org
Example 25.9. Group to Principal - LDIF Example
This example shows the same user TestUserOne, who is a member of GroupOne, which is in turn a member of GroupFive; in this case, however, a uniqueMember attribute on the group entry provides the cross reference to the user.
dn: uid=TestUserOne,ou=users,dc=group-to-principal,dc=example,dc=org
objectClass: top
objectClass: inetOrgPerson
objectClass: uidObject
objectClass: person
objectClass: organizationalPerson
cn: Test User One
sn: Test User One
uid: TestUserOne
userPassword:: e1NTSEF9SjR0OTRDR1ltaHc1VVZQOEJvbXhUYjl1dkFVd1lQTmRLSEdzaWc9PQ==

dn: uid=GroupOne,ou=groups,dc=group-to-principal,dc=example,dc=org
objectClass: top
objectClass: groupOfUniqueNames
objectClass: uidObject
cn: Group One
uid: GroupOne
uniqueMember: uid=TestUserOne,ou=users,dc=group-to-principal,dc=example,dc=org

dn: uid=GroupFive,ou=subgroups,ou=groups,dc=group-to-principal,dc=example,dc=org
objectClass: top
objectClass: groupOfUniqueNames
objectClass: uidObject
cn: Group Five
uid: GroupFive
uniqueMember: uid=TestUserFive,ou=users,dc=group-to-principal,dc=example,dc=org
uniqueMember: uid=GroupOne,ou=groups,dc=group-to-principal,dc=example,dc=org
General Group Searching
<group-search group-name="..." iterative="..." group-dn-attribute="..." group-name-attribute="...">
  ...
</group-search>
- group-name: Specifies the form used for the group name returned as the list of groups of which the user is a member. This can be either the simple form of the group name or the group's distinguished name. If the distinguished name is required, set this attribute to DISTINGUISHED_NAME. Defaults to SIMPLE.
- iterative: Indicates whether, after identifying the groups a user is a member of, the search should also run iteratively on those groups to identify which groups they in turn belong to. If iterative searching is enabled, the search continues until it reaches a group that is not a member of any other groups or a cycle is detected. Defaults to false.
Important
Cyclic group membership is not a problem. A record of each search is kept, so groups that have already been searched are not searched again.
- group-dn-attribute: The attribute on a group entry that holds its distinguished name. Defaults to dn.
- group-name-attribute: The attribute on a group entry that holds its simple name. Defaults to uid.
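The iterative search described above amounts to walking the group membership graph with cycle detection. A minimal sketch, assuming a made-up in-memory membership table (a real server issues one LDAP search per step):

```python
# Maps a group DN to the DNs of the groups it is directly a member of.
# Hypothetical data mirroring the GroupOne -> GroupFive example above.
MEMBER_OF = {
    "uid=GroupOne,ou=groups,dc=example,dc=org": ["uid=GroupFive,ou=groups,dc=example,dc=org"],
    "uid=GroupFive,ou=groups,dc=example,dc=org": [],
}

def resolve_groups(direct_groups, iterative=True):
    """Return all groups, following nested membership when iterative=True."""
    found, pending = set(), list(direct_groups)
    while pending:
        group = pending.pop()
        if group in found:          # already searched: duplicate or cycle
            continue
        found.add(group)
        if iterative:
            pending.extend(MEMBER_OF.get(group, []))
    return found

groups = resolve_groups(["uid=GroupOne,ou=groups,dc=example,dc=org"])
# groups now contains both GroupOne and GroupFive
```

With iterative="false" only the directly assigned groups would be returned.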
Example 25.10. Principal to Group Example Configuration
This configuration loads a user's groups from the memberOf attribute on the user.
<authorization>
  <ldap connection="LocalLdap">
    <username-to-dn>
      <username-filter base-dn="ou=users,dc=principal-to-group,dc=example,dc=org" recursive="false" attribute="uid" user-dn-attribute="dn" />
    </username-to-dn>
    <group-search group-name="SIMPLE" iterative="true" group-dn-attribute="dn" group-name-attribute="uid">
      <principal-to-group group-attribute="memberOf" />
    </group-search>
  </ldap>
</authorization>
The principal-to-group element has been added with a single attribute:
- group-attribute: The name of the attribute on the user entry that matches the distinguished name of the group the user is a member of. Defaults to memberOf.
Example 25.11. Group to Principal Example Configuration
<authorization>
  <ldap connection="LocalLdap">
    <username-to-dn>
      <username-filter base-dn="ou=users,dc=group-to-principal,dc=example,dc=org" recursive="false" attribute="uid" user-dn-attribute="dn" />
    </username-to-dn>
    <group-search group-name="SIMPLE" iterative="true" group-dn-attribute="dn" group-name-attribute="uid">
      <group-to-principal base-dn="ou=groups,dc=group-to-principal,dc=example,dc=org" recursive="true" search-by="DISTINGUISHED_NAME">
        <membership-filter principal-attribute="uniqueMember" />
      </group-to-principal>
    </group-search>
  </ldap>
</authorization>
Here the group-to-principal element is added. It defines how searches for groups that reference the user entry are performed. The following attributes are set:
- base-dn: The distinguished name of the context where the search begins.
- recursive: Whether sub-contexts are also searched. Defaults to false.
- search-by: The form of the role name used in searches. Valid values are SIMPLE and DISTINGUISHED_NAME. Defaults to DISTINGUISHED_NAME.
Within group-to-principal, the membership-filter element has a single attribute:
- principal-attribute: The name of the attribute on the group entry that references the user entry. Defaults to member.
25.9. Securing Interfaces
25.9.1. Hot Rod Interface Security
25.9.1.1. Publish Hot Rod Endpoints as a Public Interface
To publish the Hot Rod endpoint as a public interface, change the value of the interface parameter in the socket-binding element from management to public as follows:
<socket-binding name="hotrod" interface="public" port="11222" />
25.9.1.2. Encryption of communication between Hot Rod Server and Hot Rod client
Procedure 25.3. Secure Hot Rod Using SSL/TLS
Generate a Keystore
Create a Java Keystore using the keytool application distributed with the JDK and add your certificate to it. The certificate can be either self-signed or obtained from a trusted CA, depending on your security policy.

Place the Keystore in the Configuration Directory
Put the keystore in the ~/JDG_HOME/standalone/configuration directory with the standalone-hotrod-ssl.xml file from the ~/JDG_HOME/docs/examples/configs directory.

Declare an SSL Server Identity
Declare an SSL server identity within a security realm in the management section of the configuration file. The SSL server identity must specify the path to a keystore and its secret key.

<server-identities>
  <ssl protocol="...">
    <keystore path="..." relative-to="..." keystore-password="${VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::ENCRYPTED_VALUE}" />
  </ssl>
  <secret value="..." />
</server-identities>
See Section 25.9.1.4.4, “Configure Hot Rod Authentication (X.509)” for details about these parameters.

Add the Security Element
Add the security element to the Hot Rod connector as follows:

<hotrod-connector socket-binding="hotrod" cache-container="local">
  <encryption ssl="true" security-realm="ApplicationRealm" require-ssl-client-auth="false" />
</hotrod-connector>
Server Authentication of Certificate
If you require the server to perform authentication of the client certificate, create a truststore that contains the valid client certificates and set the require-ssl-client-auth attribute to true.

Start the Server
Start the server using the following:

bin/standalone.sh -c standalone-hotrod-ssl.xml

This will start a server with a Hot Rod endpoint on port 11222. This endpoint will only accept SSL connections.
25.9.1.3. Securing Hot Rod to LDAP Server using SSL
The Hot Rod server may authenticate clients using a PLAIN username/password. When the username/password is checked against credentials in LDAP, a secure connection from the Hot Rod server to the LDAP server is also required. To enable a connection from the Hot Rod server to LDAP via SSL, a security realm must be defined as follows:
Example 25.12. Hot Rod Client Authentication to LDAP Server
<management>
  <security-realms>
    <security-realm name="LdapSSLRealm">
      <authentication>
        <truststore path="ldap.truststore" relative-to="jboss.server.config.dir" keystore-password="${VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::ENCRYPTED_VALUE}" />
      </authentication>
    </security-realm>
  </security-realms>
  <outbound-connections>
    <ldap name="LocalLdap" url="ldaps://localhost:10389" search-dn="uid=wildfly,dc=simple,dc=wildfly,dc=org" search-credential="secret" security-realm="LdapSSLRealm" />
  </outbound-connections>
</management>
25.9.1.4. User Authentication over Hot Rod Using SASL
- PLAIN is the least secure mechanism because credentials are transported in plain text. However, it is also the simplest mechanism to implement. It can be used in conjunction with encryption (SSL) for additional security.
- DIGEST-MD5 is a mechanism that hashes the credentials before transporting them. As a result, it is more secure than the PLAIN mechanism.
- GSSAPI is a mechanism that uses Kerberos tickets. It requires a correctly configured Kerberos Domain Controller (for example, Microsoft Active Directory).
- EXTERNAL is a mechanism that obtains the required credentials from the underlying transport (for example, from an X.509 client certificate) and therefore requires client certificate encryption to work correctly.
25.9.1.4.1. Configure Hot Rod Authentication (GSSAPI/Kerberos)
Procedure 25.4. Configure SASL GSSAPI/Kerberos Authentication - Server-side Configuration
- Define a Kerberos security login module using the security domain subsystem:
<system-properties>
  <property name="java.security.krb5.conf" value="/tmp/infinispan/krb5.conf"/>
  <property name="java.security.krb5.debug" value="true"/>
  <property name="jboss.security.disable.secdomain.option" value="true"/>
</system-properties>

<security-domain name="infinispan-server" cache-type="default">
  <authentication>
    <login-module code="Kerberos" flag="required">
      <module-option name="debug" value="true"/>
      <module-option name="storeKey" value="true"/>
      <module-option name="refreshKrb5Config" value="true"/>
      <module-option name="useKeyTab" value="true"/>
      <module-option name="doNotPrompt" value="true"/>
      <module-option name="keyTab" value="/tmp/infinispan/infinispan.keytab"/>
      <module-option name="principal" value="HOTROD/localhost@INFINISPAN.ORG"/>
    </login-module>
  </authentication>
</security-domain>
- Ensure that the cache-container has authorization roles defined, and these roles are applied in the cache's authorization block as seen in Section 25.5, “Configuring Red Hat JBoss Data Grid for Authorization”.
- Configure a Hot Rod connector as follows:
<hotrod-connector socket-binding="hotrod" cache-container="default">
  <authentication security-realm="ApplicationRealm">
    <sasl server-name="node0" mechanisms="{mechanism_name}" qop="{qop_name}" strength="{value}">
      <policy>
        <no-anonymous value="true" />
      </policy>
      <property name="com.sun.security.sasl.digest.utf8">true</property>
    </sasl>
  </authentication>
</hotrod-connector>
- The server-name attribute specifies the name that the server declares to incoming clients. The client configuration must also contain the same server name value.
- The server-context-name attribute specifies the name of the login context used to retrieve a server subject for certain SASL mechanisms (for example, GSSAPI).
- The mechanisms attribute specifies the authentication mechanism in use. See Section 25.9.1.4, “User Authentication over Hot Rod Using SASL” for a list of supported mechanisms.
- The qop attribute specifies the SASL quality of protection value for the configuration. Supported values for this attribute are auth (authentication), auth-int (authentication and integrity, meaning that messages are verified against checksums to detect tampering), and auth-conf (authentication, integrity, and confidentiality, meaning that messages are also encrypted). Multiple values can be specified, for example auth-int auth-conf. The ordering implies preference, so the first value which matches both the client's and the server's preference is chosen.
- The strength attribute specifies the SASL cipher strength. Valid values are low, medium, and high.
- The no-anonymous element within the policy element specifies whether mechanisms that accept anonymous logins are permitted. Set this value to false to permit anonymous logins and true to deny them.
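The ordering rule for multiple qop values can be sketched as follows: the server's ordered preference list is scanned and the first value the client also supports wins. This is a simplification of the SASL negotiation, shown only to illustrate the preference semantics:

```python
def negotiate_qop(server_prefs, client_supported):
    """Pick the first server-preferred qop value the client also supports."""
    for qop in server_prefs:
        if qop in client_supported:
            return qop
    return None  # no common quality-of-protection: negotiation fails

# Server prefers integrity, then confidentiality; client supports both,
# so the server's first preference is chosen.
chosen = negotiate_qop(["auth-int", "auth-conf"], {"auth-conf", "auth-int"})
print(chosen)  # auth-int
```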
- Perform the client-side configuration on each client. As the Hot Rod client is configured programmatically, information on this configuration is found in the JBoss Data Grid Developer Guide.
25.9.1.4.2. Configure Hot Rod Authentication (MD5)
Procedure 25.5. Configure Hot Rod Authentication (MD5)
- Set up the Hot Rod connector configuration by adding the sasl element to the authentication element (for details on the authentication element, see Section 25.8.4, “Configuring Security Realms Declaratively”) as follows:

<hotrod-connector socket-binding="hotrod" cache-container="default">
  <authentication security-realm="ApplicationRealm">
    <sasl server-name="myhotrodserver" mechanisms="DIGEST-MD5" qop="auth" />
  </authentication>
</hotrod-connector>
- The server-name attribute specifies the name that the server declares to incoming clients. The client configuration must also contain the same server name value.
- The mechanisms attribute specifies the authentication mechanism in use. See Section 25.9.1.4, “User Authentication over Hot Rod Using SASL” for a list of supported mechanisms.
- The qop attribute specifies the SASL quality of protection value for the configuration. Supported values for this attribute are auth, auth-int, and auth-conf.
- Configure each client to be connected to the Hot Rod connector. As this step is performed programmatically, instructions are found in JBoss Data Grid's Developer Guide.
25.9.1.4.3. Configure Hot Rod Using LDAP/Active Directory
<security-realms>
  <security-realm name="ApplicationRealm">
    <authentication>
      <ldap connection="ldap_connection" recursive="true" base-dn="cn=users,dc=infinispan,dc=org">
        <username-filter attribute="cn" />
      </ldap>
    </authentication>
  </security-realm>
</security-realms>

<outbound-connections>
  <ldap name="ldap_connection" url="ldap://my_ldap_server" search-dn="CN=test,CN=Users,DC=infinispan,DC=org" search-credential="Test_password"/>
</outbound-connections>
- The security-realm element's name parameter specifies the name of the security realm to reference when establishing the connection.
- The authentication element contains the authentication details.
- The ldap element specifies how LDAP searches are used to authenticate a user. First, a connection to LDAP is established and a search is conducted using the supplied user name to identify the distinguished name of the user. A subsequent connection to the server is established using the password supplied by the user. If the second connection succeeds, the authentication is a success.
  - The connection parameter specifies the name of the connection to use to connect to LDAP.
  - The (optional) recursive parameter specifies whether the filter is executed recursively. The default value for this parameter is false.
  - The base-dn parameter specifies the distinguished name of the context from which to begin the search.
  - The (optional) user-dn parameter specifies which attribute to read for the user's distinguished name after the user is located. The default value for this parameter is dn.
- The outbound-connections element specifies the name of the connection used to connect to the LDAP directory.
- The ldap element specifies the properties of the outgoing LDAP connection.
  - The name parameter specifies the unique name used to reference this connection.
  - The url parameter specifies the URL used to establish the LDAP connection.
  - The search-dn parameter specifies the distinguished name of the user to authenticate and to perform the searches.
  - The search-credential parameter specifies the password required to connect to LDAP as the search-dn.
  - The (optional) initial-context-factory parameter allows the overriding of the initial context factory. The default value of this parameter is com.sun.jndi.ldap.LdapCtxFactory.
25.9.1.4.4. Configure Hot Rod Authentication (X.509)
An X.509 certificate can be installed at the node and made available to other nodes for authentication purposes for inbound and outbound SSL connections. This is enabled using the <server-identities/> element of a security realm definition, which defines how a server appears to external applications. This element can be used to configure a password to be used when establishing a remote connection, as well as the loading of an X.509 key.
The following example shows how to declare an X.509 certificate on the node.
<security-realm name="ApplicationRealm">
  <server-identities>
    <ssl protocol="...">
      <keystore path="..." relative-to="..." keystore-password="..." alias="..." key-password="..." />
    </ssl>
  </server-identities>
  [... authentication/authorization ...]
</security-realm>
Table 25.4. <server-identities/> Options

Parameter | Mandatory/Optional | Description
---|---|---
path | Mandatory | The path to the keystore; this can be an absolute path or relative to the next attribute.
relative-to | Optional | The name of a service representing a path the keystore is relative to.
keystore-password | Mandatory | The password required to open the keystore.
alias | Optional | The alias of the entry to use from the keystore. For a keystore with multiple entries, in practice the first usable entry is used, but this should not be relied on; set the alias to guarantee which entry is used.
key-password | Optional | The password to load the key entry. If omitted, the keystore-password will be used instead.
Note
Specify a key-password as well as an alias to ensure only one key is loaded. If the key cannot be loaded with the supplied password, the server fails to load it with an error such as:

UnrecoverableKeyException: Cannot recover key
25.9.2. REST Interface Security
25.9.2.1. Publish REST Endpoints as a Public Interface
To publish the REST endpoint as a public interface, change the value of the interface parameter in the socket-binding element from management to public as follows:
<socket-binding name="http" interface="public" port="8080"/>
25.9.2.2. Enable Security for the REST Endpoint
Procedure 25.6. Enable Security for the REST Endpoint
This procedure is performed in standalone.xml:

Specify Security Parameters
Ensure that the REST endpoint specifies a valid value for the authentication element. An example configuration is below:

<subsystem xmlns="urn:infinispan:server:endpoint:8.1">
  <rest-connector socket-binding="rest" cache-container="security">
    <authentication security-realm="ApplicationRealm" auth-method="BASIC"/>
  </rest-connector>
</subsystem>
Check Security Domain Declaration
Ensure that the security subsystem contains the corresponding security-domain declaration. For details about setting up security-domain declarations, see the JBoss Enterprise Application Platform 7 documentation.

Add an Application User
Run the relevant script and enter the configuration settings to add an application user.
- Run the add-user.sh script (located in $JDG_HOME/bin).
  - On a Windows system, run the add-user.bat file (located in $JDG_HOME/bin) instead.
- When prompted about the type of user to add, select Application User (application-users.properties) by entering b.
- Accept the default value for realm (ApplicationRealm) by pressing the return key.
- Specify a username and password.
- When prompted for a group, enter REST.
- Ensure the username and application realm information is correct when prompted and enter "yes" to continue.
Verify the Created Application User
Ensure that the created application user is correctly configured.
- Check the configuration listed in the application-users.properties file (located in $JDG_HOME/standalone/configuration/). The following is an example of what the correct configuration looks like in this file:

user1=2dc3eacfed8cf95a4a31159167b936fc

- Check the configuration listed in the application-roles.properties file (located in $JDG_HOME/standalone/configuration/). The following is an example of what the correct configuration looks like in this file:

user1=REST
Test the Server
Start the server and enter the following link in a browser window to access the REST endpoint:

http://localhost:8080/rest/namedCache

Note
If testing using a GET request, a 405 response code is expected and indicates that the server was successfully authenticated.
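With BASIC authentication, a client sends the credentials as base64 of user:password in an Authorization header. A minimal sketch of building such a request (the URL matches this procedure, the credentials are illustrative, and the request is constructed but not sent):

```python
import base64
from urllib.request import Request

def basic_auth_header(username, password):
    """Build the value of an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Request object ready to be passed to urllib.request.urlopen()
# against a running, secured REST endpoint.
req = Request("http://localhost:8080/rest/namedCache",
              headers={"Authorization": basic_auth_header("user1", "s3cret")})
```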
25.9.3. Memcached Interface Security
25.9.3.1. Publish Memcached Endpoints as a Public Interface
To publish the memcached endpoint as a public interface, change the value of the interface parameter in the socket-binding element from management to public as follows:
<socket-binding name="memcached" interface="public" port="11211" />
25.10. Active Directory Authentication (Non-Kerberos)
25.11. Active Directory Authentication Using Kerberos (GSSAPI)
Procedure 25.7. Configure Kerberos Authentication for Active Directory (Library Mode)
- Configure JBoss EAP server to authenticate itself to Kerberos. This can be done by configuring a dedicated security domain, for example:
<security-domain name="ldap-service" cache-type="default">
  <authentication>
    <login-module code="Kerberos" flag="required">
      <module-option name="storeKey" value="true"/>
      <module-option name="useKeyTab" value="true"/>
      <module-option name="refreshKrb5Config" value="true"/>
      <module-option name="principal" value="ldap/localhost@INFINISPAN.ORG"/>
      <module-option name="keyTab" value="${basedir}/keytab/ldap.keytab"/>
      <module-option name="doNotPrompt" value="true"/>
    </login-module>
  </authentication>
</security-domain>
- For the security domain used for authentication to work correctly, the application must have a valid Kerberos ticket. To initiate the Kerberos ticket, you must reference another security domain using <module-option name="usernamePasswordDomain" value="krb-admin"/>. This points to the standard Kerberos login module described in Step 3.

<security-domain name="ispn-admin" cache-type="default">
  <authentication>
    <login-module code="SPNEGO" flag="requisite">
      <module-option name="password-stacking" value="useFirstPass"/>
      <module-option name="serverSecurityDomain" value="ldap-service"/>
      <module-option name="usernamePasswordDomain" value="krb-admin"/>
    </login-module>
    <login-module code="AdvancedAdLdap" flag="required">
      <module-option name="password-stacking" value="useFirstPass"/>
      <module-option name="bindAuthentication" value="GSSAPI"/>
      <module-option name="jaasSecurityDomain" value="ldap-service"/>
      <module-option name="java.naming.provider.url" value="ldap://localhost:389"/>
      <module-option name="baseCtxDN" value="ou=People,dc=infinispan,dc=org"/>
      <module-option name="baseFilter" value="(krb5PrincipalName={0})"/>
      <module-option name="rolesCtxDN" value="ou=Roles,dc=infinispan,dc=org"/>
      <module-option name="roleFilter" value="(member={1})"/>
      <module-option name="roleAttributeID" value="cn"/>
    </login-module>
  </authentication>
</security-domain>
- The security domain authentication configuration described in the previous step points to the following standard Kerberos login module:

<security-domain name="krb-admin" cache-type="default">
  <authentication>
    <login-module code="Kerberos" flag="required">
      <module-option name="useKeyTab" value="true"/>
      <module-option name="principal" value="admin@INFINISPAN.ORG"/>
      <module-option name="keyTab" value="${basedir}/keytab/admin.keytab"/>
    </login-module>
  </authentication>
</security-domain>
25.12. The Security Audit Logger
The default audit logger is org.infinispan.security.impl.DefaultAuditLogger. This logger outputs audit logs using the available logging framework (for example, JBoss Logging) and provides results at the TRACE level and the AUDIT category.
To send the AUDIT category to a log file, a JMS queue, or a database, use the appropriate log appender.
25.12.1. Configure the Security Audit Logger (Library Mode)
<infinispan>
  ...
  <global-security>
    <authorization audit-logger="org.infinispan.security.impl.DefaultAuditLogger">
      ...
    </authorization>
  </global-security>
  ...
</infinispan>
25.12.2. Configure the Security Audit Logger (Remote Client-Server Mode)
In Remote Client-Server mode, the audit logger is configured in the <authorization> element. The <authorization> element must be within the <cache-container> element in the Infinispan subsystem (in the standalone.xml configuration file).
<cache-container name="local" default-cache="default">
  <security>
    <authorization audit-logger="org.infinispan.security.impl.DefaultAuditLogger">
      <identity-role-mapper/>
      <role name="admin" permissions="ALL"/>
      <role name="reader" permissions="READ"/>
      <role name="writer" permissions="WRITE"/>
      <role name="supervisor" permissions="ALL_READ ALL_WRITE"/>
    </authorization>
  </security>
  <local-cache name="default" start="EAGER">
    <locking isolation="NONE" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
    <transaction mode="NONE"/>
    <security>
      <authorization roles="admin reader writer supervisor"/>
    </security>
  </local-cache>
  [...]
</cache-container>
Note
JBoss Data Grid also provides org.jboss.as.clustering.infinispan.subsystem.ServerAuditLogger, which sends the log messages to the server audit log. See the Management Interface Audit Logging chapter in the JBoss Enterprise Application Platform Administration and Configuration Guide for more information.
25.12.3. Custom Audit Loggers
Custom audit loggers can be written by implementing the org.infinispan.security.AuditLogger interface. If no custom logger is provided, the default logger (DefaultAuditLogger) is used.
Chapter 26. Security for Cluster Traffic
26.1. Node Authentication and Authorization (Remote Client-Server Mode)
The DIGEST-MD5 and GSSAPI mechanisms are currently supported.
Example 26.1. Configure SASL Authentication
<management>
  <security-realms>
    <!-- Additional configuration information here -->
    <security-realm name="ClusterRealm">
      <authentication>
        <properties path="cluster-users.properties" relative-to="jboss.server.config.dir"/>
      </authentication>
      <authorization>
        <properties path="cluster-roles.properties" relative-to="jboss.server.config.dir"/>
      </authorization>
    </security-realm>
  </security-realms>
  <!-- Additional configuration information here -->
</management>

<stack name="udp">
  <!-- Additional configuration information here -->
  <sasl mech="DIGEST-MD5" security-realm="ClusterRealm" cluster-role="cluster">
    <property name="client_name">node1</property>
    <property name="client_password">password</property>
  </sasl>
  <!-- Additional configuration information here -->
</stack>
In this example, nodes use the DIGEST-MD5 mechanism to authenticate against the ClusterRealm. In order to join, nodes must have the cluster role.
The cluster-role attribute determines the role all nodes must belong to in the security realm in order to JOIN or MERGE with the cluster. Unless it has been specified, the cluster-role attribute defaults to the name of the clustered <cache-container>. Each node identifies itself using the client_name property. If none is specified, the hostname on which the server is running will be used.
jboss.node.name
system property that can be overridden on the command line. For example:
$ standalone.sh -Djboss.node.name=node001
Note
26.1.1. Configure Node Authentication for Cluster Security (DIGEST-MD5)
The following example demonstrates DIGEST-MD5 with a properties-based security realm, using a dedicated realm for cluster nodes.
Example 26.2. Using the DIGEST-MD5 Mechanism
<management>
    <security-realms>
        <security-realm name="ClusterRealm">
            <authentication>
                <properties path="cluster-users.properties" relative-to="jboss.server.config.dir"/>
            </authentication>
            <authorization>
                <properties path="cluster-roles.properties" relative-to="jboss.server.config.dir"/>
            </authorization>
        </security-realm>
    </security-realms>
</management>
<subsystem xmlns="urn:infinispan:server:jgroups:8.0" default-stack="${jboss.default.jgroups.stack:udp}">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp"/>
        <protocol type="PING"/>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
        <protocol type="FD_ALL"/>
        <protocol type="pbcast.NAKACK"/>
        <protocol type="UNICAST2"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="UFC"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
        <protocol type="RSVP"/>
        <sasl security-realm="ClusterRealm" mech="DIGEST-MD5">
            <property name="client_password">...</property>
        </sasl>
    </stack>
</subsystem>
<subsystem xmlns="urn:infinispan:server:core:8.3" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default">
        <transport executor="infinispan-transport" lock-timeout="60000" stack="udp"/>
        <!-- various clustered cache definitions here -->
    </cache-container>
</subsystem>
For a cluster of nodes named node001, node002, and node003, the cluster-users.properties file will contain:
node001=/<node001passwordhash>/
node002=/<node002passwordhash>/
node003=/<node003passwordhash>/
The cluster-roles.properties file will contain:
node001=clustered
node002=clustered
node003=clustered
To populate these files, the add-user.sh script can be used:
$ add-user.sh -up cluster-users.properties -gp cluster-roles.properties -r ClusterRealm -u node001 -g clustered -p <password>
The MD5 password hash of the node must also be placed in the client_password property of the <sasl/> element.
<property name="client_password">...</property>
Note
The cluster coordinator validates each JOINing and MERGEing node's credentials against the realm before letting the node become part of the cluster view.
26.1.2. Configure Node Authentication for Cluster Security (GSSAPI/Kerberos)
When using the GSSAPI mechanism, the client_name is used as the name of a Kerberos-enabled login module defined within the security domain subsystem. For a full procedure on how to do this, see Section 25.9.1.4.1, “Configure Hot Rod Authentication (GSSAPI/Kerberos)”.
Example 26.3. Using the Kerberos Login Module
<security-domain name="krb-node0" cache-type="default">
    <authentication>
        <login-module code="Kerberos" flag="required">
            <module-option name="storeKey" value="true"/>
            <module-option name="useKeyTab" value="true"/>
            <module-option name="refreshKrb5Config" value="true"/>
            <module-option name="principal" value="jgroups/node0/clustered@INFINISPAN.ORG"/>
            <module-option name="keyTab" value="${jboss.server.config.dir}/keytabs/jgroups_node0_clustered.keytab"/>
            <module-option name="doNotPrompt" value="true"/>
        </login-module>
    </authentication>
</security-domain>
<sasl <!-- Additional configuration information here --> >
    <property name="login_module_name">
        <!-- Additional configuration information here -->
    </property>
</sasl>
The authentication section of the security realm is ignored, as the nodes will be validated against the Kerberos Domain Controller. The authorization configuration is still required, as the node principal must belong to the required cluster-role.
By default, the principal of the joining node must be in the following format: jgroups/$NODE_NAME/$CACHE_CONTAINER_NAME@REALM
26.2. Configure Node Security in Library Mode
Node security in Library mode is enabled by adding the SASL protocol to your JGroups XML configuration.
SASL relies on CallbackHandlers to obtain certain information necessary for the authentication handshake. Users must supply their own CallbackHandlers on both client and server sides.
Important
The JAAS API is only available when configuring user authentication and authorization; it is not available for node security.
Note
The CallbackHandler classes shown here are examples only, and are not contained in the Red Hat JBoss Data Grid release. Users must provide the appropriate CallbackHandler classes for their specific LDAP implementation.
Example 26.4. Setting Up SASL Authentication in JGroups
<SASL mech="DIGEST-MD5"
      client_name="node_user"
      client_password="node_password"
      server_callback_handler_class="org.example.infinispan.security.JGroupsSaslServerCallbackHandler"
      client_callback_handler_class="org.example.infinispan.security.JGroupsSaslClientCallbackHandler"
      sasl_props="com.sun.security.sasl.digest.realm=test_realm"/>
In the above example, the SASL protocol uses the DIGEST-MD5 mechanism. Each node must declare the user and password it will use when joining the cluster.
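The DIGEST-MD5 exchange that happens during the JGroups handshake can be illustrated with the JDK's own javax.security.sasl API. The sketch below is not JGroups code and all class, user, and password names are invented, but it shows the same challenge/response flow and the role each callback handler plays on the client (joining node) and server (coordinator) sides:

```java
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.AuthorizeCallback;
import javax.security.sasl.RealmCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslServer;

public class SaslHandshakeDemo {

    // Plays the role of a node's client callback handler:
    // supplies the node's name, password and realm when asked.
    static CallbackHandler clientHandler(String user, char[] password) {
        return callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) ((NameCallback) cb).setName(user);
                else if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword(password);
                else if (cb instanceof RealmCallback) ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
            }
        };
    }

    // Plays the role of the coordinator's server callback handler:
    // supplies the expected password and authorizes the node.
    static CallbackHandler serverHandler(char[] password) {
        return callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword(password);
                else if (cb instanceof RealmCallback) ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
                else if (cb instanceof AuthorizeCallback) {
                    AuthorizeCallback ac = (AuthorizeCallback) cb;
                    ac.setAuthorized(ac.getAuthenticationID().equals(ac.getAuthorizationID()));
                }
            }
        };
    }

    public static String handshake() throws Exception {
        char[] pwd = "node_password".toCharArray();
        Map<String, String> props = Map.of("com.sun.security.sasl.digest.realm", "test_realm");
        SaslServer server = Sasl.createSaslServer("DIGEST-MD5", "jgroups", "localhost",
                props, serverHandler(pwd));
        SaslClient client = Sasl.createSaslClient(new String[]{"DIGEST-MD5"}, null, "jgroups",
                "localhost", null, clientHandler("node_user", pwd));

        // DIGEST-MD5 is challenge-first: the server issues the digest challenge,
        // the client answers with its credentials, the server verifies them and
        // returns the rspauth token, which the client verifies in turn.
        byte[] challenge = server.evaluateResponse(new byte[0]);
        byte[] response = client.evaluateChallenge(challenge);
        byte[] rspAuth = server.evaluateResponse(response);
        client.evaluateChallenge(rspAuth);
        return server.isComplete() && client.isComplete() ? server.getAuthorizationID() : null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("authorized as: " + handshake());
    }
}
```

If either side's credentials do not match, evaluateResponse throws a SaslException and the node never joins, which mirrors the behavior described above.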
Important
26.2.1. Simple Authorizing Callback Handler
The SimpleAuthorizingCallbackHandler class may be used for node authentication. To enable this, set both the server_callback_handler and the client_callback_handler to org.jgroups.auth.sasl.SimpleAuthorizingCallbackHandler, as seen in the below example:
<SASL mech="DIGEST-MD5"
      client_name="node_user"
      client_password="node_password"
      server_callback_handler_class="org.jgroups.auth.sasl.SimpleAuthorizingCallbackHandler"
      client_callback_handler_class="org.jgroups.auth.sasl.SimpleAuthorizingCallbackHandler"
      sasl_props="com.sun.security.sasl.digest.realm=test_realm"/>
The SimpleAuthorizingCallbackHandler may be configured either programmatically, by passing the constructor an instance of java.util.Properties, or via standard Java system properties, set on the command line using the -DpropertyName=propertyValue notation. The following properties are available:
- sasl.credentials.properties - the path to a property file which contains principal/credential mappings represented as principal=password.
- sasl.local.principal - the name of the principal that is used to identify the local node. It must exist in the sasl.credentials.properties file.
- sasl.roles.properties - (optional) the path to a property file which contains principal/role mappings represented as principal=role1,role2,role3.
- sasl.role - (optional) if present, authorizes joining nodes only if their principal is mapped to this role.
- sasl.realm - (optional) the name of the realm to use for the SASL mechanisms that require it.
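As a sketch, the two property files and the matching system properties might look like the following. File names, principals, and passwords here are illustrative, not defaults:

```
# node-credentials.properties (principal=password)
node001=n1secret
node002=n2secret

# node-roles.properties (principal=roles)
node001=clustered
node002=clustered

# JVM start-up flags for node001
-Dsasl.credentials.properties=/path/to/node-credentials.properties
-Dsasl.local.principal=node001
-Dsasl.roles.properties=/path/to/node-roles.properties
-Dsasl.role=clustered
-Dsasl.realm=test_realm
```

With these settings, a node presents itself as node001 and is admitted only if its password matches and it carries the clustered role.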
26.2.2. Configure Node Authentication for Library Mode (DIGEST-MD5)
Two CallbackHandlers are required:
- The server_callback_handler_class is used by the coordinator.
- The client_callback_handler_class is used by the other nodes.
The following example demonstrates these CallbackHandlers.
Example 26.5. Callback Handlers
<SASL mech="DIGEST-MD5"
      client_name="node_name"
      client_password="node_password"
      client_callback_handler_class="${CLIENT_CALLBACK_HANDLER_IN_CLASSPATH}"
      server_callback_handler_class="${SERVER_CALLBACK_HANDLER_IN_CLASSPATH}"
      sasl_props="com.sun.security.sasl.digest.realm=test_realm"/>
26.2.3. Configure Node Authentication for Library Mode (GSSAPI)
When using GSSAPI, the login_module_name parameter must be specified instead of a callback handler.
The server_name must also be specified, as the client principal is constructed as jgroups/$server_name@REALM.
Example 26.6. Specifying the login module and server on the coordinator node
<SASL mech="GSSAPI"
      server_name="node0/clustered"
      login_module_name="krb-node0"
      server_callback_handler_class="org.infinispan.test.integration.security.utils.SaslPropCallbackHandler"/>
A server_callback_handler_class must be specified for node authorization. This will determine if the authenticated joining node has permission to join the cluster.
Note
The client principal is constructed as jgroups/server_name, therefore the server principal in Kerberos must also be jgroups/server_name. For example, if the server principal in Kerberos is jgroups/node1/mycache, then the server_name must be node1/mycache.
26.3. JGroups Encryption
JBoss Data Grid includes the SYM_ENCRYPT and ASYM_ENCRYPT protocols to provide encryption for cluster traffic.
Important
The ENCRYPT protocol has been deprecated and should not be used in production environments. It is recommended to use either SYM_ENCRYPT or ASYM_ENCRYPT with encrypt_entire_message set to true. When defining these protocols they should be placed directly under NAKACK2.
- SYM_ENCRYPT: Configured with a secret key in a keystore using the JCEKS store type.
- ASYM_ENCRYPT: Configured with algorithms and key sizes. In this scenario the secret key is not retrieved from the keystore, but instead generated by the coordinator and distributed to new members. Once a member joins the cluster it sends a request for the secret key to the coordinator; the coordinator responds by sending the secret key back to the new member, encrypted with the member's public key.
26.3.1. Configuring JGroups Encryption Protocols
- Standard Java properties can also be used in the configuration, and it is possible to pass the path to the JGroups configuration via the -D option during start-up.
- The default, pre-configured JGroups files are packaged in infinispan-embedded.jar; alternatively, you can create your own configuration file. See Section 30.2, “Configure JGroups (Library Mode)” for instructions on how to set up JBoss Data Grid to use custom JGroups configurations in library mode.
- In Remote Client-Server mode, the JGroups configuration is part of the main server configuration file.
To use the SYM_ENCRYPT and ASYM_ENCRYPT protocols, place them directly under NAKACK2 in the configuration file.
26.3.2. SYM_ENCRYPT: Using a Key Store
SYM_ENCRYPT
uses store type JCEKS. To generate a keystore compatible with JCEKS, use the following command line options to keytool:
$ keytool -genseckey -alias myKey -keypass changeit -storepass changeit -keyalg Blowfish -keysize 56 -keystore defaultStore.keystore -storetype JCEKS
SYM_ENCRYPT
can then be configured by adding the following information to the JGroups file used by the application.
<SYM_ENCRYPT sym_algorithm="AES"
             encrypt_entire_message="true"
             keystore_name="defaultStore.keystore"
             store_password="changeit"
             alias="myKey"/>
Note
defaultStore.keystore
must be found in the classpath.
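What SYM_ENCRYPT does with these settings can be mimicked with the JDK KeyStore API: load a JCEKS store and fetch the secret key by its alias. The following self-contained sketch (class name and passwords are illustrative) builds an in-memory JCEKS store, then reads the key back the way the protocol would at channel start:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class JceksSecretKeyDemo {
    public static SecretKey roundTrip() throws Exception {
        char[] storePass = "changeit".toCharArray();

        // Create an in-memory JCEKS keystore holding one secret key,
        // mirroring what "keytool -genseckey ... -storetype JCEKS" produces.
        SecretKey key = KeyGenerator.getInstance("Blowfish").generateKey();
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, storePass);
        ks.setEntry("myKey", new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(storePass));

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, storePass);

        // Reload the store and recover the key by alias, as SYM_ENCRYPT does
        // with keystore_name, store_password and alias.
        KeyStore reloaded = KeyStore.getInstance("JCEKS");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), storePass);
        return (SecretKey) reloaded.getKey("myKey", storePass);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip().getAlgorithm());
    }
}
```

Note that the default JKS store type cannot hold secret keys; this is why SYM_ENCRYPT requires JCEKS.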
26.3.3. ASYM_ENCRYPT: Configured with Algorithms and Key Sizes
- The secret key is generated and distributed by the coordinator.
- When a view change occurs, a peer requests the secret key by sending a key request with its own public key.
- The coordinator encrypts the secret key with the public key, and sends it back to the peer.
- The peer then decrypts and installs the key as its own secret key.
- Any further communications are encrypted and decrypted using the secret key.
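Steps 1 through 4 above can be sketched with plain JDK crypto: the coordinator wraps the generated secret key with the joining peer's public key, and the peer unwraps it with its private key. The class below is illustrative only (it is not JGroups code), using the asymmetric and symmetric defaults listed later in this chapter:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyExchangeSketch {
    public static boolean exchange() throws Exception {
        // Step 1: the coordinator generates the shared symmetric key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey sharedKey = kg.generateKey();

        // Step 2: the joining peer generates a key pair and sends its public key
        // to the coordinator along with the key request.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(512);
        KeyPair peerKeys = kpg.generateKeyPair();

        // Step 3: the coordinator wraps the secret key with the peer's public key.
        Cipher wrap = Cipher.getInstance("RSA");
        wrap.init(Cipher.WRAP_MODE, peerKeys.getPublic());
        byte[] wrapped = wrap.wrap(sharedKey);

        // Step 4: the peer unwraps it with its private key and installs it.
        Cipher unwrap = Cipher.getInstance("RSA");
        unwrap.init(Cipher.UNWRAP_MODE, peerKeys.getPrivate());
        SecretKey installed = (SecretKey) unwrap.unwrap(wrapped, "AES", Cipher.SECRET_KEY);

        // Step 5 would use the installed key for symmetric message encryption.
        return Arrays.equals(sharedKey.getEncoded(), installed.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange());
    }
}
```

Because the secret key only ever travels wrapped under the member's public key, an eavesdropper without the private key cannot recover it.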
Example 26.7. ASYM_ENCRYPT Example
...
<VERIFY_SUSPECT/>
<ASYM_ENCRYPT encrypt_entire_message="true"
              sym_keylength="128"
              sym_algorithm="AES/ECB/PKCS5Padding"
              asym_keylength="512"
              asym_algorithm="RSA"/>
<pbcast.NAKACK2/>
<UNICAST3/>
<pbcast.STABLE/>
<FRAG2/>
<AUTH auth_class="org.jgroups.auth.MD5Token"
      auth_value="chris"
      token_hash="MD5"/>
<pbcast.GMS join_timeout="2000"/>
ASYM_ENCRYPT
has been placed immediately below NAKACK2
, and encrypt_entire_message
has been enabled, indicating that the message headers will be encrypted along with the message body. This means that the NAKACK2
and UNICAST3
protocols are also encrypted. In addition, AUTH
has been included as part of the configuration, so that only authenticated nodes may request the secret key from the coordinator.
To change the secret key whenever a member leaves the cluster, set change_key_on_leave to true.
26.3.4. JGroups Encryption Configuration Parameters
The following parameters come from the ENCRYPT JGroups protocol, which both SYM_ENCRYPT and ASYM_ENCRYPT extend:
Table 26.1. ENCRYPT Configuration Parameters
Name | Description |
---|---|
asym_algorithm | Cipher engine transformation for asymmetric algorithm. Default is RSA. |
asym_keylength | Initial public/private key length. Default is 512. |
asym_provider | Cryptographic Service Provider. Default is Bouncy Castle Provider. |
encrypt_entire_message | By default only the message body is encrypted. Enabling encrypt_entire_message ensures that all headers, destination and source addresses, and the message body are encrypted. |
sym_algorithm | Cipher engine transformation for symmetric algorithm. Default is AES. |
sym_keylength | Initial key length for matching symmetric algorithm. Default is 128. |
sym_provider | Cryptographic Service Provider. Default is Bouncy Castle Provider. |
SYM_ENCRYPT
protocol parameters
Table 26.2. SYM_ENCRYPT Configuration Parameters
Name | Description |
---|---|
alias | Alias used for recovering the key. Change the default. |
key_password | Password for recovering the key. Change the default. |
keystore_name | File on classpath that contains keystore repository. |
store_password | Password used to check the integrity/unlock the keystore. Change the default. |
ASYM_ENCRYPT
protocol parameters
Table 26.3. ASYM_ENCRYPT Configuration Parameters
Name | Description |
---|---|
change_key_on_leave | When a member leaves the view, change the secret key, preventing old members from eavesdropping. |
Part XIII. Command Line Tools
- The JBoss Data Grid Library CLI. For more information, see Section 27.1, “Red Hat JBoss Data Grid Library Mode CLI”.
- The JBoss Data Grid Server CLI. For more information, see Section 27.2, “Red Hat Data Grid Server CLI”.
Chapter 27. Red Hat JBoss Data Grid CLIs
27.1. Red Hat JBoss Data Grid Library Mode CLI
27.1.1. Start the Library Mode CLI (Server)
Start the server using the standalone or domain files. For Linux, use the standalone.sh or domain.sh script, and for Windows, use the standalone.bat or domain.bat file.
27.1.2. Start the Library Mode CLI (Client)
Start the CLI client using the cli files in the bin directory. For Linux, run bin/cli.sh, and for Windows, run bin\cli.bat.
27.1.3. CLI Client Switches for the Command Line
Table 27.1. CLI Client Command Line Switches
Short Option | Long Option | Description |
---|---|---|
-c | --connect=${URL} | Connects to a running Red Hat JBoss Data Grid instance. For example, for JMX over RMI use jmx://[username[:password]]@host:port[/container[/cache]] and for JMX over JBoss Remoting use remoting://[username[:password]]@host:port[/container[/cache]] |
-f | --file=${FILE} | Read the input from the specified file rather than using interactive mode. If the value is set to - then the stdin is used as the input. |
-h | --help | Displays the help information. |
-v | --version | Displays the CLI version information. |
27.1.4. Connect to the Application
[disconnected//]> connect jmx://localhost:12000
[jmx://localhost:12000/MyCacheManager/]>
Note
The port 12000 depends on the value the JVM is started with. For example, starting the JVM with the -Dcom.sun.management.jmxremote.port=12000 command line parameter uses this port, but otherwise a random port is chosen. When the remoting protocol (remoting://localhost:9999) is used, the Red Hat JBoss Data Grid server administration port is used (the default is port 9999).
Once connected, the CLI attaches to a CacheManager. Use the cache command to select a cache before performing cache operations. The CLI supports tab completion, therefore typing cache and pressing the tab key displays a list of active caches:
[jmx://localhost:12000/MyCacheManager/]> cache
___defaultcache  namedCache
[jmx://localhost:12000/MyCacheManager/]> cache ___defaultcache
[jmx://localhost:12000/MyCacheManager/___defaultcache]>
27.2. Red Hat Data Grid Server CLI
The Server Mode CLI is used for:
- configuration
- management
- obtaining metrics
27.2.1. Start the Server Mode CLI
$ JDG_HOME/bin/cli.sh
C:\>JDG_HOME\bin\cli.bat
27.3. CLI Commands
The deny (see Section 27.3.8, “The deny Command”), grant (see Section 27.3.14, “The grant Command”), and roles (see Section 27.3.19, “The roles command”) commands are only available on the Server Mode CLI.
27.3.1. The abort Command
The abort command aborts a running batch initiated using the start command. Batching must be enabled for the specified cache. The following is a usage example:
[jmx://localhost:12000/MyCacheManager/namedCache]> start
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> abort
[jmx://localhost:12000/MyCacheManager/namedCache]> get a
null
27.3.2. The begin Command
The begin command starts a transaction. This command requires transactions enabled for the cache it targets. An example of this command's usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> begin
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> put b b
[jmx://localhost:12000/MyCacheManager/namedCache]> commit
27.3.3. The cache Command
The cache command specifies the default cache used for all subsequent operations. If invoked without any parameters, it shows the currently selected cache. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> cache ___defaultcache
[jmx://localhost:12000/MyCacheManager/___defaultcache]> cache
___defaultcache
[jmx://localhost:12000/MyCacheManager/___defaultcache]>
27.3.4. The clearcache Command
The clearcache command clears all content from the cache. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> clearcache
[jmx://localhost:12000/MyCacheManager/namedCache]> get a
null
27.3.5. The commit Command
The commit command commits changes to an ongoing transaction. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> begin
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> put b b
[jmx://localhost:12000/MyCacheManager/namedCache]> commit
27.3.6. The container Command
The container command selects the default cache container (cache manager). When invoked without any parameters, it lists all available containers. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> container
MyCacheManager OtherCacheManager
[jmx://localhost:12000/MyCacheManager/namedCache]> container OtherCacheManager
[jmx://localhost:12000/OtherCacheManager/]>
27.3.7. The create Command
The create command creates a new cache based on the configuration of an existing cache definition. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> create newCache like namedCache
[jmx://localhost:12000/MyCacheManager/namedCache]> cache newCache
[jmx://localhost:12000/MyCacheManager/newCache]>
27.3.8. The deny Command
The deny command can be used to deny roles previously assigned to a principal:
[remoting://localhost:9999]> deny supervisor to user1
Note
The deny command is only available to the JBoss Data Grid Server Mode CLI.
27.3.9. The disconnect Command
The disconnect command disconnects the currently active connection, which allows the CLI to connect to another instance. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> disconnect
[disconnected//]
27.3.10. The encoding Command
The encoding command sets a default codec to use when reading and writing entries to and from a cache. If invoked with no arguments, the currently selected codec is displayed. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> encoding
none
[jmx://localhost:12000/MyCacheManager/namedCache]> encoding --list
memcached
hotrod
none
rest
[jmx://localhost:12000/MyCacheManager/namedCache]> encoding hotrod
27.3.11. The end Command
The end command ends a running batch initiated using the start command. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> start
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> end
[jmx://localhost:12000/MyCacheManager/namedCache]> get a
a
27.3.12. The evict Command
The evict command evicts an entry associated with a specific key from the cache. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> evict a
27.3.13. The get Command
The get command shows the value associated with a specified key. For primitive types and Strings, the get command prints the default representation. For other objects, a JSON representation of the object is printed. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> get a
a
27.3.14. The grant Command
When using the ClusterRoleMapper, the principal-to-role mappings are stored within the cluster registry (a replicated cache available to all nodes). The grant command can be used to grant new roles to a principal as follows:
[remoting://localhost:9999]> grant supervisor to user1
Note
The grant command is only available to the JBoss Data Grid Server Mode CLI.
27.3.15. The info Command
The info command displays the configuration of a selected cache or container. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> info GlobalConfiguration{asyncListenerExecutor=ExecutorFactoryConfiguration{factory=org.infinispan.executors.DefaultExecutorFactory@98add58}, asyncTransportExecutor=ExecutorFactoryConfiguration{factory=org.infinispan.executors.DefaultExecutorFactory@7bc9c14c}, evictionScheduledExecutor=ScheduledExecutorFactoryConfiguration{factory=org.infinispan.executors.DefaultScheduledExecutorFactory@7ab1a411}, replicationQueueScheduledExecutor=ScheduledExecutorFactoryConfiguration{factory=org.infinispan.executors.DefaultScheduledExecutorFactory@248a9705}, globalJmxStatistics=GlobalJmxStatisticsConfiguration{allowDuplicateDomains=true, enabled=true, jmxDomain='jboss.infinispan', mBeanServerLookup=org.jboss.as.clustering.infinispan.MBeanServerProvider@6c0dc01, cacheManagerName='local', properties={}}, transport=TransportConfiguration{clusterName='ISPN', machineId='null', rackId='null', siteId='null', strictPeerToPeer=false, distributedSyncTimeout=240000, transport=null, nodeName='null', properties={}}, serialization=SerializationConfiguration{advancedExternalizers={1100=org.infinispan.server.core.CacheValue$Externalizer@5fabc91d, 1101=org.infinispan.server.memcached.MemcachedValue$Externalizer@720bffd, 1104=org.infinispan.server.hotrod.ServerAddress$Externalizer@771c7eb2}, marshaller=org.infinispan.marshall.VersionAwareMarshaller@6fc21535, version=52, classResolver=org.jboss.marshalling.ModularClassResolver@2efe83e5}, shutdown=ShutdownConfiguration{hookBehavior=DONT_REGISTER}, modules={}, site=SiteConfiguration{localSite='null'}}
27.3.16. The locate Command
The locate command displays the physical location of a specified entry in a distributed cluster. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> locate a
[host/node1,host/node2]
27.3.17. The put Command
The put command inserts an entry into the cache. If a mapping exists for a key, the put command overwrites the old value. The CLI allows control over the type of data used to store the key and value. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> put b 100
[jmx://localhost:12000/MyCacheManager/namedCache]> put c 4139l
[jmx://localhost:12000/MyCacheManager/namedCache]> put d true
[jmx://localhost:12000/MyCacheManager/namedCache]> put e { "package.MyClass": {"i": 5, "x": null, "b": true } }
The put command can also specify a life span and maximum idle time value as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a expires 10s
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a expires 10m maxidle 1m
27.3.18. The replace Command
The replace command replaces an existing entry in the cache with a specified new value. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> replace a b
[jmx://localhost:12000/MyCacheManager/namedCache]> get a
b
[jmx://localhost:12000/MyCacheManager/namedCache]> replace a b c
[jmx://localhost:12000/MyCacheManager/namedCache]> get a
c
[jmx://localhost:12000/MyCacheManager/namedCache]> replace a b d
[jmx://localhost:12000/MyCacheManager/namedCache]> get a
c
27.3.19. The roles command
When using the ClusterRoleMapper, the principal-to-role mappings are stored within the cluster registry (a replicated cache available to all nodes). The roles command can be used to list the roles associated with a specific user, or with all users if one is not given:
[remoting://localhost:9999]> roles user1
[supervisor, reader]
Note
The roles command is only available to the JBoss Data Grid Server Mode CLI.
27.3.20. The rollback Command
The rollback command rolls back any changes made by an ongoing transaction. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> begin
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> put b b
[jmx://localhost:12000/MyCacheManager/namedCache]> rollback
27.3.21. The site Command
The site command performs administration tasks related to cross-datacenter replication. This command also retrieves information about the status of a site and toggles the status of a site. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> site --status NYC
online
[jmx://localhost:12000/MyCacheManager/namedCache]> site --offline NYC
ok
[jmx://localhost:12000/MyCacheManager/namedCache]> site --status NYC
offline
[jmx://localhost:12000/MyCacheManager/namedCache]> site --online NYC
27.3.22. The start Command
The start command initiates a batch of operations. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> start
[jmx://localhost:12000/MyCacheManager/namedCache]> put a a
[jmx://localhost:12000/MyCacheManager/namedCache]> put b b
[jmx://localhost:12000/MyCacheManager/namedCache]> end
27.3.23. The stats Command
The stats command displays statistics for the cache. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> stats
Statistics: {
  averageWriteTime: 143
  evictions: 10
  misses: 5
  hitRatio: 1.0
  readWriteRatio: 10.0
  removeMisses: 0
  timeSinceReset: 2123
  statisticsEnabled: true
  stores: 100
  elapsedTime: 93
  averageReadTime: 14
  removeHits: 0
  numberOfEntries: 100
  hits: 1000
}
LockManager: {
  concurrencyLevel: 1000
  numberOfLocksAvailable: 0
  numberOfLocksHeld: 0
}
27.3.24. The upgrade Command
The upgrade command implements the rolling upgrade procedure. For details about rolling upgrades, refer to Chapter 36, Rolling Upgrades.
An example of the upgrade command's use is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> upgrade --synchronize=hotrod --all
[jmx://localhost:12000/MyCacheManager/namedCache]> upgrade --disconnectsource=hotrod --all
27.3.25. The version Command
The version command displays version information for the CLI client and server. An example of its usage is as follows:
[jmx://localhost:12000/MyCacheManager/namedCache]> version
Client Version 5.2.1.Final
Server Version 5.2.1.Final
Part XIV. Other Red Hat JBoss Data Grid Functions
Chapter 28. Set Up the L1 Cache
28.1. About the L1 Cache
28.2. L1 Cache Configuration
28.2.1. L1 Cache Configuration (Library Mode)
Example 28.1. L1 Cache Configuration in Library Mode
<distributed-cache name="distributed_cache" owners="2" l1-lifespan="0" l1-cleanup-interval="60000"/>
- The l1-lifespan attribute indicates the maximum lifespan in milliseconds of entries placed in the L1 cache, and is not allowed in non-distributed caches. By default this value is 0, indicating that L1 caching is disabled; it is only enabled if a positive value is defined.
- The l1-cleanup-interval attribute controls how often, in milliseconds, a cleanup task to prune L1 tracking data is run. The default is 10 minutes.
28.2.2. L1 Cache Configuration (Remote Client-Server Mode)
The following example sets the L1 lifespan to 0, indicating it is disabled, in Red Hat JBoss Data Grid's Remote Client-Server mode:
Example 28.2. L1 Cache Configuration for Remote Client-Server Mode
<distributed-cache l1-lifespan="0"> <!-- Additional configuration information here --> </distributed-cache>
The l1-lifespan attribute is added to a distributed-cache element to enable L1 caching and to set the life span of the L1 cache entries for the cache. This attribute is only valid for distributed caches.
If l1-lifespan is set to 0 or a negative number (-1), L1 caching is disabled. L1 caching is enabled when the l1-lifespan value is greater than 0.
Important
Note
In versions of JBoss Data Grid earlier than 6.3, the L1 cache was enabled by default even when the l1-lifespan attribute was not set. The default lifespan value was 10 minutes. Since JBoss Data Grid 6.3, the default lifespan is 0, which disables the L1 cache. Set a non-zero value for the l1-lifespan parameter to enable the L1 cache.
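For example, to enable the L1 cache in library mode with a ten-minute lifespan (the value here is illustrative, not a recommendation):

```xml
<distributed-cache name="distributed_cache" owners="2" l1-lifespan="600000"/>
```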
Chapter 29. Set Up Transactions
29.1. About Transactions
29.1.1. About the Transaction Manager
- initiating and concluding transactions
- managing information about each transaction
- coordinating transactions as they operate over multiple resources
- recovering from a failed transaction by rolling back changes
29.1.2. XA Resources and Synchronizations
During the prepare phase, each XA Resource votes OK or ABORT. If the Transaction Manager receives OK votes from all XA Resources, the transaction is committed, otherwise it is rolled back.
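This vote-collection rule can be sketched abstractly. The types below are illustrative only; they are not the JTA XAResource API, but they capture the rule that a single ABORT vote rolls the whole transaction back:

```java
import java.util.List;

public class TwoPhaseCommitSketch {
    enum Vote { OK, ABORT }

    interface Participant {
        Vote prepare();   // phase 1: persist changes provisionally, then vote
        void commit();    // phase 2a: make changes permanent
        void rollback();  // phase 2b: discard provisional changes
    }

    // Convenience factory for a participant with a fixed vote.
    static Participant voting(Vote v) {
        return new Participant() {
            public Vote prepare() { return v; }
            public void commit() { }
            public void rollback() { }
        };
    }

    // The Transaction Manager commits only if every resource votes OK;
    // otherwise all resources are told to roll back. Returns true on commit.
    static boolean run(List<Participant> resources) {
        boolean allOk = resources.stream()
                                 .map(Participant::prepare)
                                 .allMatch(v -> v == Vote.OK);
        if (allOk) resources.forEach(Participant::commit);
        else resources.forEach(Participant::rollback);
        return allOk;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(voting(Vote.OK), voting(Vote.ABORT))));
    }
}
```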
29.1.3. Optimistic and Pessimistic Transactions
- fewer messages sent during transaction execution
- locks held for shorter periods
- improved throughput
Note
To make a read operation acquire a write lock, use the FORCE_WRITE_LOCK flag with the operation.
29.1.4. Write Skew Checks
Write skew checks require the REPEATABLE_READ isolation level. Also, in clustered mode (distributed or replicated modes), set up entry versioning. For local mode, entry versioning is not required.
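A library-mode configuration meeting these requirements might look like the following sketch. The cache name is illustrative, and attribute names can vary between schema versions, so verify them against your configuration schema:

```xml
<local-cache name="writeSkewCache">
    <locking isolation="REPEATABLE_READ" write-skew="true"/>
    <transaction mode="NON_XA" locking="OPTIMISTIC"/>
    <versioning versioningScheme="SIMPLE"/>
</local-cache>
```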
Important
29.1.5. Transactions Spanning Multiple Cache Instances
29.2. Configure Transactions
29.2.1. Configure Transactions (Library Mode)
JBoss Data Grid locates the Transaction Manager via an implementation of the TransactionManagerLookup interface. When initialized, the cache creates an instance of the specified class and invokes its getTransactionManager() method to locate and return a reference to the Transaction Manager.
Procedure 29.1. Configure Transactions in Library Mode (XML Configuration)
<local-cache name="default">
    <!-- Additional configuration information here -->
    <transaction mode="BATCH"
                 stop-timeout="60000"
                 auto-commit="true"
                 protocol="DEFAULT"
                 recovery-cache="recoveryCache"/>
    <locking><!-- Additional configuration information here --></locking>
    <versioning versioningScheme="SIMPLE"/>
    <!-- Additional configuration information here -->
</local-cache>
- Enable transactions by defining a mode. By default the mode is NONE, which disables transactions. Valid transaction modes are BATCH, NON_XA, NON_DURABLE_XA, and FULL_XA.
- Define a stop-timeout, so that if there are any ongoing transactions when a cache is stopped, the instance will wait for them to finish. Defaults to 30000 milliseconds.
- Enable auto-commit, so that single operation transactions do not need to be manually initiated. Defaults to true.
- Define the commit protocol in use. Valid commit protocols are DEFAULT and TOTAL_ORDER.
- Define the name of the recovery-cache, where recovery related information is kept. Defaults to __recoveryInfoCacheName__.
- Enable versioning of entries by defining the versioningScheme attribute as SIMPLE. Defaults to NONE, indicating that versioning is disabled.
29.2.2. Configure Transactions (Remote Client-Server Mode)
Example 29.1. Transaction Configuration in Remote Client-Server Mode
<cache> <!-- Additional configuration elements here --> <transaction mode="NONE" /> <!-- Additional configuration elements here --> </cache>
29.3. Transaction Recovery
29.3.1. Transaction Recovery Process
Procedure 29.2. The Transaction Recovery Process
- The Transaction Manager creates a list of transactions that require intervention.
- The system administrator, connected to JBoss Data Grid using JMX, is presented with the list of transactions (including transaction IDs) using email or logs. The status of each transaction is either
COMMITTED
orPREPARED
. If some transactions are in bothCOMMITTED
andPREPARED
states, it indicates that the transaction was committed on some nodes while in the preparation state on others. - The System Administrator visually maps the XID received from the Transaction Manager to a JBoss Data Grid internal ID. This step is necessary because the XID (a byte array) cannot be conveniently passed to the JMX tool and then reassembled by JBoss Data Grid without this mapping.
- The system administrator forces the commit or rollback process for a transaction based on the mapped internal ID.
29.3.2. Transaction Recovery Example
Example 29.2. Money Transfer from an Account Stored in a Database to an Account in JBoss Data Grid
- The TransactionManager.commit() method is invoked to run the two-phase commit protocol between the source (the database) and the destination (JBoss Data Grid) resources.
- The TransactionManager tells the database and JBoss Data Grid to initiate the prepare phase (the first phase of a two-phase commit).
Note
29.4. Deadlock Detection
29.4.1. Enable Deadlock Detection
Deadlock detection is enabled using the deadlock-detection-spin attribute of the cache configuration element, as seen below:
<local-cache [...] deadlock-detection-spin="1000"/>
The deadlock-detection-spin attribute defines how often lock acquisition is attempted within the maximum time allowed to acquire a particular lock (in milliseconds). This value defaults to 100 milliseconds, and negative values disable deadlock detection.
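For context, a minimal local-cache sketch that pairs deadlock detection with a lock acquisition timeout might look like this (the attribute values are illustrative; the acquire-timeout attribute on the locking element is an assumption based on the cache schema used elsewhere in this guide):

```xml
<local-cache name="default" deadlock-detection-spin="1000">
    <!-- Give up on a lock after 10 seconds; deadlock detection
         retries acquisition every second within that window -->
    <locking acquire-timeout="10000"/>
</local-cache>
```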
Chapter 30. Configure JGroups
30.1. Configure Red Hat JBoss Data Grid Interface Binding (Remote Client-Server Mode)
30.1.1. Interfaces
- link-local: Uses a 169.254.x.x link-local address. This suits traffic within one box.
  <interfaces> <interface name="link-local"> <link-local-address/> </interface> <!-- Additional configuration elements here --> </interfaces>
- site-local: Uses a private IP address, for example 192.168.x.x. This prevents extra bandwidth charges from GoGrid and similar providers.
  <interfaces> <interface name="site-local"> <site-local-address/> </interface> <!-- Additional configuration elements here --> </interfaces>
- global: Picks a public IP address. This should be avoided for replication traffic.
  <interfaces> <interface name="global"> <any-address/> </interface> <!-- Additional configuration elements here --> </interfaces>
- non-loopback: Uses the first address found on an active interface that is not a 127.x.x.x loopback address.
  <interfaces> <interface name="non-loopback"> <not> <loopback/> </not> </interface> </interfaces>
30.1.2. Binding Sockets
30.1.2.1. Binding a Single Socket Example
An interface is bound to a single socket using the socket-binding element.
Example 30.1. Socket Binding
<socket-binding name="jgroups-udp" <!-- Additional configuration elements here --> interface="site-local"/>
30.1.2.2. Binding a Group of Sockets Example
A group of sockets is bound to an interface using the socket-binding-group element:
Example 30.2. Bind a Group
<socket-binding-group name="ha-sockets" default-interface="global"> <!-- Additional configuration elements here --> <socket-binding name="jgroups-tcp" port="7600"/> <socket-binding name="jgroups-tcp-fd" port="57600"/> <!-- Additional configuration elements here --> </socket-binding-group>
Both socket bindings in the group use the default-interface (global), therefore the interface attribute does not need to be specified on each binding.
30.1.3. Configure JGroups Socket Binding
Example 30.3. JGroups UDP Socket Binding Configuration
In the following configuration, the jgroups-udp socket binding is defined for the transport:
<subsystem xmlns="urn:jboss:domain:jgroups:3.0" default-stack="udp"> <stack name="udp"> <transport type="UDP" socket-binding="jgroups-udp"> <!-- Additional configuration elements here --> </transport> <protocol type="PING"/> <protocol type="MERGE3"/> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/> <protocol type="FD_ALL"/> <protocol type="VERIFY_SUSPECT"/> <protocol type="pbcast.NAKACK2"/> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="UFC"/> <protocol type="MFC"/> <protocol type="FRAG2"/> </stack> </subsystem>
Example 30.4. JGroups TCP Socket Binding Configuration
The referenced socket bindings are defined in the socket-binding section.
<subsystem xmlns="urn:infinispan:server:jgroups:8.0" default-stack="tcp"> <stack name="tcp"> <transport type="TCP" socket-binding="jgroups-tcp"/> <protocol type="TCPPING"> <property name="initial_hosts">192.168.1.2[7600],192.168.1.3[7600]</property> <property name="num_initial_members">2</property> <property name="port_range">0</property> <property name="timeout">2000</property> </protocol> <protocol type="MERGE3"/> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/> <protocol type="FD_ALL"/> <protocol type="VERIFY_SUSPECT"/> <protocol type="pbcast.NAKACK2"> <property name="use_mcast_xmit">false</property> </protocol> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="MFC"/> <protocol type="FRAG2"/> </stack> </subsystem>
Important
30.2. Configure JGroups (Library Mode)
Example 30.5. JGroups XML Configuration
<infinispan xmlns="urn:infinispan:config:8.3">
    <jgroups>
        <stack-file name="jgroupsStack" path="/path/to/jgroups/xml/jgroups.xml"/>
    </jgroups>
    <cache-container name="default" default-cache="default">
        <transport stack="jgroupsStack" lock-timeout="600000" cluster="default"/>
    </cache-container>
</infinispan>
JBoss Data Grid first searches for jgroups.xml in the classpath; if no instances are found in the classpath, it then searches for an absolute path name.
30.2.1. JGroups Transport Protocols
30.2.1.1. The UDP Transport Protocol
- IP multicasting to send messages to all members of a cluster.
- UDP datagrams for unicast messages, which are sent to a single member.
30.2.1.2. The TCP Transport Protocol
- When sending multicast messages, TCP sends multiple unicast messages.
30.2.1.3. Using the TCPPing Protocol
The default-configs/default-jgroups-tcp.xml file includes the MPING protocol, which uses UDP multicast for discovery. When UDP multicast is not available, the MPING protocol has to be replaced by a different mechanism. The recommended alternative is the TCPPING protocol, whose configuration contains a static list of IP addresses that are contacted for node discovery.
Example 30.6. Configure the JGroups Subsystem to Use TCPPING
<TCP bind_port="7800" /> <TCPPING initial_hosts="${jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}" port_range="1" />
30.2.2. Pre-Configured JGroups Files
The pre-configured JGroups files are packaged in infinispan-embedded.jar, and are available on the classpath by default. To use one of these files, specify one of the following file names instead of jgroups.xml:
default-configs/default-jgroups-udp.xml
default-configs/default-jgroups-tcp.xml
default-configs/default-jgroups-ec2.xml
default-configs/default-jgroups-google.xml
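For instance, one of these bundled files can be referenced directly as the stack-file path (a sketch; the stack-file name configurationFile is illustrative):

```xml
<jgroups>
    <!-- Reference the bundled TCP stack instead of a custom jgroups.xml -->
    <stack-file name="configurationFile" path="default-configs/default-jgroups-tcp.xml"/>
</jgroups>
```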
30.2.2.1. default-jgroups-udp.xml
The default-configs/default-jgroups-udp.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-udp.xml configuration:
- uses UDP as a transport and UDP multicast for discovery.
- is suitable for large clusters (over 8 nodes).
- is suitable if using Invalidation or Replication modes.
Table 30.1. default-jgroups-udp.xml System Properties
System Property | Description | Default | Required? |
---|---|---|---|
jgroups.udp.mcast_addr | IP address to use for multicast (both for communications and discovery). Must be a valid Class D IP address, suitable for IP multicast. | 228.6.7.8 | No |
jgroups.udp.mcast_port | Port to use for multicast socket | 46655 | No |
jgroups.udp.ip_ttl | Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped | 2 | No |
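The system properties in the table above can be overridden with -D flags on the JVM command line; a hypothetical invocation (the class path and main class are placeholders):

```
java -Djgroups.udp.mcast_addr=239.10.10.10 -Djgroups.udp.mcast_port=46688 -Djgroups.udp.ip_ttl=5 -cp {application_classpath} {main_class}
```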
30.2.2.2. default-jgroups-tcp.xml
The default-configs/default-jgroups-tcp.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-tcp.xml configuration:
- uses TCP as a transport and UDP multicast for discovery.
- is generally only used where multicast UDP is not an option.
- TCP does not perform as well as UDP for clusters of eight or more nodes. Clusters of four nodes or fewer result in roughly the same level of performance for both UDP and TCP.
Table 30.2. default-jgroups-tcp.xml System Properties
System Property | Description | Default | Required? |
---|---|---|---|
jgroups.tcp.address | IP address to use for the TCP transport. | 127.0.0.1 | No |
jgroups.tcp.port | Port to use for TCP socket | 7800 | No |
jgroups.mping.mcast_addr | IP address to use for multicast (for discovery). Must be a valid Class D IP address, suitable for IP multicast. | 228.6.7.8 | No |
jgroups.mping.mcast_port | Port to use for multicast socket | 46655 | No |
jgroups.udp.ip_ttl | Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped | 2 | No |
30.2.2.3. default-jgroups-ec2.xml
The default-configs/default-jgroups-ec2.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-ec2.xml configuration:
- uses TCP as a transport and S3_PING for discovery.
- is suitable on Amazon EC2 nodes where UDP multicast isn't available.
Table 30.3. default-jgroups-ec2.xml System Properties
System Property | Description | Default | Required? |
---|---|---|---|
jgroups.tcp.address | IP address to use for the TCP transport. | 127.0.0.1 | No |
jgroups.tcp.port | Port to use for TCP socket | 7800 | No |
jgroups.s3.access_key | The Amazon S3 access key used to access an S3 bucket | | Yes |
jgroups.s3.secret_access_key | The Amazon S3 secret key used to access an S3 bucket | | Yes |
jgroups.s3.bucket | Name of the Amazon S3 bucket to use. Must be unique and must already exist | | Yes |
jgroups.s3.pre_signed_delete_url | The pre-signed URL to be used for the DELETE operation | | Yes |
jgroups.s3.pre_signed_put_url | The pre-signed URL to be used for the PUT operation | | Yes |
jgroups.s3.prefix | If set, S3_PING searches for a bucket with a name that starts with the prefix value | | No |
30.2.2.4. default-jgroups-google.xml
The default-configs/default-jgroups-google.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-google.xml configuration:
- uses TCP as a transport and GOOGLE_PING for discovery.
- is suitable on Google Compute Engine nodes where UDP multicast isn't available.
Table 30.4. default-jgroups-google.xml System Properties
System Property | Description | Default | Required? |
---|---|---|---|
jgroups.tcp.address | IP address to use for the TCP transport. | 127.0.0.1 | No |
jgroups.tcp.port | Port to use for TCP socket | 7800 | No |
jgroups.google.access_key | The Google Compute Engine user's access key used to access the bucket | | Yes |
jgroups.google.secret_access_key | The Google Compute Engine user's secret access key used to access the bucket | | Yes |
jgroups.google.bucket | Name of the Google Compute Engine bucket to use. Must be unique and must already exist | | Yes |
30.3. Test Multicast Using JGroups
30.3.1. Testing With Different Red Hat JBoss Data Grid Versions
Note
Table 30.5. Testing with Different JBoss Data Grid Versions
Version | Test Case | Details |
---|---|---|
JBoss Data Grid 7.0.0 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.6.0 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.5.1 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.5.0 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.4.0 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.3.0 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.2.1 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.2.0 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.1.0 | Available | The location of the test classes depends on the distribution. |
JBoss Data Grid 6.0.1 | Not Available | This version of JBoss Data Grid is based on JBoss Enterprise Application Platform 6.0, which does not include the test classes used for this test. |
JBoss Data Grid 6.0.0 | Not Available | This version of JBoss Data Grid is based on JBoss Enterprise Application Server 6.0, which does not include the test classes used for this test. |
30.3.2. Testing Multicast Using JGroups
Ensure that the following prerequisites are met before starting the testing procedure.
- Set the bind_addr value to the appropriate IP address for the instance.
- For added accuracy, set the mcast_addr and port values to the same values used for cluster communication.
- Start two command line terminal windows. In the first terminal, navigate to the location of the JGroups JAR file for one of the two nodes; in the second terminal, navigate to the same location for the second node.
Procedure 30.1. Test Multicast Using JGroups
Run the Multicast Server on Node One
Run the following command in the terminal for the first node (replace jgroups.jar with infinispan-embedded.jar for Library mode):
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 230.1.2.3 -port 5555 -bind_addr $YOUR_BIND_ADDRESS
Run the Multicast Server on Node Two
Run the following command in the terminal for the second node (replace jgroups.jar with infinispan-embedded.jar for Library mode):
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 230.1.2.3 -port 5555 -bind_addr $YOUR_BIND_ADDRESS
Transmit Information Packets
Enter information on the node two instance (the node sending packets) and press Enter to send the information.
View Received Information Packets
View the information received on the node one instance. The information entered in the previous step should appear here.
Confirm Information Transfer
Repeat steps 3 and 4 to confirm all transmitted information is received without dropped packets.
Repeat Test for Other Instances
Repeat steps 1 to 4 for each combination of sender and receiver. Repeating the test identifies other instances that are incorrectly configured.
All information packets transmitted from the sender node must appear on the receiver node. If the sent information does not appear as expected, multicast is incorrectly configured in the operating system or the network.
Chapter 31. Use Red Hat Data Grid with Amazon Web Services
31.1. The S3_PING JGroups Discovery Protocol
S3_PING is a discovery protocol that is ideal for use with Amazon's Elastic Compute Cloud (EC2), because EC2 does not allow multicast and therefore MPING cannot be used.
31.2. S3_PING Configuration Options
- In Library mode, use the JGroups default-configs/default-jgroups-ec2.xml file (see Section 30.2.2.3, “default-jgroups-ec2.xml” for details) or use the S3_PING protocol in an existing configuration.
- In Remote Client-Server mode, use the JGroups S3_PING protocol.
Use one of the following configurations of the S3_PING protocol for clustering to work in Amazon AWS:
- Use Private S3 Buckets. These buckets use Amazon AWS credentials.
- Use Pre-Signed URLs. These pre-signed URLs are assigned to buckets with private write and public read rights.
- Use Public S3 Buckets. These buckets do not have any credentials.
31.2.1. Using Private S3 Buckets
- List
- Upload/Delete
- View Permissions
- Edit Permissions
The S3_PING configuration includes the following properties:
- the location where the bucket is found.
- the access_key and secret_access_key properties for the AWS user.
Note
If a 403 error displays when using this configuration, verify that the properties have the correct values. If the problem persists, confirm that the system time in the EC2 node is correct. For security purposes, Amazon S3 rejects requests with a time stamp that is more than 15 minutes out from its server's time.
Example 31.1. Start the Red Hat JBoss Data Grid Server with a Private Bucket
bin/standalone.sh -c cloud.xml -Djboss.node.name={node_name} -Djboss.socket.binding.port-offset={port_offset} -Djboss.default.jgroups.stack=s3-private -Djgroups.s3.bucket={s3_bucket_name} -Djgroups.s3.access_key={access_key} -Djgroups.s3.secret_access_key={secret_access_key}
- Replace {node_name} with the server's desired node name.
- Replace {port_offset} with the port offset. To use the default ports, specify this as 0.
- Replace {s3_bucket_name} with the appropriate bucket name.
- Replace {access_key} with the user's access key.
- Replace {secret_access_key} with the user's secret access key.
31.2.2. Using Pre-Signed URLs
Note
Only a single level of nesting in the bucket path is supported by S3_PING. For example, a path such as my_bucket/DemoCluster/jgroups.list works, while a longer path such as my_bucket/Demo/Cluster/jgroups.list will not.
31.2.2.1. Generating Pre-Signed URLs
The S3_PING class includes a utility method to generate pre-signed URLs. The last argument for this method is the time when the URL expires, expressed as the number of seconds since the Unix epoch (January 1, 1970).
String Url = S3_PING.generatePreSignedUrl("{access_key}", "{secret_access_key}", "{operation}", "{bucket_name}", "{path}", {seconds});
- Replace {operation} with either PUT or DELETE.
. - Replace {access_key} with the user's access key.
- Replace {secret_access_key} with the user's secret access key.
- Replace {bucket_name} with the name of the bucket.
- Replace {path} with the desired path to the file within the bucket.
- Replace {seconds} with the number of seconds since the Unix epoch (January 1, 1970) that the path remains valid.
Example 31.2. Generate a Pre-Signed URL
String putUrl = S3_PING.generatePreSignedUrl("access_key", "secret_access_key", "put", "my_bucket", "DemoCluster/jgroups.list", 1234567890);
The S3_PING configuration includes the pre_signed_put_url and pre_signed_delete_url properties generated by the call to S3_PING.generatePreSignedUrl(). This configuration is more secure than one using private S3 buckets, because the AWS credentials are not stored on each node in the cluster.
Note
When adding the URL to an XML configuration file, each & character in the URL must be replaced with its XML entity (&amp;).
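As an illustration, a JGroups stack entry carrying a pre-signed URL would escape the query-string separators as shown below (the attribute name mirrors the pre_signed_put_url property above; the URL components are placeholders):

```xml
<!-- The & separators in the query string become &amp; in XML -->
<S3_PING pre_signed_put_url="http://{s3_bucket_name}.s3.amazonaws.com/jgroups.list?AWSAccessKeyId={access_key}&amp;Expires={expiration_time}&amp;Signature={signature}" />
```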
31.2.2.2. Set Pre-Signed URLs Using the Command Line
- Enclose the URL in double quotation marks (" ").
- In the URL, each occurrence of the ampersand (&) character must be escaped with a backslash (\).
Example 31.3. Start a JBoss Data Grid Server with a Pre-Signed URL
bin/standalone.sh -c cloud.xml -Djboss.node.name={node_name} -Djboss.socket.binding.port-offset={port_offset} -Djboss.default.jgroups.stack=s3-presigned -Djgroups.s3.pre_signed_delete_url="http://{s3_bucket_name}.s3.amazonaws.com/jgroups.list?AWSAccessKeyId={access_key}\&Expires={expiration_time}\&Signature={signature}" -Djgroups.s3.pre_signed_put_url="http://{s3_bucket_name}.s3.amazonaws.com/jgroups.list?AWSAccessKeyId={access_key}\&Expires={expiration_time}\&Signature={signature}"
- Replace {node_name} with the server's desired node name.
- Replace {port_offset} with the port offset. To use the default ports, specify this as 0.
- Replace {s3_bucket_name} with the appropriate bucket name.
- Replace {access_key} with the user's access key.
- Replace {expiration_time} with the expiration value for the URL that was passed into the S3_PING.generatePreSignedUrl() method.
- Replace {signature} with the signature generated by the S3_PING.generatePreSignedUrl() method.
31.2.3. Using Public S3 Buckets
Only the location property must be specified with the bucket name for this configuration. This configuration method is the least secure, because any user who knows the name of the bucket can upload and store data in it, and the bucket creator's account is charged for this data.
bin/standalone.sh -c cloud.xml -Djboss.node.name={node_name} -Djboss.socket.binding.port-offset={port_offset} -Djboss.default.jgroups.stack=s3-public -Djgroups.s3.bucket={s3_bucket_name}
- Replace {node_name} with the server's desired node name.
- Replace {port_offset} with the port offset. To use the default ports, specify this as 0.
- Replace {s3_bucket_name} with the appropriate bucket name.
31.3. Utilizing an Elastic IP Address
Chapter 32. Use Red Hat JBoss Data Grid with Google Compute Engine
32.1. The GOOGLE_PING Protocol
GOOGLE_PING is a discovery protocol used by JGroups during cluster formation. It is ideal for use with Google Compute Engine (GCE) and uses Google Cloud Storage to store information about individual cluster members.
32.2. GOOGLE_PING Configuration
- In Library mode, use the JGroups configuration file default-configs/default-jgroups-google.xml or use the GOOGLE_PING protocol in an existing configuration file.
- In Remote Client-Server mode, define the properties on the command line when you start the server to use the JGroups Google stack (see the example in Section 32.2.1, “Starting the Server in Google Compute Engine”).
The following is required for the GOOGLE_PING protocol to work in Google Compute Engine in Library and Remote Client-Server mode:
- Use a JGroups bucket. These buckets use Google Compute Engine credentials.
- Use the access key.
- Use the secret access key.
Note
Only the TCP protocol is supported in Google Compute Engine, since multicast is not allowed.
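A minimal JGroups stack fragment for this environment would therefore pair TCP with GOOGLE_PING, sketched below (the attribute names mirror the system properties in Table 30.4 and are assumptions; the values are placeholders):

```xml
<!-- TCP transport: multicast is not available in GCE -->
<TCP bind_port="7800"/>
<!-- Discovery through a Google Cloud Storage bucket -->
<GOOGLE_PING location="{google_bucket_name}"
             access_key="{access_key}"
             secret_access_key="{secret_access_key}"/>
```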
32.2.1. Starting the Server in Google Compute Engine
The GOOGLE_PING configuration includes the following properties:
- the access_key and the secret_access_key properties for the Google Compute Engine user.
Example 32.1. Start the Red Hat JBoss Data Grid Server with a Bucket
bin/standalone.sh -c cloud.xml -Djboss.node.name={node_name} -Djboss.socket.binding.port-offset={port_offset} -Djboss.default.jgroups.stack=google -Djgroups.google.bucket={google_bucket_name} -Djgroups.google.access_key={access_key} -Djgroups.google.secret_access_key={secret_access_key}
- Replace {node_name} with the server's desired node name.
- Replace {port_offset} with the port offset. To use the default ports, specify this as 0.
- Replace {google_bucket_name} with the appropriate bucket name.
- Replace {access_key} with the user's access key.
- Replace {secret_access_key} with the user's secret access key.
32.3. Utilizing a Static IP Address
Chapter 33. Integration with the Spring Framework
33.1. Enabling Spring Cache Support Declaratively (Library Mode)
- Add <cache:annotation-driven/> to the XML file. This line enables the standard Spring annotations to be used by the application.
- Define a cache manager using the <infinispan:embedded-cache-manager ... /> element.
Example 33.1. Sample Declarative Configuration
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:infinispan="http://www.infinispan.org/schemas/spring" xmlns:cache="http://www.springframework.org/schema/cache" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd http://www.infinispan.org/schemas/spring http://www.infinispan.org/schemas/infinispan-spring.xsd"> [...] <cache:annotation-driven/> <infinispan:embedded-cache-manager configuration="classpath:/path/to/cache-config.xml"/> [...]
33.2. Enabling Spring Cache Support Declaratively (Remote Client-Server Mode)
- Add <cache:annotation-driven/> to the XML file. This line enables the standard Spring annotations to be used by the application.
- Define the Hot Rod client properties using the <infinispan:remote-cache-manager ... /> element.
Example 33.2. Sample Declarative Configuration
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:infinispan="http://www.infinispan.org/schemas/spring" xmlns:cache="http://www.springframework.org/schema/cache" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd http://www.infinispan.org/schemas/spring http://www.infinispan.org/schemas/infinispan-spring.xsd"> [...] <cache:annotation-driven/> <infinispan:remote-cache-manager configuration="classpath:/path/to/hotrod-client.properties"/> [...]
Chapter 34. High Availability Using Server Hinting
For more information, see the ConsistentHashFactories section in the JBoss Data Grid Developer Guide.
Specifying a machineId, rackId, or siteId in the transport configuration triggers the use of TopologyAwareConsistentHashFactory, which is the equivalent of the DefaultConsistentHashFactory with Server Hinting enabled.
34.1. Establishing Server Hinting with JGroups
34.2. Configure Server Hinting (Remote Client-Server Mode)
Server Hinting is configured in the transport element for the default stack, as follows:
Procedure 34.1. Configure Server Hinting in Remote Client-Server Mode
<subsystem xmlns="urn:jboss:domain:jgroups:3.0" default-stack="${jboss.default.jgroups.stack:udp}"> <stack name="udp"> <transport type="UDP" socket-binding="jgroups-udp" site="${jboss.jgroups.transport.site:s1}" rack="${jboss.jgroups.transport.rack:r1}" machine="${jboss.jgroups.transport.machine:m1}"> <!-- Additional configuration elements here --> </transport> </stack> </subsystem>
- Find the JGroups subsystem configuration.
- Enable Server Hinting via the transport element:
  - Set the site ID using the site parameter.
  - Set the rack ID using the rack parameter.
  - Set the machine ID using the machine parameter.
34.3. Configure Server Hinting (Library Mode)
Procedure 34.2. Configure Server Hinting for Library Mode
<transport cluster = "MyCluster" machine = "LinuxServer01" rack = "Rack01" site = "US-WestCoast" />
- The cluster attribute specifies the name assigned to the cluster.
- The machine attribute specifies the JVM instance that contains the original data. This is particularly useful for nodes with multiple JVMs and physical hosts with multiple virtual hosts.
- The rack attribute specifies the rack that contains the original data, so that other racks are used for backups.
- The site attribute differentiates between nodes in different data centers replicating to each other.
If machine, rack, or site is included in the configuration, TopologyAwareConsistentHashFactory is selected automatically, enabling Server Hinting. However, if Server Hinting is not configured, JBoss Data Grid's distribution algorithms may store replicas in the same physical machine, rack, or data center as the original data.
Chapter 35. Set Up Cross-Datacenter Replication
Cross-datacenter replication is implemented using the JGroups RELAY2 protocol.
35.1. Cross-Datacenter Replication Operations
Example 35.1. Cross-Datacenter Replication Example
Figure 35.1. Cross-Datacenter Replication Example
The example consists of three sites: LON, NYC, and SFO. Each site hosts a running JBoss Data Grid cluster made up of three to four physical nodes.
The Users cache is active in all three sites (LON, NYC, and SFO). Changes to the Users cache at any one of these sites are replicated to the other two, as long as the cache defines the other two sites as its backups through configuration. The Orders cache, however, is only available locally at the LON site because it is not replicated to the other sites.
The Users cache can use a different replication mechanism for each site. For example, it can back up data synchronously to SFO and asynchronously to NYC and LON.
The Users cache can also have a different configuration from one site to another. For example, it can be configured as a distributed cache with owners set to 2 in the LON site, as a replicated cache in the NYC site, and as a distributed cache with owners set to 1 in the SFO site.
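Sketched as declarative configuration for the LON site, the Users cache described above might look like the following (a hedged illustration, not taken from the product's sample files; the strategy values follow the synchronous-to-SFO, asynchronous-to-NYC example above):

```xml
<!-- LON site: distributed Users cache with two owners,
     backing up synchronously to SFO and asynchronously to NYC -->
<distributed-cache name="Users" owners="2">
    <backups>
        <backup site="SFO" strategy="SYNC"/>
        <backup site="NYC" strategy="ASYNC"/>
    </backups>
</distributed-cache>
```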
RELAY2 facilitates communication between sites. For more information, see Section F.4, “About RELAY2”.
35.2. Configure Cross-Datacenter Replication
35.2.1. Configure Cross-Datacenter Replication (Remote Client-Server Mode)
Procedure 35.1. Set Up Cross-Datacenter Replication
Set Up RELAY
Add the following configuration to the standalone.xml file to set up RELAY:
<subsystem xmlns="urn:infinispan:server:jgroups:8.0"> <channels default="cluster"> <channel name="cluster"/> <channel name="xsite" stack="tcp"/> </channels> <stacks default="udp"> <stack name="udp"> <transport type="UDP" socket-binding="jgroups-udp"/> <...other protocols...> <relay site="LON"> <remote-site name="NYC" channel="xsite"/> <remote-site name="SFO" channel="xsite"/> </relay> </stack> </stacks> </subsystem>
The RELAY protocol creates an additional stack (running parallel to the existing UDP stack) to communicate with the remote site. If a TCP-based stack is used for the local cluster, two TCP-based stack configurations are required: one for local communication and one to connect to the remote site. For an illustration, see Section 35.1, “Cross-Datacenter Replication Operations”.
Set Up Sites
Use the following configuration in the standalone.xml file to set up sites for each distributed cache in the cluster:
<distributed-cache name="namedCache"> <!-- Additional configuration elements here --> <backups> <backup site="{FIRSTSITENAME}" strategy="{SYNC/ASYNC}" /> <backup site="{SECONDSITENAME}" strategy="{SYNC/ASYNC}" /> </backups> </distributed-cache>
Configure Local Site Transport
Add the name of the local site in the transport element to configure transport:
<transport executor="infinispan-transport" lock-timeout="60000" cluster="LON" stack="udp"/>
A sample cross-datacenter configuration is available in $JDG_SERVER/docs/examples/configs/clustered-xsite.xml.
35.2.2. Configure Cross-Data Replication (Library Mode)
35.2.2.1. Configure Cross-Datacenter Replication Declaratively
The relay.RELAY2 protocol creates an additional stack (running parallel to the existing TCP stack) to communicate with the remote site. If a TCP-based stack is used for the local cluster, two TCP-based stack configurations are required: one for local communication and one to connect to the remote site.
Procedure 35.2. Setting Up Cross-Datacenter Replication
Configure the Local Site
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:8.0 http://www.infinispan.org/schemas/infinispan-config-8.0.xsd" xmlns="urn:infinispan:config:8.0"> <jgroups> <stack-file name="udp" path="jgroups-with-relay.xml"/> </jgroups> <cache-container default-cache="default"> <transport cluster="infinispan-cluster" lock-timeout="50000" stack="udp" node-name="node1" machine="machine1" rack="rack1" site="LON"/> <local-cache name="default"> <backups> <backup site="NYC" strategy="SYNC" failure-policy="IGNORE" timeout="12003"/> <backup site="SFO" strategy="ASYNC"/> </backups> </local-cache> <!-- Additional configuration information here --> </infinispan>
- Add the site attribute to the transport element to define the local site (in this example, the local site is named LON).
- Cross-site replication requires a non-default JGroups configuration. Define the jgroups element and a custom stack-file, passing in the name of the file to be referenced and the location of this custom configuration. In this example, the JGroups configuration file is named jgroups-with-relay.xml.
- Configure the cache in site LON to back up to the sites NYC and SFO.
- Configure the backup caches:
  - Configure the cache in site NYC to receive backup data from LON:
    <local-cache name="backupNYC"> <backups/> <backup-for remote-cache="default" remote-site="LON"/> </local-cache>
  - Configure the cache in site SFO to receive backup data from LON:
    <local-cache name="backupSFO"> <backups/> <backup-for remote-cache="default" remote-site="LON"/> </local-cache>
Add the Contents of the Configuration File
By default, Red Hat JBoss Data Grid includes JGroups configuration files such as default-configs/default-jgroups-tcp.xml and default-configs/default-jgroups-udp.xml in the infinispan-embedded-{VERSION}.jar package.
Copy the JGroups configuration to a new file (in this example, it is named jgroups-with-relay.xml) and add the provided configuration information to this file. Note that the relay.RELAY2 protocol configuration must be the last protocol in the configuration stack.
<config> ... <relay.RELAY2 site="LON" config="relay.xml" relay_multicasts="false" /> </config>
Configure the relay.xml File
Set up the relay.RELAY2 configuration in the relay.xml file. This file describes the global cluster configuration.
<RelayConfiguration> <sites> <site name="LON" id="0"> <bridges> <bridge config="jgroups-global.xml" name="global"/> </bridges> </site> <site name="NYC" id="1"> <bridges> <bridge config="jgroups-global.xml" name="global"/> </bridges> </site> <site name="SFO" id="2"> <bridges> <bridge config="jgroups-global.xml" name="global"/> </bridges> </site> </sites> </RelayConfiguration>
Configure the Global Cluster
The file jgroups-global.xml referenced in relay.xml contains another JGroups configuration which is used for the global cluster: communication between sites.
The global cluster configuration is usually TCP-based and uses the TCPPING protocol (instead of PING or MPING) to discover members. Copy the contents of default-configs/default-jgroups-tcp.xml into jgroups-global.xml and add the following configuration in order to configure TCPPING:
<config> <TCP bind_port="7800" ... /> <TCPPING initial_hosts="lon.hostname[7800],nyc.hostname[7800],sfo.hostname[7800]" ergonomics="false" /> <!-- Rest of the protocols --> </config>
Replace the hostnames (or IP addresses) in TCPPING.initial_hosts with those used for your site masters. The ports (7800 in this example) must match the TCP.bind_port.
For more information about the TCPPING protocol, see Section 30.2.1.3, “Using the TCPPing Protocol”.
35.3. Taking a Site Offline
- Configure automatically taking a site offline:
- Declaratively in Remote Client-Server mode.
- Declaratively in Library mode.
- Using the programmatic method.
- Manually taking a site offline:
- Using JBoss Operations Network (JON).
- Using the JBoss Data Grid Command Line Interface (CLI).
35.3.1. Taking a Site Offline
Add the take-offline element to the backup element. This configures when a site is automatically taken offline.
Example 35.2. Taking a Site Offline in Remote Client-Server Mode

<backup>
    <take-offline after-failures="${NUMBER}"
                  min-wait="${PERIOD}" />
</backup>

The take-offline element uses the following parameters to configure when to take a site offline:
- The after-failures parameter specifies the number of times attempts to contact a site can fail before the site is taken offline.
- The min-wait parameter specifies the amount of time (in milliseconds) to wait before marking an unresponsive site as offline. The site is taken offline when the min-wait period elapses after the first failed attempt and the number of failed attempts specified in the after-failures parameter is reached.
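As an illustration, the placeholders above can be filled in with concrete values. The numbers below are hypothetical and should be tuned to your environment's failure characteristics:

```xml
<backup site="NYC" strategy="SYNC">
    <!-- Hypothetical values: take NYC offline after 5 consecutive failed
         attempts, provided at least 60 seconds have passed since the
         first failure. -->
    <take-offline after-failures="5"
                  min-wait="60000" />
</backup>
```

With these values, a short network blip that recovers within 60 seconds does not take the site offline, regardless of how many attempts fail in that window.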
35.3.2. Taking a Site Offline via JBoss Operations Network (JON)
35.3.3. Taking a Site Offline via the CLI
A site can be taken offline using the JBoss Data Grid Command Line Interface's site command.
The site command can be used to check the status of a site as follows:

[jmx://localhost:12000/MyCacheManager/namedCache]> site --status ${SITENAME}

The result displays as online or offline according to the current status of the named site.
Use the following commands to take a site offline or bring it back online:

[jmx://localhost:12000/MyCacheManager/namedCache]> site --offline ${SITENAME}
[jmx://localhost:12000/MyCacheManager/namedCache]> site --online ${SITENAME}

If the command is successful, ok displays after the command. Alternatively, the site can also be brought online using JMX (see Section 35.3.4, “Bring a Site Back Online” for details).
35.3.4. Bring a Site Back Online
A site is brought back online using the bringSiteOnline(siteName) operation on the XSiteAdmin MBean (see Section C.23, “XSiteAdmin” for details) or using the CLI (see Section 35.3.3, “Taking a Site Offline via the CLI” for details).
35.4. State Transfer Between Sites
State transfer between sites is initiated using the pushState(SiteName String) operation available in the XSiteAdminOperations MBean.
The following figure shows the pushState(SiteName String) operation invoked in JConsole:
Figure 35.2. PushState Operation
State transfer can also be initiated from the CLI using the site push sitename command. For example, when the master site is brought back online, the system administrator invokes the state transfer operation in the backup site, specifying the master site name that is to receive the state.
Note
35.4.1. Active-Passive State Transfer
- Boot the Red Hat JBoss Data Grid cluster in the master site.
- Command the backup site to push state to the master site.
- Wait until the state transfer is complete.
- Make the clients aware that the master site is available to process the requests.
35.4.2. Active-Active State Transfer
Warning
Note
- Boot the Red Hat JBoss Data Grid cluster in the new site.
- Command the running site to push state to the new site.
- Make the clients aware that the new site is available to process the requests.
35.4.3. State Transfer Configuration
<backups>
    <backup site="NYC" strategy="SYNC" failure-policy="FAIL">
        <state-transfer chunk-size="512"
                        timeout="1200000"
                        max-retries="30"
                        wait-time="2000" />
    </backup>
</backups>
35.5. Configure Multiple Site Masters
35.5.1. Multiple Site Master Operations
35.5.2. Configure Multiple Site Masters (Remote Client-Server Mode)
Configure Cross-Datacenter Replication for Red Hat JBoss Data Grid's Remote Client-Server Mode.
Procedure 35.3. Set Multiple Site Masters in Remote Client-Server Mode
<relay site="LON">
    <remote-site name="NYC" stack="tcp" cluster="global"/>
    <remote-site name="SFO" stack="tcp" cluster="global"/>
    <property name="relay_multicasts">false</property>
    <property name="max_site_masters">16</property>
    <property name="can_become_site_master">true</property>
</relay>
Locate the Target Configuration
Locate the target site's configuration in the clustered-xsite.xml example configuration file. The sample configuration looks like the example provided above.
Configure Maximum Site Masters
Use the max_site_masters property to determine the maximum number of master nodes within the site. Set this value to the number of nodes in the site to make every node a master.
Configure Site Master
Use the can_become_site_master property to allow the node to become the site master. This flag is set to true by default. Setting this flag to false prevents the node from becoming a site master. This is required in situations where the node does not have a network interface connected to the external network.
35.5.3. Configure Multiple Site Masters (Library Mode)
Procedure 35.4. Configure Multiple Site Masters (Library Mode)
Configure Cross-Datacenter Replication
Configure Cross-Datacenter Replication in JBoss Data Grid. Use the instructions in Section 35.2.2.1, “Configure Cross-Datacenter Replication Declaratively” for an XML configuration. For instructions on a programmatic configuration, refer to the JBoss Data Grid Developer Guide.
Add the Contents of the Configuration File
Add the can_become_site_master and max_site_masters parameters to the configuration as follows:

<config>
    <!-- Additional configuration information here -->
    <relay.RELAY2 site="LON"
                  config="relay.xml"
                  relay_multicasts="false"
                  can_become_site_master="true"
                  max_site_masters="16"/>
</config>

Set the max_site_masters value to the number of nodes in the cluster to make all nodes masters.
Chapter 36. Rolling Upgrades
Important
36.1. Rolling Upgrades Using Hot Rod
Important
- For JBoss Data Grid 7.0, use Hot Rod protocol version 2.5
- For JBoss Data Grid 6.6, use Hot Rod protocol version 2.3
- For JBoss Data Grid 6.5, use Hot Rod protocol version 2.0
- For JBoss Data Grid 6.4, use Hot Rod protocol version 2.0
- For JBoss Data Grid 6.3, use Hot Rod protocol version 2.0
- For JBoss Data Grid 6.2, use Hot Rod protocol version 1.3
- For JBoss Data Grid 6.1, use Hot Rod protocol version 1.2
This procedure assumes that a cluster is already configured and running, and that it is using an older version of JBoss Data Grid. This cluster is referred to below as the Source Cluster; the Target Cluster refers to the new cluster to which data will be migrated.
Configure the Target Cluster
Use either different network settings or a different JGroups cluster name to set the Target Cluster (consisting of nodes with the new JBoss Data Grid version) apart from the Source Cluster. For each cache, configure a RemoteCacheStore with the following settings:
- Ensure that remote-server points to the Source Cluster.
- Ensure that the cache name matches the name of the cache on the Source Cluster.
- Ensure that hotrod-wrapping is enabled (set to true).
- Ensure that purge is disabled (set to false).
- Ensure that passivation is disabled (set to false).
Figure 36.1. Configure the Target Cluster with a RemoteCacheStore
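The remote-store settings listed above can be expressed declaratively along the following lines. This is a sketch: the element and attribute names follow the JBoss Data Grid server schema, while the cache name and the outbound socket binding name are placeholders for your environment:

```xml
<distributed-cache name="default" mode="SYNC">
    <remote-store cache="default"
                  socket-timeout="60000"
                  hotrod-wrapping="true"
                  purge="false"
                  passivation="false"
                  shared="true">
        <!-- Placeholder: an outbound-socket-binding pointing at a
             Source Cluster node's Hot Rod endpoint -->
        <remote-server outbound-socket-binding="remote-store-hotrod-server"/>
    </remote-store>
</distributed-cache>
```

See the example configuration shipped with the product for the authoritative version of this file.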
Note
See the $JDG_HOME/docs/examples/configs/standalone-hotrod-rolling-upgrade.xml file for a full example of the Target Cluster configuration for performing Rolling Upgrades.
Start the Target Cluster
Start the Target Cluster's nodes. Configure each client to point to the Target Cluster instead of the Source Cluster. Eventually, the Target Cluster handles all requests instead of the Source Cluster. The Target Cluster then lazily loads data from the Source Cluster on demand using the RemoteCacheStore.
Figure 36.2. Clients point to the Target Cluster, with the Source Cluster as a RemoteCacheStore for the Target Cluster.
Dump the Source Cluster keyset
When all connections are using the Target Cluster, the keyset on the Source Cluster must be dumped. This can be done using either JMX or the CLI:
JMX
Invoke the recordKnownGlobalKeyset operation on the RollingUpgradeManager MBean on the Source Cluster for every cache that must be migrated.
CLI
Invoke the upgrade --dumpkeys command on the Source Cluster for every cache that must be migrated, or use the --all switch to dump all caches in the cluster.
Fetch remaining data from the Source Cluster
The Target Cluster fetches all remaining data from the Source Cluster. Again, this can be done using either JMX or the CLI:
JMX
Invoke the synchronizeData operation and specify the hotrod parameter on the RollingUpgradeManager MBean on the Target Cluster for every cache that must be migrated.
CLI
Invoke the upgrade --synchronize=hotrod command on the Target Cluster for every cache that must be migrated, or use the --all switch to synchronize all caches in the cluster.
Disable the RemoteCacheStore
Once the Target Cluster has obtained all data from the Source Cluster, the RemoteCacheStore on the Target Cluster must be disabled. This can be done as follows:
JMX
Invoke the disconnectSource operation, specifying the hotrod parameter, on the RollingUpgradeManager MBean on the Target Cluster.
CLI
Invoke the upgrade --disconnectsource=hotrod command on the Target Cluster.
Decommission the Source Cluster
As a final step, decommission the Source Cluster.
36.2. Rolling Upgrades Using REST
Procedure 36.1. Perform Rolling Upgrades Using REST
Configure the Target Cluster
Use either different network settings or a different JGroups cluster name to set the Target Cluster (consisting of nodes with the new JBoss Data Grid version) apart from the Source Cluster. For each cache, configure a RestCacheStore with the following settings:
- Ensure that the host and port values point to the Source Cluster.
- Ensure that the path value points to the Source Cluster's REST endpoint.
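As a sketch, a declarative RestCacheStore definition might look like the following. This assumes the server schema exposes the store as a rest-store element; the path and socket binding values are placeholders for the Source Cluster's REST endpoint:

```xml
<distributed-cache name="default" mode="SYNC">
    <!-- Placeholder values: path points at the Source Cluster's REST
         endpoint; the outbound-socket-binding supplies its host and port. -->
    <rest-store path="/rest/default"
                purge="false"
                passivation="false"
                shared="true">
        <remote-server outbound-socket-binding="remote-store-rest-server"/>
    </rest-store>
</distributed-cache>
```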
Start the Target Cluster
Start the Target Cluster's nodes. Configure each client to point to the Target Cluster instead of the Source Cluster. Eventually, the Target Cluster handles all requests instead of the Source Cluster. The Target Cluster then lazily loads data from the Source Cluster on demand using the RestCacheStore.
Do not dump the Key Set during REST Rolling Upgrades
The REST Rolling Upgrades use case is designed to fetch all the data from the Source Cluster without using the recordKnownGlobalKeyset operation.
Warning
Do not invoke the recordKnownGlobalKeyset operation for REST Rolling Upgrades. If you invoke this operation, it will cause data corruption and REST Rolling Upgrades will not complete successfully.
Fetch the Remaining Data
The Target Cluster must fetch all the remaining data from the Source Cluster. This is done using either JMX or the CLI as follows:
Using JMX
Invoke the synchronizeData operation with the rest parameter specified on the RollingUpgradeManager MBean on the Target Cluster for all caches to be migrated.
Using the CLI
Run the upgrade --synchronize=rest command on the Target Cluster for all caches to be migrated. Optionally, use the --all switch to synchronize all caches in the cluster.
Disable the RestCacheStore
Disable the RestCacheStore on the Target Cluster using either JMX or the CLI as follows:
Using JMX
Invoke the disconnectSource operation with the rest parameter specified on the RollingUpgradeManager MBean on the Target Cluster.
Using the CLI
Run the upgrade --disconnectsource=rest command on the Target Cluster. Optionally, use the --all switch to disconnect all caches in the cluster.
Migration to the Target Cluster is complete. The Source Cluster can now be decommissioned.
36.3. RollingUpgradeManager Operations
The RollingUpgradeManager MBean handles the operations that allow data to be migrated from one version of Red Hat JBoss Data Grid to another when performing rolling upgrades. The RollingUpgradeManager operations are:
- recordKnownGlobalKeyset retrieves the entire keyset from the cluster running on the old version of JBoss Data Grid.
- synchronizeData performs the migration of data from the Source Cluster to the Target Cluster, which is running the new version of JBoss Data Grid.
- disconnectSource disables the Source Cluster, the older version of JBoss Data Grid, once data migration to the Target Cluster is complete.
36.4. RemoteCacheStore Parameters for Rolling Upgrades
36.4.1. rawValues and RemoteCacheStore
The rawValues parameter causes the raw values to be stored instead, for interoperability with direct access by RemoteCacheManagers.
rawValues must be enabled in order to interact with a Hot Rod cache via both a RemoteCacheStore and a RemoteCacheManager.
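For illustration, in a declarative store definition this corresponds to an attribute on the store element. This is a sketch; the raw-values attribute name is assumed from the JBoss Data Grid schema, and the socket binding name is a placeholder:

```xml
<remote-store cache="default" raw-values="true" shared="true">
    <!-- raw-values="true" stores entries in raw (serialized) form so that a
         RemoteCacheManager accessing the same cache sees compatible data. -->
    <remote-server outbound-socket-binding="remote-store-hotrod-server"/>
</remote-store>
```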
36.4.2. hotRodWrapping
The hotRodWrapping parameter is a shortcut that enables rawValues and sets an appropriate marshaller and entry wrapper for performing Rolling Upgrades.
Chapter 37. Custom Interceptors
Warning
37.1. Custom Interceptor Design
- A custom interceptor must extend the CommandInterceptor class.
- A custom interceptor must declare a public, empty constructor to allow for instantiation.
- A custom interceptor must have JavaBean-style setters defined for any property that is defined through the property element.
37.2. Adding Custom Interceptors Declaratively
Procedure 37.1. Adding Custom Interceptors
<local-cache name="cacheWithCustomInterceptors">
    <custom-interceptors>
        <interceptor position="FIRST" class="com.mycompany.CustomInterceptor1">
            <property name="attributeOne" value="value1" />
            <property name="attributeTwo" value="value2" />
        </interceptor>
        <interceptor position="LAST" class="com.mycompany.CustomInterceptor2"/>
        <interceptor index="3" class="com.mycompany.CustomInterceptor1"/>
        <interceptor before="org.infinispan.interceptors.CallInterceptor"
                     class="com.mycompany.CustomInterceptor2"/>
        <interceptor after="org.infinispan.interceptors.CallInterceptor"
                     class="com.mycompany.CustomInterceptor1"/>
    </custom-interceptors>
</local-cache>
Define Custom Interceptors
All custom interceptors must extend org.infinispan.interceptors.base.BaseCustomInterceptor.
Define the Position of the New Custom Interceptor
Interceptors must have a defined position. The position options are mutually exclusive, meaning an interceptor cannot have both a position attribute and an index attribute. Valid options are:
via the Position Attribute
- FIRST specifies that the new interceptor is placed first in the chain.
- LAST specifies that the new interceptor is placed last in the chain.
- OTHER_THAN_FIRST_OR_LAST specifies that the new interceptor can be placed anywhere except first or last in the chain.
via the Index Attribute
- The index identifies the position of this interceptor in the chain. The index begins at 0, the first position in the chain, and goes up to the number of interceptors in a given configuration.
via the Before or After Attributes
- The after attribute places the new interceptor directly after the instance of the named interceptor, which is specified via its fully qualified class name.
- The before attribute places the new interceptor directly before the instance of the named interceptor, which is specified via its fully qualified class name.
Define Interceptor Properties
Define specific interceptor properties.
Apply Other Custom Interceptors
In this example, the next custom interceptor is called CustomInterceptor2.
Note
Using the position OTHER_THAN_FIRST_OR_LAST may cause the CacheManager to fail.
Note
Chapter 38. Externalize Sessions
38.1. Externalize HTTP Session from JBoss EAP to JBoss Data Grid
Note
Note
Procedure 38.1. Externalize HTTP Sessions
- Ensure the remote cache containers are defined in EAP's infinispan subsystem; in the example below the cache attribute in the remote-store element defines the cache name on the remote JBoss Data Grid server:

<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
    [...]
    <cache-container name="web" default-cache="dist"
                     module="org.jboss.as.clustering.web.infinispan"
                     statistics-enabled="true">
        <transport lock-timeout="60000"/>
        <invalidation-cache name="jdg" mode="SYNC">
            <locking isolation="REPEATABLE_READ"/>
            <transaction mode="BATCH"/>
            <remote-store remote-servers="remote-jdg-server1 remote-jdg-server2"
                          cache="default" socket-timeout="60000"
                          preload="true" passivation="false"
                          purge="false" shared="true"/>
        </invalidation-cache>
    </cache-container>
</subsystem>
- Define the location of the remote Red Hat JBoss Data Grid server by adding the networking information to the socket-binding-group:

<socket-binding-group ...>
    <outbound-socket-binding name="remote-jdg-server1">
        <remote-destination host="JDGHostName1" port="11222"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="remote-jdg-server2">
        <remote-destination host="JDGHostName2" port="11222"/>
    </outbound-socket-binding>
</socket-binding-group>
- Repeat the above steps for each cache-container and each Red Hat JBoss Data Grid server. Each server defined must have a separate <outbound-socket-binding> element defined.
- Add passivation and cache information to the application's jboss-web.xml. In the following example, web is the name of the cache container and jdg is the name of the default cache located in this container:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web xmlns="http://www.jboss.com/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd"
           version="10.0">
    <replication-config>
        <replication-granularity>SESSION</replication-granularity>
        <cache-name>web.jdg</cache-name>
    </replication-config>
</jboss-web>
Note
The passivation timeouts above are provided assuming that a typical session is abandoned within 15 minutes and uses the default HTTP session timeout in JBoss EAP of 30 minutes. These values may need to be adjusted based on each application's workload.
Chapter 39. Data Interoperability
39.1. Protocol Interoperability
The compatibility element's marshaller parameter may be set to a custom marshaller to enable compatibility conversions. An example of this is found below:
Example 39.1. Compatibility Mode Enabled
<cache-container name="local" default-cache="default" statistics="true">
    <local-cache name="default" start="EAGER" statistics="true">
        <compatibility marshaller="com.example.CustomMarshaller"/>
    </local-cache>
</cache-container>
Chapter 40. Handling Network Partitions (Split Brain)
Data redundancy in JBoss Data Grid is governed by the owners configuration attribute, which specifies the number of replicas for each cache entry in the cache. As a result, as long as the number of nodes that have failed is less than the value of owners, JBoss Data Grid retains a copy of the lost data and can recover.
Note
In replication mode, owners is always equal to the number of nodes in the cache, because each node contains a copy of every data item in the cache in this mode.
However, it is possible for a number of nodes greater than the value of owners to disappear from the cache. Two common reasons for this are:
- Split-Brain: Usually as the result of a router crash, the cache is divided into two or more partitions. Each of the partitions operates independently of the others, and each may contain different versions of the same data.
- Successive Crashed Nodes: A number of nodes greater than the value of owners crashes in succession for any reason. JBoss Data Grid is unable to properly balance the state between crashes, and the result is partial data loss.
40.1. Detecting and Recovering from a Split-Brain Problem
- At least one segment has lost all its owners, which means that a number of nodes equal to or greater than the value of owners have left the JGroups view.
- The partition does not contain a majority (more than half) of the nodes from the latest stable topology. The stable topology is updated each time a rebalance operation successfully concludes and the coordinator determines that additional rebalancing is not required.
When a partition enters Degraded mode, operations on keys that are not fully owned within the partition result in an AvailabilityException.
Note
Warning
- Transactional writes that were in progress at t1 when the split physically occurred may be rolled back on some of the owners. This can result in inconsistency between the copies (after the partitions rejoin) of an entry that is affected by such a write. However, transactional writes that started after t1 will fail as expected.
- If the write is non-transactional, then during this time window, a value written only in a minor partition (due to physical split and because the partition has not yet been Degraded) can be lost when partitions rejoin, if this minor partition receives state from a primary (Available) partition upon rejoin. If the partition does not receive state upon rejoin (i.e. all partitions are degraded), then the value is not lost, but an inconsistency can remain.
- There is also a possibility of a stale read in a minor partition during this transition period, as an entry is still Available until the minor partition enters Degraded state.
- If one of the partitions was Available during the network partition, then the joining partition(s) are wiped out and state transfer occurs from the Available (primary) partition to the joining nodes.
- If all joining partitions were Degraded during the Split Brain, then no state transfer occurs during the merge. The combined cache is then Available only if the merging partitions contain a simple majority of the members in the latest stable topology (one with the highest topology ID) and has at least an owner for each segment (i.e. keys are not lost).
Warning
40.2. Split Brain Timing: Detecting a Split
When using the FD_ALL protocol, a given node becomes suspected after the following number of milliseconds has passed:

FD_ALL.timeout + FD_ALL.interval + VERIFY_SUSPECT.timeout + GMS.view_ack_collection_timeout
Important
40.3. Split Brain Timing: Recovering From a Split
3.1 * MERGE3.max_interval
10 * MERGE3.max_interval
Important
40.4. Detecting and Recovering from Successive Crashed Nodes
If owners is greater than 1, the cluster remains available and JBoss Data Grid attempts to create new replicas of the lost data. However, if additional nodes crash during this rebalancing process, it is possible that, for some entries, all copies of the data have left the cluster and therefore cannot be recovered.
Set a sufficiently large value for owners to ensure that even if a large number of nodes leave the cluster in rapid succession, JBoss Data Grid is able to rebalance the nodes to recover the lost data.
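The owners value is set on the cache definition. As a sketch with illustrative values:

```xml
<!-- owners="4" keeps four copies of each entry, so up to three nodes can
     crash in rapid succession without losing data. The cache name and
     the value 4 are illustrative only. -->
<distributed-cache name="resilient_cache" owners="4"/>
```

Higher owners values trade memory and write throughput for resilience, since every write must reach more nodes.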
40.5. Network Partition Recovery Examples
- A distributed four-node cluster with owners set to 3 at Section 40.5.1, “Distributed 4-Node Cache Example With 3 Owners”
- A distributed four-node cluster with owners set to 2 at Section 40.5.2, “Distributed 4-Node Cache Example With 2 Owners”
- A distributed five-node cluster with owners set to 3 at Section 40.5.3, “Distributed 5-Node Cache Example With 3 Owners”
- A replicated four-node cluster with owners set to 4 at Section 40.5.4, “Replicated 4-Node Cache Example With 4 Owners”
- A replicated five-node cluster with owners set to 5 at Section 40.5.5, “Replicated 5-Node Cache Example With 5 Owners”
- A replicated eight-node cluster with owners set to 8 at Section 40.5.6, “Replicated 8-Node Cache Example With 8 Owners”
40.5.1. Distributed 4-Node Cache Example With 3 Owners
In this example, a distributed cache with four nodes contains four data entries (k1, k2, k3, and k4). For this cache, owners equals 3, which means that each data entry must have three copies on various nodes in the cache.
Figure 40.1. Cache Before and After a Network Partition
After the network partition occurs, both partitions enter Degraded mode because neither has at least 3 (the value of owners) nodes left from the last stable view. As a result, none of the four entries (k1, k2, k3, and k4) are available for reads or writes. No new entries can be written in either degraded partition, as neither partition can store 3 copies of an entry.
Figure 40.2. Cache After Partitions Are Merged
After the partitions are merged, the cache becomes available again with all four entries (k1, k2, k3, and k4).
40.5.2. Distributed 4-Node Cache Example With 2 Owners
In this example, the distributed four-node cache has owners equal to 2, so the four data entries (k1, k2, k3, and k4) have two copies each in the cache.
Figure 40.3. Cache Before and After a Network Partition
In Partition 1, k1 is available for reads and writes because owners equals 2 and both copies of the entry remain in Partition 1. In Partition 2, k4 is available for reads and writes for the same reason. The entries k2 and k3 become unavailable in both partitions, as neither partition contains all copies of these entries. A new entry k5 can be written to a partition only if that partition were to own both copies of k5.
Figure 40.4. Cache After Partitions Are Merged
After the partitions are merged, the cache is once again fully available with all four entries (k1, k2, k3, and k4).
40.5.3. Distributed 5-Node Cache Example With 3 Owners
In this example, a distributed cache has five nodes and owners equal to 3.
Figure 40.5. Cache Before and After a Network Partition
After the network partition occurs, Partition 1 remains Available because it contains a majority of the nodes from the last stable topology, while Partition 2 enters Degraded mode because it contains fewer than owners nodes.
Figure 40.6. Partition 1 Rebalances and Another Entry is Added
Partition 1 rebalances so that each entry again has three copies (owners equals 3) in the cache. As a result, each of the three nodes contains a copy of every entry in the cache. Next, we add a new entry, k6, to the cache. Since the owners value is still 3, and there are three nodes in Partition 1, each node includes a copy of k6.
Figure 40.7. Cache After Partitions Are Merged
When the partitions are merged (owners=3), JBoss Data Grid rebalances the nodes so that the data entries are distributed between the nodes in the cache. The new combined cache becomes fully available.
40.5.4. Replicated 4-Node Cache Example With 4 Owners
In this example, a replicated cache has four nodes and owners equal to 4.
Figure 40.8. Cache Before and After a Network Partition
After the network partition occurs, k1, k2, k3, and k4 are unavailable for reads and writes because neither of the two partitions owns all copies of any of the four keys.
Figure 40.9. Cache After Partitions Are Merged
After the partitions are merged, the cache becomes fully available again with all four entries (k1, k2, k3, and k4).
40.5.5. Replicated 5-Node Cache Example With 5 Owners
In this example, a replicated cache has five nodes and owners equal to 5.
Figure 40.10. Cache Before and After a Network Partition
Figure 40.11. Both Partitions Are Merged Into One Cache
40.5.6. Replicated 8-Node Cache Example With 8 Owners
In this example, a replicated cache has eight nodes and owners equal to 8.
Figure 40.12. Cache Before and After a Network Partition
Figure 40.13. Partition 2 Further Splits into Partitions 2A and 2B
There are four potential resolutions for the caches from this scenario:
- Case 1: Partitions 2A and 2B Merge
- Case 2: Partition 1 and 2A Merge
- Case 3: Partition 1 and 2B Merge
- Case 4: Partition 1, Partition 2A, and Partition 2B Merge Together
Figure 40.14. Case 1: Partitions 2A and 2B Merge
Figure 40.15. Case 2: Partition 1 and 2A Merge
Figure 40.16. Case 3: Partition 1 and 2B Merge
Figure 40.17. Case 4: Partition 1, Partition 2A, and Partition 2B Merge Together
40.6. Configure Partition Handling
Enable partition handling declaratively as follows:
<distributed-cache name="distributed_cache" owners="2" l1-lifespan="20000">
    <partition-handling enabled="true"/>
</distributed-cache>
Enable partition handling declaratively in remote client-server mode by using the following configuration:
<subsystem xmlns="urn:infinispan:server:core:8.3" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default" statistics="true">
        <distributed-cache name="default" mode="SYNC" segments="20" owners="2"
                           remote-timeout="30000" start="EAGER">
            <partition-handling enabled="true" />
            <locking isolation="READ_COMMITTED" acquire-timeout="30000"
                     concurrency-level="1000" striping="false"/>
            <transaction mode="NONE"/>
        </distributed-cache>
    </cache-container>
</subsystem>
Appendix A. Recommended JGroups Values for JBoss Data Grid
A.1. Supported JGroups Protocols
Table A.1. Supported JGroups Protocols
Protocol | Details |
---|---|
TCP |
TCP/IP is a replacement transport for UDP in situations where IP multicast cannot be used, such as operations over a WAN where routers may discard IP multicast packets.
TCP is a transport protocol used to send unicast and multicast messages.
As IP multicasting cannot be used to discover initial members, another mechanism must be used to find initial membership.
Red Hat JBoss Data Grid's Hot Rod is a custom TCP client/server protocol.
|
UDP |
UDP is a transport protocol that uses IP multicasting to send messages to all members of a cluster, and UDP datagrams for unicast messages, which are sent to a single member.
When the UDP transport is started, it opens a unicast socket and a multicast socket. The unicast socket is used to send and receive unicast messages, while the multicast socket sends and receives multicast messages. The physical address of the channel will be the same as the address and port number of the unicast socket.
|
PING |
The PING protocol is used for the initial discovery of members. It is used to detect the coordinator, which is the oldest member, by multicasting PING requests to an IP multicast address.
Each member responds to the ping with a packet containing the coordinator's and their own address. After a specified number of milliseconds (N) or replies (M), the joiner determines the coordinator from the responses and sends it a JOIN request (handled by GMS). If there is no response, the joiner is considered the first member of the group.
PING differs from TCPPING because it uses dynamic discovery, which means that a member does not need to know in advance where the other cluster members are. PING uses the transport's IP multicasting abilities to send a discovery request to the cluster. As a result, PING requires UDP as the transport.
|
TCPPING | The TCPPING protocol uses a set of known members and pings them for discovery. This protocol has a static configuration. |
MPING | The MPING (Multicast PING) protocol uses IP multicast to discover the initial membership. It can be used with all transports, but is usually used in combination with TCP. |
S3_PING |
S3_PING is a discovery protocol that is ideal for use with Amazon's Elastic Compute Cloud (EC2), because EC2 does not allow multicast and therefore MPING cannot be used.
Each EC2 instance adds a small file to an S3 data container, known as a bucket. Each instance then reads the files in the bucket to discover the other members of the cluster.
|
JDBC_PING |
JDBC_PING is a discovery protocol that utilizes a shared database to store information regarding nodes in the cluster.
|
TCPGOSSIP |
TCPGOSSIP is a discovery protocol that uses one or more configured GossipRouter processes to store information about the nodes in the cluster.
|
MERGE3 | The MERGE3 protocol is available in JGroups 3.1 onwards. Unlike MERGE2, in MERGE3, all members periodically send an INFO message with their address (UUID), logical name, physical address and View ID. Periodically, each coordinator reviews the INFO details to ensure that there are no inconsistencies. |
FD_ALL | Used for failure detection, FD_ALL uses a simple heartbeat protocol. Each member maintains a table of all other members (except itself) and periodically multicasts a heartbeat. For example, when data or a heartbeat from P is received, the timestamp for P is set to the current time. Periodically, expired members are identified using the timestamp values. |
FD_SOCK | FD_SOCK is a failure detection protocol based on a ring of TCP sockets created between cluster members. Each cluster member connects to its neighbor (the last member connects to the first member), which forms a ring. Member B is suspected when its neighbor A detects an abnormal closing of its TCP socket (usually due to node B crashing). However, if member B is leaving gracefully, it informs member A and does not become suspected when it does exit. |
FD_HOST |
FD_HOST is a failure detection protocol that detects the crashing or hanging of entire hosts and suspects all cluster members of that host through ICMP ping messages or custom commands. FD_HOST does not detect the crashing or hanging of single members on the local hosts, but only checks whether all the other hosts in the cluster are live and available. It is therefore used in conjunction with other failure detection protocols such as FD_ALL and FD_SOCK. This protocol is typically used when multiple cluster members are running on the same physical box.
The FD_HOST protocol is supported on Windows for JBoss Data Grid. The
cmd parameter must be set to ping.exe and the ping count must be specified.
|
VERIFY_SUSPECT | The VERIFY_SUSPECT protocol verifies whether a suspected member is dead by pinging the member before excluding it. If the member responds, the suspect message is discarded. |
NAKACK2 |
The NAKACK2 protocol is a successor to the NAKACK protocol and was introduced in JGroups 3.1.
The NAKACK2 protocol is used for multicast messages and uses negative acknowledgements (NAKs). Each message is tagged with a sequence number. The receiver tracks the sequence numbers and delivers the messages in order. When a gap in the sequence numbers is detected, the receiver asks the sender to retransmit the missing message.
|
UNICAST3 |
The UNICAST3 protocol provides reliable, FIFO (First In, First Out) delivery for point-to-point messages between a sender and a receiver: messages are sent in a numbered sequence, so no message sent by a sender is lost.
UNICAST3 uses positive acks for retransmission. For example, sender A keeps resending message M until receiver B returns an ack indicating successful delivery, until B leaves the cluster, or until A crashes.
|
STABLE | The STABLE protocol is a garbage collector for messages that have been seen by all members in a cluster. Each member stores all messages because retransmission may be required. A message can only be removed from the retransmission buffers when all members have seen it. The STABLE protocol periodically gossips the highest and lowest message sequence numbers it has seen. The lowest value is used to compute the min (the lowest sequence number across all members), and messages with a sequence number below the min value can be discarded. |
GMS | The GMS protocol is the group membership protocol. This protocol handles joins/leaves/crashes (suspicions) and emits new views accordingly. |
MFC | MFC is the Multicast version of the flow control protocol. |
UFC | UFC is the Unicast version of the flow control protocol. |
FRAG2 | The FRAG2 protocol fragments large messages into smaller ones and then sends the smaller messages. At the receiver side, the smaller fragments are reassembled into larger, complete messages and delivered to the application. FRAG2 is used for both multicast and unicast messages. |
SYM_ENCRYPT |
JGroups includes the SYM_ENCRYPT protocol to provide encryption for cluster traffic. By default, encryption only encrypts the message body; it does not encrypt message headers. To encrypt the entire message, including all headers as well as destination and source addresses, the property encrypt_entire_message must be true. When defining this protocol it should be placed directly under NAKACK2.
The SYM_ENCRYPT layer is used to encrypt and decrypt communication in JGroups by defining a secret key in a keystore.
Each message is identified as encrypted by a specific encryption header, and an MD5 digest identifies the version of the key being used to encrypt and decrypt messages.
|
ASYM_ENCRYPT |
JGroups includes the ASYM_ENCRYPT protocol to provide encryption for cluster traffic. By default, encryption only encrypts the message body; it does not encrypt message headers. To encrypt the entire message, including all headers as well as destination and source addresses, the property encrypt_entire_message must be true. When defining this protocol it should be placed directly under NAKACK2.
The ASYM_ENCRYPT layer is used to encrypt and decrypt communication in JGroups by having a coordinator generate a secret key using defined algorithms and key sizes.
Each message is identified as encrypted by a specific encryption header, and an MD5 digest identifies the version of the key being used to encrypt and decrypt messages.
|
SASL | The SASL (Simple Authentication and Security Layer) protocol is a framework that provides authentication and data security services in connection-oriented protocols using replaceable mechanisms. Additionally, SASL provides a structured interface between protocols and mechanisms. |
RELAY2 |
The RELAY protocol bridges two remote clusters by creating a connection between one node in each site. This allows multicast messages sent out in one site to be relayed to the other and vice versa.
JGroups includes the RELAY2 protocol, which is used for communication between sites in Red Hat JBoss Data Grid's Cross-Site Replication.
The RELAY2 protocol works similarly to RELAY, but differs from the original RELAY protocol in several respects.
|
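The protocols described above are composed into a stack, ordered from the transport upwards. As an illustrative sketch only (the discovery protocol and attribute values here are assumptions, not a shipped configuration), a minimal TCP-based stack in the standalone JGroups XML format might look like:

```xml
<!-- Illustrative JGroups stack sketch; protocol order follows the descriptions above -->
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7600"/>                    <!-- transport -->
    <MPING/>                                   <!-- discovery (assumed for this sketch) -->
    <MERGE3/>                                  <!-- detects and merges cluster partitions -->
    <FD_SOCK/>                                 <!-- ring-based failure detection -->
    <FD_ALL/>                                  <!-- heartbeat-based failure detection -->
    <VERIFY_SUSPECT/>                          <!-- double-checks suspected members -->
    <pbcast.NAKACK2 use_mcast_xmit="false"/>   <!-- reliable multicast delivery -->
    <UNICAST3/>                                <!-- reliable unicast delivery -->
    <pbcast.STABLE/>                           <!-- garbage-collects delivered messages -->
    <pbcast.GMS/>                              <!-- group membership -->
    <MFC/>                                     <!-- multicast flow control -->
    <FRAG2/>                                   <!-- fragments large messages -->
</config>
```

Each protocol processes messages and passes them up or down the stack, which is why placement (for example, SYM_ENCRYPT directly under NAKACK2) matters.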
A.2. TCP Default and Recommended Values
Note
- Values in JGroups Default Value indicate values that are configured internally to JGroups, but may be overridden by a custom configuration file or by a JGroups configuration file shipped with JBoss Data Grid.
- Values in JBoss Data Grid Configured Values indicate values that are in use by default when using one of the configuration files for JGroups as shipped with JBoss Data Grid. It is recommended to use these values when custom configuration files for JGroups are in use with JBoss Data Grid.
Table A.2. Recommended and Default Values for TCP
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
bind_addr | Any non-loopback | Set address on specific interface |
bind_port | Any free port | Set specific port |
loopback | true | Same as default |
port_range | 50 | Set based on desired range of ports |
recv_buf_size | 150,000 | Same as default |
send_buf_size | 150,000 | 640,000 |
use_send_queues | true | Same as default |
sock_conn_timeout | 2,000 | 300 |
max_bundle_size | 64,000 | 64,000 |
enable_diagnostics | true | false |
thread_pool.enabled | true | Same as default |
thread_pool.min_threads | 2 | This should equal the number of nodes |
thread_pool.max_threads | 30 | This should be higher than thread_pool.min_threads . For example, for a smaller grid (2-10 nodes), set this value to twice the number of nodes, but for a larger grid (20 or more nodes), the ratio should be lower. As an example, if a grid contains 20 nodes, set this value to 25 and if the grid contains 100 nodes, set the value to 110. |
thread_pool.keep_alive_time | 30,000 | 60,000 |
thread_pool.queue_enabled | true | false |
thread_pool.queue_max_size | 500 | None, queue should be disabled |
thread_pool.rejection_policy | Discard | Same as default |
internal_thread_pool.enabled | true | Same as default |
internal_thread_pool.min_threads | 2 | 5 |
internal_thread_pool.max_threads | 4 | 20 |
internal_thread_pool.keep_alive_time | 30,000 | 60,000 |
internal_thread_pool.queue_enabled | true | false |
internal_thread_pool.rejection_policy | Discard | Abort |
oob_thread_pool.enabled | true | Same as default |
oob_thread_pool.min_threads | 2 | 20 or higher |
oob_thread_pool.max_threads | 10 | 200 or higher, depending on the load |
oob_thread_pool.keep_alive_time | 30,000 | 60,000 |
oob_thread_pool.queue_enabled | true | false |
oob_thread_pool.queue_max_size | 500 | None, queue should be disabled |
oob_thread_pool.rejection_policy | Discard | Same as default |
Note
The pbcast.GMS join_timeout value indicates the timeout period instead.
See Section 31.2, “S3_PING Configuration Options” for details about configuring S3_PING for JBoss Data Grid.
See Section A.5, “TCPGOSSIP Configuration Options” for details about configuring TCPGOSSIP for JBoss Data Grid.
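As a sketch of how several of the recommended values in Table A.2 are expressed on the TCP element of a JGroups configuration file (the bind address and the thread-pool minimum/maximum here are placeholder assumptions for a four-node grid; JGroups uses dotted attribute names for thread-pool settings):

```xml
<!-- Sketch only: bind_addr and thread_pool sizing are deployment-specific assumptions -->
<TCP bind_addr="192.168.1.10"
     bind_port="7600"
     send_buf_size="640000"
     sock_conn_timeout="300"
     enable_diagnostics="false"
     thread_pool.min_threads="4"
     thread_pool.max_threads="8"
     thread_pool.keep_alive_time="60000"
     thread_pool.queue_enabled="false"
     internal_thread_pool.min_threads="5"
     internal_thread_pool.max_threads="20"
     internal_thread_pool.keep_alive_time="60000"
     internal_thread_pool.queue_enabled="false"
     internal_thread_pool.rejection_policy="Abort"
     oob_thread_pool.min_threads="20"
     oob_thread_pool.max_threads="200"
     oob_thread_pool.keep_alive_time="60000"
     oob_thread_pool.queue_enabled="false"/>
```

Here thread_pool.min_threads equals the assumed node count and thread_pool.max_threads is twice that, following the sizing guidance for small grids in Table A.2.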
Table A.3. Recommended Values for MPING
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
bind_addr | Any non-loopback | Set address on specific interface |
break_on_coord_rsp | true | Same as default |
mcast_addr | 230.5.6.7 | Same as default |
mcast_port | 7555 | Same as default |
ip_ttl | 8 | 2 |
Note
The pbcast.GMS join_timeout value indicates the timeout period instead.
Table A.4. Recommended Values for MERGE3
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
min_interval | 1,000 | 10,000 |
max_interval | 10,000 | 30,000 |
Table A.5. Recommended Values for FD_SOCK
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
client_bind_port | 0 (randomly selects a port and uses it) | Same as default |
get_cache_timeout | 1000 milliseconds | Same as default |
keep_alive | true | Same as default |
num_tries | 3 | Same as default |
start_port | 0 (randomly selects a port and uses it) | Same as default |
suspect_msg_interval | 5000 milliseconds | Same as default |
Table A.6. Recommended Values for FD_ALL
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
timeout | 40,000 | 60,000. The FD_ALL timeout value is set to two times the longest possible stop-the-world garbage collection pause in the CMS garbage collector. In a well-tuned JVM, the longest pause is proportional to heap size and should not exceed 1 second per GB of heap. For example, an 8GB heap should not have a pause longer than 8 seconds, so the FD_ALL timeout value must be set to at least 16 seconds. If longer garbage collection pauses are expected, then this timeout value should be increased to avoid false failure detection on a node. |
interval | 8,000 | 15,000. The FD_ALL interval value must be at least four times smaller than FD_ALL's timeout value. |
timeout_check_interval | 2,000 | 5,000 |
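The sizing rule above can be applied directly: with an 8 GB heap the worst-case pause is about 8 seconds, so the minimum timeout is 2 × 8 = 16 seconds; the recommended 60,000 ms leaves additional headroom, and the interval stays within a quarter of the timeout (15,000 ≤ 60,000 / 4). A sketch of the corresponding fragment:

```xml
<!-- timeout >= 2 x worst-case stop-the-world GC pause; interval <= timeout / 4 -->
<FD_ALL timeout="60000"
        interval="15000"
        timeout_check_interval="5000"/>
```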
Table A.7. Recommended Values for FD_HOST
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
check_timeout | 3,000 | 5,000 |
cmd | InetAddress.isReachable() (ICMP ping) | - |
interval | 20,000 | 15,000. The interval value for FD_HOST must be four times smaller than FD_HOST's timeout value. |
timeout | 60,000 | 60,000 |
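A sketch of an FD_HOST fragment using the values above; the cmd attribute is omitted so the InetAddress.isReachable() default applies (on Windows, cmd must instead name ping.exe with a ping count, as noted in the protocol description):

```xml
<!-- FD_HOST suspects whole hosts; combine with FD_ALL or FD_SOCK for per-member detection -->
<FD_HOST check_timeout="5000"
         interval="15000"
         timeout="60000"/>
```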
Table A.8. Recommended Values for VERIFY_SUSPECT
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
timeout | 2,000 | 5,000 |
Table A.9. Recommended Values for pbcast.NAKACK2
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
use_mcast_xmit | true | false |
xmit_interval | 1,000 | Same as default |
xmit_table_num_rows | 50 | 50 |
xmit_table_msgs_per_row | 10,000 | 1,024 |
xmit_table_max_compaction_time | 10,000 | 30,000 |
max_msg_batch_size | 100 | Same as default |
resend_last_seqno | false | true |
Table A.10. Recommended Values for UNICAST3
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
xmit_interval | 500 | Same as default |
xmit_table_num_rows | 100 | 50 |
xmit_table_msgs_per_row | 10,000 | 1,024 |
xmit_table_max_compaction_time | 600,000 | 30,000 |
max_msg_batch_size | 500 | 100 |
conn_close_timeout | 60,000 | No recommended value. |
conn_expiry_timeout | 120,000 | 0 |
Table A.11. Recommended Values for pbcast.STABLE
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
stability_delay | 6,000 | 500 |
desired_avg_gossip | 20,000 | 5,000 |
max_bytes | 2,000,000 | 1,000,000 |
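Taken together, the recommended values from Tables A.9 to A.11 can be sketched as the reliable-delivery portion of a stack (values copied from the tables; this is a fragment, not a complete stack):

```xml
<pbcast.NAKACK2 use_mcast_xmit="false"
                xmit_table_msgs_per_row="1024"
                xmit_table_max_compaction_time="30000"
                resend_last_seqno="true"/>
<UNICAST3 xmit_table_num_rows="50"
          xmit_table_msgs_per_row="1024"
          xmit_table_max_compaction_time="30000"
          max_msg_batch_size="100"
          conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="500"
               desired_avg_gossip="5000"
               max_bytes="1000000"/>
```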
Table A.12. Recommended Values for pbcast.GMS
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
print_local_addr | true | false |
join_timeout | 5,000 | 15,000 |
view_bundling | true | Same as default |
Table A.13. Recommended Values for MFC
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
max_credits | 500,000 | 2,000,000 |
min_threshold | 0.40 | Same as default |
Table A.14. Recommended Values for FRAG2
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
frag_size | 60,000 | Same as default |
Table A.15. Recommended Values for SYM_ENCRYPT
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
sym_algorithm | AES | - |
sym_keylength | 128 | - |
sym_provider | Bouncy Castle Provider | - |
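A sketch of a SYM_ENCRYPT entry, placed directly under pbcast.NAKACK2 in the stack as described in the protocol overview above. The keystore file name, password, and alias shown here are placeholder assumptions (the keystore-related attribute names assume the JGroups SYM_ENCRYPT implementation), and encrypt_entire_message is enabled as described earlier:

```xml
<!-- keystore_name, store_password, and alias are illustrative placeholders -->
<SYM_ENCRYPT sym_algorithm="AES"
             sym_keylength="128"
             encrypt_entire_message="true"
             keystore_name="jgroups.keystore"
             store_password="changeit"
             alias="clusterKey"/>
```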
Table A.16. Recommended Values for ASYM_ENCRYPT
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
asym_algorithm | RSA | - |
asym_keylength | 512 | - |
asym_provider | Bouncy Castle Provider | - |
change_keys_on_leave | false | - |
See the Red Hat JBoss Data Grid Developer Guide's User Authentication over Hot Rod Using SASL section for details.
See Chapter 35, Set Up Cross-Datacenter Replication for details.
A.3. UDP Default and Recommended Values
Note
- Values in JGroups Default Value indicate values that are configured internally to JGroups, but may be overridden by a custom configuration file or by a JGroups configuration file shipped with JBoss Data Grid.
- Values in JBoss Data Grid Configured Values indicate values that are in use by default when using one of the configuration files for JGroups as shipped with JBoss Data Grid. It is recommended to use these values when custom configuration files for JGroups are in use with JBoss Data Grid.
Table A.17. Recommended Values for UDP
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
bind_addr | Any non-loopback | Set address on specific interface |
bind_port | Any free port | Set specific port |
loopback | true | true |
port_range | 50 | Set based on desired range of ports |
mcast_addr | 228.8.8.8 | Same as default |
mcast_port | 7600 | Same as default |
tos | 8 | Same as default |
ucast_recv_buf_size | 64,000 | 20,000,000 |
ucast_send_buf_size | 100,000 | 1,000,000 |
mcast_recv_buf_size | 500,000 | 25,000,000 |
mcast_send_buf_size | 100,000 | 1,000,000 |
ip_ttl | 8 | 2 |
thread_naming_pattern | cl | pl |
max_bundle_size | 64,000 | Same as default |
enable_diagnostics | true | false |
thread_pool.enabled | true | Same as default |
thread_pool.min_threads | 2 | This should equal the number of nodes. |
thread_pool.max_threads | 30 | This should be higher than thread_pool.min_threads. For example, for a smaller grid (2-10 nodes), set this value to twice the number of nodes, but for a larger grid (20 or more nodes), the ratio should be lower. As an example, if a grid contains 20 nodes, set this value to 25 and if the grid contains 100 nodes, set the value to 110. |
thread_pool.keep_alive_time | 30,000 | 60,000 |
thread_pool.queue_enabled | true | false |
thread_pool.queue_max_size | 500 | None, queue should be disabled |
thread_pool.rejection_policy | Discard | Same as default |
internal_thread_pool.enabled | true | Same as default |
internal_thread_pool.min_threads | 2 | 5 |
internal_thread_pool.max_threads | 4 | 20 |
internal_thread_pool.keep_alive_time | 30,000 | 60,000 |
internal_thread_pool.queue_enabled | true | false |
internal_thread_pool.rejection_policy | Discard | Abort |
oob_thread_pool.enabled | true | Same as default |
oob_thread_pool.min_threads | 2 | 20 or higher |
oob_thread_pool.max_threads | 10 | 200 or higher based on the load |
oob_thread_pool.keep_alive_time | 30,000 | 60,000 |
oob_thread_pool.queue_enabled | true | false |
oob_thread_pool.queue_max_size | 500 | None, queue should be disabled |
oob_thread_pool.rejection_policy | Discard | Same as default |
Note
The join_timeout value indicates the timeout period instead.
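A sketch of the UDP transport element using the configured values from Table A.17; the multicast address and port match the defaults, while ip_ttl="2" keeps multicast traffic within nearby network hops:

```xml
<UDP mcast_addr="228.8.8.8"
     mcast_port="7600"
     ip_ttl="2"
     ucast_recv_buf_size="20000000"
     ucast_send_buf_size="1000000"
     mcast_recv_buf_size="25000000"
     mcast_send_buf_size="1000000"
     enable_diagnostics="false"/>
```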
Table A.18. Recommended Values for MERGE3
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
min_interval | 1,000 | 10,000 |
max_interval | 10,000 | 30,000 |
Table A.19. Recommended Values for FD_SOCK
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
client_bind_port | 0 (randomly selects a port and uses it) | Same as default |
get_cache_timeout | 1000 milliseconds | Same as default |
keep_alive | true | Same as default |
num_tries | 3 | Same as default |
start_port | 0 (randomly selects a port and uses it) | Same as default |
suspect_msg_interval | 5000 milliseconds | Same as default |
Table A.20. Recommended Values for FD_ALL
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
timeout | 40,000 | 60,000. The FD_ALL timeout value is set to two times the longest possible stop-the-world garbage collection pause in the CMS garbage collector. In a well-tuned JVM, the longest pause is proportional to heap size and should not exceed 1 second per GB of heap. For example, an 8GB heap should not have a pause longer than 8 seconds, so the FD_ALL timeout value must be set to at least 16 seconds. If longer garbage collection pauses are expected, then this timeout value should be increased to avoid false failure detection on a node. |
interval | 8,000 | 15,000. The FD_ALL interval value must be at least four times smaller than FD_ALL's timeout value. |
timeout_check_interval | 2,000 | 5,000 |
Table A.21. Recommended Values for FD_HOST
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
check_timeout | 3,000 | 5,000 |
cmd | InetAddress.isReachable() (ICMP ping) | - |
interval | 20,000 | 15,000. The interval value for FD_HOST must be four times smaller than FD_HOST's timeout value. |
timeout | 60,000 | 60,000 |
Table A.22. Recommended Values for VERIFY_SUSPECT
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
timeout | 2,000 | 5,000 |
Table A.23. Recommended Values for pbcast.NAKACK2
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
use_mcast_xmit | true | false |
xmit_interval | 1,000 | Same as default |
xmit_table_num_rows | 50 | 50 |
xmit_table_msgs_per_row | 10,000 | 1,024 |
xmit_table_max_compaction_time | 10,000 | 30,000 |
max_msg_batch_size | 100 | Same as default |
resend_last_seqno | false | true |
Table A.24. Recommended Values for UNICAST3
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
xmit_interval | 500 | Same as default |
xmit_table_num_rows | 100 | 50 |
xmit_table_msgs_per_row | 10,000 | 1,024 |
xmit_table_max_compaction_time | 600,000 | 30,000 |
max_msg_batch_size | 500 | 100 |
conn_close_timeout | 60,000 | No recommended value |
conn_expiry_timeout | 120,000 | 0 |
Table A.25. Recommended Values for pbcast.STABLE
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
stability_delay | 6,000 | 500 |
desired_avg_gossip | 20,000 | 5,000 |
max_bytes | 2,000,000 | 1,000,000 |
Table A.26. Recommended Values for pbcast.GMS
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
print_local_addr | true | false |
join_timeout | 5,000 | 15,000 |
view_bundling | true | Same as default |
Table A.27. Recommended Values for UFC
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
max_credits | 500,000 | 2,000,000 |
min_threshold | 0.40 | Same as default |
Table A.28. Recommended Values for MFC
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
max_credits | 500,000 | 2,000,000 |
min_threshold | 0.40 | Same as default |
Table A.29. Recommended Values for FRAG2
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
frag_size | 60,000 | Same as default |
Table A.30. Recommended Values for SYM_ENCRYPT
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
sym_algorithm | AES | - |
sym_keylength | 128 | - |
sym_provider | Bouncy Castle Provider | - |
Table A.31. Recommended Values for ASYM_ENCRYPT
Parameter | JGroups Default Value | JBoss Data Grid Configured Values |
---|---|---|
asym_algorithm | RSA | - |
asym_keylength | 512 | - |
asym_provider | Bouncy Castle Provider | - |
change_keys_on_leave | false | - |
See the Red Hat JBoss Data Grid Developer Guide's User Authentication over Hot Rod Using SASL section for details.
See Chapter 35, Set Up Cross-Datacenter Replication for details.
A.4. The TCPGOSSIP JGroups Protocol
The TCPGOSSIP discovery protocol uses one or more configured GossipRouter processes to store information about the nodes in the cluster.
Important
The GossipRouter is included in the JGroups jar file, and must be running before any nodes are started. This process may be started by pointing to the GossipRouter
class in the JGroups jar file included with JBoss Data Grid:
java -classpath jgroups-${jgroups.version}.jar org.jgroups.stack.GossipRouter -bindaddress IP_ADDRESS -port PORT
In Library Mode the JGroups XML file should be used to configure TCPGOSSIP; however, there is no TCPGOSSIP configuration included by default. It is recommended to use one of the preexisting files specified in Section 30.2.2, “Pre-Configured JGroups Files” and then adjust the configuration to include TCPGOSSIP. For instance, default-configs/default-jgroups-ec2.xml could be selected, the S3_PING protocol removed, and the following block added in its place:
<TCPGOSSIP initial_hosts="IP_ADDRESS_0[PORT_0],IP_ADDRESS_1[PORT_1]" />
In Remote Client-Server Mode a stack may be defined for TCPGOSSIP in the jgroups subsystem of the server's configuration file. The following configuration snippet contains an example of this:
<subsystem xmlns="urn:infinispan:server:jgroups:8.0" default-stack="${jboss.default.jgroups.stack:tcpgossip}">
  [...]
  <stack name="tcpgossip">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="TCPGOSSIP">
      <property name="initial_hosts">IP_ADDRESS_0[PORT_0],IP_ADDRESS_1[PORT_1]</property>
    </protocol>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    <protocol type="FD_ALL"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2">
      <property name="use_mcast_xmit">false</property>
    </protocol>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
  </stack>
  [...]
</subsystem>
A.5. TCPGOSSIP Configuration Options
The following TCPGOSSIP specific properties may be configured:
- initial_hosts - Comma delimited list of hosts to be contacted for initial membership.
- reconnect_interval - Interval (in milliseconds) by which a disconnected node attempts to reconnect to the GossipRouter.
- sock_conn_timeout - Maximum time (in milliseconds) allowed for socket creation. Defaults to 1000.
- sock_read_timeout - Maximum time (in milliseconds) to block on a read. A value of 0 will block forever.
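Combining the options above, a TCPGOSSIP element might look like the following sketch (the reconnect and read timeout values are illustrative assumptions; the host list placeholders follow the earlier example):

```xml
<!-- reconnect_interval and sock_read_timeout values are illustrative -->
<TCPGOSSIP initial_hosts="IP_ADDRESS_0[PORT_0],IP_ADDRESS_1[PORT_1]"
           reconnect_interval="10000"
           sock_conn_timeout="1000"
           sock_read_timeout="3000"/>
```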
A.6. JBoss Data Grid JGroups Configuration Files
The following pre-configured JGroups files are packaged in infinispan-embedded-${infinispan.version}.jar, found with the Library mode distribution:
default-configs/default-jgroups-ec2.xml
default-configs/default-jgroups-google.xml
default-configs/default-jgroups-tcp.xml
default-configs/default-jgroups-udp.xml
Appendix B. Connecting with JConsole
B.1. Connect to JDG via JConsole
Procedure B.1. Add Management User to JBoss Data Grid
- Navigate to the bin directory:
  cd $JDG_HOME/bin
- Execute the add-user.sh script:
  ./add-user.sh
- Accept the default option of ManagementUser by pressing return.
- Accept the default option of ManagementRealm by pressing return.
- Enter the desired username. In this example jmxadmin will be used.
- Enter and confirm the password.
- Accept the default option of no groups by pressing return.
- Confirm that the desired user will be added to the ManagementRealm by entering yes.
- Enter no as this user will not be used for connections between processes.
The following image shows an example execution run.
Figure B.1. Execution of add-user.sh
By default JBoss Data Grid starts with the management interface bound to 127.0.0.1. In order to connect remotely, this interface must be bound to an IP address that is visible on the network. Either of the following options will correct this:
- Option 1: Runtime - Adjust the jboss.bind.address.management property on startup to specify a new IP address. In the following example JBoss Data Grid starts with this property bound to 192.168.122.5:
  ./standalone.sh ... -Djboss.bind.address.management=192.168.122.5
- Option 2: Configuration - Adjust the jboss.bind.address.management property in the configuration file. This property is found in the interfaces subsystem. A snippet of the configuration file, with the IP adjusted to 192.168.122.5, is provided below:
  <interfaces>
    <interface name="management">
      <inet-address value="${jboss.bind.address.management:192.168.122.5}"/>
    </interface>
    [...]
  </interfaces>
A jconsole.sh
script is provided in the $JDG_HOME/bin
directory. Executing this script will launch JConsole.
Procedure B.2. Connecting to a remote JBoss Data Grid instance using JConsole
- Execute the $JDG_HOME/bin/jconsole.sh script. This will result in the following window appearing:
  Figure B.2. JConsole
- Select Remote Process.
- Enter service:jmx:remoting-jmx://$IP:9999 in the text area.
- Enter the username and password created with the add-user.sh script.
- Click Connect to initiate the connection.
- Once connected, ensure that the cache-related nodes may be viewed. The following screenshot shows such a node.
Figure B.3. JConsole: Showing a Cache
Appendix C. JMX MBeans in Red Hat JBoss Data Grid
C.1. Activation
org.infinispan.eviction.ActivationManagerImpl
Table C.1. Attributes
Name | Description | Type | Writable |
---|---|---|---|
activations | Number of activation events. | String | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
Table C.2. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
C.2. Cache
org.infinispan.CacheImpl
Table C.3. Attributes
Name | Description | Type | Writable |
---|---|---|---|
cacheName | Returns the cache name. | String | No |
cacheStatus | Returns the cache status. | String | No |
configurationAsProperties | Returns the cache configuration in form of properties. | Properties | No |
version | Returns the version of Infinispan. | String | No |
cacheAvailability | Returns the cache availability. | String | Yes |
Table C.4. Operations
Name | Description | Signature |
---|---|---|
start | Starts the cache. | void start() |
stop | Stops the cache. | void stop() |
clear | Clears the cache. | void clear() |
C.3. CacheContainerStats
org.infinispan.stats.impl.CacheContainerStatsImpl
Table C.5. Attributes
Name | Description | Type | Writable |
---|---|---|---|
averageReadTime | Cache container total average number of milliseconds for all read operations in this cache container. | long | No |
averageRemoveTime | Cache container total average number of milliseconds for all remove operations in this cache container. | long | No |
averageWriteTime | Cache container total average number of milliseconds for all write operations in this cache container. | long | No |
evictions | Cache container total number of cache eviction operations. | long | No |
hitRatio | Cache container total percentage hit/(hit+miss) ratio for this cache. | double | No |
hits | Cache container total number of cache attribute hits. | long | No |
misses | Cache container total number of cache attribute misses. | long | No |
numberOfEntries | Cache container total number of entries currently in all caches from this cache container. | int | No |
readWriteRatio | Cache container read/writes ratio in all caches from this cache container. | double | No |
removeHits | Cache container total number of removal hits. | double | No |
removeMisses | Cache container total number of cache removals where keys were not found. | long | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
stores | Cache container total number of cache attribute put operations. | long | No |
C.4. CacheLoader
org.infinispan.interceptors.CacheLoaderInterceptor
Table C.6. Attributes
Name | Description | Type | Writable |
---|---|---|---|
cacheLoaderLoads | Number of entries loaded from the cache store. | long | No |
cacheLoaderMisses | Number of entries that did not exist in cache store. | long | No |
stores | Returns a collection of cache loader types which are configured and enabled. | Collection | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
Table C.7. Operations
Name | Description | Signature |
---|---|---|
disableStore | Disable all cache loaders of a given type, where type is a fully qualified class name of the cache loader to disable. | void disableStore(String storeType) |
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
C.5. CacheManager
org.infinispan.manager.DefaultCacheManager
Table C.8. Attributes
Name | Description | Type | Writable |
---|---|---|---|
cacheManagerStatus | The status of the cache manager instance. | String | No |
clusterMembers | Lists members in the cluster. | String | No |
clusterName | Cluster name. | String | No |
clusterSize | Size of the cluster in the number of nodes. | int | No |
createdCacheCount | The total number of created caches, including the default cache. | String | No |
definedCacheCount | The total number of defined caches, excluding the default cache. | String | No |
definedCacheNames | The defined cache names and their statuses. The default cache is not included in this representation. | String | No |
name | The name of this cache manager. | String | No |
nodeAddress | The network address associated with this instance. | String | No |
physicalAddresses | The physical network addresses associated with this instance. | String | No |
runningCacheCount | The total number of running caches, including the default cache. | String | No |
version | Infinispan version. | String | No |
globalConfigurationAsProperties | Global configuration properties | Properties | No |
Table C.9. Operations
Name | Description | Signature |
---|---|---|
startCache | Starts the default cache associated with this cache manager. | void startCache() |
startCache | Starts a named cache from this cache manager. | void startCache (String p0) |
C.6. CacheStore
org.infinispan.interceptors.CacheWriterInterceptor
Table C.10. Attributes
Name | Description | Type | Writable |
---|---|---|---|
writesToTheStores | Number of writes to the store. | long | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
Table C.11. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
C.7. ClusterCacheStats
org.infinispan.stats.impl.ClusterCacheStatsImpl
Table C.12. Attributes
Name | Description | Type | Writable |
---|---|---|---|
activations | The total number of activations in the cluster. | long | No |
averageReadTime | Cluster wide total average number of milliseconds for a read operation on the cache. | long | No |
averageRemoveTime | Cluster wide total average number of milliseconds for a remove operation in the cache. | long | No |
averageWriteTime | Cluster wide average number of milliseconds for a write operation in the cache. | long | No |
cacheLoaderLoads | The total number of cacheloader load operations in the cluster. | long | No |
cacheLoaderMisses | The total number of cacheloader load misses in the cluster. | long | No |
evictions | Cluster wide total number of cache eviction operations. | long | No |
hitRatio | Cluster wide total percentage hit/(hit+miss) ratio for this cache. | double | No |
hits | Cluster wide total number of cache hits. | long | No |
invalidations | The total number of invalidations in the cluster. | long | No |
misses | Cluster wide total number of cache attribute misses. | long | No |
numberOfEntries | Cluster wide total number of entries currently in the cache. | int | No |
numberOfLocksAvailable | Total number of exclusive locks available in the cluster. | int | No |
numberOfLocksHeld | The total number of locks held in the cluster. | int | No |
passivations | The total number of passivations in the cluster. | long | No |
readWriteRatio | Cluster wide read/writes ratio for the cache. | double | No |
removeHits | Cluster wide total number of cache removal hits. | double | No |
removeMisses | Cluster wide total number of cache removals where keys were not found. | long | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
storeWrites | The total number of cachestore store operations in the cluster. | long | No |
stores | Cluster wide total number of cache attribute put operations. | long | No |
timeSinceStart | Number of seconds since the first cache node started. | long | No |
Table C.13. Operations
Name | Description | Signature |
---|---|---|
setStaleStatsTreshold | Sets the threshold for cluster wide stats refresh (in milliseconds). | void setStaleStatsTreshold(long staleStatsThreshold) |
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
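Attributes such as those listed above are read over JMX with a standard MBeanServer lookup. The sketch below shows the generic read pattern; it runs against the local platform MBeanServer with a standard platform MBean so it works anywhere, and the Data Grid ObjectName in the comment is an assumption that must be adapted to your cache and cache manager names.

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxStatsReader {

    // Generic JMX attribute read; works for any attribute in the tables
    // above once the ObjectName matches your deployment.
    static Object readAttribute(MBeanServer server, String objectName, String attribute)
            throws Exception {
        return server.getAttribute(new ObjectName(objectName), attribute);
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Demonstrated against a standard platform MBean so the example runs
        // anywhere. For Data Grid, an ObjectName along these (assumed) lines
        // would be used instead:
        //   jboss.infinispan:type=Cache,name="mycache(dist_sync)",
        //       manager="clustered",component=ClusterCacheStats
        long uptime = (Long) readAttribute(server, "java.lang:type=Runtime", "Uptime");
        System.out.println("Uptime ms: " + uptime);
    }
}
```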
C.8. DeadlockDetectingLockManager
org.infinispan.util.concurrent.locks.DeadlockDetectingLockManager
Table C.14. Attributes
Name | Description | Type | Writable |
---|---|---|---|
detectedLocalDeadlocks | Number of local transactions that were rolled back due to deadlocks. | long | No |
detectedRemoteDeadlocks | Number of remote transactions that were rolled back due to deadlocks. | long | No |
overlapWithNotDeadlockAwareLockOwners | Number of situations when we try to determine a deadlock and the other lock owner is NOT a transaction. In this scenario we cannot run the deadlock detection mechanism. | long | No |
totalNumberOfDetectedDeadlocks | Total number of local detected deadlocks. | long | No |
concurrencyLevel | The concurrency level that the MVCC Lock Manager has been configured with. | int | No |
numberOfLocksAvailable | The number of exclusive locks that are available. | int | No |
numberOfLocksHeld | The number of exclusive locks that are held. | int | No |
Table C.15. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
C.9. DistributionManager
org.infinispan.distribution.DistributionManagerImpl
Note
Table C.16. Operations
Name | Description | Signature |
---|---|---|
isAffectedByRehash | Determines whether a given key is affected by an ongoing rehash. | boolean isAffectedByRehash(Object p0) |
isLocatedLocally | Indicates whether a given key is local to this instance of the cache. Only works with String keys. | boolean isLocatedLocally(String p0) |
locateKey | Locates an object in a cluster. Only works with String keys. | List locateKey(String p0) |
C.10. Interpreter
org.infinispan.cli.interpreter.Interpreter
Table C.17. Attributes
Name | Description | Type | Writable |
---|---|---|---|
cacheNames | Retrieves a list of caches for the cache manager. | String[] | No |
Table C.18. Operations
Name | Description | Signature |
---|---|---|
createSessionId | Creates a new interpreter session. | String createSessionId(String cacheName) |
execute | Parses and executes IspnCliQL statements. | String execute(String p0, String p1) |
C.11. Invalidation
org.infinispan.interceptors.InvalidationInterceptor
Table C.19. Attributes
Name | Description | Type | Writable |
---|---|---|---|
invalidations | Number of invalidations. | long | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
Table C.20. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
C.12. LockManager
org.infinispan.util.concurrent.locks.LockManagerImpl
Table C.21. Attributes
Name | Description | Type | Writable |
---|---|---|---|
concurrencyLevel | The concurrency level that the MVCC Lock Manager has been configured with. | int | No |
numberOfLocksAvailable | The number of exclusive locks that are available. | int | No |
numberOfLocksHeld | The number of exclusive locks that are held. | int | No |
C.13. LocalTopologyManager
org.infinispan.topology.LocalTopologyManagerImpl
Note
Table C.22. Attributes
Name | Description | Type | Writable |
---|---|---|---|
rebalancingEnabled | If false, newly started nodes will not join the existing cluster nor will the state be transferred to them. If any of the current cluster members are stopped when rebalancing is disabled, the nodes will leave the cluster but the state will not be rebalanced among the remaining nodes. This will result in fewer copies than specified by the owners attribute until rebalancing is enabled again. | boolean | Yes |
clusterAvailability | If AVAILABLE, the node is currently operating regularly. If DEGRADED, data cannot be safely accessed due to either a network split or successive nodes leaving. | String | No |
C.14. MassIndexer
org.infinispan.query.MassIndexer
Table C.23. Operations
Name | Description | Signature |
---|---|---|
start | Starts rebuilding the index. | void start() |
Note
C.15. Passivation
org.infinispan.eviction.PassivationManager
Table C.24. Attributes
Name | Description | Type | Writable |
---|---|---|---|
passivations | Number of passivation events. | String | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
Table C.25. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
C.16. RecoveryAdmin
org.infinispan.transaction.xa.recovery.RecoveryAdminOperations
Table C.26. Operations
Name | Description | Signature |
---|---|---|
forceCommit | Forces the commit of an in-doubt transaction. | String forceCommit(long p0) |
forceCommit | Forces the commit of an in-doubt transaction. | String forceCommit(int p0, byte[] p1, byte[] p2) |
forceRollback | Forces the rollback of an in-doubt transaction. | String forceRollback(long p0) |
forceRollback | Forces the rollback of an in-doubt transaction. | String forceRollback(int p0, byte[] p1, byte[] p2) |
forget | Removes recovery info for the given transaction. | String forget(long p0) |
forget | Removes recovery info for the given transaction. | String forget(int p0, byte[] p1, byte[] p2) |
showInDoubtTransactions | Shows all the prepared transactions for which the originating node crashed. | String showInDoubtTransactions() |
C.17. RollingUpgradeManager
org.infinispan.upgrade.RollingUpgradeManager
Table C.27. Operations
Name | Description | Signature |
---|---|---|
disconnectSource | Disconnects the target cluster from the source cluster according to the specified migrator. | void disconnectSource(String p0) |
recordKnownGlobalKeyset | Dumps the global known keyset to a well-known key for retrieval by the upgrade process. | void recordKnownGlobalKeyset() |
synchronizeData | Synchronizes data from the old cluster to this cluster using the specified migrator. | long synchronizeData(String p0) |
C.18. RpcManager
org.infinispan.remoting.rpc.RpcManagerImpl
Note
Table C.28. Attributes
Name | Description | Type | Writable |
---|---|---|---|
averageReplicationTime | The average time spent in the transport layer, in milliseconds. | long | No |
committedViewAsString | Retrieves the committed view. | String | No |
pendingViewAsString | Retrieves the pending view. | String | No |
replicationCount | Number of successful replications. | long | No |
replicationFailures | Number of failed replications. | long | No |
successRatio | Successful replications as a ratio of total replications. | String | No |
successRatioFloatingPoint | Successful replications as a ratio of total replications in numeric double format. | double | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
Table C.29. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
setStatisticsEnabled | Whether statistics should be enabled or disabled (true/false) | void setStatisticsEnabled(boolean enabled) |
C.19. StateTransferManager
org.infinispan.statetransfer.StateTransferManager
Note
Table C.30. Attributes
Name | Description | Type | Writable |
---|---|---|---|
joinComplete | If true, the node has successfully joined the grid and is considered to hold state. If false, the join process is still in progress. | boolean | No |
stateTransferInProgress | Checks whether there is a pending inbound state transfer on this cluster member. | boolean | No |
C.20. Statistics
org.infinispan.interceptors.CacheMgmtInterceptor
Table C.31. Attributes
Name | Description | Type | Writable |
---|---|---|---|
averageReadTime | Average number of milliseconds for a read operation on the cache. | long | No |
averageWriteTime | Average number of milliseconds for a write operation in the cache. | long | No |
elapsedTime | Number of seconds since cache started. | long | No |
evictions | Number of cache eviction operations. | long | No |
hitRatio | Percentage hit/(hit+miss) ratio for the cache. | double | No |
hits | Number of cache attribute hits. | long | No |
misses | Number of cache attribute misses. | long | No |
numberOfEntries | Number of entries currently in the cache. | int | No |
readWriteRatio | Read/writes ratio for the cache. | double | No |
removeHits | Number of cache removal hits. | long | No |
removeMisses | Number of cache removals where keys were not found. | long | No |
stores | Number of cache attribute PUT operations. | long | No |
timeSinceReset | Number of seconds since the cache statistics were last reset. | long | No |
averageRemoveTime | Average number of milliseconds for a remove operation in the cache | long | No |
Table C.32. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
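The hitRatio and readWriteRatio attributes in the table above are derived values. A minimal sketch of the arithmetic, assuming hitRatio = hits/(hits + misses) as stated in the description, and readWriteRatio = (hits + misses)/stores as an interpretation of "Read/writes ratio for the cache":

```java
public class CacheRatios {

    // hitRatio as described in Table C.31: hit/(hit+miss).
    static double hitRatio(long hits, long misses) {
        long reads = hits + misses;
        return reads == 0 ? 0.0 : (double) hits / reads;
    }

    // readWriteRatio: reads divided by stores (assumed interpretation of
    // "Read/writes ratio for the cache").
    static double readWriteRatio(long hits, long misses, long stores) {
        return stores == 0 ? 0.0 : (double) (hits + misses) / stores;
    }

    public static void main(String[] args) {
        System.out.println(hitRatio(75, 25));           // 0.75
        System.out.println(readWriteRatio(75, 25, 50)); // 2.0
    }
}
```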
C.21. Transactions
org.infinispan.interceptors.TxInterceptor
Table C.33. Attributes
Name | Description | Type | Writable |
---|---|---|---|
commits | Number of transaction commits performed since last reset. | long | No |
prepares | Number of transaction prepares performed since last reset. | long | No |
rollbacks | Number of transaction rollbacks performed since last reset. | long | No |
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes |
Table C.34. Operations
Name | Description | Signature |
---|---|---|
resetStatistics | Resets statistics gathered by this component. | void resetStatistics() |
C.22. Transport
org.infinispan.server.core.transport.NettyTransport
Table C.35. Attributes
Name | Description | Type | Writable |
---|---|---|---|
hostName | Returns the host to which the transport binds. | String | No |
idleTimeout | Returns the idle timeout. | String | No |
numberOfGlobalConnections | Returns a count of active connections in the cluster. This operation will make remote calls to aggregate results, so latency may have an impact on the speed of calculation for this attribute. | Integer | No |
numberOfLocalConnections | Returns a count of active connections on this server. | Integer | No |
numberWorkerThreads | Returns the number of worker threads. | String | No |
port | Returns the port to which the transport binds. | String | No |
receiveBufferSize | Returns the receive buffer size. | String | No |
sendBufferSize | Returns the send buffer size. | String | No |
totalBytesRead | Returns the total number of bytes read by the server from clients, including both protocol and user information. | String | No |
totalBytesWritten | Returns the total number of bytes written by the server back to clients, including both protocol and user information. | String | No |
tcpNoDelay | Returns whether TCP no delay was configured or not. | String | No |
C.23. XSiteAdmin
org.infinispan.xsite.XSiteAdminOperations
Table C.36. Operations
Name | Description | Signature |
---|---|---|
bringSiteOnline | Brings the given site back online on all nodes in the cluster. | String bringSiteOnline(String p0) |
amendTakeOffline | Amends the values for 'TakeOffline' functionality on all the nodes in the cluster. | String amendTakeOffline(String p0, int p1, long p2) |
getTakeOfflineAfterFailures | Returns the value of the 'afterFailures' for the 'TakeOffline' functionality. | String getTakeOfflineAfterFailures(String p0) |
getTakeOfflineMinTimeToWait | Returns the value of the 'minTimeToWait' for the 'TakeOffline' functionality. | String getTakeOfflineMinTimeToWait(String p0) |
setTakeOfflineAfterFailures | Amends the values for 'afterFailures' for the 'TakeOffline' functionality on all the nodes in the cluster. | String setTakeOfflineAfterFailures(String p0, int p1) |
setTakeOfflineMinTimeToWait | Amends the values for 'minTimeToWait' for the 'TakeOffline' functionality on all the nodes in the cluster. | String setTakeOfflineMinTimeToWait(String p0, long p1) |
siteStatus | Check whether the given backup site is offline or not. | String siteStatus(String p0) |
status | Returns the status (offline/online) of all the configured backup sites. | String status() |
takeSiteOffline | Takes this site offline in all nodes in the cluster. | String takeSiteOffline(String p0) |
pushState | Starts the cross-site state transfer to the site name specified. | String pushState(String p0) |
cancelPushState | Cancels the cross-site state transfer to the site name specified. | String cancelPushState(String p0) |
getSendingSiteName | Returns the site name that is pushing state to this site. | String getSendingSiteName() |
cancelReceiveState | Restores the site to the normal state. It is used when the link between the sites is broken during the state transfer. | String cancelReceiveState(String p0) |
getPushStateStatus | Returns the status of completed and running cross-site state transfer. | String getPushStateStatus() |
clearPushStateStatus | Clears the status of completed cross-site state transfer. | String clearPushStateStatus() |
Appendix D. Configuration Recommendations
D.1. Timeout Values
Table D.1. Timeout Value Recommendations for JBoss Data Grid
Timeout Value | Parent Element | Default Value | Recommended Value |
---|---|---|---|
distributedSyncTimeout | transport | 240,000 (4 minutes) | Same as default |
lockAcquisitionTimeout | locking | 10,000 (10 seconds) | Same as default |
cacheStopTimeout | transaction | 30,000 (30 seconds) | Same as default |
completedTxTimeout | transaction | 60,000 (60 seconds) | Same as default |
replTimeout | sync | 15,000 (15 seconds) | Same as default |
timeout | stateTransfer | 240,000 (4 minutes) | Same as default |
timeout | backup | 10,000 (10 seconds) | Same as default |
flushLockTimeout | async | 1 (1 millisecond) | Same as default. Note that this value applies to asynchronous cache stores, but not asynchronous caches. |
shutdownTimeout | async | 25,000 (25 seconds) | Same as default. Note that this value applies to asynchronous cache stores, but not asynchronous caches. |
pushStateTimeout | singletonStore | 10,000 (10 seconds) | Same as default. |
replicationTimeout | backup | 10,000 (10 seconds) | Same as default |
remoteCallTimeout | clusterLoader | 0 | For most requirements, same as default. This value is usually set to the same as the sync.replTimeout value. |
Appendix E. Performance Recommendations
E.1. Concurrent Startup for Large Clusters
- Start the first node in the cluster.
- Set the JMX attribute jboss.infinispan/CacheManager/"clustered"/LocalTopologyManager/rebalancingEnabled to false, as seen in Section C.13, “LocalTopologyManager”.
- Start the remaining nodes in the cluster.
- Re-enable rebalancing by setting the JMX attribute jboss.infinispan/CacheManager/"clustered"/LocalTopologyManager/rebalancingEnabled back to true, as seen in Section C.13, “LocalTopologyManager”.
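The steps above amount to flipping one writable JMX attribute off and back on. The sketch below demonstrates the setAttribute pattern against a stand-in MBean registered on the local platform MBeanServer; the example: ObjectName and the Topology class are illustrative only, not the real LocalTopologyManager, which lives at the jboss.infinispan ObjectName shown in the procedure.

```java
import java.lang.management.ManagementFactory;

import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class RebalancingToggle {

    // Stand-in for the real LocalTopologyManager; names are illustrative only.
    public interface TopologyMBean {
        boolean isRebalancingEnabled();
        void setRebalancingEnabled(boolean enabled);
    }

    public static class Topology implements TopologyMBean {
        private volatile boolean rebalancingEnabled = true;
        public boolean isRebalancingEnabled() { return rebalancingEnabled; }
        public void setRebalancingEnabled(boolean enabled) { rebalancingEnabled = enabled; }
    }

    // Disable rebalancing, (conceptually) start the remaining nodes, then
    // re-enable it; returns the final attribute value.
    static boolean demo() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example:type=LocalTopologyManager");
        server.registerMBean(new StandardMBean(new Topology(), TopologyMBean.class), name);
        try {
            server.setAttribute(name, new Attribute("RebalancingEnabled", false)); // step 2
            // ... start the remaining nodes here ...
            server.setAttribute(name, new Attribute("RebalancingEnabled", true));  // step 4
            return (Boolean) server.getAttribute(name, "RebalancingEnabled");
        } finally {
            server.unregisterMBean(name);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("rebalancingEnabled = " + demo());
    }
}
```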
Appendix F. References
F.1. About Consistency
F.2. About Consistency Guarantee
- If key K is hashed to nodes {A,B} and transaction TX1 acquires a lock for K on, for example, node A, and
- if another cache access occurs on node B, or any other node, and TX2 attempts to lock K, this access attempt fails with a timeout because transaction TX1 already holds a lock on K.
- The lock for K is always deterministically acquired on the same node of the cluster, irrespective of the transaction's origin.
F.3. About JBoss Cache
F.4. About RELAY2
The RELAY protocol bridges two remote clusters by creating a connection between one node in each site. This allows multicast messages sent out in one site to be relayed to the other and vice versa.
The RELAY2 protocol is used for communication between sites in Red Hat JBoss Data Grid's Cross-Site Replication.
The RELAY2 protocol works similarly to RELAY but with slight differences. Unlike RELAY, the RELAY2 protocol:
- connects more than two sites.
- connects sites that operate autonomously and are unaware of each other.
- offers both unicast and multicast routing between sites.
F.5. About Return Values
F.6. About Runnable Interfaces
The Runnable interface declares a single run() method, which executes the active part of the class's code. The Runnable object can be executed in its own thread after it is passed to a thread constructor.
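A minimal illustration of the pattern described above: a Runnable whose run() body is executed in its own thread after being passed to a thread constructor.

```java
public class RunnableExample {

    public static void main(String[] args) throws InterruptedException {
        StringBuilder log = new StringBuilder();

        // The active part of the code lives in run(), supplied here as a lambda.
        Runnable task = () -> log.append("run() executed in ")
                                 .append(Thread.currentThread().getName());

        Thread thread = new Thread(task, "worker"); // Runnable passed to a thread constructor
        thread.start();
        thread.join(); // wait for run() to finish

        System.out.println(log); // prints "run() executed in worker"
    }
}
```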
F.7. About Two Phase Commit (2PC)
F.8. About Key-Value Pairs
- A key is unique to a particular data entry. It consists of entry data attributes from the related entry.
- A value is the data assigned to and identified by the key.
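The key-value contract can be illustrated with a plain java.util.Map, which JBoss Data Grid caches behave like: a key identifies exactly one entry, and storing under an existing key replaces the value rather than creating a duplicate.

```java
import java.util.HashMap;
import java.util.Map;

public class KeyValueExample {

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();

        cache.put("user:42", "Alice"); // the key uniquely identifies this entry
        cache.put("user:42", "Bob");   // same key: the value is replaced, not duplicated

        System.out.println(cache.get("user:42")); // prints "Bob"
        System.out.println(cache.size());         // prints "1"
    }
}
```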
F.9. Requesting a Full Byte Array
As a default, JBoss Data Grid only partially prints byte arrays to logs to avoid unnecessarily printing large byte arrays. This occurs when either:
- JBoss Data Grid caches are configured for lazy deserialization. Lazy deserialization is not available in JBoss Data Grid's Remote Client-Server mode.
- A Memcached or Hot Rod server is run.

To print the full byte array to logs, start JBoss Data Grid with the -Dinfinispan.arrays.debug=true system property.
Example F.1. Partial Byte Array Log
2010-04-14 15:46:09,342 TRACE [ReadCommittedEntry] (HotRodWorker-1-1) Updating entry (key=CacheKey{data=ByteArray{size=19, hashCode=1b3278a, array=[107, 45, 116, 101, 115, 116, 82, 101, 112, 108, ..]}} removed=false valid=true changed=true created=true value=CacheValue{data=ByteArray{size=19, array=[118, 45, 116, 101, 115, 116, 82, 101, 112, 108, ..]}, version=281483566645249}]

And here is a log message where the full byte array is shown:

2010-04-14 15:45:00,723 TRACE [ReadCommittedEntry] (Incoming-2,Infinispan-Cluster,eq-6834) Updating entry (key=CacheKey{data=ByteArray{size=19, hashCode=6cc2a4, array=[107, 45, 116, 101, 115, 116, 82, 101, 112, 108, 105, 99, 97, 116, 101, 100, 80, 117, 116]}} removed=false valid=true changed=true created=true value=CacheValue{data=ByteArray{size=19, array=[118, 45, 116, 101, 115, 116, 82, 101, 112, 108, 105, 99, 97, 116, 101, 100, 80, 117, 116]}, version=281483566645249}]
Appendix G. Revision History
Revision | Date | Author |
---|---|---|
7.0.0-7 | Thu Jul 20 2017 | John Brier |
7.0.0-6 | Wed Jun 28 2017 | John Brier |
7.0.0-5 | Thu May 25 2017 | John Brier |
7.0.0-4 | Mon Jul 18 2016 | Rakesh Ghatvisave, Christian Huffman |
7.0.0-3 | Wed Apr 27 2016 | Christian Huffman |
7.0.0-2 | Wed Apr 27 2016 | Christian Huffman |
7.0.0-1 | Wed Apr 27 2016 | Rakesh Ghatvisave |
7.0.0-0 | Tue Apr 19 2016 | Christian Huffman |