Chapter 5. Persistence

Persistence allows you to configure external (persistent) storage engines that complement the default in-memory storage offered by Red Hat Data Grid. An external persistent store can be useful for several reasons:

  • Increased Durability. Memory is volatile, so a cache store can increase the life-span of the information stored in the cache.
  • Write-through. Interpose Red Hat Data Grid as a caching layer between an application and a (custom) external storage engine.
  • Overflow Data. By using eviction and passivation, one can store only the "hot" data in memory and overflow the data that is less frequently used to disk.

Integration with the persistent store is done through the following SPI interfaces: CacheLoader, CacheWriter, AdvancedCacheLoader and AdvancedCacheWriter (discussed in the following sections).

These SPIs allow for the following features:

  • Alignment with JSR-107. The CacheWriter and CacheLoader interfaces are similar to the loader and writer in JSR-107. This should considerably help in writing portable stores across JCache-compliant vendors.
  • Simplified Transaction Integration. All necessary locking is handled by Red Hat Data Grid automatically, so implementations do not have to be concerned with coordinating concurrent access to the store. Even though concurrent writes on the same key will not occur (depending on the locking mode in use), implementors should expect operations on the store to happen from multiple/different threads and code the implementation accordingly.
  • Parallel Iteration. It is now possible to iterate over entries in the store with multiple threads in parallel.
  • Reduced Serialization. This translates into less CPU usage. The new API exposes the stored entries in serialized format. If an entry is fetched from persistent storage for the sole purpose of being sent remotely, it no longer needs to be deserialized (when reading from the store) and serialized back (when writing to the wire). The serialized format, as read from storage, can be written to the wire directly.

5.1. Configuration

Stores (readers and/or writers) can be configured in a chain. A cache read operation looks at all of the configured CacheLoaders, in the order they are configured, until it finds a valid and non-null element of data. When performing writes, all CacheWriters are written to, unless the ignoreModifications element has been set to true for a specific cache writer.
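For example, two file stores can be chained programmatically, with the second acting as a read-only backing store via ignoreModifications. This is a minimal sketch; the store locations are hypothetical:

PersistenceConfigurationBuilder persistence = new ConfigurationBuilder().persistence();
// First store in the chain: consulted first on reads, written to on writes
persistence.addSingleFileStore()
      .location("/tmp/first-store");
// Second store in the chain: consulted on reads, but never written to
persistence.addSingleFileStore()
      .location("/tmp/second-store")
      .ignoreModifications(true);
Configuration config = persistence.build();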

Implementing both a CacheWriter and CacheLoader

Store providers should implement both the CacheWriter and the CacheLoader interfaces. Stores that do this are considered both for reading and writing (assuming read-only=false) data.

This is the configuration of a custom (not shipped with Red Hat Data Grid) store:
   <local-cache name="myCustomStore">
      <persistence passivation="false">
         <store
            class="org.acme.CustomStore"
            fetch-state="false" preload="true" shared="false"
            purge="true" read-only="false" singleton="false">

            <write-behind modification-queue-size="123" thread-pool-size="23" />

            <property name="myProp">${system.property}</property>
         </store>
      </persistence>
   </local-cache>

Parameters that you can use to configure persistence are as follows:

connection-attempts
Sets the maximum number of attempts to start each configured CacheWriter/CacheLoader. If the attempts to start are not successful, an exception is thrown and the cache does not start.
connection-interval
Specifies the time, in milliseconds, to wait between connection attempts on startup. A negative or zero value means no wait between connection attempts.
availability-interval
Specifies the time, in milliseconds, between availability checks to determine if the PersistenceManager is available. In other words, this interval sets how often stores/loaders are polled via their org.infinispan.persistence.spi.CacheWriter#isAvailable or org.infinispan.persistence.spi.CacheLoader#isAvailable implementation. If a single store/loader is not available, an exception is thrown during cache operations.
passivation

Enables passivation. The default value is false (boolean).

This property has a significant impact on Red Hat Data Grid interactions with the loaders. See Cache Passivation for more information.

class
Defines the class of the store and must implement CacheLoader, CacheWriter, or both.
fetch-state

Fetches the persistent state of a cache when joining a cluster. The default value is false (boolean).

The purpose of this property is to retrieve the persistent state of a cache and apply it to the local cache store of a node when it joins a cluster. Fetching the persistent state does not apply if a cache store is shared because it accesses the same data as the other stores.

This property can be true for one configured cache loader only. If more than one cache loader fetches the persistent state, a configuration exception is thrown when the cache service starts.

preload

Pre-loads data into memory from the cache loader when the cache starts. The default value is false (boolean).

This property is useful when data in the cache loader is required immediately after startup to prevent delays with cache operations when the data is loaded lazily. This property can provide a "warm cache" on startup but it impacts performance because it affects start time.

Pre-loading is done locally, so any data loaded is stored only in the local node. The pre-loaded data is not replicated or distributed. Likewise, Red Hat Data Grid pre-loads only up to the maximum number of entries configured for eviction.

shared

Determines if the cache loader is shared between cache instances. The default value is false (boolean).

This property prevents duplicate writes of data to the cache loader by different cache instances. An example is where all cache instances in a cluster use the same JDBC settings for the same remote, shared database.

read-only
Prevents new data from being persisted to the cache store. The default value is false (boolean).
purge
Empties the specified cache loader at startup. The default value is false (boolean). This property takes effect only if the read-only property is set to false.
max-batch-size

Sets the maximum size of a batch to insert into or delete from the cache store. The default value is 100.

If the value is less than 1, no upper limit applies to the number of operations in a batch.

write-behind
Asynchronously persists data to the cache store. The default value is false (boolean). See Asynchronous Write-Behind for more information.
singleton

Enables one node in the cluster, the coordinator, to store modifications. The default value is false (boolean).

Whenever data enters a node, it is replicated or distributed to keep the in-memory state of the caches synchronized. The coordinator is responsible for pushing that state to disk.

If you set this property to true, you must do so on all nodes in the cluster. Only the coordinator of the cluster persists data; however, all nodes must have this property enabled to prevent them from persisting data themselves.

You cannot enable the singleton property if the cache store is shared.

Note

You can define additional attributes in the properties section to configure specific aspects of each cache loader, such as the myProp attribute in the previous example.

Other cache loaders with more complex configurations also include additional properties. See the following JDBC cache store configuration for examples.

The preceding configuration applies a generic cache store implementation. However, the default Red Hat Data Grid store implementation has a more complex configuration schema, in which the properties section is replaced with XML attributes:

<persistence passivation="false">
   <!-- note that class is missing and is inferred from the file-store element name -->
   <file-store
           shared="false" preload="true"
           fetch-state="true"
           read-only="false"
           purge="false"
           path="${java.io.tmpdir}">
      <write-behind thread-pool-size="5" />
   </file-store>
</persistence>

The same configuration can be achieved programmatically:

   ConfigurationBuilder builder = new ConfigurationBuilder();
   builder.persistence()
         .passivation(false)
         .addSingleFileStore()
            .preload(true)
            .shared(false)
            .fetchPersistentState(true)
            .ignoreModifications(false)
            .purgeOnStartup(false)
            .location(System.getProperty("java.io.tmpdir"))
            .async()
               .enabled(true)
               .threadPoolSize(5)
            .singleton()
               .enabled(true)
               .pushStateWhenCoordinator(true)
               .pushStateTimeout(20000);

5.2. Cache Passivation

A CacheWriter can be used to enforce entry passivation and activation on eviction in a cache. Cache passivation is the process of removing an object from the in-memory cache and writing it to a secondary data store (e.g., file system, database) on eviction. Cache activation is the process of restoring an object from the data store into the in-memory cache when it is needed. In order to fully support passivation, a store needs to be both a CacheWriter and a CacheLoader: the configured cache store is used both for reading (activation) and for writing (passivation) entries.

When an eviction policy in effect evicts an entry from the cache and passivation is enabled, a notification that the entry is being passivated is emitted to the cache listeners and the entry is stored. When a user attempts to retrieve an entry that was evicted earlier, the entry is (lazily) loaded from the cache loader into memory, and once it has been loaded a notification that the entry has been activated is emitted to the cache listeners. To enable passivation, just set passivation to true (the default is false). When passivation is used, only the first cache loader configured is used and all others are ignored.
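The following programmatic sketch ties passivation to eviction; the store location and eviction size are illustrative only:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.memory().size(1000);            // eviction: keep at most 1000 entries in memory
builder.persistence()
      .passivation(true)                // entries are written to the store only on eviction
      .addSingleFileStore()
         .location("/tmp/passivation-store");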

5.2.1. Limitations

Due to the unique nature of passivation, it is not supported with some other store configurations.

  • Transactional store - Passivation writes/removes entries from the store outside the scope of the actual Infinispan commit boundaries.
  • Shared store - Shared store requires entries always being in the store for other owners. Thus passivation makes no sense as we can’t remove the entry from the store.

5.2.2. Cache Loader Behavior with Passivation Disabled vs Enabled

When passivation is disabled, whenever an element is modified, added or removed, that modification is persisted in the backend store via the cache loader. There is no direct relationship between eviction and cache loading. If you don't use eviction, what's in the persistent store is basically a copy of what's in memory. If you do use eviction, what's in the persistent store is basically a superset of what's in memory (i.e. it includes entries that have been evicted from memory).

When passivation is enabled, and with an unshared store, there is a direct relationship between eviction and the cache loader. Writes to the persistent store via the cache loader only occur as part of the eviction process. Data is deleted from the persistent store when the application reads it back into memory. In this case, what's in memory and what's in the persistent store are two subsets of the total information set, with no intersection between the subsets. With a shared store, entries which have been passivated in the past will continue to exist in the store, although they may have a stale value if this has been overwritten in memory.

The following is a simple example showing what state is in memory (RAM) and in the persistent store after each step of a six-step process:

  1. Insert keyOne
     • Passivation Off: Memory: keyOne | Disk: keyOne
     • Passivation On, Shared Off: Memory: keyOne | Disk: (none)
     • Passivation On, Shared On: Memory: keyOne | Disk: (none)
  2. Insert keyTwo
     • Passivation Off: Memory: keyOne, keyTwo | Disk: keyOne, keyTwo
     • Passivation On, Shared Off: Memory: keyOne, keyTwo | Disk: (none)
     • Passivation On, Shared On: Memory: keyOne, keyTwo | Disk: (none)
  3. Eviction thread runs, evicts keyOne
     • Passivation Off: Memory: keyTwo | Disk: keyOne, keyTwo
     • Passivation On, Shared Off: Memory: keyTwo | Disk: keyOne
     • Passivation On, Shared On: Memory: keyTwo | Disk: keyOne
  4. Read keyOne
     • Passivation Off: Memory: keyOne, keyTwo | Disk: keyOne, keyTwo
     • Passivation On, Shared Off: Memory: keyOne, keyTwo | Disk: (none)
     • Passivation On, Shared On: Memory: keyOne, keyTwo | Disk: keyOne
  5. Eviction thread runs, evicts keyTwo
     • Passivation Off: Memory: keyOne | Disk: keyOne, keyTwo
     • Passivation On, Shared Off: Memory: keyOne | Disk: keyTwo
     • Passivation On, Shared On: Memory: keyOne | Disk: keyOne, keyTwo
  6. Remove keyTwo
     • Passivation Off: Memory: keyOne | Disk: keyOne
     • Passivation On, Shared Off: Memory: keyOne | Disk: (none)
     • Passivation On, Shared On: Memory: keyOne | Disk: keyOne

5.3. Cache Loaders and transactional caches

When a cache is transactional and a cache loader is present, the cache loader will not be enlisted in the transaction that the cache is part of. That means it is possible to have inconsistencies at the cache loader level: the transaction may succeed in applying the in-memory state but (partially) fail to apply the changes to the store. Manual recovery does not work with cache stores.

5.4. Write-Through And Write-Behind Caching

Red Hat Data Grid can optionally be configured with one or several cache stores, allowing it to store data in a persistent location such as a shared JDBC database, a local filesystem, etc. Red Hat Data Grid can handle updates to the cache store in two different ways:

  • Write-Through (Synchronous)
  • Write-Behind (Asynchronous)

5.4.1. Write-Through (Synchronous)

In this mode, when clients update a cache entry, i.e. via a Cache.put() invocation, the call does not return until Red Hat Data Grid has gone to the underlying cache store and updated it. Normally, this means that updates to the cache store are done within the boundaries of the client thread.

The main advantage of this mode is that the cache store is updated at the same time as the cache, hence the cache store is consistent with the cache contents. On the other hand, using this mode reduces performance because the latency of having to access and update the cache store directly impacts the duration of the cache operation.

Configuring a write-through or synchronous cache store does not require any particular configuration option. By default, unless marked explicitly as write-behind or asynchronous, all cache stores are write-through or synchronous. Please find below a sample configuration file of a write-through unshared local file cache store:

<persistence passivation="false">
   <file-store fetch-state="true"
               read-only="false"
               purge="false" path="${java.io.tmpdir}"/>
</persistence>

5.4.2. Write-Behind (Asynchronous)

In this mode, updates to the cache are asynchronously written to the cache store. Red Hat Data Grid puts pending changes into a modification queue so that it can quickly store changes.

The configured number of threads consume the queue and apply the modifications to the underlying cache store. If the configured number of threads cannot consume the modifications fast enough, or if the underlying store becomes unavailable, the modification queue becomes full. In this event, the cache store becomes write-through until the queue can accept new entries.

This mode provides an advantage in that cache operations are not affected by updates to the underlying store. However, because updates happen asynchronously, there is a period of time during which data in the cache store is inconsistent with data in the cache.

The write-behind strategy is suitable for cache stores with low latency and small operational cost; for example, an unshared file-based cache store that is local to the cache itself. In this case, the time during which data is inconsistent between the cache store and the cache is reduced to the lowest possible period.

The following is an example configuration for the write-behind strategy:

<persistence passivation="false">
   <file-store fetch-state="true"
               read-only="false"
               purge="false" path="${java.io.tmpdir}">
   <!-- start write-behind configuration -->
   <write-behind modification-queue-size="123" thread-pool-size="23" />
   <!-- end write-behind configuration -->
   </file-store>
</persistence>
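The equivalent programmatic configuration is sketched below using a single file store; the queue size and thread pool values mirror the XML above:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
      .addSingleFileStore()
         .location(System.getProperty("java.io.tmpdir"))
         .async()                          // switch the store to write-behind
            .enabled(true)
            .modificationQueueSize(123)    // pending changes buffered before reverting to write-through
            .threadPoolSize(23);           // threads that apply queued modifications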

5.5. Filesystem based cache stores

A filesystem-based cache store is typically used when you want to have a cache with a cache store available locally which stores data that has overflowed from memory, having exceeded size and/or time restrictions.

Warning

Usage of filesystem-based cache stores on shared filesystems such as NFS or Windows shares should be avoided, as these do not implement proper file locking and can cause data corruption. File systems are inherently not transactional, so when attempting to use your cache in a transactional context, failures when writing to the file (which happens during the commit phase) cannot be recovered from.

5.5.1. Single File Store

The single file cache store keeps all data in a single file. It looks up data by keeping an in-memory index of keys and of the positions of their values in this file. This results in greater performance compared to the old file-based cache store. There is one caveat, though: since the single file cache store keeps keys in memory, it can lead to increased memory consumption, and hence it is not recommended for caches with big keys.

In certain use cases, this cache store suffers from fragmentation: if you store larger and larger values, the space is not reused; instead the entry is appended at the end of the file. The (now empty) space is reused only if you write another entry that can fit there. Also, when you remove all entries from the cache, the file will not shrink, nor will it be defragmented.

These are the available configuration options for the single file cache store:

  • path is the file-system location where Red Hat Data Grid stores data.

    For embedded deployments, Red Hat Data Grid stores data to the current working directory. To store data in a different location, specify the absolute path to that directory as the value.

    For server deployments, Red Hat Data Grid stores data in $RHDG_HOME/data. You should not change the default location for Red Hat Data Grid server. In other words, do not set a value for path with server deployments.

    If you must change the location on a server, create a path declaration and use the relative-to attribute for the file store location. Note that if you set relative-to to a directory location or environment variable, Red Hat Data Grid server startup fails.

  • max-entries specifies the maximum number of entries to keep in this file store. As mentioned before, to speed up lookups, the single file cache store keeps an in-memory index of keys and their corresponding positions in the file. To prevent this index from causing memory consumption problems, the cache store can be bounded by a maximum number of entries. If the limit is exceeded, entries are removed permanently, using the LRU algorithm, both from the in-memory index and from the underlying file-based cache store.

    Setting a maximum limit therefore only makes sense when Red Hat Data Grid is used as a cache whose contents can be recomputed or retrieved from an authoritative data store. If this limit is set when Red Hat Data Grid is used as an authoritative data store itself, it can lead to data loss, so it is not recommended for that use case. The default value is -1, which means that the file store size is unlimited.

5.5.1.1. Declarative Configuration

The following examples show declarative configuration for a single file cache store.

Library Mode

<persistence>
   <file-store path="/tmp/myDataStore" max-entries="5000"/>
</persistence>

Server Mode

<persistence>
   <file-store max-entries="5000"/>
</persistence>

5.5.1.2. Programmatic Configuration

The following example shows programmatic configuration for a single file cache store:

ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
    .addSingleFileStore()
    .location("/tmp/myDataStore")
    .maxEntries(5000);

5.5.2. Soft-Index File Store

The Soft-Index File Store is an experimental, local, file-based cache store. It is a pure Java implementation that tries to get around the Single File Store's drawbacks by implementing a variant of a B+ tree that is cached in memory using Java's soft references; this is where the name Soft-Index File Store comes from. This B+ tree (called the Index) is offloaded to the filesystem in a single file that does not need to be persisted: it is purged and rebuilt when the cache store restarts, and its purpose is only offloading.

The data that should be persisted is stored in a set of files that are written in an append-only way, which means that if you store this on a conventional magnetic disk, it does not have to seek when writing a burst of entries. The data is not stored in a single file but in a set of files. When the usage of any of these files drops below 50% (because its entries have been overwritten to other files), the file starts to be collected: the live entries are moved into a different file and, in the end, the old file is removed from disk.

Most of the structures in the Soft-Index File Store are bounded, therefore you do not have to be afraid of OOMEs. For example, you can configure the limits for concurrently open files as well.

5.5.2.1. Configuration

Here is an example of Soft-Index File Store configuration via XML:

<persistence>
    <soft-index-file-store xmlns="urn:infinispan:config:store:soft-index:8.0">
        <index path="/tmp/sifs/testCache/index" />
        <data path="/tmp/sifs/testCache/data" />
    </soft-index-file-store>
</persistence>

Programmatic configuration would look as follows:

ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
    .addStore(SoftIndexFileStoreConfigurationBuilder.class)
        .indexLocation("/tmp/sifs/testCache/index")
        .dataLocation("/tmp/sifs/testCache/data");

5.5.2.2. Current limitations

The size of a node in the Index is limited; by default it is 4096 bytes, though it can be configured. This size also limits the key length (or rather the length of its serialized form): you cannot use keys longer than the node size minus 15 bytes. Moreover, the key length is stored as a 'short', limiting it to 32767 bytes. There is no way to use longer keys; SIFS throws an exception when the key is longer after serialization.

When entries are stored with expiration, SIFS cannot detect that some of those entries are expired. Therefore, such old files will not be compacted (the method AdvancedStore.purgeExpired() is not implemented). This can lead to excessive filesystem space usage.

5.6. JDBC String based Cache Store

A cache store which relies on the provided JDBC driver to load/store values in the underlying database.

Each key in the cache is stored in its own row in the database. In order to store each key in its own row, this store relies on a (pluggable) bijection that maps each key to a String object. The bijection is defined by the Key2StringMapper interface. Red Hat Data Grid ships a default implementation (smartly named DefaultTwoWayKey2StringMapper) that knows how to handle primitive types.
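For custom key types you can plug in your own mapper. The following is a minimal sketch, assuming a hypothetical PersonKey class with a single ssn field; a two-way mapper must also be able to convert the string form back into the key:

import org.infinispan.persistence.keymappers.TwoWayKey2StringMapper;

public class PersonKey2StringMapper implements TwoWayKey2StringMapper {

   @Override
   public boolean isSupportedType(Class<?> keyType) {
      return keyType == PersonKey.class;
   }

   @Override
   public String getStringMapping(Object key) {
      // Map the key to a unique String representation
      return ((PersonKey) key).getSsn();
   }

   @Override
   public Object getKeyMapping(String stringKey) {
      // Rebuild the key from its String representation
      return new PersonKey(stringKey);
   }
}

The mapper is then registered on the store, for example via key2StringMapper() in the programmatic configuration.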

Note

By default, Red Hat Data Grid stores are not shared, meaning that all nodes in the cluster will write to the underlying store upon each update. If you wish for an operation to be written to the underlying database only once, you must configure the JDBC store to be shared.

5.6.1. Connection management (pooling)

In order to obtain a connection to the database the JDBC cache store relies on a ConnectionFactory implementation. The connection factory is specified programmatically using one of the connectionPool(), dataSource() or simpleConnection() methods on the JdbcStringBasedStoreConfigurationBuilder class or declaratively using one of the <connectionPool />, <dataSource /> or <simpleConnection /> elements.

Red Hat Data Grid ships with three ConnectionFactory implementations:

  • PooledConnectionFactoryConfigurationBuilder is a factory based on HikariCP. Additional properties for HikariCP can be provided in a properties file, either by placing a hikari.properties file on the classpath or by specifying the path to the file via PooledConnectionFactoryConfiguration.propertyFile or properties-file in the connection pool's XML configuration (a sketch of such a file appears below). Note that a properties file specified explicitly in the configuration is loaded instead of the hikari.properties file on the classpath, and that connection pool characteristics set explicitly in PooledConnectionFactoryConfiguration always override the values loaded from a properties file.

Refer to the official documentation for details of all configuration properties.

  • ManagedConnectionFactoryConfigurationBuilder is a connection factory that can be used within managed environments, such as application servers. It knows how to look into the JNDI tree at a certain location (configurable) and delegate connection management to the DataSource.
  • SimpleConnectionFactoryConfigurationBuilder is a factory implementation that creates a database connection on a per-invocation basis. It is not recommended for production use.

The PooledConnectionFactory is generally recommended for stand-alone deployments (i.e. not running within an application server or servlet container). ManagedConnectionFactory can be used when running in a managed environment where a DataSource is present, so that connection pooling is performed within the DataSource.
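As an illustration of the properties file mentioned in the PooledConnectionFactory item above, a minimal hikari.properties sketch might look as follows; the property names come from HikariCP and the values are examples only:

# hikari.properties (on the classpath, or referenced via properties-file)
maximumPoolSize=10
minimumIdle=2
connectionTimeout=30000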

5.6.2. Sample configurations

Below is a sample configuration for the JdbcStringBasedStore. For a detailed description of all the parameters used, refer to the JdbcStringBasedStore documentation.

<persistence>
   <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:9.4" dialect="H2">
      <connection-pool connection-url="jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1" username="sa" driver="org.h2.Driver"/>
      <string-keyed-table drop-on-exit="true" create-on-start="true" prefix="ISPN_STRING_TABLE">
         <id-column name="ID_COLUMN" type="VARCHAR(255)" />
         <data-column name="DATA_COLUMN" type="BINARY" />
         <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT" />
      </string-keyed-table>
   </string-keyed-jdbc-store>
</persistence>
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class)
      .shared(true)
      .table()
         .dropOnExit(true)
         .createOnStart(true)
         .tableNamePrefix("ISPN_STRING_TABLE")
         .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)")
         .dataColumnName("DATA_COLUMN").dataColumnType("BINARY")
         .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT")
      .connectionPool()
         .connectionUrl("jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1")
         .username("sa")
         .driverClass("org.h2.Driver");

Finally, below is an example of a JDBC cache store with a managed connection factory, which is chosen implicitly by specifying a datasource JNDI location:

<persistence>
   <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:9.4" shared="true" dialect="H2">
      <data-source jndi-url="java:/StringStoreWithManagedConnectionTest/DS" />
      <string-keyed-table drop-on-exit="true" create-on-start="true" prefix="ISPN_STRING_TABLE">
         <id-column name="ID_COLUMN" type="VARCHAR(255)" />
         <data-column name="DATA_COLUMN" type="BINARY" />
         <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT" />
      </string-keyed-table>
   </string-keyed-jdbc-store>
</persistence>
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class)
      .shared(true)
      .table()
         .dropOnExit(true)
         .createOnStart(true)
         .tableNamePrefix("ISPN_STRING_TABLE")
         .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)")
         .dataColumnName("DATA_COLUMN").dataColumnType("BINARY")
         .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT")
      .dataSource()
         .jndiUrl("java:/StringStoreWithManagedConnectionTest/DS");

5.7. Remote store

The RemoteStore is a cache loader and writer implementation that stores data in a remote Red Hat Data Grid cluster. In order to communicate with the remote cluster, the RemoteStore uses the HotRod client/server architecture. HotRod brings load balancing and fault tolerance of calls and the possibility to fine-tune the connection between the RemoteStore and the actual cluster. Please refer to Hot Rod for more information on the protocol, client and server configuration. For a list of RemoteStore configuration options, refer to the javadoc.

5.7.1. Sample Usage

<persistence>
   <remote-store xmlns="urn:infinispan:config:store:remote:8.0" cache="mycache" raw-values="true">
      <remote-server host="one" port="12111" />
      <remote-server host="two" />
      <connection-pool max-active="10" exhausted-action="CREATE_NEW" />
      <write-behind />
   </remote-store>
</persistence>
ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence().addStore(RemoteStoreConfigurationBuilder.class)
      .fetchPersistentState(false)
      .ignoreModifications(false)
      .purgeOnStartup(false)
      .remoteCacheName("mycache")
      .rawValues(true)
      .addServer()
         .host("one").port(12111)
      .addServer()
         .host("two")
      .connectionPool()
         .maxActive(10)
         .exhaustedAction(ExhaustedAction.CREATE_NEW)
      .async().enable();

In this sample configuration, the remote cache store uses the remote cache named "mycache" on servers "one" and "two". It also configures connection pooling and, through the write-behind element, makes the cache store asynchronous.

5.8. Cluster cache loader

The ClusterCacheLoader is a cache loader implementation that retrieves data from other cluster members.

It is a cache loader only, as it does not persist anything (it is not a store); therefore features such as fetchPersistentState (and the like) are not applicable.

A cluster cache loader can be used as a non-blocking (partial) alternative to stateTransfer: keys not already available in the local node are fetched on-demand from other nodes in the cluster. This is a kind of lazy-loading of the cache content.

<persistence>
   <cluster-loader remote-timeout="500"/>
</persistence>
ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
    .addClusterLoader()
    .remoteCallTimeout(500);

For a list of ClusterCacheLoader configuration options, refer to the javadoc.

Note

The ClusterCacheLoader does not support preloading (preload=true). It also will not provide state if fetchPersistentState=true.

5.9. Command-Line Interface cache loader

The Command-Line Interface (CLI) cache loader is a cache loader implementation that retrieves data from another Red Hat Data Grid node using the CLI. The node to which the CLI connects could be a standalone node or a node that is part of a cluster. This cache loader is read-only, so it will only be used to retrieve data and will not be used to persist data.

The CLI cache loader is configured with a connection URL pointing to the Red Hat Data Grid node to connect to. Here is an example:

Note

Details on the format of the URL and how to make sure a node can receive invocations via the CLI can be found in the Command-Line Interface chapter.

<persistence>
   <cli-loader connection="jmx://1.2.3.4:4444/MyCacheManager/myCache" />
</persistence>
ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
    .addStore(CLInterfaceLoaderConfigurationBuilder.class)
    .connectionString("jmx://1.2.3.4:4444/MyCacheManager/myCache");

5.10. RocksDB Cache Store

Red Hat Data Grid supports using RocksDB as a cache store.

5.10.1. Introduction

RocksDB is a fast, filesystem-based key-value store from Facebook. It started as a fork of Google's LevelDB but provides superior performance and reliability, especially in highly concurrent scenarios.

5.10.1.1. Sample Usage

The RocksDB cache store requires two filesystem directories to be configured, each holding a RocksDB database: one location is used to store non-expired data, while the second is used to store expired keys pending purge.

Configuration cacheConfig = new ConfigurationBuilder().persistence()
				.addStore(RocksDBStoreConfigurationBuilder.class)
				.build();
EmbeddedCacheManager cacheManager = new DefaultCacheManager(cacheConfig);

Cache<String, User> usersCache = cacheManager.getCache("usersCache");
usersCache.put("raytsang", new User(...));

5.10.2. Configuration

It is also possible to configure the underlying RocksDB instance via properties in the store configuration: any property prefixed with database configures the RocksDB database itself. Data is stored in column families, which can be configured independently of the database by setting properties prefixed with data.

Note that supplying properties is entirely optional.

5.10.2.1. Sample Programmatic Configuration

Properties props = new Properties();
props.put("database.max_background_compactions", "2");
props.put("data.write_buffer_size", "512MB");

Configuration cacheConfig = new ConfigurationBuilder().persistence()
    .addStore(RocksDBStoreConfigurationBuilder.class)
    .location("/tmp/rocksdb/data")
    .expiredLocation("/tmp/rocksdb/expired")
    .properties(props)
    .build();

The available configuration parameters are:

location

Directory for RocksDB to store primary cache store data. The directory is auto-created if it does not exist.

expiredLocation

Directory for RocksDB to store expiring data pending permanent purge. The directory is auto-created if it does not exist.

expiryQueueSize

Size of the in-memory queue that holds expiring entries before they are flushed to the expired RocksDB store.

clearThreshold

There are two methods to clear all entries in RocksDB. One is to iterate through all entries and remove each one individually; the other is to delete the database and re-initialize it. For smaller databases, deleting individual entries is faster. This setting defines the maximum number of entries allowed before the delete-and-re-initialize method is used.

compressionType

Configuration for RocksDB data compression; see the CompressionType enum for options.

blockSize

Configuration for RocksDB; see the RocksDB documentation for performance tuning.

cacheSize

Configuration for RocksDB; see the RocksDB documentation for performance tuning.

5.10.2.2. Sample XML Configuration

infinispan.xml

<local-cache name="vehicleCache">
   <persistence>
      <rocksdb-store path="/tmp/rocksdb/data">
         <expiration path="/tmp/rocksdb/expired"/>
         <property name="database.max_background_compactions">2</property>
         <property name="data.write_buffer_size">512MB</property>
      </rocksdb-store>
   </persistence>
</local-cache>

Note

Red Hat Data Grid servers log warning messages at startup when you add configuration properties that are specific to RocksDB.

For example, as in the preceding example, if you set data.write_buffer_size in your configuration, then the following message is written to logs:

WARN  [org.infinispan.configuration.parsing.XmlConfigHelper] ISPN000292:
Unrecognized attribute 'data.write_buffer_size'. Please check your
configuration. Ignoring!

Red Hat Data Grid applies the RocksDB properties even though the log message indicates they do not take effect. You can ignore these WARN messages.

5.10.3. Additional References

Refer to the test case for code samples in action.

Refer to test configurations for configuration samples.

5.11. LevelDB Cache Store

Warning

The LevelDB Cache Store has been deprecated and has been replaced with the RocksDB Cache Store. If you have existing data stored in a LevelDB Cache Store, the RocksDB Cache Store will convert it to the new SST-based format on the first run.

5.12. JPA Cache Store

The implementation depends on the JPA 2.0 specification to access the entity metamodel.

In normal use cases, it is recommended to leverage Red Hat Data Grid as a JPA second level cache and/or query cache. However, if you would like to use only the Red Hat Data Grid API and want Red Hat Data Grid to persist into a cache store using a common format (e.g., a database with a well-defined schema), then the JPA Cache Store could be right for you.

Things to note

  • When using JPA Cache Store, the key should be the ID of the entity, while the value should be the entity object.
  • Only a single @Id or @EmbeddedId annotated property is allowed.
  • Auto-generated IDs are not supported.
  • Lastly, all entries will be stored as immortal entries.

5.12.1. Sample Usage

For example, given a persistence unit "myPersistenceUnit", and a JPA entity User:

persistence.xml

<persistence-unit name="myPersistenceUnit">
	...
</persistence-unit>

User entity class

User.java

@Entity
public class User implements Serializable {
	@Id
	private String username;
	private String firstName;
	private String lastName;

	...
}

Then you can configure a cache "usersCache" to use the JPA Cache Store, so that when you put data into the cache, the data is persisted into the database based on the JPA configuration.

EmbeddedCacheManager cacheManager = ...;


Configuration cacheConfig = new ConfigurationBuilder().persistence()
            .addStore(JpaStoreConfigurationBuilder.class)
            .persistenceUnitName("org.infinispan.loaders.jpa.configurationTest")
            .entityClass(User.class)
            .build();
cacheManager.defineConfiguration("usersCache", cacheConfig);

Cache<String, User> usersCache = cacheManager.getCache("usersCache");
usersCache.put("raytsang", new User(...));

Normally a single Red Hat Data Grid cache can store multiple types of key/value pairs, for example:

Cache<String, User> usersCache = cacheManager.getCache("myCache");
usersCache.put("raytsang", new User());
Cache<Integer, Teacher> teachersCache = cacheManager.getCache("myCache");
teachersCache.put(1, new Teacher());

It is important to note that when a cache is configured to use a JPA Cache Store, that cache can only store ONE type of data.

Cache<String, User> usersCache = cacheManager.getCache("myJPACache"); // configured for User entity class
usersCache.put("raytsang", new User());
Cache<Integer, Teacher> teachersCache = cacheManager.getCache("myJPACache"); // cannot do this when this cache is configured to use a JPA cache store
teachersCache.put(1, new Teacher());

Use of @EmbeddedId is supported so that you can also use composite keys.

@Entity
public class Vehicle implements Serializable {
	@EmbeddedId
	private VehicleId id;
	private String color;
	...
}

@Embeddable
public class VehicleId implements Serializable
{
	private String state;
	private String licensePlate;
	...
}

Lastly, auto-generated IDs (e.g., via @GeneratedValue) are not supported. When putting entries into a cache backed by a JPA cache store, the key should be the ID value.

5.12.2. Configuration

5.12.2.1. Sample Programmatic Configuration

Configuration cacheConfig = new ConfigurationBuilder().persistence()
             .addStore(JpaStoreConfigurationBuilder.class)
             .persistenceUnitName("org.infinispan.loaders.jpa.configurationTest")
             .entityClass(User.class)
             .build();

The available parameters are:

persistenceUnitName

JPA persistence unit name in JPA configuration (persistence.xml) that contains the JPA entity class

entityClass

JPA entity class that is expected to be stored in this cache. Only one class is allowed.

5.12.2.2. Sample XML Configuration

<local-cache name="vehicleCache">
   <persistence passivation="false">
      <jpa-store xmlns="urn:infinispan:config:store:jpa:7.0"
         persistence-unit="org.infinispan.persistence.jpa.configurationTest"
         entity-class="org.infinispan.persistence.jpa.entity.Vehicle"/>
   </persistence>
</local-cache>

The available parameters are:

persistence-unit

JPA persistence unit name in JPA configuration (persistence.xml) that contains the JPA entity class

entity-class

Fully qualified JPA entity class name that is expected to be stored in this cache. Only one class is allowed.

5.12.3. Additional References

Refer to the test case for code samples in action.

Refer to test configurations for configuration samples.

5.13. Custom Cache Stores

If the provided cache stores do not fulfill all of your requirements, it is possible for you to implement your own store. The steps required to create your own store are as follows:

  1. Write your custom store by implementing one of the following interfaces:

    • org.infinispan.persistence.spi.AdvancedCacheWriter
    • org.infinispan.persistence.spi.AdvancedCacheLoader
    • org.infinispan.persistence.spi.CacheLoader
    • org.infinispan.persistence.spi.CacheWriter
    • org.infinispan.persistence.spi.ExternalStore
    • org.infinispan.persistence.spi.AdvancedLoadWriteStore
    • org.infinispan.persistence.spi.TransactionalCacheWriter
  2. Annotate your store class with the @Store annotation and specify the properties relevant to your store, e.g. whether the store can be shared in Replicated or Distributed mode: @Store(shared = true).
  3. Create a custom cache store configuration and builder. This requires extending AbstractStoreConfiguration and AbstractStoreConfigurationBuilder. As an optional step, add the @ConfigurationFor and @BuiltBy annotations to your configuration class, and add @ConfiguredBy to your store implementation class. These additional annotations ensure that your custom configuration builder is used to parse your store configuration from XML. If these annotations are not added, the CustomStoreConfigurationBuilder is used to parse the common store attributes defined in AbstractStoreConfiguration, and any additional elements are ignored. If a store and its configuration do not declare the @Store and @ConfigurationFor annotations respectively, a warning message is logged upon cache initialization.
  4. Add your custom store to your cache’s configuration:

    1. Add your custom store to the ConfigurationBuilder, for example:

      Configuration config = new ConfigurationBuilder()
                  .persistence()
                  .addStore(CustomStoreConfigurationBuilder.class)
                  .build();
    2. Define your custom store via xml:

      <local-cache name="customStoreExample">
        <persistence>
          <store class="org.infinispan.persistence.dummy.DummyInMemoryStore" />
        </persistence>
      </local-cache>
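To make step 1 concrete, the following is a minimal sketch of a store that implements both CacheLoader and CacheWriter and keeps entries in a map. The class and package names are hypothetical, and the exact SPI package layout may vary between versions:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.infinispan.commons.persistence.Store;
import org.infinispan.marshall.core.MarshalledEntry;
import org.infinispan.persistence.spi.CacheLoader;
import org.infinispan.persistence.spi.CacheWriter;
import org.infinispan.persistence.spi.InitializationContext;

@Store(shared = false)
public class MapBackedStore implements CacheLoader, CacheWriter {

   private final ConcurrentMap<Object, MarshalledEntry> entries = new ConcurrentHashMap<>();
   private InitializationContext ctx;

   @Override
   public void init(InitializationContext ctx) {
      // Gives access to the marshaller, store configuration, time service, etc.
      this.ctx = ctx;
   }

   @Override
   public void start() { /* open any underlying resources here */ }

   @Override
   public void stop() { /* release any underlying resources here */ }

   @Override
   public MarshalledEntry load(Object key) {
      return entries.get(key);
   }

   @Override
   public boolean contains(Object key) {
      return entries.containsKey(key);
   }

   @Override
   public void write(MarshalledEntry entry) {
      entries.put(entry.getKey(), entry);
   }

   @Override
   public boolean delete(Object key) {
      return entries.remove(key) != null;
   }
}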

5.13.1. HotRod Deployment

A Custom Cache Store can be packaged into a separate JAR file and deployed in a HotRod server using the following steps:

  1. Follow steps 1-3 in Custom Cache Stores above and package your implementations in a JAR file (or use a Custom Cache Store Archetype).
  2. In your JAR, create a service file under META-INF/services/ that contains the fully qualified class name of your store implementation. The name of this service file should reflect the interface that your store implements. For example, if your store implements the AdvancedCacheWriter interface then you need to create the following file (its contents are sketched after this list):

    • /META-INF/services/org.infinispan.persistence.spi.AdvancedCacheWriter
  3. Deploy the JAR file in the Red Hat Data Grid Server.
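The service file itself simply lists the implementation class, one fully qualified name per line. For example (the class name here is hypothetical):

# contents of META-INF/services/org.infinispan.persistence.spi.AdvancedCacheWriter
org.acme.store.MyAdvancedCacheWriter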

5.14. Store Migrator

Red Hat Data Grid 7.3 introduced changes to internal marshalling functionality that are not backward compatible with previous versions of Red Hat Data Grid. As a result, Red Hat Data Grid 7.3.x and later cannot read cache stores created in earlier versions of Red Hat Data Grid. Additionally, Red Hat Data Grid no longer provides some store implementations, such as the JDBC Mixed and Binary stores.

You can use StoreMigrator.java to migrate cache stores. This migration tool reads data from cache stores created by previous versions and rewrites the content for compatibility with the current marshalling implementation.

5.14.1. Migrating Cache Stores

To perform a migration with StoreMigrator:

  1. Put infinispan-tools-9.4.0.jar and dependencies for your source and target databases, such as JDBC drivers, on your classpath.
  2. Create a .properties file that contains configuration properties for the source and target cache stores.

    You can find an example properties file that contains all applicable configuration options in migrator.properties.

  3. Specify the .properties file as an argument for StoreMigrator.
  4. Run mvn exec:java to execute the migrator.

See the following example Maven pom.xml for StoreMigrator:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.infinispan.example</groupId>
    <artifactId>jdbc-migrator-example</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-tools</artifactId>
            <version>${version.infinispan}</version>
        </dependency>

        <!-- ADD YOUR REQUIRED DEPENDENCIES HERE -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.2.1</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>java</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <mainClass>StoreMigrator</mainClass>
                    <arguments>
                        <argument><!-- PATH TO YOUR MIGRATOR.PROPERTIES FILE --></argument>
                    </arguments>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Replace ${version.infinispan} with the appropriate version of Red Hat Data Grid.

5.14.2. Store Migrator Properties

All migrator properties are configured within the context of a source or target store. Each property must start with either source. or target..

All properties in the following sections apply to both source and target stores, except for table.binary.* properties because it is not possible to migrate to a binary table.

5.14.2.1. Common Properties

type

The type of the store. Possible values: JDBC_STRING, JDBC_BINARY, JDBC_MIXED, LEVELDB, ROCKSDB, SINGLE_FILE_STORE, SOFT_INDEX_FILE_STORE. Example value: JDBC_MIXED. Required: yes.

cache_name

The name of the cache associated with the store. Example value: persistentMixedCache. Required: yes.

5.14.2.2. JDBC Properties

dialect

The dialect of the underlying database. Example value: POSTGRES. Required: yes.

marshaller.type

The marshaller to use for the store. Possible values: LEGACY (the Red Hat Data Grid 7.2.x marshaller; valid for source stores only), CURRENT (the Red Hat Data Grid 7.3.x marshaller), CUSTOM (a custom marshaller). Example value: CURRENT. Required: yes.

marshaller.class

The class of the marshaller if type=CUSTOM. Example value: org.example.CustomMarshaller. Required: no.

marshaller.externalizers

A comma-separated list of custom AdvancedExternalizer implementations to load, in the form [id]:<Externalizer class>. Example value: 25:Externalizer1,org.example.Externalizer2. Required: no.

connection_pool.connection_url

The JDBC connection URL. Example value: jdbc:postgresql:postgres. Required: yes.

connection_pool.driver_class

The class of the JDBC driver. Example value: org.postgresql.Driver. Required: yes.

connection_pool.username

The database username. Required: yes.

connection_pool.password

The database password. Required: yes.

db.major_version

The database major version. Example value: 9. Required: no.

db.minor_version

The database minor version. Example value: 5. Required: no.

db.disable_upsert

Disables database upsert. Example value: false. Required: no.

db.disable_indexing

Prevents the table index from being created. Example value: false. Required: no.

table.<binary|string>.table_name_prefix

An additional prefix for the table name. Example value: tablePrefix. Required: no.

table.<binary|string>.<id|data|timestamp>.name

The name of the column. Example value: id_column. Required: yes.

table.<binary|string>.<id|data|timestamp>.type

The type of the column. Example value: VARCHAR. Required: yes.

key_to_string_mapper

The TwoWayKey2StringMapper class. Example value: org.infinispan.persistence.keymappers.DefaultTwoWayKey2StringMapper. Required: no.

5.14.2.3. LevelDB/RocksDB Properties

location

The location of the database directory. Example value: /some/example/dir. Required: yes.

compression

The compression type to be used. Example value: SNAPPY. Required: no.

5.14.2.4. SingleFileStore Properties

location

The directory containing the store's .dat file. Example value: /some/example/dir. Required: yes.

5.14.2.5. SoftIndexFileStore Properties

location

The location of the database directory. Example value: /some/example/dir. Required: yes.

index_location

The location of the database's index. Example value: /some/example/dir-index. Required: for target stores only.
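Putting these properties together, the following is a minimal sketch of a migrator.properties file for migrating a single-file store to a JDBC string-keyed store; all values are illustrative:

# Source store: a single-file store created by an earlier version
source.type=SINGLE_FILE_STORE
source.cache_name=myCache
source.location=/path/to/old/store

# Target store: a JDBC string-keyed store using the current marshaller
target.type=JDBC_STRING
target.cache_name=myCache
target.dialect=POSTGRES
target.marshaller.type=CURRENT
target.connection_pool.connection_url=jdbc:postgresql:postgres
target.connection_pool.driver_class=org.postgresql.Driver
target.connection_pool.username=exampleUser
target.connection_pool.password=examplePassword
target.table.string.table_name_prefix=ISPN_STRING_TABLE
target.table.string.id.name=ID_COLUMN
target.table.string.id.type=VARCHAR(255)
target.table.string.data.name=DATA_COLUMN
target.table.string.data.type=BYTEA
target.table.string.timestamp.name=TIMESTAMP_COLUMN
target.table.string.timestamp.type=BIGINT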

The following sections describe the SPI and also discuss the SPI implementations that Red Hat Data Grid ships out of the box.

5.15. SPI

The following class diagram presents the main SPI interfaces of the persistence API:

Figure 5.1. Persistence SPI


Some notes about the classes:

  • ByteBuffer - abstracts the serialized form of an object
  • MarshalledEntry - abstracts the information held within a persistent store corresponding to a key-value pair added to the cache. It provides methods for reading this information both in serialized (ByteBuffer) and deserialized (Object) format. Normally, data read from the store is kept in serialized format and lazily deserialized on demand, within the MarshalledEntry implementation.
  • CacheWriter and CacheLoader provide basic methods for reading and writing to a store
  • AdvancedCacheLoader and AdvancedCacheWriter provide operations to manipulate the underlying storage in bulk: parallel iteration and purging of expired entries, clear and size.

A provider might choose to only implement a subset of these interfaces:

  • Not implementing the AdvancedCacheWriter makes the given writer unusable for purging expired entries or for clear operations
  • If a loader does not implement the AdvancedCacheLoader interface, then it will not participate in preloading nor in cache iteration (required also for stream operations).

If you are looking to migrate your existing store to the new API or to write a new store implementation, the SingleFileStore might be a good starting point/example.

5.16. More implementations

Many more cache loader and cache store implementations exist. Visit this website for more details.