Upgrading Data Grid

Red Hat Data Grid 8.0

Data Grid Documentation

Red Hat Customer Content Services


Find out about changes in Data Grid 8.0 that affect migration from previous versions and then complete the steps to upgrade deployments and migrate your data.

Chapter 1. Red Hat Data Grid

Data Grid is a high-performance, distributed in-memory data store.

Schemaless data structure
Flexibility to store different objects as key-value pairs.
Grid-based data storage
Designed to distribute and replicate data across clusters.
Elastic scaling
Dynamically adjust the number of nodes to meet demand without service disruption.
Data interoperability
Store, retrieve, and query data in the grid from different endpoints.

1.1. Data Grid Documentation

Documentation for Data Grid is available on the Red Hat customer portal.

1.2. Data Grid Downloads

Access the Data Grid Software Downloads on the Red Hat customer portal.


You must have a Red Hat account to access and download Data Grid software.

Chapter 2. Migrating to Data Grid 8.0

Review changes in Data Grid 8.0 that affect migration from previous releases.

2.1. Data Grid 8.0 Server

As of 8.0, Data Grid server is no longer based on Red Hat JBoss Enterprise Application Platform (EAP) and is re-designed to be lightweight and more secure with much faster start times.

Data Grid servers use $RHDG_HOME/server/conf/infinispan.xml for configuration.

Data store configuration

You configure how Data Grid stores your data through cache definitions. By default, Data Grid servers include a Cache Manager configuration that lets you create, configure, and manage your cache definitions.

<cache-container name="default" 1
                 statistics="true"> 2
  <transport cluster="${infinispan.cluster.name}" 3
             stack="${infinispan.cluster.stack:tcp}"/> 4
</cache-container>
Creates a Cache Manager named "default".
Exports Cache Manager statistics through the metrics endpoint.
Adds a JGroups cluster transport that allows Data Grid servers to automatically discover each other and form clusters.
Uses the default TCP stack for cluster traffic.

In the preceding configuration, there are no cache definitions. When you start the 8.0 server, it instantiates the default Cache Manager so you can create cache definitions at runtime through the CLI, REST API, or from remote Hot Rod clients.


Data Grid server no longer provides a domain mode as in previous versions that were based on EAP. However, Data Grid server provides a default configuration with clustering capabilities so your data is replicated across all nodes.

Server configuration

Data Grid 8.0 extends infinispan.xml with a server element that defines configuration specific to Data Grid servers.

<interfaces>
  <interface name="public">
    <inet-address value="${infinispan.bind.address:127.0.0.1}"/> 1
  </interface>
</interfaces>

<socket-bindings default-interface="public">
  <socket-binding name="default"
                  port="${infinispan.bind.port:11222}"/> 2
  <socket-binding name="memcached"
                  port="11221"/> 3
</socket-bindings>

<security-realms>
  <security-realm name="default"> 4
    <properties-realm groups-attribute="Roles">
      <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/>
      <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/>
    </properties-realm>
  </security-realm>
</security-realms>

<endpoints socket-binding="default" security-realm="default"> 5
  <hotrod-connector name="hotrod"/>
  <rest-connector name="rest"/>
</endpoints>
Creates a default public interface that uses the loopback address.
Creates a default socket binding that binds the public interface to port 11222.
Creates a socket binding for the Memcached connector. Note that the Memcached endpoint is now deprecated.
Defines a default security realm that uses property files to define credentials and RBAC settings.
Exposes the Hot Rod and REST endpoints through the default socket binding on port 11222.

The REST endpoint handles administrative operations that the Data Grid command line interface (CLI) and console use. For this reason, you should never disable the REST endpoint.
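For illustration, administrative clients such as the console retrieve information through REST requests like the following (the health endpoint path is assumed from the v2 API and should be verified against your server version):

```
GET /rest/v2/cache-managers/default/health
```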

Table 2.1. Cheat Sheet

  • Start a clustered Data Grid 7.x server:

    ./standalone.sh -c clustered.xml

  • Start a local, non-clustered Data Grid 8.0 server:

    ./server.sh -c infinispan-local.xml

  • Start a Data Grid 8.0 server with the UDP JGroups stack:

    ./server.sh -j udp

  • Use custom UDP/TCP addresses as follows:


  • Enable JMX as follows:

    <cache-container name="default"
                     statistics="true"> 1
      <jmx enabled="true"/> 2
    </cache-container>
    Enables statistics for the Cache Manager. This is the default.
    Exports JMX MBeans.

2.2. Data Grid Caches

Except for the Cache service on OpenShift, Data Grid provides empty cache containers by default. When you start Data Grid 8.0, it instantiates a Cache Manager so you can create caches at runtime.

In Data Grid 8.0, cache definitions that you create through the CacheContainerAdmin API are permanent to ensure that they survive cluster restarts.

cacheManager.administration()
   .withFlags(AdminFlag.VOLATILE) 1
   .getOrCreateCache("myTemporaryCache", "org.infinispan.DIST_SYNC"); 2
Includes the VOLATILE flag that changes the default behavior and creates temporary caches.
Returns a cache named "myTemporaryCache" or creates one using the DIST_SYNC configuration template.

AdminFlag.PERMANENT is enabled by default to ensure that cache definitions survive restarts. You must separately add persistent storage to Data Grid for data to survive restarts, for example:

ConfigurationBuilder b = new ConfigurationBuilder();
// For example, add a file-based store so data survives restarts (illustrative location).
b.persistence().addSingleFileStore().location("/tmp/myDataStore");

Cache Configuration Templates

Get the list of cache configuration templates as follows:

  • Use Tab auto-completion with the CLI:

    [//containers/default]> create cache --template=
  • Use the REST API:
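    For example, the following request lists templates for the default Cache Manager (the endpoint path is assumed from the REST v2 API and should be verified against your version):

```
GET /rest/v2/cache-managers/default/cache-configs/templates
```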


2.3. Creating Caches

Add cache definitions to Data Grid to configure how it stores your data.

Library Mode

The following example initializes the Cache Manager and creates a cache definition named "myDistributedCache" that uses the distributed, synchronous cache mode:

GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.DIST_SYNC);
cacheManager.defineConfiguration("myDistributedCache", builder.build());

You can also use the getOrCreate() method to create your cache definition or return it if it already exists, for example:

cacheManager.administration().getOrCreateCache("myDistributedCache", builder.build());

Data Grid Server

Remotely create caches at runtime as follows:

  • Use the CLI.

    To create a cache named "myDistributedCache" with the DIST_SYNC cache template, run the following:

    [//containers/default]> create cache --template=org.infinispan.DIST_SYNC name=myDistributedCache
  • Use the REST API.

    To create a cache named "myCache", use the following POST invocation and include the cache definition in the request payload in XML or JSON format:

    POST /rest/v2/caches/myCache
  • Use Hot Rod clients.

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
    import org.infinispan.client.hotrod.impl.ConfigurationProperties;
    import org.infinispan.commons.api.CacheContainerAdmin.AdminFlag;

    // Create a configuration for a locally running server.
    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.addServer().host("127.0.0.1").port(ConfigurationProperties.DEFAULT_HOTROD_PORT);
    RemoteCacheManager manager = new RemoteCacheManager(builder.build());

    // Override the default and create a volatile cache that
    // does not survive cluster restarts. Create a cache named
    // "myTemporaryCache" that uses the distributed, synchronous
    // cache template, or return it if it already exists.
    manager.administration()
           .withFlags(AdminFlag.VOLATILE)
           .getOrCreateCache("myTemporaryCache", "org.infinispan.DIST_SYNC");

For more examples of creating caches with a Hot Rod Java client, see the Data Grid tutorials.

2.4. Cache Health Status

Data Grid now returns one of the following for cache health:

HEALTHY means a cache is operating as expected.
HEALTHY_REBALANCING means a cache is in the rebalancing state but otherwise operating as expected.
DEGRADED indicates a cache is not operating as expected and possibly requires troubleshooting.

2.5. Marshalling Capabilities

As of this release, the default marshaller for Data Grid is ProtoStream, which marshals data as Protocol Buffers, a language-neutral, backwards-compatible format.

To use ProtoStream, Data Grid requires serialization contexts that contain:

  • .proto schemas that provide a structured representation of your Java objects as Protobuf message types.
  • Marshaller implementations to encode your Java objects to Protobuf format.

Data Grid provides direct integration with ProtoStream libraries and can generate everything you need to initialize serialization contexts.


Cache stores in previous versions of Data Grid store data in a binary format that is not compatible with ProtoStream marshallers. You must use the StoreMigrator utility to migrate your data.

  • Data Grid Library Mode does not include JBoss Marshalling by default. You must add the infinispan-jboss-marshalling dependency to your classpath.
  • Data Grid servers support JBoss Marshalling, but Hot Rod clients must declare the marshaller to use.


  • Spring integration does not yet support the default ProtoStream marshaller. For this reason, you should use the Java Serialization Marshaller.
  • To use the Java Serialization Marshaller, you must add classes to the deserialization whitelist.

2.6. Data Grid Configuration

New and Modified Elements and Attributes

  • stack adds support for inline JGroups stack definitions.
  • stack.combine and stack.position attributes let you override and modify JGroups stack definitions.
  • metrics lets you configure how Data Grid exports metrics that are compatible with the Eclipse MicroProfile Metrics API.
  • context-initializer lets you specify a SerializationContextInitializer implementation that initializes a ProtoStream-based marshaller for user types.
  • key-transformers lets you register transformers that convert custom keys to String for indexing with Lucene.
  • statistics now defaults to "false".

Deprecated Elements and Attributes

The following elements and attributes are now deprecated:

  • address-count attribute for the off-heap element.
  • protocol attribute for the transaction element.
  • duplicate-domains attribute for the jmx element.
  • advanced-externalizer
  • custom-interceptors
  • state-transfer-executor
  • transaction-protocol

Refer to the Configuration Schema for possible replacements or alternatives.

Removed Elements and Attributes

The following elements and attributes were deprecated in a previous release and are now removed:

  • deadlock-detection-spin
  • compatibility
  • write-skew
  • versioning
  • data-container
  • eviction
  • eviction-thread-policy

2.7. Persistence

In comparison with some previous versions of Data Grid, such as 7.1, there are changes to cache store configurations. Cache store definitions must:

  • Be contained within persistence elements.
  • Include an xmlns namespace declaration.

As of this release, cache store configuration:

  • Defaults to segmented="true" if the cache store implementation supports segmentation.
  • Removes the singleton attribute for the store element. Use shared=true instead.

JDBC String-Based cache stores use connection factories based on Agroal to connect to databases. It is no longer possible to use c3p0.properties and hikari.properties files.

Likewise, JDBC String-Based cache store configurations that use segmentation, which is now the default, must include the segmentColumnName and segmentColumnType parameters.
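In declarative configuration, the equivalent of these parameters is the segment-column element. The following sketch shows a segmented JDBC String-Based store; the table prefix and column names are illustrative:

```xml
<persistence>
   <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:10.1"
                            segmented="true"
                            dialect="MYSQL">
      <string-keyed-table prefix="ISPN_STRING_TABLE">
         <id-column name="ID_COLUMN" type="VARCHAR(255)"/>
         <data-column name="DATA_COLUMN" type="VARBINARY(1000)"/>
         <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
         <segment-column name="SEGMENT_COLUMN" type="INTEGER"/>
      </string-keyed-table>
   </string-keyed-jdbc-store>
</persistence>
```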

MySQL Example


PostgreSQL Example



2.8. REST API

Previous versions of Data Grid used REST API v1, which is now replaced by REST API v2.

The default context path is now /rest/v2. You must update any clients or scripts to use REST API v2.

2.9. Hot Rod Client Authentication

Hot Rod clients now use SCRAM-SHA-512 as the default authentication mechanism instead of DIGEST-MD5.


If you use property security realms, you must use the PLAIN authentication mechanism.
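For instance, a client connecting to a server with a property realm might set the mechanism and credentials as follows; the property names reflect common Hot Rod client configuration and should be verified against your client version:

```
# hotrod-client.properties
infinispan.client.hotrod.sasl_mechanism = PLAIN
infinispan.client.hotrod.auth_username = myuser
infinispan.client.hotrod.auth_password = changeme
```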

2.10. Java Distributions Available in Maven

Data Grid no longer provides Java artifacts outside the Maven repository, with the exception of the Data Grid server distribution. For information on adding required dependencies for the Data Grid Library, Hot Rod Java client, and utilities such as StoreMigrator, see the relevant documentation.

2.11. Red Hat JBoss Enterprise Application Platform (EAP) Modules

Data Grid no longer provides modules for applications running on EAP. Instead, EAP will provide direct integration with Data Grid in a future release.

However, until EAP provides functionality for handling the infinispan subsystem, you must package Data Grid 8.0 artifacts in your EAP deployments.

2.12. Deprecated Features and Functionality

Support for deprecated functionality is not available beyond the release in which it is deprecated.


Red Hat does not recommend including, enabling, or configuring deprecated functionality in new deployments.

2.12.1. Deprecations

Data Grid 8.0 deprecates the following features and functionality:

Memcached Endpoint Connector

As of this release, Data Grid no longer supports the Memcached endpoint. The Memcached connector is deprecated and planned for removal in a future release.


If you have a use case or requirement for the Memcached connector, contact your Red Hat support team to discuss requirements for a future Data Grid implementation of the Memcached connector.

JBoss Marshalling

JBoss Marshalling is a serialization-based marshalling library and was the default marshaller in previous Data Grid versions. You should not use serialization-based marshalling with Data Grid; instead, use ProtoStream, a high-performance binary wire format that ensures backwards compatibility.


The following interfaces and annotations are now deprecated:

  • org.infinispan.commons.marshall.AdvancedExternalizer
  • org.infinispan.commons.marshall.Externalizer
  • @SerializeWith

Data Grid ignores AdvancedExternalizer implementations when persisting data unless you use JBoss Marshalling.

Total Order Transaction Protocol

The org.infinispan.transaction.TransactionProtocol#TOTAL_ORDER protocol is deprecated. Use the default 2PC protocol instead.

Lucene Directory

The functionality to use Data Grid as a shared, in-memory index for Hibernate Search queries is now deprecated.

Custom Interceptors

The functionality to create custom interceptors with the AdvancedCache interface is now deprecated.

2.12.2. Removed Features and Functionality

Data Grid 8.0 no longer includes the following features and functionality that were either deprecated in a previous release or replaced with new components:

  • Uberjars (replaced with Maven dependencies and individual JAR files)
  • EAP Modules (replaced by the EAP Infinispan subsystem)
  • Cassandra Cache Store
  • Apache Spark Connector
  • Apache Hadoop Connector
  • Apache Camel component: jboss-datagrid-camel-library is replaced by the camel-infinispan component in Red Hat Fuse 7.3 and later.
  • REST Cache Store
  • REST API v1 (replaced by REST API v2)
  • Compatibility Mode
  • Distributed Execution
  • CLI Cache Loader
  • LevelDB Cache Store
  • infinispan-cloud (replaced by default configuration in infinispan-core)
  • org.infinispan.atomic package
  • getBulk() methods in the RemoteCache API for Hot Rod clients
  • JDBC PooledConnectionFactory via C3P0 and HikariCP connection pools
  • OSGI support
  • infinispan.server.hotrod.workerThreads system property
  • JON Plugin

Chapter 3. Performing Rolling Upgrades for Data Grid Servers

Perform rolling upgrades of your Data Grid clusters to change between versions without downtime or data loss. Rolling upgrades migrate both your Data Grid servers and your data to the target version over Hot Rod.

3.1. Setting Up Target Clusters

Create a cluster that runs the target Data Grid version and uses a remote cache store to load data from the source cluster.


  • Install a Data Grid cluster with the target upgrade version.

Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and specify port offsets to keep the target and source clusters separate.


  1. Add a RemoteCacheStore on the target cluster for each cache you want to migrate from the source cluster.

    Remote cache stores use the Hot Rod protocol to retrieve data from remote Data Grid clusters. When you add the remote cache store to the target cluster, it can lazily load data from the source cluster to handle client requests.

  2. Switch clients over to the target cluster so it starts handling all requests.

    1. Update client configuration with the location of the target cluster.
    2. Restart clients.

3.1.1. Remote Cache Stores for Rolling Upgrades

You must use specific remote cache store configuration to perform rolling upgrades, as follows:

<persistence passivation="false"> 1
   <remote-store xmlns="urn:infinispan:config:store:remote:10.1"
                 cache="myDistCache" 2
                 protocol-version="2.5" 3
                 hotrod-wrapping="true" 4
                 raw-values="true"> 5
      <remote-server host="" port="11222"/> 6
   </remote-store>
</persistence>
Disables passivation. Remote cache stores for rolling upgrades must disable passivation.
Matches the name of a cache in the source cluster. Target clusters load data from this cache using the remote cache store.
Matches the Hot Rod protocol version of the source cluster. 2.5 is the minimum version and is suitable for any upgrade paths. You do not need to set another Hot Rod version.
Ensures that entries are wrapped in a suitable format for the Hot Rod protocol.
Stores data in the remote cache store in raw format. This ensures that clients can use data directly from the remote cache store.
Points to the location of the source cluster.

3.2. Synchronizing Data to Target Clusters

When your target cluster is running and handling client requests using a remote cache store to load data on demand, you can synchronize data from the source cluster to the target cluster.

This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache in your Data Grid configuration.


  1. Start the synchronization operation for each cache in your Data Grid configuration that you want to migrate to the target cluster.

    Use the Data Grid REST API and invoke GET requests with the ?action=sync-data parameter. For example, to synchronize data in a cache named "myCache" from a source cluster to a target cluster, do the following:

    GET /rest/v2/caches/myCache?action=sync-data

    When the operation completes, Data Grid responds with the total number of entries copied to the target cluster.

    Alternatively, you can use JMX by invoking synchronizeData(migratorName=hotrod) on the RollingUpgradeManager MBean.

  2. Disconnect each node in the target cluster from the source cluster.

    For example, to disconnect the "myCache" cache from the source cluster, invoke the following GET request:

    GET /rest/v2/caches/myCache?action=disconnect-source

    To use JMX, invoke disconnectSource(migratorName=hotrod) on the RollingUpgradeManager MBean.

Next steps

After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster.

Chapter 4. Patching Data Grid Server Installations

Install and manage patches for Data Grid server installations.

You can apply patches to multiple Data Grid servers with different versions to upgrade to a desired target version. However, patches do not take effect if Data Grid servers are running. For this reason you install patches while servers are offline. If you want to upgrade Data Grid clusters without downtime, create a new cluster with the target version and perform a rolling upgrade to that version instead of patching.

4.1. Data Grid Server Patches

Data Grid server patches are .zip archives that contain artifacts that you can apply to your $RHDG_HOME directory to fix issues and add new features.

Patches also provide a set of rules for Data Grid to modify your server installation. When you apply patches, Data Grid overwrites some files and removes others, depending on whether they are required for the target version.

However, Data Grid does not make any changes to configuration files that you have created or modified when applying a patch. Server patches do not modify or replace any custom configuration or data.

4.2. Downloading Server Patches

Download patches that you can apply to Data Grid servers.


  1. Access the Red Hat customer portal.
  2. Download the appropriate Data Grid server patch from the software downloads section.
  3. Open a terminal window and navigate to $RHDG_HOME.
  4. Start the CLI.

    $ bin/cli.sh
  5. Describe the patch file you downloaded.

    [disconnected]> patch describe /path/to/redhat-datagrid-$version-server-patch.zip
    Red Hat Data Grid patch target=$target_version source=$source_version created=$timestamp
    • $target_version is the Data Grid version that applies when you install the patch on a server.
    • $source_version is one or more Data Grid server versions where you can install the patch.


Use the checksum to verify the integrity of your download.

  1. Run the md5sum or sha256sum command with the downloaded patch as the argument, for example:

    $ sha256sum redhat-datagrid-$version-server-patch.zip
  2. Compare with the MD5 or SHA-256 checksum value on the Data Grid Software Details page.

4.3. Creating Server Patches

You can create patches for Data Grid servers from an existing server installation.

You can create patches for Data Grid servers starting from 8.0.1. You can patch 8.0 GA servers with 8.0.1. However, you cannot patch 7.3.x or earlier servers with 8.0.1 or later.

You can also create patches that either upgrade or downgrade the Data Grid server version. For example, you can create a patch from version 8.0.1 and use it to upgrade version 8.0 GA or downgrade a later version.


Red Hat supports patched server deployments only with patches that you download from the Red Hat customer portal. Red Hat does not support server patches that you create yourself.


  1. Navigate to $RHDG_HOME for a Data Grid server installation that has the target version for the patch you want to create.
  2. Start the CLI.

    $ bin/cli.sh
  3. Use the patch create command to generate a patch archive and include the -q option with a meaningful qualifier to describe the patch.

    [disconnected]> patch create -q "this is my test patch" path/to/mypatch.zip \
    path/to/target/server/home path/to/source/server/home

    The preceding command generates a .zip archive in the specified directory. Paths are relative to $RHDG_HOME for the target server.


    Create a single patch for multiple Data Grid versions, for example:

    [disconnected]> patch create -q "this is my test patch" path/to/mypatch.zip \
    path/to/target/server/home \
    path/to/source/server1/home path/to/source/server2/home

    Where server1 and server2 are different Data Grid versions where you can install "mypatch.zip".

  4. Describe the generated patch archive.

    [disconnected]> patch describe path/to/mypatch.zip
    Red Hat Data Grid patch target=$target_version(my test patch)  source=$source_version created=$timestamp
    • $target_version is the Data Grid server version from which the patch was created.
    • $source_version is one or more Data Grid server versions to which you can apply the patch.

      You can apply patches to Data Grid servers that match the $source_version only. Attempting to apply patches to other versions results in the following exception:

      java.lang.IllegalStateException: The supplied patch cannot be applied to `$source_version`

4.4. Installing Server Patches

Apply patches to Data Grid servers to upgrade or downgrade an existing version.


  • Download a server patch for the target version.


  1. Navigate to $RHDG_HOME for the Data Grid server you want to patch.
  2. Stop the server if it is running.


    If you patch a server while it is running, the version changes take effect after restart. If you do not want to stop the server, create a new cluster with the target version and perform a rolling upgrade to that version instead of patching.

  3. Start the CLI.

    $ bin/cli.sh
  4. Install the patch.

    [disconnected]> patch install path/to/patch.zip
    Red Hat Data Grid patch target=$target_version source=$source_version \
    created=$timestamp installed=$timestamp
    • $target_version displays the Data Grid version that the patch installed.
    • $source_version displays the Data Grid version before you installed the patch.
  5. Start the server to verify the patch is installed.

    $ bin/server.sh
    ISPN080001: Red Hat Data Grid Server $version

    If the patch is installed successfully $version matches $target_version.


Use the --server option to install patches in a different $RHDG_HOME directory, for example:

[disconnected]> patch install path/to/patch.zip --server=path/to/server/home

4.5. Rolling Back Server Patches

Remove patches from Data Grid servers by rolling them back and restoring the previous Data Grid version.


If a server has multiple patches installed, you can roll back the last installed patch only.

Rolling back patches does not revert configuration changes you make to Data Grid server. Before you roll back patches, you should ensure that your configuration is compatible with the version to which you are rolling back.


  1. Navigate to $RHDG_HOME for the Data Grid server installation you want to roll back.
  2. Stop the server if it is running.
  3. Start the CLI.

    $ bin/cli.sh
  4. List the installed patches.

    [disconnected]> patch ls
    Red Hat Data Grid patch target=$target_version source=$source_version
    created=$timestamp installed=$timestamp
    • $target_version is the Data Grid server version after the patch was applied.
    • $source_version is the version for Data Grid server before the patch was applied. Rolling back the patch restores the server to this version.
  5. Roll back the last installed patch.

    [disconnected]> patch rollback
  6. Quit the CLI.

    [disconnected]> quit
  7. Start the server to verify the patch is rolled back to the previous version.

    $ bin/server.sh
    ISPN080001: Data Grid Server $version

    If the patch is rolled back successfully $version matches $source_version.


Use the --server option to roll back patches in a different $RHDG_HOME directory, for example:

[disconnected]> patch rollback --server=path/to/server/home

Chapter 5. Migrating Data Between Cache Stores

Data Grid provides a Java utility for migrating persisted data between cache stores.

In the case of upgrading Data Grid, functional differences between major versions do not allow backwards compatibility between cache stores. You can use StoreMigrator to convert your data so that it is compatible with the target version.

For example, upgrading to Data Grid 8.0 changes the default marshaller to ProtoStream. In previous Data Grid versions, cache stores used a binary format that is not compatible with the changes to marshalling. This means that Data Grid 8.0 cannot read from cache stores created with previous Data Grid versions.

In other cases Data Grid versions deprecate or remove cache store implementations, such as JDBC Mixed and Binary stores. You can use StoreMigrator in these cases to convert to different cache store implementations.

5.1. Cache Store Migrator

Data Grid provides the StoreMigrator.java utility that recreates data for the latest Data Grid cache store implementations.

StoreMigrator takes a cache store from a previous version of Data Grid as the source and uses a cache store implementation for the current version as the target.

When you run StoreMigrator, it creates the target cache with the cache store type that you define using the EmbeddedCacheManager interface. StoreMigrator then loads entries from the source store into memory and then puts them into the target cache.

StoreMigrator also lets you migrate data from one type of cache store to another. For example, you can migrate from a JDBC String-Based cache store to a Single File cache store.


StoreMigrator cannot migrate data from segmented cache stores to:

  • Non-segmented cache stores.
  • Segmented cache stores that have a different number of segments.

5.2. Getting the Store Migrator

StoreMigrator is available as part of the Data Grid tools library, infinispan-tools, and is included in the Maven repository.


  • Configure your pom.xml for StoreMigrator as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <dependencies>
        <!-- infinispan-tools and additional dependencies -->
      </dependencies>
    </project>

5.3. Configuring the Store Migrator

Set properties for source and target cache stores in a migrator.properties file.


  1. Create a migrator.properties file.
  2. Configure the source cache store in migrator.properties.

    1. Prepend all configuration properties with source. as in the following example:

  3. Configure the target cache store in migrator.properties.

    1. Prepend all configuration properties with target. as in the following example:
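    Putting the two steps together, a migrator.properties file might look like the following sketch; the property names reflect common StoreMigrator usage and should be verified against your Data Grid version:

```
# Source: JDBC String-Based cache store from Data Grid 7.3.x
source.type=JDBC_STRING
source.cache_name=myCache
source.dialect=POSTGRES
source.version=9
source.connection_pool.connection_url=jdbc:postgresql://localhost:5432/store
source.connection_pool.driver_class=org.postgresql.Driver
source.connection_pool.username=dbuser
source.connection_pool.password=changeme
source.table.string.table_name_prefix=ISPN_STRING_TABLE

# Target: Single File cache store for Data Grid 8.0
target.type=SINGLE_FILE_STORE
target.cache_name=myCache
target.location=/path/to/target/store
```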


5.3.1. Store Migrator Properties

Configure source and target cache stores in a StoreMigrator properties file.

Table 5.1. Cache Store Type Property

type

Specifies the type of cache store for a source or target. Possible values include: JDBC_STRING, JDBC_BINARY, JDBC_MIXED, LEVELDB, ROCKSDB, SINGLE_FILE_STORE, SOFT_INDEX_FILE_STORE.

Table 5.2. Common Properties

cache_name

Names the cache that the store backs.

segment_count

Specifies the number of segments for target cache stores that can use segmentation.

The number of segments must match clustering.hash.numSegments in the Data Grid configuration.

In other words, the number of segments for a cache store must match the number of segments for the corresponding cache. If the number of segments is not the same, Data Grid cannot read data from the cache store.

Table 5.3. JDBC Properties

dialect

Specifies the dialect of the underlying database.

version

Specifies the marshaller version for source cache stores. Set one of the following values:

* 8 for Data Grid 7.2.x

* 9 for Data Grid 7.3.x

* 10 for Data Grid 8.x

Required for source stores only.

For example: source.version=9

marshaller.class

Specifies a custom marshaller class.

Required if using custom marshallers.

marshaller.externalizers

Specifies a comma-separated list of custom AdvancedExternalizer implementations to load in this format: [id]:<Externalizer class>

connection_pool.connection_url

Specifies the JDBC connection URL.

connection_pool.driver_class

Specifies the class of the JDBC driver.

connection_pool.username

Specifies a database username.

connection_pool.password

Specifies a password for the database username.

db.major_version

Sets the database major version.

db.minor_version

Sets the database minor version.

db.disable_upsert

Disables database upsert.

db.disable_indexing

Specifies if table indexes are created.

table.string.table_name_prefix

Specifies additional prefixes for the table name.

table.string.<id|data|timestamp>.name

Specifies the column name.

table.string.<id|data|timestamp>.type

Specifies the column type.

key_to_string_mapper

Specifies the TwoWayKey2StringMapper class.

To migrate from Binary cache stores in older Data Grid versions, change table.string.* to table.binary.* in the following properties:

  • source.table.binary.table_name_prefix
  • source.table.binary.<id|data|timestamp>.name
  • source.table.binary.<id|data|timestamp>.type

# Example configuration for migrating to a JDBC String-Based cache store
target.key_to_string_mapper=org.infinispan.persistence.keymappers.DefaultTwoWayKey2StringMapper

Table 5.4. RocksDB Properties

location

Sets the database directory.

compression

Specifies the compression type to use.

# Example configuration for migrating from a RocksDB cache store.

Table 5.5. SingleFileStore Properties

location

Sets the directory that contains the cache store .dat file.

# Example configuration for migrating to a Single File cache store.

Table 5.6. SoftIndexFileStore Properties

location

Sets the database directory.

index_location

Sets the database index directory.

# Example configuration for migrating to a Soft-Index File cache store.

5.4. Migrating Cache Stores

Run StoreMigrator to migrate data from one cache store to another.


  • Get infinispan-tools.jar.
  • Create a migrator.properties file that configures the source and target cache stores.


  • If you build infinispan-tools.jar from source, do the following:

    1. Add infinispan-tools.jar and dependencies for your source and target databases, such as JDBC drivers, to your classpath.
    2. Specify migrator.properties file as an argument for StoreMigrator.
  • If you pull infinispan-tools.jar from the Maven repository, run the following command:

    mvn exec:java

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.