Chapter 26. Performing Rolling Upgrades

You can perform rolling upgrades in Remote Client/Server Mode to upgrade Red Hat Data Grid to a more recent version without downtime or data loss.


This section explains how to upgrade Red Hat Data Grid servers. For client upgrade procedures, see the documentation for your Hot Rod client.

At a high level, a rolling upgrade consists of the following steps:

  1. Set up a target cluster. The target cluster is the Red Hat Data Grid version to which you want to migrate data. The source cluster is the Red Hat Data Grid deployment that is currently in use. After the target cluster is running, you configure all clients to point to it instead of the source cluster.
  2. Synchronize data from the source cluster to the target cluster.

26.1. Setting Up a Target Cluster

  1. Start the target cluster with unique network properties or a different JGroups cluster name to keep it separate from the source cluster.
  2. Configure a RemoteCacheStore on the target cluster for each cache you want to migrate from the source cluster.

    RemoteCacheStore settings
    • remote-server must point to the source cluster via the outbound-socket-binding property.
    • remoteCacheName must match the cache name on the source cluster.
    • hotrod-wrapping must be true (enabled).
    • shared must be true (enabled).
    • purge must be false (disabled).
    • passivation must be false (disabled).
    • protocol-version must match the Hot Rod protocol version of the source cluster.

      Example RemoteCacheStore Configuration

         <remote-store cache="MyCache" socket-timeout="60000" tcp-no-delay="true" protocol-version="2.5" shared="true" hotrod-wrapping="true" purge="false" passivation="false">
            <remote-server outbound-socket-binding="remote-store-hotrod-server"/>
         </remote-store>

      Example outbound-socket-binding Configuration

         <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
            <outbound-socket-binding name="remote-store-hotrod-server">
               <remote-destination host="" port="11222"/>
            </outbound-socket-binding>
         </socket-binding-group>

  3. Configure the target cluster to handle all client requests instead of the source cluster:

    1. Configure all clients to point to the target cluster instead of the source cluster.
    2. Restart each client node.

      The target cluster lazily loads data from the source cluster on demand via RemoteCacheStore.
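
      Repointing a Hot Rod Java client typically means changing the server list in its configuration. A minimal hotrod-client.properties fragment, assuming the target cluster is reachable at the hypothetical address target-host:11222:

      ```properties
      # Point the client at the target cluster instead of the source cluster.
      # "target-host" is a placeholder; substitute your target cluster address.
      infinispan.client.hotrod.server_list = target-host:11222
      ```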

26.2. Synchronizing Data from the Source Cluster

  1. Call the synchronizeData() method in the TargetMigrator interface. Do one of the following on the target cluster for each cache that you want to migrate:

    • JMX: invoke the synchronizeData operation and specify the hotrod parameter on the RollingUpgradeManager MBean.
    • CLI:

      $ bin/ --connect controller= -c "/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=MyCache:synchronize-data(migrator-name=hotrod)"

    Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data.

    Use the following parameters to tune the operation:

    • read-batch configures the number of entries to read from the source cluster at a time. The default value is 10000.
    • write-threads configures the number of threads used to write data. The default value is the number of processors available.

      For example:

      synchronize-data(migrator-name=hotrod, read-batch=100000, write-threads=3)
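
      Conceptually, the synchronization reads entries from the source cluster in read-batch chunks and writes them to the target with a pool of write-threads writer threads. The following pure-Java sketch mimics that pattern with in-memory maps as hypothetical stand-ins for the two clusters; it is an illustration of the batching and threading, not the Data Grid implementation:

      ```java
      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ExecutionException;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;

      public class SyncSketch {

          // Copy source to target, reading readBatch entries at a time and
          // writing with a pool of writeThreads threads.
          static void migrate(Map<String, String> source, Map<String, String> target,
                              int readBatch, int writeThreads) {
              ExecutorService writers = Executors.newFixedThreadPool(writeThreads);
              try {
                  List<Map.Entry<String, String>> entries = new ArrayList<>(source.entrySet());
                  List<Future<?>> pending = new ArrayList<>();
                  for (int i = 0; i < entries.size(); i += readBatch) {
                      // Read one batch from the source ...
                      List<Map.Entry<String, String>> batch =
                              entries.subList(i, Math.min(i + readBatch, entries.size()));
                      // ... and hand it to the writer pool.
                      pending.add(writers.submit(() ->
                              batch.forEach(e -> target.put(e.getKey(), e.getValue()))));
                  }
                  for (Future<?> f : pending) f.get(); // wait for all writes to finish
              } catch (InterruptedException | ExecutionException e) {
                  throw new RuntimeException(e);
              } finally {
                  writers.shutdown();
              }
          }

          public static void main(String[] args) {
              Map<String, String> source = new ConcurrentHashMap<>();
              for (int i = 0; i < 25; i++) source.put("key" + i, "value" + i);
              Map<String, String> target = new ConcurrentHashMap<>();

              migrate(source, target, 10, 3); // read-batch=10, write-threads=3
              System.out.println(target.equals(source)); // prints "true"
          }
      }
      ```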

  2. Disable the RemoteCacheStore on the target cluster. Do one of the following:

    • JMX: invoke the disconnectSource operation and specify the hotrod parameter on the RollingUpgradeManager MBean.
    • CLI:

      $ bin/ --connect controller= -c "/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=MyCache:disconnect-source(migrator-name=hotrod)"
  3. Decommission the source cluster.

Extending Red Hat Data Grid

Red Hat Data Grid can be extended so that end users can add configurations, operations, and components beyond those that Red Hat Data Grid normally provides.

26.3. Custom Commands

Red Hat Data Grid makes use of a command/visitor pattern to implement the various top-level methods you see on the public-facing API. This is explained in further detail in the Architectural Overview section. While the core commands - and their corresponding visitors - are hard-coded as a part of Red Hat Data Grid’s core module, module authors can extend and enhance Red Hat Data Grid by creating new custom commands.

As a module author (of a module such as infinispan-query), you can define your own commands.

To do so:

  1. Create a META-INF/services/org.infinispan.commands.module.ModuleCommandExtensions file and ensure that it is packaged in your jar.
  2. Implement ModuleCommandFactory, ModuleCommandInitializer, and ModuleCommandExtensions.
  3. Specify the fully-qualified class name of your ModuleCommandExtensions implementation in the META-INF/services/org.infinispan.commands.module.ModuleCommandExtensions file.
  4. Implement your custom commands and the visitors for those commands.
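
Stripped of Data Grid specifics, the factory's role is to map single-byte command ids to command instances. The following self-contained sketch illustrates that idea; all names are hypothetical, and the real SPI types live in org.infinispan.commands.module:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical stand-in for a replicable command.
interface Command {
    byte id();
}

// Hypothetical stand-in for a ModuleCommandFactory: maps ids to constructors.
class CommandFactory {
    private final Map<Byte, Supplier<Command>> constructors = new HashMap<>();

    void register(byte id, Supplier<Command> ctor) {
        constructors.put(id, ctor);
    }

    // Rebuild a command from its wire id, as is done when a command
    // arrives from another node in the cluster.
    Command fromId(byte id) {
        Supplier<Command> ctor = constructors.get(id);
        if (ctor == null) {
            throw new IllegalArgumentException("Unknown command id: " + id);
        }
        return ctor.get();
    }
}

public class FactorySketch {
    public static void main(String[] args) {
        CommandFactory factory = new CommandFactory();
        // Register a command under id 42 (an arbitrary example id).
        factory.register((byte) 42, () -> () -> (byte) 42);
        System.out.println(factory.fromId((byte) 42).id()); // prints "42"
    }
}
```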

26.3.1. An Example

Here is an example of a META-INF/services/org.infinispan.commands.module.ModuleCommandExtensions file. It contains a single line: the fully-qualified class name of your ModuleCommandExtensions implementation (org.example.MyModuleCommandExtensions is a hypothetical name):

   org.example.MyModuleCommandExtensions

For a full, working example of a sample module that makes use of custom commands and visitors, check out the Red Hat Data Grid Sample Module.

26.3.2. Preassigned Custom Command Id Ranges

This is the list of command identifiers that are used by Red Hat Data Grid modules and frameworks. Red Hat Data Grid users should avoid using ids within these ranges. (The ranges are not yet finalized.) Because a command identifier is a single byte, the ranges cannot be too large.

   Module                     Command Id Range
   -------------------------  ----------------
   Red Hat Data Grid Query    100 - 119
   Hibernate Search           120 - 139
   Hot Rod Server             140 - 141
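
When picking an id for your own module, it can help to check it against the reserved ranges above. The following small helper was written for this chapter and is not part of the Data Grid API:

```java
public class CommandIdCheck {
    // Reserved ranges from the table above: {low, high}, inclusive.
    private static final int[][] RESERVED = {
        {100, 119}, // Red Hat Data Grid Query
        {120, 139}, // Hibernate Search
        {140, 141}  // Hot Rod Server
    };

    static boolean isReserved(int id) {
        for (int[] range : RESERVED) {
            if (id >= range[0] && id <= range[1]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isReserved(105)); // prints "true"  (Query range)
        System.out.println(isReserved(99));  // prints "false" (free for users)
    }
}
```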

26.4. Extending the configuration builders and parsers

If your custom module requires configuration, you can extend Red Hat Data Grid's configuration builders and parsers. See the custom module tests for a detailed example of how to implement this.
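
As an illustration of the builder half only, the following generic sketch shows the fluent-builder shape that such extensions typically follow. The names are hypothetical, and it does not use the actual Data Grid configuration SPI:

```java
// Hypothetical immutable configuration object for a custom module.
class MyModuleConfiguration {
    private final int retries;
    private final boolean enabled;

    MyModuleConfiguration(int retries, boolean enabled) {
        this.retries = retries;
        this.enabled = enabled;
    }

    int retries() { return retries; }
    boolean enabled() { return enabled; }
}

// Hypothetical fluent builder, mirroring the style of the core configuration builders.
public class MyModuleConfigurationBuilder {
    private int retries = 3;        // default value
    private boolean enabled = true; // default value

    public MyModuleConfigurationBuilder retries(int retries) {
        this.retries = retries;
        return this;
    }

    public MyModuleConfigurationBuilder enabled(boolean enabled) {
        this.enabled = enabled;
        return this;
    }

    // Validate and produce the immutable configuration.
    public MyModuleConfiguration build() {
        if (retries < 0) throw new IllegalArgumentException("retries must be >= 0");
        return new MyModuleConfiguration(retries, enabled);
    }

    public static void main(String[] args) {
        MyModuleConfiguration cfg = new MyModuleConfigurationBuilder().retries(5).build();
        System.out.println(cfg.retries()); // prints "5"
    }
}
```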