Chapter 33. High Availability Using Server Hinting

33.1. High Availability Using Server Hinting

In Red Hat JBoss Data Grid, Server Hinting ensures that backup copies of data are not stored on the same physical server, rack, or data center as the original. Server Hinting does not apply to total replication, because total replication mandates complete replicas on every server, rack, and data center.

Data distribution across nodes is controlled by the Consistent Hashing mechanism. JBoss Data Grid offers a pluggable policy to specify the consistent hashing algorithm. For details see ConsistentHashFactories.

Setting a machineId, rackId, or siteId in the transport configuration will trigger the use of TopologyAwareConsistentHashFactory, which is the equivalent of the DefaultConsistentHashFactory with Server Hinting enabled.
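
For example, the topology information can be supplied programmatically when the global configuration is built. The following is a minimal sketch, assuming the embedded GlobalConfigurationBuilder API; the identifier values are illustrative only:

GlobalConfiguration globalConfig = GlobalConfigurationBuilder.defaultClusteredBuilder()
        .transport()
            .machineId("machine-1")
            .rackId("rack-1")
            .siteId("site-a")
        .build();
EmbeddedCacheManager cacheManager = new DefaultCacheManager(globalConfig);

With these attributes set, the TopologyAwareConsistentHashFactory described above is used.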

Server Hinting is particularly important when ensuring the high availability of your JBoss Data Grid implementation.

33.2. ConsistentHashFactories

33.2.1. ConsistentHashFactories

Red Hat JBoss Data Grid offers a pluggable mechanism for selecting the consistent hashing algorithm. A custom implementation can be used, or one of the following four ConsistentHashFactory implementations shipped with JBoss Data Grid:

  • DefaultConsistentHashFactory - keeps segments balanced evenly across all the nodes; however, the key mapping is not guaranteed to be the same across caches, as this depends on the history of each cache. If no consistentHashFactory is specified, this is the class that will be used.
  • SyncConsistentHashFactory - guarantees that the key mapping is the same for each cache, provided the current membership is the same. This has a drawback in that a node joining the cache can cause the existing nodes to also exchange segments, resulting in either additional state transfer traffic, the distribution of the data becoming less even, or both.
  • TopologyAwareConsistentHashFactory - equivalent of DefaultConsistentHashFactory, but automatically selected when the configuration includes server hinting.
  • TopologyAwareSyncConsistentHashFactory - equivalent of SyncConsistentHashFactory, but automatically selected when the configuration includes server hinting.

The consistent hash implementation can be selected via the hash configuration:

<hash consistent-hash-factory="org.infinispan.distribution.ch.SyncConsistentHashFactory"/>

This configuration guarantees that caches with the same members have the same consistent hash, and if the machineId, rackId, or siteId attributes are specified in the transport configuration it also spreads backup copies across physical machines, racks, and data centers.

It has a potential drawback in that it can move a greater number of segments than necessary during re-balancing. This can be mitigated by using a larger number of segments.

Another potential drawback is that the segments are not distributed as evenly as possible; in fact, using a very large number of segments can make the distribution worse.

Despite these potential drawbacks, SyncConsistentHashFactory and TopologyAwareSyncConsistentHashFactory both tend to reduce overhead in clustered environments, as neither calculates the hash based on the order in which nodes joined the cluster. In addition, both classes are typically faster than the default algorithms because they allow larger differences in the number of segments allocated to each node.
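
The same factory can also be selected programmatically. The following is a minimal sketch, assuming the embedded ConfigurationBuilder API and the SyncConsistentHashFactory package used in the declarative example above (the package may differ between versions); the numSegments value is illustrative and shows how a larger segment count can be configured to mitigate the rebalancing drawback noted earlier:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering()
        .cacheMode(CacheMode.DIST_SYNC)
        .hash()
            .consistentHashFactory(new org.infinispan.distribution.ch.SyncConsistentHashFactory())
            .numSegments(512);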

33.2.2. Implementing a ConsistentHashFactory

A custom ConsistentHashFactory must implement the org.infinispan.distribution.ch.ConsistentHashFactory interface with the following methods (all of which return an implementation of org.infinispan.distribution.ch.ConsistentHash):

ConsistentHashFactory Methods

create(Hash hashFunction, int numOwners, int numSegments, List<Address> members, Map<Address, Float> capacityFactors)
updateMembers(ConsistentHash baseCH, List<Address> newMembers, Map<Address, Float> capacityFactors)
rebalance(ConsistentHash baseCH)
union(ConsistentHash ch1, ConsistentHash ch2)

Currently it is not possible to pass custom parameters to ConsistentHashFactory implementations.
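
For illustration, the following skeleton shows the shape of such a custom factory. The class name is hypothetical, the Infinispan and java.util imports are omitted, and the method bodies are placeholders rather than a working distribution algorithm:

public class MyConsistentHashFactory implements ConsistentHashFactory<ConsistentHash> {

    @Override
    public ConsistentHash create(Hash hashFunction, int numOwners, int numSegments,
            List<Address> members, Map<Address, Float> capacityFactors) {
        // Build the initial ConsistentHash for the given members.
        throw new UnsupportedOperationException("placeholder");
    }

    @Override
    public ConsistentHash updateMembers(ConsistentHash baseCH, List<Address> newMembers,
            Map<Address, Float> capacityFactors) {
        // Adapt baseCH to the new membership, moving as few segments as possible.
        throw new UnsupportedOperationException("placeholder");
    }

    @Override
    public ConsistentHash rebalance(ConsistentHash baseCH) {
        // Return an evenly balanced ConsistentHash for the current members.
        throw new UnsupportedOperationException("placeholder");
    }

    @Override
    public ConsistentHash union(ConsistentHash ch1, ConsistentHash ch2) {
        // Return a ConsistentHash whose owners combine those of ch1 and ch2.
        throw new UnsupportedOperationException("placeholder");
    }
}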

33.3. Key Affinity Service

33.3.1. Key Affinity Service

The key affinity service allows a value to be placed on a certain node in a distributed Red Hat JBoss Data Grid cluster. The service returns a key that hashes to a particular node, identified by a supplied cluster address.

The keys returned by the key affinity service cannot hold any meaning, such as a username; they are only random identifiers used throughout the application for the record in question. The provided key generators do not guarantee that the keys returned by this service are unique. For a custom key format, you can pass your own implementation of KeyGenerator.
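
As a sketch of that extension point, the following hypothetical generator produces prefixed random string keys; the class name and key format are illustrative, and the keys still carry no application meaning:

import java.util.UUID;

import org.infinispan.affinity.KeyGenerator;

public class PrefixedKeyGenerator implements KeyGenerator {

    @Override
    public Object getKey() {
        // Random, prefixed identifier; the service still does not guarantee uniqueness.
        return "record-" + UUID.randomUUID();
    }
}

An instance of such a generator can be passed to KeyAffinityServiceFactory in place of RndKeyGenerator in the example below.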

The following is an example of how to obtain and use a reference to this service.

Key Affinity Service

// 1. Obtain a reference to a cache manager and cache
EmbeddedCacheManager cacheManager = getCacheManager();
Cache cache = cacheManager.getCache();
// 2. Create the key affinity service; this starts the service, which uses
//    the supplied Executor to generate and queue keys
KeyAffinityService keyAffinityService =
        KeyAffinityServiceFactory.newLocalKeyAffinityService(
            cache,
            new RndKeyGenerator(),
            Executors.newSingleThreadExecutor(),
            100);
// 3. Obtain a key that maps to the local node
Object localKey = keyAffinityService.getKeyForAddress(cacheManager.getAddress());
// 4. The entry is stored on the node with the provided address (here, the local node)
cache.put(localKey, "yourValue");

The following procedure is an explanation of the provided example.

Using the Key Affinity Service

  1. Obtain a reference to a cache manager and cache.
  2. Create the key affinity service. This starts the service, which then uses the supplied Executor to generate and queue keys.
  3. Obtain a key from the service which will be mapped to the local node (cacheManager.getAddress() returns the local address).
  4. The entry with a key obtained from the KeyAffinityService is always stored on the node with the provided address. In this case, it is the local node.

33.3.2. Lifecycle

KeyAffinityService extends Lifecycle, which allows the key affinity service to be stopped, started, and restarted.

Key Affinity Service Lifecycle Interface

public interface Lifecycle {
     void start();
     void stop();
}

The service is instantiated through the KeyAffinityServiceFactory. All factory methods take an Executor that is used for asynchronous key generation, so that key generation does not occur in the caller's thread. The user is responsible for shutting down this Executor.

The KeyAffinityService must be explicitly stopped when it is no longer required. This stops the background key generation and releases other held resources. The KeyAffinityService only stops itself when the cache manager with which it is registered is shut down.
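
The following is a minimal sketch of caller-managed shutdown, assuming a cache obtained as in the earlier example; the try/finally structure is illustrative:

ExecutorService executor = Executors.newSingleThreadExecutor();
KeyAffinityService keyAffinityService =
        KeyAffinityServiceFactory.newLocalKeyAffinityService(
            cache, new RndKeyGenerator(), executor, 100);
try {
    // ... use the service ...
} finally {
    // Stop background key generation and release held resources
    keyAffinityService.stop();
    // The caller owns the Executor and is responsible for shutting it down
    executor.shutdown();
}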

33.3.3. Topology Changes

KeyAffinityService key ownership may change when a topology change occurs. The key affinity service monitors topology changes and updates itself so that it does not return stale keys, or keys that would map to a different node than the one specified. However, this does not guarantee that node affinity has not changed by the time a key is used. For example:

  1. Thread (T1) reads a key (K1) that maps to a node (A).
  2. A topology change occurs, resulting in K1 mapping to node B.
  3. T1 uses K1 to add something to the cache. At this point, K1 maps to B, a different node from the one requested at the time of the read.

The above scenario is not ideal; however, it is a supported behavior for the application, as keys that are already in use may be moved over during a cluster change. The KeyAffinityService provides an access proximity optimization for stable clusters, which does not apply during the instability of topology changes.