Chapter 12. New Features and Enhancements in 7.3.0

12.1. Data Grid for OpenShift Improvements

This release includes several improvements for Data Grid for OpenShift, including:

  • Full support for the Cache service (cache-service).
  • A Data Grid service (datagrid-service) that provides a full distribution of Data Grid for OpenShift.
  • Enhancements to the Data Grid for OpenShift image.
  • Monitoring capabilities through integration with Prometheus.
  • Library Mode support that allows you to embed Data Grid in containerized applications running on OpenShift. Limitations apply. See Data Grid for OpenShift for more information.

    Note

    Red Hat does not recommend embedding Data Grid in custom server applications. If you want Data Grid to handle caching requests from client applications, deploy Data Grid for OpenShift.

Visit the Data Grid for OpenShift Documentation to find out more and get started.

12.2. Framework Integration

This release of Data Grid improves integration with well-known enterprise Java frameworks such as Spring and Hibernate.

12.2.1. Spring Enhancements

This release adds several enhancements to Data Grid integration with the Spring Framework:

Note

This release of Data Grid supports specific versions of Spring Framework and Spring Boot. See the supported configurations at https://access.redhat.com/articles/2435931.

12.2.1.1. Synchronized Get Operations

Data Grid implements SPR-9254 so that get() operations are synchronized: when multiple threads request the same missing key, the value is computed once and the other callers wait for, and receive, that result.

For more information, see the description of get in the org.springframework.cache.Cache interface.
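The semantics resemble ConcurrentMap.computeIfAbsent: under contention, the value loader runs at most once per key. The following plain-Java sketch illustrates that behavior; it is not Data Grid code, and the class and key names are purely illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SynchronizedGetSketch {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    public final AtomicInteger loads = new AtomicInteger();

    // Mimics a synchronized cache get: the loader runs at most once per key,
    // even when many threads ask for the same missing key concurrently.
    public String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            loads.incrementAndGet();       // the expensive load happens once
            return "value-for-" + k;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedGetSketch sketch = new SynchronizedGetSketch();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> sketch.get("hotKey"));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // All eight threads observed the same value; the loader ran exactly once.
        System.out.println(sketch.get("hotKey") + " loads=" + sketch.loads.get());
    }
}
```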

12.2.1.2. Asynchronous Operations and Timeout Configuration

You can now set a maximum time to wait for read and write operations when using Data Grid as a Spring cache provider. When a timeout is configured, operations run asynchronously and fail with an exception if they do not complete within the specified time.

Consider the differences before and after timeouts with the following put() method examples:

Before Write Timeouts

public void put(Object key, Object value, long lifespan, TimeUnit unit) {
      this.cacheImplementation.put(key, value != null ? value : NullValue.NULL, lifespan, unit);
}

After Write Timeouts

public void put(final Object key, final Object value) {
   try {
      if (writeTimeout > 0)
         this.nativeCache.putAsync(key, value != null ? value : NullValue.NULL).get(writeTimeout, TimeUnit.MILLISECONDS);
      else
         this.nativeCache.put(key, value != null ? value : NullValue.NULL);
   } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new CacheException(e);
   } catch (ExecutionException | TimeoutException e) {
      throw new CacheException(e);
   }
}

If you configure a timeout for write operations, putAsync is called and its result is awaited for at most the configured time. A write that does not complete within the timeout fails with an exception instead of blocking the caller indefinitely.

If you do not configure a timeout, a synchronous put is called, which blocks the caller until the operation completes.

Set timeouts with the infinispan.spring.operation.read.timeout and infinispan.spring.operation.write.timeout properties. See Configuring Timeouts in the documentation for details.
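For example, the two properties take values in milliseconds; the values below are illustrative, not recommendations:

```properties
# Maximum time to wait for read operations (milliseconds)
infinispan.spring.operation.read.timeout=500
# Maximum time to wait for write operations (milliseconds)
infinispan.spring.operation.write.timeout=700
```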

12.2.1.3. Centralized Configuration Properties for Spring Applications

If you are using Data Grid as a Spring Cache Provider in remote client-server mode, you can set configuration properties in hotrod-client.properties on your classpath. Your application can then create a RemoteCacheManager with that configuration.

Information about available configuration properties is available in the org.infinispan.client.hotrod.configuration Package Description.
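As an illustration, a minimal hotrod-client.properties needs little more than the server list; the host and port below are placeholders for your own deployment:

```properties
# Servers the RemoteCacheManager connects to (host:port pairs)
infinispan.client.hotrod.server_list=127.0.0.1:11222
```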

12.2.1.4. Ability to Retrieve Cache Names

The RemoteCacheManager class now includes a getCacheNames() method that returns cache names as a JSON array of strings, for example, ["cache1", "cache2"]. This method is included in the org.springframework.cache.CacheManager implementation so that you can look up defined cache names when using Data Grid as a Spring cache provider.

Find out more in the Javadocs for RemoteCacheManager.

12.2.1.5. Spring Boot Starter

Data Grid includes a Spring Boot starter to help you quickly get up and running. See Data Grid Spring Boot Starter.

12.2.2. Hibernate Second-level (L2) Caching

Data Grid seamlessly integrates with Hibernate as a second-level (L2) cache provider to improve the performance of your application’s persistence layer.

Hibernate provides Object/Relational Mapping (ORM) capabilities for Java and is a fully compliant JPA (Java Persistence API) persistence provider. Hibernate uses first-level (L1) caching where objects in the cache are bound to sessions. As an L2 cache provider, Data Grid acts as a global cache for objects across all sessions.

You can configure Data Grid as the L2 cache in:

  • JPA: persistence.xml
  • Spring: application.properties

For complete information on enabling L2 cache, along with information for different deployment scenarios, see JPA/Hibernate L2 Cache in the documentation.
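As a sketch, enabling the L2 cache in persistence.xml typically involves properties such as the following. The region factory class name varies with your Hibernate and Data Grid versions, so treat it as an assumption to verify against the documentation:

```xml
<persistence-unit name="example-pu">
  <properties>
    <!-- Turn on second-level and query caching -->
    <property name="hibernate.cache.use_second_level_cache" value="true"/>
    <property name="hibernate.cache.use_query_cache" value="true"/>
    <!-- Region factory backed by Data Grid; class name is version-dependent -->
    <property name="hibernate.cache.region.factory_class"
              value="org.infinispan.hibernate.cache.v53.InfinispanRegionFactory"/>
  </properties>
</persistence-unit>
```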

12.2.3. Integration with Red Hat SSO in Embedded Mode

This release provides support for using Red Hat SSO to secure access to Data Grid in Library (embedded) Mode.

See the secure-embedded-cache quickstart for more information and to deploy and run a sample application that demonstrates integration with Red Hat SSO.

12.3. Hot Rod Client Improvements

12.3.1. Support for Transactions

Java, C++, and C# Hot Rod clients can now start and participate in transactions.

The Java Hot Rod client supports both FULL_XA and NON_XA transaction modes. C++ and C# Hot Rod clients support the NON_XA transaction mode only.

For more information, see Hot Rod Transactions.

12.3.2. Java Hot Rod Client Statistics in JMX

The ServerStatistics interface now exposes statistics for the Java Hot Rod client through JMX.

You must enable JMX statistics in your Hot Rod client implementation, as in the following example:

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
          .host("127.0.0.1")
          .port(11222)
       .statistics()
          .enable()
          .jmxEnable();

  • enable() lets you collect client-side statistics.
  • jmxEnable() exposes statistics through JMX.

For more information, see ServerStatistics in the Javadocs.

12.3.3. Java Hot Rod Client Configuration Enhancements

This release improves configuration for the Java Hot Rod client with the hotrod-client.properties file. You can configure near cache settings, cross-site (xsite) properties, settings to control authentication and encryption, and more.

For more information, see the Hot Rod client configuration summary.
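A sketch of what such a file can look like; the property names follow the org.infinispan.client.hotrod.configuration conventions and the values are placeholders, so check them against the configuration summary for your version:

```properties
infinispan.client.hotrod.server_list=server1:11222
# Near cache settings
infinispan.client.hotrod.near_cache.mode=INVALIDATED
infinispan.client.hotrod.near_cache.max_entries=100
# Authentication and encryption
infinispan.client.hotrod.use_ssl=true
infinispan.client.hotrod.auth_username=user
infinispan.client.hotrod.auth_password=changeme
```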

12.3.4. New Java Hot Rod Client Implementation Based on Netty

The Hot Rod Java client is built with the Netty framework, providing improved performance and support for executing operations concurrently over the same connection.

In previous releases, operations executed by the application were performed synchronously or were delegated to dedicated thread pools.

As of this release, operations are executed asynchronously and the response is processed in the HotRod-client-async-pool thread pool. Operations can be multiplexed over the same connection, which requires fewer connections.

Note

Custom marshallers must not rely on any particular thread calling them to unmarshall data.

For more information, see the Javadocs.

12.3.5. JavaScript Hot Rod Client Support for JSON Objects

The Node.js Hot Rod client adds support for native JSON objects as keys and values. In previous releases, the client supported String keys and values only.

To use native JSON objects, you must configure the client as follows:

var infinispan = require('infinispan');
var connected = infinispan.client(
    {port: 11222, host: '127.0.0.1'}
    , {
        dataFormat : {
            keyType: 'application/json',
            valueType: 'application/json'
        }
    }
);
connected.then(function (client) {
  var clientPut = client.put({k: 'key'}, {v: 'value'});
  var clientGet = clientPut.then(
      function() { return client.get({k: 'key'}); });
  var showGet = clientGet.then(
      function(value) { console.log("get({k: 'key'})=" + JSON.stringify(value)); });
  return showGet.finally(
      function() { return client.disconnect(); });
}).catch(function(error) {
  console.log("Got error: " + error.message);
});

You can configure data types for keys and values separately. For example, you can configure keys as String and values as JSON.

Note

Scripts do not currently support native JSON objects.

12.3.6. Retrieving Cache Names Through Hot Rod

This release includes the getCacheNames() method, which returns a collection of cache names, covering caches defined declaratively or programmatically as well as caches created at runtime through RemoteCacheManager.

The Hot Rod protocol also now includes a @@cache@names admin task that returns cache names as a JSON array of strings, for example, ["cache1", "cache2"].

For more information, see the documentation.

12.4. Improvements to Persistence

12.4.1. Fault Tolerance for Write-Behind Cache Stores

Data Grid now lets you configure write-behind cache stores so that, in the event a write-behind operation fails, additional operations on the cache are not allowed. Additionally, modifications that failed to write to the cache store are queued until the underlying cache store becomes available.

You can configure fault tolerance declaratively with the connection-attempts, connection-interval, and fail-silently attributes.

To configure fault tolerance programmatically, use the following:

  • connectionAttempts() and connectionInterval() methods in the PersistenceConfiguration class.
  • failSilently() method in the AsyncStoreConfiguration class.

For more information, see the documentation.
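Declaratively, those attributes appear on the persistence and write-behind elements. The following is a hedged sketch; the store type and values are illustrative, so verify the attribute placement against the configuration schema:

```xml
<persistence connection-attempts="5" connection-interval="100">
  <file-store path="store">
    <!-- fail-silently="false" queues modifications that failed to write and
         rejects new cache operations until the store becomes available again -->
    <write-behind fail-silently="false"/>
  </file-store>
</persistence>
```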

12.5. Data Interoperability

Data Grid provides transcoding capabilities that convert data between formats suitable for different endpoints. For example, you can write ProtoBuf-encoded data through the Hot Rod endpoint and then retrieve it as a JSON document through the REST endpoint.

Note

Compatibility mode is now deprecated and will be removed from Data Grid. You should use protocol interoperability capabilities instead.

12.6. Default Analyzers

Data Grid includes a set of default analyzers that convert input data into one or more terms that you can index and query.

For more information, see Analysis.

12.7. Metrics Improvements

This release exposes new operations and metrics through JMX:

ClusteredLockManager component
  • forceRelease forces locks to be released.
  • isDefined returns true if the lock is defined.
  • isLocked returns true if the lock exists and is acquired.
  • remove removes locks from clusters. A removed lock must be recreated before it can be accessed again.
Passivation component
  • passivateAll passivates all entries to the CacheStore.
CacheStore component
  • NumberOfPersistedEntries returns the number of entries currently persisted, excluding expired entries.

For more information, see jmxComponents.

12.8. Locked Streams

The invokeAll() method in the LockedStream interface now executes code for each entry while the lock for its key is held, so you do not need to acquire locks yourself and the state of the value is guaranteed for the duration of your code.

For more information, see org.infinispan.LockedStream.

12.9. Improvements to Configuration

12.9.1. Configuration Wildcards

You can now use wildcards in configuration template names so that Data Grid applies the template to any matching caches.

For more information, see Cache configuration wildcards.
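For example, a configuration template whose name contains a wildcard applies to every cache whose name matches it; the template and cache names below are illustrative:

```xml
<cache-container>
  <!-- Template applied to any cache whose name matches "basecache*" -->
  <distributed-cache-configuration name="basecache*">
    <expiration lifespan="5000"/>
  </distributed-cache-configuration>
  <!-- Both caches below receive the template configuration -->
  <distributed-cache name="basecache-1"/>
  <distributed-cache name="basecache-2"/>
</cache-container>
```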

12.9.2. Immutable Configuration

You can now create immutable configuration storage providers to prevent the creation or removal of caches.

To configure Data Grid declaratively, use the immutable-configuration-storage parameter. To configure Data Grid programmatically, use the IMMUTABLE configuration storage in the global state.

12.10. Improved Security for Cache Operations

The AdvancedCache interface now includes a withSubject() method that performs operations with the specified subject when authorization is enabled on caches.

See the Javadocs for the AdvancedCache interface.

12.11. HTTP/2 Support

Data Grid now provides support for HTTP/2 with the REST endpoint.