Chapter 22. Protocol Interoperability

Clients exchange data with Red Hat Data Grid through endpoints such as REST or Hot Rod.

Each endpoint uses a different protocol so that clients can read and write data in a suitable format. Because Red Hat Data Grid can interoperate with multiple clients at the same time, it must convert data between client formats and the storage formats.

To configure Red Hat Data Grid endpoint interoperability, you should define the MediaType that sets the format for data stored in the cache.

22.1. Considerations with Media Types and Endpoint Interoperability

Configuring Red Hat Data Grid to store data with a specific media type affects client interoperability.

Although REST clients do support sending and receiving encoded binary data, they are better at handling text formats such as JSON, XML, or plain text.

Memcached text clients can handle String-based keys and byte[] values but cannot negotiate data types with the server. These clients do not offer much flexibility when handling data formats because of the protocol definition.

Java Hot Rod clients are suitable for handling Java objects that represent entities that reside in the cache. Java Hot Rod clients use marshalling operations to serialize and deserialize those objects into byte arrays.

Similarly, non-Java Hot Rod clients, such as the C++, C#, and JavaScript clients, are suitable for handling objects in the respective languages. However, non-Java Hot Rod clients can interoperate with Java Hot Rod clients using platform-independent data formats.

22.2. REST, Hot Rod, and Memcached Interoperability with Text-Based Storage

You can configure key and values with a text-based storage format.

For example, specify text/plain; charset=UTF-8, or any other character set, to set plain text as the media type. You can also specify a media type for other text-based formats such as JSON (application/json) or XML (application/xml) with an optional character set.

The following example configures the cache to store entries with the text/plain; charset=UTF-8 media type:

<cache>
   <encoding>
      <key media-type="text/plain; charset=UTF-8"/>
      <value media-type="text/plain; charset=UTF-8"/>
   </encoding>
</cache>

To handle the exchange of data in a text-based format, you must configure Hot Rod clients with the org.infinispan.commons.marshall.StringMarshaller marshaller.
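
The following is a minimal sketch of a Java Hot Rod client configured with this marshaller. The server address, port, cache name, and the charset passed to the marshaller constructor are illustrative assumptions; adjust them to match your deployment.

import java.nio.charset.StandardCharsets;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.commons.marshall.StringMarshaller;

public class TextClient {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      // Hypothetical server address and Hot Rod port.
      builder.addServer().host("127.0.0.1").port(11222);
      // Marshall keys and values as UTF-8 strings to match the cache media type.
      builder.marshaller(new StringMarshaller(StandardCharsets.UTF_8));

      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // "my-cache" is a placeholder cache name.
         RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
         cache.put("greeting", "Hello, world");
         System.out.println(cache.get("greeting"));
      }
   }
}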

REST clients must also send the correct headers when writing to and reading from the cache, as follows:

  • Write: Content-Type: text/plain; charset=UTF-8
  • Read: Accept: text/plain; charset=UTF-8
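
For illustration, the following sketch uses java.net.HttpURLConnection to write and then read a plain text entry with those headers. The endpoint URL, cache name, and key are assumptions; the REST path can differ between server versions, so adjust it to your deployment.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestTextClient {
   public static void main(String[] args) throws Exception {
      // Hypothetical endpoint: http://<server>:8080/rest/<cacheName>/<key>
      URL url = new URL("http://127.0.0.1:8080/rest/my-cache/greeting");

      // Write: send the value with the Content-Type header.
      HttpURLConnection put = (HttpURLConnection) url.openConnection();
      put.setRequestMethod("PUT");
      put.setDoOutput(true);
      put.setRequestProperty("Content-Type", "text/plain; charset=UTF-8");
      try (OutputStream out = put.getOutputStream()) {
         out.write("Hello, world".getBytes(StandardCharsets.UTF_8));
      }
      System.out.println("PUT status: " + put.getResponseCode());

      // Read: request the value back with the Accept header.
      HttpURLConnection get = (HttpURLConnection) url.openConnection();
      get.setRequestProperty("Accept", "text/plain; charset=UTF-8");
      try (InputStream in = get.getInputStream()) {
         // InputStream#readAllBytes requires Java 9 or later.
         System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
      }
   }
}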

Memcached clients do not require any configuration to handle text-based formats.

This configuration is compatible with:

  • REST clients: Yes
  • Java Hot Rod clients: Yes
  • Memcached clients: Yes
  • Non-Java Hot Rod clients: No
  • Querying and Indexing: No
  • Custom Java objects: No

22.3. REST, Hot Rod, and Memcached Interoperability with Custom Java Objects

If you store entries in the cache as marshalled custom Java objects, you should configure the cache with the MediaType of the marshalled storage.

Java Hot Rod clients use the JBoss marshalling storage format as the default to store entries in the cache as custom Java objects.

The following example configures the cache to store entries with the application/x-jboss-marshalling media type:

<distributed-cache name="my-cache">
   <encoding>
      <key media-type="application/x-jboss-marshalling"/>
      <value media-type="application/x-jboss-marshalling"/>
   </encoding>
</distributed-cache>
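
As a sketch, a Java Hot Rod client can store such entries without any special marshaller configuration because the client defaults to JBoss marshalling. The Person class and server address below are illustrative, and the entity class must also be deployed to the server (see Deploying Entity Classes).

import java.io.Serializable;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class ObjectClient {

   // Illustrative entity; it must be serializable and deployed to the server.
   public static class Person implements Serializable {
      final String name;
      Person(String name) { this.name = name; }
   }

   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("127.0.0.1").port(11222); // hypothetical address
      // No marshaller is set, so the client uses its default JBoss marshalling
      // format, which matches the application/x-jboss-marshalling media type above.
      // Depending on the client version, you might also need to white-list the
      // entity class for deserialization, for example with addJavaSerialWhiteList().

      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         RemoteCache<String, Person> cache = cacheManager.getCache("my-cache");
         cache.put("person-1", new Person("Alice"));
         Person person = cache.get("person-1");
         System.out.println(person.name);
      }
   }
}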

If you use the Protostream marshaller, configure the MediaType as application/x-protostream. For UTF8Marshaller, configure the MediaType as text/plain.

Tip

If only Hot Rod clients interact with the cache, you do not need to configure the MediaType.

Because REST clients are most suitable for handling text formats, you should use simple types such as java.lang.String for keys. Otherwise, REST clients must handle keys as byte[] using a supported binary encoding.

REST clients can read values for cache entries in XML or JSON format. However, the classes must be available in the server.

To read and write data from Memcached clients, you must use java.lang.String for keys. Values are stored and returned as marshalled objects.

Some Java Memcached clients allow data transformers that marshall and unmarshall objects. You can also configure the Memcached server module to encode responses in a different format, such as JSON, which is language neutral. This allows non-Java clients to interact with the data even if the storage format for the cache is Java-specific. See the Client Encoding details for the Red Hat Data Grid Memcached server module.

Note

Storing Java objects in the cache requires you to deploy entity classes to Red Hat Data Grid. See Deploying Entity Classes.

This configuration is compatible with:

  • REST clients: Yes
  • Java Hot Rod clients: Yes
  • Memcached clients: Yes
  • Non-Java Hot Rod clients: No
  • Querying and Indexing: No
  • Custom Java objects: Yes

22.4. Java and Non-Java Client Interoperability with Protobuf

Storing data in the cache as Protobuf encoded entries provides a platform independent configuration that enables Java and Non-Java clients to access and query the cache from any endpoint.

If indexing is configured for the cache, Red Hat Data Grid automatically stores keys and values with the application/x-protostream media type.

If indexing is not configured for the cache, you can configure it to store entries with the application/x-protostream media type as follows:

<distributed-cache name="my-cache">
   <encoding>
      <key media-type="application/x-protostream"/>
      <value media-type="application/x-protostream"/>
   </encoding>
</distributed-cache>

Red Hat Data Grid converts between application/x-protostream and application/json, which allows REST clients to read and write JSON-formatted data. However, REST clients must send the correct headers, as follows:

  • Write: Content-Type: application/json
  • Read: Accept: application/json

Important

The application/x-protostream media type uses Protobuf encoding, which requires you to register a Protocol Buffers schema definition that describes the entities and marshallers that the clients use. See Storing Protobuf Entities.
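
The following sketch shows one way a Java Hot Rod client might register a schema and a hand-written marshaller. The Person entity, the example.Person message type, and the inline .proto definition are assumptions for illustration; in practice you would typically load the schema from a resource file. The ___protobuf_metadata cache is used to register the schema with the server.

import java.io.IOException;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.marshall.ProtoStreamMarshaller;
import org.infinispan.protostream.FileDescriptorSource;
import org.infinispan.protostream.MessageMarshaller;
import org.infinispan.protostream.SerializationContext;

public class ProtobufClient {

   // Illustrative entity described by the schema below.
   public static class Person {
      final String name;
      public Person(String name) { this.name = name; }
   }

   // Hand-written marshaller for the example.Person message type.
   public static class PersonMarshaller implements MessageMarshaller<Person> {
      @Override
      public Person readFrom(ProtoStreamReader reader) throws IOException {
         return new Person(reader.readString("name"));
      }

      @Override
      public void writeTo(ProtoStreamWriter writer, Person person) throws IOException {
         writer.writeString("name", person.name);
      }

      @Override
      public Class<? extends Person> getJavaClass() { return Person.class; }

      @Override
      public String getTypeName() { return "example.Person"; }
   }

   // Minimal inline schema for the example.
   static final String PERSON_PROTO =
         "package example;\n" +
         "message Person {\n" +
         "   optional string name = 1;\n" +
         "}\n";

   public static void main(String[] args) throws Exception {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("127.0.0.1").port(11222); // hypothetical address
      builder.marshaller(new ProtoStreamMarshaller());

      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // Register the schema and marshaller with the client.
         SerializationContext ctx = ProtoStreamMarshaller.getSerializationContext(cacheManager);
         ctx.registerProtoFiles(FileDescriptorSource.fromString("person.proto", PERSON_PROTO));
         ctx.registerMarshaller(new PersonMarshaller());

         // Register the schema with the server by writing it to the protobuf metadata cache.
         RemoteCache<String, String> metadataCache = cacheManager.getCache("___protobuf_metadata");
         metadataCache.put("person.proto", PERSON_PROTO);

         RemoteCache<String, Person> cache = cacheManager.getCache("my-cache");
         cache.put("person-1", new Person("Alice"));
         System.out.println(cache.get("person-1").name);
      }
   }
}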

This configuration is compatible with:

  • REST clients: Yes
  • Java Hot Rod clients: Yes
  • Non-Java Hot Rod clients: Yes
  • Querying and Indexing: Yes
  • Custom Java objects: Yes

22.5. Custom Code Interoperability

You can deploy custom code with Red Hat Data Grid. For example, you can deploy scripts, tasks, listeners, converters, and merge policies. Because your custom code can access data directly in the cache, it must interoperate with clients that access data in the cache through different endpoints.

For example, you might create a remote task to handle custom objects stored in the cache while other clients store data in binary format.

To handle interoperability with custom code, you can either convert data on demand or store data as Plain Old Java Objects (POJOs).

22.5.1. Converting Data On Demand

If the cache is configured to store data in a binary format such as application/x-protostream or application/x-jboss-marshalling, you can configure your deployed code to perform cache operations using Java objects as the media type. See Overriding the MediaType Programmatically.

This approach allows remote clients to use a binary format for storing cache entries, which is optimal. However, you must make entity classes available to the server so that it can convert between binary format and Java objects.

Additionally, if the cache uses Protobuf (application/x-protostream) as the binary format, you must deploy protostream marshallers so that Data Grid can unmarshall data from your custom code. See Deploying Protostream Marshallers.
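
The following is a minimal sketch of deployed code that converts data on demand, assuming a server task implementation and the withMediaType operation described in Overriding the MediaType Programmatically; the task name, key, and stored value type are placeholders.

import org.infinispan.AdvancedCache;
import org.infinispan.tasks.ServerTask;
import org.infinispan.tasks.TaskContext;

public class GreetTask implements ServerTask<String> {

   private TaskContext taskContext;

   @Override
   public void setTaskContext(TaskContext taskContext) {
      this.taskContext = taskContext;
   }

   @Override
   public String getName() {
      return "greet-task"; // placeholder task name
   }

   @Override
   @SuppressWarnings("unchecked")
   public String call() throws Exception {
      // Request a view of the cache that exposes keys and values as Java objects,
      // even though clients store entries in a binary format.
      AdvancedCache<Object, Object> cache =
            (AdvancedCache<Object, Object>) taskContext.getCache().get()
                  .getAdvancedCache()
                  .withMediaType("application/x-java-object", "application/x-java-object");

      Object value = cache.get("person-1"); // placeholder key
      return "Hello " + value;
   }
}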

22.5.2. Storing Data as POJOs

Storing unmarshalled Java objects in the server is not recommended. Doing so requires Red Hat Data Grid to serialize data when remote clients read from the cache and then deserialize data when remote clients write to the cache.

The following example configures the cache to store entries with the application/x-java-object media type:

<distributed-cache name="my-cache">
   <encoding>
      <key media-type="application/x-java-object"/>
      <value media-type="application/x-java-object"/>
   </encoding>
</distributed-cache>

Hot Rod clients must use a supported marshaller when data is stored as POJOs in the cache, either the JBoss marshaller or the default Java serialization mechanism. The classes must also be deployed in the server.

REST clients must use a storage format that Red Hat Data Grid can convert to and from Java objects, currently JSON or XML.

Note

Storing Java objects in the cache requires you to deploy entity classes to Red Hat Data Grid. See Deploying Entity Classes.

Memcached clients must send and receive a serialized version of the stored POJO, which is a JBoss marshalled payload by default. However, if you configure the client encoding in the appropriate Memcached connector, you can change the format so that Memcached clients use a platform-neutral format such as JSON.

This configuration is compatible with:

  • REST clients: Yes
  • Java Hot Rod clients: Yes
  • Non-Java Hot Rod clients: No
  • Querying and Indexing: Yes. However, querying and indexing work with POJOs only if the entities are annotated.
  • Custom Java objects: Yes

22.6. Deploying Entity Classes

If you plan to store entries in the cache as custom Java objects or POJOs, you must deploy entity classes to Red Hat Data Grid. Clients always exchange objects as byte[]. The entity classes represent those custom objects so that Red Hat Data Grid can serialize and deserialize them.

To make entity classes available to the server, do the following:

  • Create a JAR file that contains the entities and dependencies.
  • Stop Red Hat Data Grid if it is running.

    Red Hat Data Grid loads entity classes during boot. You cannot make entity classes available to Red Hat Data Grid if the server is running.

  • Copy the JAR file to the $RHDG_HOME/standalone/deployments/ directory.
  • Specify the JAR file as a module in the cache manager configuration, as in the following example:
<cache-container name="local" default-cache="default">
   <modules>
     <module name="deployment.my-entities.jar"/>
   </modules>
   ...
</cache-container>