
6.6.2 Release Notes

Red Hat JBoss Data Grid 6.6

Known and resolved issues for Red Hat JBoss Data Grid 6.6.2

Christian Huffman

Red Hat Engineering Content Services

Abstract

The Red Hat JBoss Data Grid 6.6.2 Release Notes list and describe a series of Bugzilla bugs, highlighting both known problems and resolved issues for the release.

Chapter 1. Introduction to Red Hat JBoss Data Grid 6.6.2

Welcome to Red Hat JBoss Data Grid 6.6.2. As you become familiar with the newest version of JBoss Data Grid, these Release Notes provide you with information about new features, as well as known and resolved issues. Use this document in conjunction with the entire JBoss Data Grid documentation suite, available at the Red Hat Customer Portal's JBoss Data Grid documentation page.

1.1. About Red Hat JBoss Data Grid

Red Hat's JBoss Data Grid is an open source, distributed, in-memory key/value data store built from the Infinispan open source software project. Whether deployed in client/server mode or embedded in a Java Virtual Machine, it is built to be elastic, high-performance, and highly available, and to scale linearly.
JBoss Data Grid is accessible for both Java and Non-Java clients. Using JBoss Data Grid, data is distributed and replicated across a manageable cluster of nodes, optionally written to disk and easily accessible using the REST, Memcached and Hot Rod protocols, or directly in process through a traditional Java Map API.

1.2. Overview

This document contains information about the known and resolved issues of Red Hat JBoss Data Grid version 6.6.2. Customers should read this documentation prior to installing this version.

Chapter 2. Patching Existing Server Instances

Micro releases of the JBoss Data Grid server are distributed as patches for your convenience, so a new micro version does not completely replace the existing installation. The base version to be patched is the x.y.0 release; the x.y.# patches (micro releases or cumulative patches) are applied to that instance. A patch can be rolled back (including the configuration) if any issue occurs during the patching process, and any subsequent patches may be applied in the same way.
This process is similar to the patching procedure used in JBoss Enterprise Application Platform (EAP).

Note

It is strongly recommended to back up your existing installation, including all configuration files, before applying the patch.

Procedure 2.1. Applying the JBoss Data Grid 6.6.2 Patch

  1. Download the patch from the Red Hat Customer Portal at https://access.redhat.com/downloads/
  2. Connect to the running instance to be patched using the JBoss CLI:
    $JDG_HOME/bin/cli.sh --connect=127.0.0.1:9999
  3. Ensure that there are no active connections to the server, and then apply the patch:
    patch apply /path/to/jboss-datagrid-6.6.2-server-patch.zip
  4. Restart the server:
     shutdown --restart=true
All other distributions, such as the EAP modules, clients, and the standalone library mode, are provided as full releases. Consequently, existing archives for these distributions cannot be patched.

Chapter 3. Supported Configurations

3.1. Supported configurations

For supported hardware and software configurations, see the Red Hat JBoss Data Grid Supported Configurations reference on the Customer Portal at https://access.redhat.com/site/articles/115883.

Chapter 4. Component Versions

4.1. Component Versions

The full list of component versions used in Red Hat JBoss Data Grid is available at the Customer Portal at https://access.redhat.com/site/articles/488833.

Chapter 5. Known and Resolved Issues

5.1. Known Issues

BZ-1200822 - JSR-107 Support for clustered caches in HotRod implementation
When a new cache (one not defined in the server configuration file) is created through the Hot Rod implementation of JSR-107, the cache is created as local on only one of the servers. This behavior also requires the class org.jboss.as.controller.client.ModelControllerClient to be present on the classpath.
As a workaround, use a clustered cache defined in the server configuration file. cacheManager.createCache(cacheName, configuration) must still be invoked before the cache is accessed for the first time.
BZ-1204813 - JSR-107 Support for cacheResolverFactory annotation property

JCache annotations provide a way to define a custom CacheResolverFactory, which produces a CacheResolver; this class decides which cache is used to store the results of annotated methods. However, support for specifying a CacheResolver is not yet provided.

As a workaround, define a CDI ManagedCacheResolver which will be used instead.
BZ-1223290 - JPA Cache Store not working properly on Weblogic
A JPA Cache Store deployed to WebLogic servers throws a NullPointerException after the following error message:
Entity manager factory name (org.infinispan.persistence.jpa) is already registered
This is a known issue in Red Hat JBoss Data Grid 6.6.2, and no workaround exists at this time.
BZ-1158839 - Clustered cache with FileStore (shared=false) is inconsistent after restarting one node if entries are deleted during restart

In Red Hat JBoss Data Grid, when a node restarts, it does not automatically purge entries from its local cache store. As a result, the Administrator starting the node must change the node configuration manually to set the cache store to be purged when the node is starting. If the configuration is not changed, the cache may be inconsistent (removed entries can appear to be present).

This is a known issue in Red Hat JBoss Data Grid 6.6.2, and no workaround exists at this time.
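The manual configuration change described above can be sketched as a library-mode cache store definition. This is a hypothetical fragment assuming the Infinispan 6.x library-mode schema, with a made-up cache name and store location; in server mode the equivalent attribute on the file-store element is named purge:

```xml
<namedCache name="myCache">
   <persistence passivation="false">
      <!-- purgeOnStartup clears the node's local (shared=false) store when
           the node starts, so removed entries cannot reappear after a restart -->
      <singleFile location="/var/lib/jdg/store" purgeOnStartup="true"/>
   </persistence>
</namedCache>
```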
BZ-1114080 - HR client SASL MD5 against LDAP fails

In Red Hat JBoss Data Grid, the server does not support pass-through MD5 authentication against LDAP. As a result, the Hot Rod client is unable to authenticate to the JBoss Data Grid server via MD5 if the authentication is backed by an LDAP server.

This is a known issue in Red Hat JBoss Data Grid 6.6.2. As a workaround, use PLAIN authentication over end-to-end SSL encryption.
BZ-1024373 - Default optimistic locking configuration leads to inconsistency

In Red Hat JBoss Data Grid, transactional caches are configured with optimistic locking by default. Under contention, concurrent replace() calls can both return true, and transactions might unexpectedly commit.

Two concurrent commands, replace(key, A, B) and replace(key, A, C), may both overwrite the entry. The command that finishes later wins, overwriting the other's value.

This is a known issue in Red Hat JBoss Data Grid 6.6.2. As a workaround, enable the write skew check and the REPEATABLE_READ isolation level; concurrent replace operations then work as expected.
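The replace() contract that this report expects can be illustrated with a plain java.util.concurrent.ConcurrentHashMap, which implements compare-and-swap semantics correctly: of two conflicting replace(key, expected, newValue) calls, only one can succeed. This is a JDK illustration of the expected behavior, not JBoss Data Grid code:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceSemantics {
    // Runs two conflicting replace() calls on the same key and returns
    // "firstResult secondResult finalValue".
    static String race() {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("key", "A");
        boolean first  = map.replace("key", "A", "B"); // succeeds: current value is A
        boolean second = map.replace("key", "A", "C"); // fails: value is now B, not A
        return first + " " + second + " " + map.get("key");
    }

    public static void main(String[] args) {
        System.out.println(race()); // true false B
    }
}
```

Under correct compare-and-swap semantics exactly one replace wins; the bug described above allowed both to report success under the default optimistic locking.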
BZ-1273411 - Cannot access cache with authorization enabled when using REST protocol

When authorization is configured for a cache, any access to the cache via the REST endpoint results in a security exception. The security Subject representing the user is not properly defined, so the user cannot be authorized to access the cache.

This is a known issue in Red Hat JBoss Data Grid 6.6.2, and no workaround exists at this time.

5.2. Resolved Issues

BZ-1378498 - ClientListener stops working after connection failure

After a connection failure to a JBoss Data Grid server, normal operations such as get and put recover and work as expected, but a ClientListener stops working. Only after the listener is registered again using cache.addClientListener(listener) does it recover and work as expected.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1382273 - NPE in CacheNotifierImpl by LIRS eviction listener

When 20 key/value pairs (key-1 to key-20) are inserted into a cache with a maximum size of 10, and the first 10 keys (key-1 to key-10) are then reinserted while the LIRS eviction strategy and a listener are in use, a NullPointerException is thrown in CacheNotifierImpl.notifyCacheEntriesEvicted. This issue existed in both Remote Client-Server mode and Library mode.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
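The access pattern from the report can be sketched with a plain JDK size-bounded map. This is only an analogy using LinkedHashMap as a stand-in bounded cache; the actual issue involved Infinispan's LIRS-backed BoundedEquivalentConcurrentHashMapV8, not this JDK class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCacheSketch {
    // A LinkedHashMap in access order, evicting the eldest entry once the
    // map exceeds maxEntries, as a stand-in for a size-bounded cache.
    static Map<String, String> boundedCache(int maxEntries) {
        return new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxEntries;
            }
        };
    }

    // Reproduces the pattern from the report: insert key-1..key-20 into a
    // 10-entry cache, then reinsert key-1..key-10; returns the final size.
    static int insertPattern() {
        Map<String, String> cache = boundedCache(10);
        for (int i = 1; i <= 20; i++) cache.put("key-" + i, "value");
        for (int i = 1; i <= 10; i++) cache.put("key-" + i, "value");
        return cache.size();
    }

    public static void main(String[] args) {
        System.out.println(insertPattern()); // 10 - the cache stays bounded
    }
}
```

Each insertion beyond the bound evicts an entry and fires the eviction path; it was this eviction notification that triggered the NullPointerException in the affected Infinispan implementation.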
BZ-1383945 - Expiration is not working under some circumstances with AtomicMap

Previously, when using AtomicMaps with the write skew check, the lifespan set in the configuration was not applied. This has been addressed in JBoss Data Grid 6.6.2, and the configured lifespan is now honored.
BZ-1388562 - Expiration is not applied to a repeatable read entry that was read as null prior

The expiration metadata is not applied to a newly created entry that was previously read as null in the same transaction.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1379414 - @CacheEntryExpired not getting invoked for non auto-commit cache

The @CacheEntryExpired listener method is not invoked for a non-auto-commit cache. When auto-commit is true, the listener method is invoked. @CacheEntryCreated is invoked in both configurations.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1412752 - MissingFormatArgumentException thrown by PreferConsistencyStrategy if debug mode is enabled on state-transfer or merge

The PreferConsistencyStrategy and StateConsumerImpl classes contained format strings whose arguments did not match, which is not type safe.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1428027 - DMR operation register-proto-schemas fails with NPE if the proto file has syntax errors

If the proto file has syntax errors, it is not placed in the ___protobuf_metadata cache as it should be. Additionally, a myFileWithSyntaxErrors.proto.errors key fails to be created and an exception is thrown.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1431965 - SimpleDateFormat used in REST server is not thread safe

org.infinispan.rest.Server has a static field, DatePatternRfc1123LocaleUS, which is an instance of SimpleDateFormat. Because SimpleDateFormat is not thread safe, this causes a java.lang.ArrayIndexOutOfBoundsException under load.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
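The failure mode stems from sharing a mutable SimpleDateFormat across threads. A thread-safe alternative, shown here only as an illustration and not as the actual server fix, is java.time's immutable DateTimeFormatter, which ships a ready-made RFC 1123 formatter:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class Rfc1123Formatting {
    // Unlike SimpleDateFormat, DateTimeFormatter is immutable and thread
    // safe, so a single shared static instance is safe under concurrent load.
    static final DateTimeFormatter RFC_1123 = DateTimeFormatter.RFC_1123_DATE_TIME;

    static String format(ZonedDateTime when) {
        return RFC_1123.format(when);
    }

    public static void main(String[] args) {
        ZonedDateTime t = ZonedDateTime.of(2017, 5, 31, 12, 0, 0, 0, ZoneOffset.UTC);
        System.out.println(format(t)); // Wed, 31 May 2017 12:00:00 GMT
    }
}
```

Other common remedies are wrapping the SimpleDateFormat in a ThreadLocal or synchronizing access to it; the shared-instance pattern only becomes safe once the formatter itself is immutable.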
BZ-1425687 - JDG client unable to resolve the system property in external-host jdg6

If infinispan.external_addr is defined in a server configuration file like clustered.xml, the JBoss Data Grid client will be unable to resolve the address.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1440102 - Cache clear doesn't work when passivation is enabled

Clearing the cache does not work under the following configuration: JBoss Data Grid running in Library mode inside JBoss EAP 6.4.6 with passivation enabled. In this case the cache size is never reduced to 0.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1388888 - LIRS Eviction with local cache under high load fail with a NullPointerException at BoundedEquivalentConcurrentHashMapV8.java:1414

When using the LIRS eviction policy, a NullPointerException can occur in BoundedEquivalentConcurrentHashMapV8 in some situations. The error is harmless, but the issue has been fixed.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1448366 - HotRod client write buffer is too large

The Hot Rod client uses more memory than it should. The buffering implementation of TcpTransport.socketOutputStream does not use a fixed-size buffer; instead, the buffer grows the way a BufferedInputStream does. A growing buffer can lead to excessive memory consumption in the heap, as well as in the operating system's native memory.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
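The growth behavior described above can be demonstrated with a plain JDK growable stream. This is an analogy, not the Hot Rod client's actual transport code: an unbounded buffer retains every byte written, whereas a fixed-size buffer flushing to the socket would stay bounded:

```java
import java.io.ByteArrayOutputStream;

public class GrowingBuffer {
    // Writes n bytes into an unbounded, growable output stream and reports
    // how many bytes it retains. A fixed-size write buffer would instead
    // flush downstream and keep its memory footprint constant.
    static int retainedAfterWriting(int n) {
        ByteArrayOutputStream out = new ByteArrayOutputStream(32); // starts small, grows
        for (int i = 0; i < n; i++) {
            out.write(0);
        }
        return out.size();
    }

    public static void main(String[] args) {
        System.out.println(retainedAfterWriting(1_000_000)); // retains every byte written
    }
}
```

Since the buffer never shrinks back, a single large write can leave a large allocation pinned for the lifetime of the connection, which is the memory-consumption pattern this bug describes.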
BZ-1435617 - Rolling upgrade fails with java.lang.ClassCastException

Previously, performing a rolling upgrade could fail with a java.lang.ClassCastException stating either that org.infinispan.container.entries.RepeatableReadEntry cannot be cast to org.infinispan.container.entries.InternalCacheEntry, or that SimpleClusteredVersion cannot be cast to NumericVersion.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1435618 - Hot Rod Rolling Upgrade throws TimeOutException

When performing a rolling upgrade that takes more than a few minutes to complete, a TimeOutException could be thrown.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.
BZ-1435620 - Rolling Upgrade: use of Remote Store in mode read-only causes data inconsistencies

Previously, during Hot Rod rolling upgrades, write operations executed by the client on the target cluster were ignored. This could cause unexpected results in the application, such as entries being deleted that weren't intentionally deleted.

This issue is resolved as of Red Hat JBoss Data Grid 6.6.2.

Appendix A. Revision History

Revision History
Revision 6.6.2-1, Wed 31 May 2017, John Brier
Resolved Issues populated with all fixed Bugzillas.
Revision 6.6.2-0, Tue 16 May 2017, John Brier
Initial draft for 6.6.2.

Legal Notice

Copyright © 2017 Red Hat, Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.