6.4.0 Release Notes
Known and resolved issues for Red Hat JBoss Data Grid 6.4.0
Chapter 1. Introduction to Red Hat JBoss Data Grid 6.4
1.1. About Red Hat JBoss Data Grid
1.2. Overview
1.3. Packaging Revisions
- infinispan-embedded-${VERSION}.jar: contains the core Infinispan components, including cache stores.
- infinispan-embedded-query-${VERSION}.jar: contains the querying components.
- infinispan-cli-${VERSION}.jar: contains the command line interface.
- The Transactions and Spring integration modules are packaged separately.
- infinispan-remote-${VERSION}.jar: shipped under jboss-datagrid-${VERSION}-remote-java-client.zip.
1.4. Upgrading from JBoss Data Grid 6.3.x to 6.4
In JBoss Data Grid 6.3, the Hot Rod size() method obtained the cache size by reading the numberOfEntries statistic on the server. This required statistics to be enabled and, if security was in use, the ADMIN permission (instead of the more appropriate BULK_READ). It also returned only the number of entries in the node that responded to the request, potentially included expired entries, and did not take stores/loaders into account. In JBoss Data Grid 6.4, the size() method's behavior can be modified to include the size of the entire cluster rather than just the local node: if the infinispan.accurate.bulk.ops system property is set to true, size() uses the new behavior and counts all entries by utilizing the entry iterator.
The size() method no longer returns the value of the currentNumberOfEntries statistic. Instead, it retrieves all the keys using keySet() and returns the number of entries in that set. Because keySet() is a costly operation, size() is now a costly operation as well. For distributed caches, it retrieves all the keys from all the nodes in the cluster.
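The following is a minimal sketch of opting in to the new behavior from a Hot Rod Java client. It assumes a server running on the default host and port; the property is shown being set in the client JVM, which is an assumption, so adjust the placement for your deployment.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class ClusterSizeExample {
    public static void main(String[] args) {
        // Assumption: the property is read by this JVM; set it before the
        // cache manager is created.
        System.setProperty("infinispan.accurate.bulk.ops", "true");

        // Connects to a Hot Rod server on the default host and port.
        RemoteCacheManager manager = new RemoteCacheManager();
        RemoteCache<String, String> cache = manager.getCache();

        cache.put("key1", "value1");

        // With the property set, size() counts entries across the whole
        // cluster instead of reporting a single node's statistic. Treat it
        // as an expensive operation on large caches.
        System.out.println("Cluster-wide entry count: " + cache.size());

        manager.stop();
    }
}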
Chapter 2. New Features and Enhancements
2.1. Querying from Hot Rod Java Client (Remote Querying)
2.2. Apache Camel Component for JBoss Fuse
2.3. Clustered Listeners
2.4. Remote Events and Listeners
2.5. Handling Network Partitions
2.6. Cross-site State Transfer
2.7. Spring Integration
Chapter 3. Supported Configurations
Chapter 4. Component Versions
Chapter 5. Known and Resolved Issues
5.1. Known Issues
- BZ-1178965 - Fail fast when using two phase commit with ASYNC backup strategy
- In Red Hat JBoss Data Grid, using the two phase commit with the ASYNC backup strategy results in one phase commit unexpectedly being used instead. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1163665 - Node can temporarily read removed data when another node joins the cluster, leaves or crashes
- In Red Hat JBoss Data Grid, the distribution of entries in the cluster changes when a node joins, leaves, or crashes during a split brain. During this brief period, a read on the previous owner node can return stale data. When the rebalance process is completed, further reads return up-to-date data. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1175272 - CDI fails when both remote and embedded uber-jar are present
- In Red Hat JBoss Data Grid, when both infinispan-remote and infinispan-embedded dependencies are on the classpath, the Infinispan CDI extension does not work as expected because the extension is bundled in both jar files. As a result, CDI fails with an ambiguous dependencies exception. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1012036 - RELAY2 logs error when site unreachable
- In Red Hat JBoss Data Grid, when a site is unreachable, the JGroups RELAY2 protocol logs an error for each dropped message. Infinispan has configurable fail policies (ignore/warn/abort), but the log fills with errors even when the ignore policy is configured. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1024373 - Default optimistic locking configuration leads to inconsistency
- In Red Hat JBoss Data Grid, transactional caches are configured with optimistic locking by default. Under contention, concurrent replace() calls can all return true and the enclosing transactions might unexpectedly commit. For example, two concurrent commands, replace(key, A, B) and replace(key, A, C), may both overwrite the entry; the command that is finalized later wins, overwriting the other command's value. This is a known issue in JBoss Data Grid 6.4. As a workaround, enable the write skew check and the REPEATABLE_READ isolation level, as shown in the sketch below. This results in concurrent replace operations working as expected.
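The workaround can be applied programmatically in library mode. The following is a minimal sketch, assuming the programmatic configuration API of this release; note that the write skew check relies on entry versioning, which is enabled here with the SIMPLE scheme.

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.VersioningScheme;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;
import org.infinispan.util.concurrent.IsolationLevel;

public class WriteSkewConfig {
    public static Configuration build() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.transaction()
                   .transactionMode(TransactionMode.TRANSACTIONAL)
                   .lockingMode(LockingMode.OPTIMISTIC)
               .locking()
                   // REPEATABLE_READ plus the write skew check makes one of
                   // two conflicting replace() calls fail instead of both
                   // committing.
                   .isolationLevel(IsolationLevel.REPEATABLE_READ)
                   .writeSkewCheck(true)
               .versioning()
                   // Versioned entries are required for the write skew check.
                   .enable()
                   .scheme(VersioningScheme.SIMPLE);
        return builder.build();
    }
}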
- BZ-1107613 - SASL GSSAPI auth doesn't use principal configured login_module
- In Red Hat JBoss Data Grid, the server principal is always constructed as jgroups/server_name and is not loaded from the Kerberos login module. Using a different principal results in an authentication failure. This is a known issue in JBoss Data Grid 6.4 and the workaround is to use jgroups/server_name as the server principal.
- BZ-1114080 - HR client SASL MD5 against LDAP fails
- In Red Hat JBoss Data Grid, the server does not support pass-through MD5 authentication against LDAP. As a result, the Hot Rod client is unable to authenticate to the JBoss Data Grid server via MD5 if the authentication is backed by the LDAP server. This is a known issue in JBoss Data Grid 6.4 and a workaround is to use PLAIN authentication over end-to-end SSL encryption, as in the client sketch below.
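The following is a minimal sketch of a Hot Rod Java client configured for the PLAIN-over-SSL workaround, assuming the client configuration API available in this release; the host, credentials, realm name, and truststore path are illustrative placeholders.

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.RealmCallback;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class PlainOverSslClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);
        // PLAIN sends credentials in the clear, so pair it with SSL.
        builder.security().authentication()
               .enable()
               .saslMechanism("PLAIN")
               .callbackHandler(new CallbackHandler() {
                   @Override
                   public void handle(Callback[] callbacks) {
                       // Placeholder credentials and realm.
                       for (Callback callback : callbacks) {
                           if (callback instanceof NameCallback) {
                               ((NameCallback) callback).setName("user");
                           } else if (callback instanceof PasswordCallback) {
                               ((PasswordCallback) callback).setPassword("secret".toCharArray());
                           } else if (callback instanceof RealmCallback) {
                               ((RealmCallback) callback).setText("ApplicationRealm");
                           }
                       }
                   }
               });
        builder.security().ssl()
               .enable()
               .trustStoreFileName("/path/to/truststore.jks")
               .trustStorePassword("changeit".toCharArray());

        RemoteCacheManager manager = new RemoteCacheManager(builder.build());
        // ... use caches ...
        manager.stop();
    }
}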
- BZ-881791 - Special characters in file path to JDG server are causing problems
- In Red Hat JBoss Data Grid, when special characters are used in the directory path, the JBoss Data Grid server either fails to start or a configuration file used for logging cannot be loaded properly. Special characters that cause problems include spaces, # (hash sign), ! (exclamation mark), % (percent sign), and $ (dollar sign). This is a known issue in JBoss Data Grid 6.4. A workaround for this issue is to avoid using special characters in the directory path.
- BZ-1092403 - JPA cachestore fails to guess dialect for Oracle12c and PostgresPlus 9
- In Red Hat JBoss Data Grid, the JPA Cache Store does not work with Oracle 12c and Postgres Plus 9 because Hibernate, an internal dependency of the JPA Cache Store, is not able to determine which dialect to use for communication with the database. This is a known issue in JBoss Data Grid 6.4. As a workaround, specify the Hibernate dialect directly by adding the following element to the persistence.xml file:
<property name="hibernate.dialect" value="${hibernate.dialect}" />
Set ${hibernate.dialect} to org.hibernate.dialect.Oracle10gDialect for Oracle 12c or org.hibernate.dialect.PostgresPlusDialect for Postgres Plus 9.
- BZ-1101512 - CLI UPGRADE command fails when testing data stored via CLI with REST encoding
- In Red Hat JBoss Data Grid, the CLI upgrade command fails to migrate data from the old cluster to the new cluster if the data being migrated was stored in the old cluster via the CLI with REST encoding (for example, by issuing a command such as put --codec=rest key1 val1). This issue does not occur if data is stored via REST clients directly. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1158559 - C++ HotRod Client, RemoteCache.clear() will throw out exception when data is more than 1M
- In Red Hat JBoss Data Grid, when a cache contains a large number of entries, the clear() operation can take an unexpectedly long time and possibly result in communication timeouts. In this case, the exception is reported to the Hot Rod client. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-# - Title
- In Red Hat JBoss Data Grid, for a cache store where shared is set to false, the fetchInMemory and fetchPersistence parameters must also be set to true, or different nodes may contain different copies of the same data. For a cache store where the shared parameter is set to true, fetchPersistence must be set to false, because the persistence is shared and enabling it results in unnecessary state transfers. The fetchInMemory parameter, however, can be set to either true or false: setting it to true loads the in-memory state via the network, which results in a faster start up, while setting it to false loads the data from persistence without transferring it remotely from other nodes. This is a known issue in JBoss Data Grid 6.4 and the only workaround is to follow the guidelines stated above (see the configuration sketch below) and to ensure that each node in the cluster uses the same configuration, to prevent unexpected results based on the start up order.
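The following is a minimal library-mode sketch of the non-shared case described above. It assumes that the fetchInMemory and fetchPersistence parameters correspond to fetchInMemoryState and fetchPersistentState in the programmatic API; the single file store and the distributed cache mode are illustrative choices.

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class NonSharedStoreConfig {
    public static Configuration build() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering()
                   .cacheMode(CacheMode.DIST_SYNC)
                   // Transfer the in-memory state from other nodes on start up.
                   .stateTransfer().fetchInMemoryState(true)
               .persistence()
                   .addSingleFileStore()
                       // Each node keeps its own copy of the store...
                       .shared(false)
                       // ...so the persistent state must be fetched as well.
                       .fetchPersistentState(true);
        return builder.build();
    }
}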
- BZ-881080 - Silence SuspectExceptions
- In Red Hat JBoss Data Grid, SuspectExceptions are routinely raised when nodes shut down, because they become unresponsive while shutting down. As a result, a SuspectException error is added to the logs. SuspectExceptions do not affect data integrity. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-807674 - JDBC Cache Stores using a JTA Data Source do not participate in cache transactions
- In Red Hat JBoss Data Grid's library mode, JDBC cache stores can be configured to use a JTA-aware datasource. However, operations performed on a cache backed by such a store during a JTA transaction will be persisted to the store outside of the transaction's scope. This issue is not applicable to JBoss Data Grid's Remote Client-Server mode because all cache operations are non-transactional. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1088073 - Move rebalancing settings in JON from cache level to global cluster level
- In Red Hat JBoss Data Grid, it is possible to change rebalancing settings using the JBoss Operations Network UI by navigating to JBoss Data Grid Server/Cache Container/Cache/Configuration (current)/Distributed Cache Attributes/Rebalancing. This operation is currently misrepresented as a cache-level operation: the changed rebalancing settings automatically apply to all caches of the parent cache manager and to all nodes in the particular cluster. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
- BZ-1158839 - Clustered cache with FileStore (shared=false) is inconsistent after restarting one node if entries are deleted during restart
- In Red Hat JBoss Data Grid, when a node restarts, it does not automatically purge entries from its local cache store. As a result, the administrator starting the node must manually change the node configuration so that the cache store is purged when the node starts (see the sketch below). If the configuration is not changed, the cache may be inconsistent and removed entries can appear to be present. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
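The configuration change described above can be sketched as follows in library mode, assuming a single file store at an illustrative location; in Remote Client-Server mode the equivalent setting is applied on the cache store element of the server configuration.

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class PurgeOnStartupConfig {
    public static Configuration build() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Purge the local store on startup so that entries removed while
        // this node was down do not reappear after the restart.
        builder.persistence()
               .addSingleFileStore()
                   .location("/path/to/store")
                   .purgeOnStartup(true);
        return builder.build();
    }
}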
5.2. Resolved Issues
- BZ-1156397 - File System Deployments are not enabled in standalone/clustered.xml by default
- Previously in Red Hat JBoss Data Grid, the server did not contain the deployment scanner, which is responsible for handling deployments. As a result, it was not possible to deploy custom user filters, converters, and marshallers for enhanced remote listeners by copying jar files containing them into the standalone/deployments folder of the server. This issue is now resolved in JBoss Data Grid 6.4: the server now contains the deployment scanner and works as expected.
- BZ-1115555 - JGroups configuration files might be shadowed if the JDG infinispan-core library is provided as EAP module
- Previously in Red Hat JBoss Data Grid, the default configuration files for JGroups were named jgroups-udp.xml and jgroups-tcp.xml. When a user provided their own configuration file with the same name, it was unexpectedly shadowed by the default one. This issue is now resolved in JBoss Data Grid 6.4. The default configuration files have been moved to the default-configs directory and renamed to default-jgroups-udp.xml and default-jgroups-tcp.xml. Additionally, if a user chooses the same name for their customized configuration files, a warning is logged.
- BZ-1156071 - Missing partition handling configuration in schema
- Previously in Red Hat JBoss Data Grid, the configuration schema for embedded (library) mode did not contain configuration elements for partition handling. This issue is now resolved in JBoss Data Grid 6.4 and the configurations for partition handling are included in the configuration schema.
- BZ-1172021 - Cross site state transfer - NPE on consumer site when backup cache is not present
- Previously in Red Hat JBoss Data Grid, when the backup cache was not defined in the consumer site (with either the same name as the main cache or specified via the <backup-for> option), the consumer site would create a new local cache without the RPC manager. This resulted in a NullPointerException being thrown in the consumer site, with the following error message also displayed in the producer site:
Unable to pushState to 'XYZ'. org.infinispan.commons.CacheException: Problems invoking command
This issue is now resolved in JBoss Data Grid 6.4: if the cache does not exist, it is created with an RPC manager as expected.
- BZ-1161479 - HR size operation requires ADMIN permission
- Previously in Red Hat JBoss Data Grid, the map/reduce task was missing security actions. As a result, users could not use the Hot Rod size() operation via the map/reduce approach unless they had ADMIN permissions. This issue is now resolved in JBoss Data Grid 6.4 by adding the required map/reduce security actions. As a result, users with EXEC permissions can now execute map/reduce operations as expected.
- BZ-1161590 - Remove features file for JPACacheStore from JDG maven repository as it is unsupported
- Previously in Red Hat JBoss Data Grid, the JPACacheStore was not supported in OSGi/Karaf, but the JBoss Data Grid Maven distribution unexpectedly contained a features file for the JPACacheStore for deployment in Karaf. This issue is now resolved in JBoss Data Grid 6.4 and the features file is removed.
- BZ-1168329 - Discovery protocols have trouble finding coord in large clusters
- Previously in Red Hat JBoss Data Grid, for clusters with more than 16 nodes, node discovery would prevent a new node from joining an existing cluster. This issue is now resolved in JGroups 3.6.1, which is included in JBoss Data Grid 6.4, so that node discovery allows new nodes to join existing clusters as expected.
- BZ-1172038 - JGroups subsystem doesn't support Vault
- Previously in Red Hat JBoss Data Grid, the JGroups subsystem integration with Vault was missing. As a result, sensitive information contained in the JGroups part of configuration files, such as passwords, could not be stored securely using Vault encryption. This issue is now resolved in JBoss Data Grid 6.4 by integrating the JGroups subsystem with Vault. As a result, all passwords, including JGroups passwords, can now be stored securely using Vault.
- BZ-1161529 - Possible false positive suspect of FD_HOST when the number of hosts is large
- Previously in Red Hat JBoss Data Grid, the JGroups FD_HOST protocol produced an unexpectedly high number of false positive suspicions within large clusters. This issue is now resolved in JBoss Data Grid 6.4 by improving the algorithm used to check the cluster for dead member nodes.
- BZ-1163573 - Coordinator promotion is keep failing successively, with massive GCs in a long loop
- Previously in Red Hat JBoss Data Grid, for a large cluster with many caches, the CacheStatusResponse map on the new coordinator consumed an unexpectedly large amount of memory. As a result, JVM garbage collection came under extra load when a new coordinator was selected for the cluster. This issue is now resolved in JBoss Data Grid 6.4 by allowing the caching of various instances in the serialization context, so that consistent hashes and repeated cache join information are reused.
Appendix A. Revision History
Revision 6.4.0-2 | Mon Jan 19 2015
Revision 6.4.0-1 | Wed Jul 23 2014