Chapter 7. Known issues
See Known Issues for Red Hat JBoss Enterprise Application Platform 8.0 to view the list of known issues for this release.
7.1. Infinispan
The /subsystem=distributable-web/infinispan-session-management=*:add operation may fail when executed on a default non-HA server configuration
- Issue - JBEAP-24997
- The /subsystem=distributable-web/infinispan-session-management=*:add operation automatically adds the affinity=primary-owner child resource, which requires the routing=infinispan resource. The operation may fail because the required routing=infinispan resource is not defined in the default non-HA server configurations.
- Workaround
To avoid this invalid intermediate state, execute both the infinispan-session-management:add and affinity=local:add operations within a batch.
Example:
batch
/subsystem=distributable-web/infinispan-session-management=ism-0:add(cache-container=web,granularity=SESSION)
/subsystem=distributable-web/infinispan-session-management=ism-0/affinity=local:add()
run-batch -v
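To confirm that the batch created both resources, you can read the new session management resource back from the management CLI; this sketch reuses the ism-0 name from the example above:
/subsystem=distributable-web/infinispan-session-management=ism-0:read-resource(recursive=true)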
HotRod cannot create distributed sessions for externalization to Infinispan
- Issue - JBEAP-26062
An interoperability test involving Red Hat JBoss Enterprise Application Platform 8.0 and Red Hat Data Grid on OpenShift Container Platform shows an issue where writes to an Infinispan remote cache cause an internal server error. When the remote-cache-container is configured to use the default marshaller, JBoss Marshalling, cache writes cause HotRod to throw errors because only byte[] instances are supported.
Example error message:
Caused by: java.lang.IllegalArgumentException: Only byte[] instances are supported currently!
	at org.infinispan.client.hotrod@14.0.17.Final-redhat-00002//org.infinispan.client.hotrod.marshall.BytesOnlyMarshaller.checkByteArray(BytesOnlyMarshaller.java:27)
- Workaround
Configure the remote-cache-container to use the ProtoStream marshaller (marshaller=PROTOSTREAM).
Example configuration:
/subsystem=infinispan/remote-cache-container=<RHDG_REMOTE_CACHE_CONTAINER_RESOURCE_NAME>:write-attribute(name=marshaller,value=PROTOSTREAM)
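The attribute change may require a server reload before it takes effect. To verify the configured value afterwards, a read-attribute call can be used; this sketch reuses the placeholder resource name from the command above:
/subsystem=infinispan/remote-cache-container=<RHDG_REMOTE_CACHE_CONTAINER_RESOURCE_NAME>:read-attribute(name=marshaller)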
7.2. Datasource configuration
MsSQL connection resiliency is not supported
- Issue - JBEAP-25585
- Red Hat JBoss Enterprise Application Platform 8.0 does not support the connection resiliency feature of the MsSQL JDBC driver version 10.2.0 and later. Connection resiliency leaves the driver in an unexpected state for the recovery manager. By default, this driver has connection resiliency enabled, and you must disable it manually.
- Workaround
The ConnectRetryCount parameter controls the number of reconnection attempts when there is a connection failure. This parameter is set to 1 by default, enabling connection resiliency. To disable connection resiliency, change the ConnectRetryCount parameter from 1 to 0. You can set connection properties in the datasource configuration section of the server configuration file, standalone.xml or domain.xml. For more information about how to configure datasource settings, see How to configure datasource settings in EAP for OpenShift and How to specify connection properties in the Datasource Configuration for JBoss EAP on the Red Hat Customer Portal.
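For example, assuming a datasource named MSSQLDS (a hypothetical name; substitute your own), the property can be set from the management CLI:
/subsystem=datasources/data-source=MSSQLDS/connection-properties=ConnectRetryCount:add(value=0)
This adds a <connection-property name="ConnectRetryCount">0</connection-property> element to the datasource definition in standalone.xml or domain.xml.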
7.3. Server Management
Liveness probe :9990/health/live does not restart the pod in case of a deployment error
- Issue - JBEAP-24257
In JBoss EAP 7.4, the Python liveness probe reports "not alive" when there are deployment errors that would result in restarting the container.
In JBoss EAP 8.0, the probes use the server management model: the readiness check reports UP if the server-state is running and there are no boot or deployment errors, while the liveness probe :9990/health/live reports UP whenever the server process is running. Therefore, deployment errors can result in a pod that is running but is "not ready". This only affects applications that have intermittent errors during deployment. If these errors always occur during deployment, the container will never be ready and the pod will be in a CrashLoopBackoff state.
Note: :9990/health/live is the default liveness probe used by Helm charts and the JBoss EAP operator.
- Workaround
If there are deployment errors that result in a pod that is running but is reporting "not ready", examine the server boot process, resolve the deployment issue causing the errors, and then verify that the server deploys correctly.
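To inspect boot errors from the management CLI, the read-boot-errors operation can help; this is a general sketch, not specific to this issue:
/core-service=management:read-boot-errors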
If the deployment errors cannot be fixed, change the liveness probe to use the /health/ready HTTP endpoint so that boot errors will trigger a pod restart. For example, if you deploy a JBoss EAP application with Helm, configure the liveness probe by updating the deploy.livenessProbe field:
deploy:
  livenessProbe:
    httpGet:
      path: /health/ready
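To compare what the two endpoints report for a given server state, you can query the management interface directly from inside the pod, assuming the default management port 9990:
curl http://localhost:9990/health/live
curl http://localhost:9990/health/ready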
7.4. Messaging framework
Deprecation of org.apache.activemq.artemis module and warning messages
- Issue - JBEAP-26188
- The org.apache.activemq.artemis module is deprecated in JBoss EAP 8.0. A warning message is triggered when deploying an application that includes this module dependency in either the MANIFEST.MF or jboss-deployment-structure.xml configuration files. For more information, see Deprecated in Red Hat JBoss Enterprise Application Platform (EAP) 8.
- Workaround
- With JBoss EAP 8.0 Update 1, you can prevent logging of these warning messages by replacing the org.apache.activemq.artemis module in your configuration files with the org.apache.activemq.artemis.client public module. For more information, see org.jboss.as.dependency.deprecated … is using a deprecated module ("org.apache.activemq.artemis") in EAP 8.
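For example, a jboss-deployment-structure.xml that previously declared a dependency on the deprecated module could be updated as follows (a sketch; adjust to your deployment layout):
<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <!-- Replaces the deprecated org.apache.activemq.artemis module -->
            <module name="org.apache.activemq.artemis.client"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>
In a MANIFEST.MF, the equivalent entry is Dependencies: org.apache.activemq.artemis.client.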
7.5. IBM MQ resource adapters
Limitations and known issues of IBM MQ resource adapters
IBM MQ resource adapters are supported with some limitations. See Deploying the IBM MQ Resource Adapter for more information.