Chapter 7. Known issues

See Known Issues for Red Hat JBoss Enterprise Application Platform 8.0 to view the list of known issues for this release.

7.1. Infinispan

The /subsystem=distributable-web/infinispan-session-management=*:add operation may fail when executed on a default non-HA server configuration

Issue - JBEAP-24997
The /subsystem=distributable-web/infinispan-session-management=*:add operation automatically adds the affinity=primary-owner child resource, which requires the routing=infinispan resource. The operation may fail because the required routing=infinispan resource is not defined in the default non-HA server configurations.
Workaround

To avoid this failure, execute both the infinispan-session-management:add and affinity=local:add operations within a batch, so that the server applies them as a single composite operation and the invalid intermediate state is never activated.

Example:

batch
/subsystem=distributable-web/infinispan-session-management=ism-0:add(cache-container=web,granularity=SESSION)
/subsystem=distributable-web/infinispan-session-management=ism-0/affinity=local:add()
run-batch -v
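
Optionally, you can verify that both resources were created by reading the new resource back, for example:

/subsystem=distributable-web/infinispan-session-management=ism-0:read-resource(recursive=true)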

HotRod cannot create distributed sessions for externalization to Infinispan

Issue - JBEAP-26062

An interoperability test involving Red Hat JBoss Enterprise Application Platform 8.0 and Red Hat Data Grid on OpenShift Container Platform shows an issue where writes to an Infinispan remote cache cause an internal server error. When the remote-cache-container is configured to use the default marshaller, JBoss Marshalling, cache writes cause HotRod to throw errors because only byte[] instances are supported.

Example error message:

Caused by: java.lang.IllegalArgumentException: Only byte[] instances are supported currently!
	at org.infinispan.client.hotrod@14.0.17.Final-redhat-00002//org.infinispan.client.hotrod.marshall.BytesOnlyMarshaller.checkByteArray(BytesOnlyMarshaller.java:27)

Workaround

Configure the remote-cache-container to use the ProtoStream marshaller by setting the marshaller attribute to PROTOSTREAM:

Example configuration:

/subsystem=infinispan/remote-cache-container=<RHDG_REMOTE_CACHE_CONTAINER_RESOURCE_NAME>:write-attribute(name=marshaller,value=PROTOSTREAM)
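
Depending on the current server state, the CLI may report that this change requires a reload before it takes effect; if so, reload the server:

reload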

7.2. Datasource configuration

MsSQL connection resiliency is not supported

Issue - JBEAP-25585
Red Hat JBoss Enterprise Application Platform 8.0 does not support the connection resiliency feature of the MsSQL JDBC driver version 10.2.0 and later. Connection resiliency leaves the driver in a state that the recovery manager does not expect. The driver enables connection resiliency by default, so you must disable it manually.
Workaround

The ConnectRetryCount parameter controls the number of reconnection attempts when there is a connection failure. This parameter is set to 1 by default, enabling connection resiliency.

To disable connection resiliency, change the ConnectRetryCount parameter from 1 to 0. You can set connection properties in the datasource configuration section of the server configuration file (standalone.xml or domain.xml). For more information about how to configure datasource settings, see How to configure datasource settings in EAP for OpenShift and How to specify connection properties in the Datasource Configuration for JBoss EAP on the Red Hat Customer Portal.
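
For example, assuming a datasource named ExampleMSSQLDS (a placeholder; substitute the name of your datasource), you can set the connection property from the management CLI, which updates the datasource configuration in the server configuration file:

/subsystem=datasources/data-source=ExampleMSSQLDS/connection-properties=ConnectRetryCount:add(value=0)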

7.3. Server Management

Liveness probe :9990/health/live does not restart the pod in case of deployment errors

Issue - JBEAP-24257

In JBoss EAP 7.4, the Python-based liveness probe reports "not alive" when there are deployment errors, which results in the container being restarted.

In JBoss EAP 8.0, the liveness probe :9990/health/live uses the server management model to determine liveness. If the server-state is running, the liveness check reports UP as long as the server process is running, even when there are boot or deployment errors.

Therefore, deployment errors can result in a pod that is running but "not ready". This only affects applications that have intermittent errors during deployment. If the errors always occur during deployment, the container never becomes ready and the pod ends up in a CrashLoopBackOff state.
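
To inspect the status that the probe reports, you can query the endpoint directly, assuming the management interface is reachable locally:

curl http://localhost:9990/health/live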

Note

:9990/health/live is the default liveness probe used by Helm charts and the JBoss EAP operator.

Workaround

If there are deployment errors that result in a pod that is running but is reporting "not ready", examine the server boot process, resolve the deployment issue causing the errors, and then verify that the server deploys correctly.
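
For example, one way to examine the boot and deployment errors is to review the container log of the pod (the pod name below is a placeholder):

oc logs <EAP_POD_NAME>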

If the deployment errors cannot be fixed, change the liveness probe to use the /health/ready HTTP endpoint so that these errors trigger a pod restart. For example, if you deploy a JBoss EAP application with Helm, configure the liveness probe by updating the deploy.livenessProbe field:

deploy:
  livenessProbe:
    httpGet:
      path: /health/ready

7.4. Messaging framework

Deprecation of org.apache.activemq.artemis module and warning messages

Issue - JBEAP-26188
The org.apache.activemq.artemis module is deprecated in JBoss EAP 8.0. A warning message is logged when you deploy an application that includes this module as a dependency in either its MANIFEST.MF or jboss-deployment-structure.xml file. For more information, see Deprecated in Red Hat JBoss Enterprise Application Platform (EAP) 8.
Workaround
With JBoss EAP 8.0 Update 1, you can prevent logging of these warning messages by replacing the org.apache.activemq.artemis module in your configuration files with the org.apache.activemq.artemis.client public module. For more information, see org.jboss.as.dependency.deprecated … is using a deprecated module ("org.apache.activemq.artemis") in EAP 8.
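
For example, a jboss-deployment-structure.xml that previously declared a dependency on the deprecated module could declare the public client module instead (a minimal sketch; keep any other dependencies your deployment already defines):

<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <module name="org.apache.activemq.artemis.client"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>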

7.5. IBM MQ resource adapters

Limitations and known issues of IBM MQ resource adapters

IBM MQ resource adapters are supported with some limitations. See Deploying the IBM MQ Resource Adapter for more information.
