Chapter 9. Reference Information

Note

The content in this section is derived from the engineering documentation for this image. It is provided for reference as it can be useful for development purposes and for testing beyond the scope of the product documentation.

9.1. Persistent Templates

The JBoss EAP database templates, which deploy JBoss EAP and database pods, have both ephemeral and persistent variations.

Persistent templates include an environment variable to provision a persistent volume claim, which binds with an available persistent volume to be used as a storage volume for the JBoss EAP for OpenShift deployment. Information, such as timer schema, log handling, or data updates, is stored on the storage volume, rather than in ephemeral container memory. This information persists if the pod goes down for any reason, such as project upgrade, deployment rollback, or an unexpected error.

Without a persistent storage volume for the deployment, this information is stored in the container memory only, and is lost if the pod goes down for any reason.

For example, an EE timer backed by persistent storage continues to run if the pod is restarted. Any events triggered by the timer during the restart process are enacted when the application is running again.

Conversely, if the EE timer is running in the container memory, the timer status is lost if the pod is restarted, and the timer starts from the beginning when the pod is running again.
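
As an illustration, a persistent template can be instantiated with oc new-app. The following is a sketch only: the template name eap73-postgresql-persistent-s2i and the parameter values are assumptions, so first list the templates available in your cluster with oc get templates -n openshift.

oc new-app eap73-postgresql-persistent-s2i \
 -p APPLICATION_NAME=eap-app \
 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts \
 -p SOURCE_REPOSITORY_REF=7.3.x \
 -p CONTEXT_DIR=cmt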

9.2. Information Environment Variables

The following environment variables are designed to provide information to the image and should not be modified by the user:

Table 9.1. Information Environment Variables

Variable Name | Description

JBOSS_IMAGE_NAME

The image name.

Values:

  • jboss-eap-7/eap73-openjdk8-openshift-rhel7 (JDK 8 / RHEL 7)
  • jboss-eap-7/eap73-openjdk11-openshift-rhel8 (JDK 11 / RHEL 8)

JBOSS_IMAGE_VERSION

The image version.

Value: The image version number. See the Red Hat Container Catalog for the latest values.

JBOSS_MODULES_SYSTEM_PKGS

A comma-separated list of JBoss EAP system module packages that are available to applications.

Value: org.jboss.logmanager, jdk.nashorn.api

STI_BUILDER

Provides OpenShift S2I support for jee project types.

Value: jee

9.3. Configuration environment variables

You can configure the following environment variables to adjust the image without requiring a rebuild.

Note

See the JBoss EAP documentation for other environment variables that are not listed here.

Table 9.2. Configuration environment variables

Variable Name | Description

AB_JOLOKIA_AUTH_OPENSHIFT

Switch on client authentication for OpenShift TLS communication. The value of this parameter can be true, false, or a relative distinguished name, which must be contained in a presented client’s certificate. The default CA cert is set to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.

  • Set to false to disable client authentication for OpenShift TLS communication.
  • Set to true to enable client authentication for OpenShift TLS communication using the default CA certificate and client principal.
  • Set to a relative distinguished name, for example cn=someSystem, to enable client authentication for OpenShift TLS communication but override the client principal. This distinguished name must be contained in a presented client’s certificate.

AB_JOLOKIA_CONFIG

If set, uses this fully qualified file path for the Jolokia JVM agent properties, which are described in the Jolokia reference documentation. If you set your own Jolokia properties config file, the rest of the Jolokia settings in this document are ignored.

If not set, /opt/jolokia/etc/jolokia.properties is created using the settings as defined in the Jolokia reference documentation.

Example value: /opt/jolokia/custom.properties

AB_JOLOKIA_DISCOVERY_ENABLED

Enable Jolokia discovery.

Defaults to false.

AB_JOLOKIA_HOST

Host address to bind to.

Defaults to 0.0.0.0.

Example value: 127.0.0.1

AB_JOLOKIA_HTTPS

Switch on secure communication with HTTPS.

By default, self-signed server certificates are generated if no serverCert configuration is given in AB_JOLOKIA_OPTS.

Example value: true

AB_JOLOKIA_ID

Agent ID to use.

Defaults to $HOSTNAME, which is the container ID.

Example value: openjdk-app-1-xqlsj

AB_JOLOKIA_OFF

If set to true, disables activation of Jolokia (that is, echoes an empty value).

Jolokia is enabled by default.

AB_JOLOKIA_OPTS

Additional options to be appended to the agent configuration. They should be given in the format key=value,key=value,….

Example value: backlog=20

AB_JOLOKIA_PASSWORD

The password for basic authentication.

By default, authentication is switched off.

Example value: mypassword

AB_JOLOKIA_PASSWORD_RANDOM

Determines if a random AB_JOLOKIA_PASSWORD should be generated.

Set to true to generate a random password. The generated value is saved in the /opt/jolokia/etc/jolokia.pw file.

AB_JOLOKIA_PORT

The port to listen to.

Defaults to 8778.

Example value: 5432

AB_JOLOKIA_USER

The name of the user to use for basic authentication.

Defaults to jolokia.

Example value: myusername

AB_PROMETHEUS_ENABLE

If set to true, this variable activates the jmx-exporter java agent that exposes Prometheus format metrics. Default is set to false.

Note

The MicroProfile Metrics subsystem is the preferred method to expose data in the Prometheus format. For more information about the MicroProfile Metrics subsystem, see Eclipse MicroProfile in the Configuration Guide for JBoss EAP.

AB_PROMETHEUS_JMX_EXPORTER_CONFIG

The path within the container to a user-specified configuration.yaml for the jmx-exporter agent to use instead of the default configuration.yaml file. To find out more about the S2I mechanism to incorporate additional configuration files, see S2I Artifacts.

AB_PROMETHEUS_JMX_EXPORTER_PORT

The port on which the jmx-exporter agent listens for scrapes from the Prometheus server. Default is 9799. The agent listens on localhost. Metrics can be made available outside of the container by configuring the DeploymentConfig file for the application to include the service exposing this endpoint.

CLI_GRACEFUL_SHUTDOWN

If set to any non-zero length value, the image will prevent shutdown with the TERM signal and will require execution of the shutdown command using the JBoss EAP management CLI.

Example value: true

CONTAINER_HEAP_PERCENT

Set the maximum Java heap size, as a percentage of available container memory.

Example value: 0.5

CUSTOM_INSTALL_DIRECTORIES

A comma-separated list of directories used for installation and configuration of artifacts for the image during the S2I process.

Example value: custom,shared

DEFAULT_JMS_CONNECTION_FACTORY

This value is used to specify the default JNDI binding for the JMS connection factory, for example jms-connection-factory='java:jboss/DefaultJMSConnectionFactory'.

Example value: java:jboss/DefaultJMSConnectionFactory

DISABLE_EMBEDDED_JMS_BROKER

The use of an embedded messaging broker in OpenShift containers is deprecated. Support for an embedded broker will be removed in a future release.

If the following conditions are true, a warning is logged.

  • A container is configured to use an embedded messaging broker.
  • A remote broker is not configured for the container.
  • This variable is not set or is set with a value of false.

If this variable is included with the value set to true, the embedded messaging broker is disabled, and no warning is logged.

Include this variable set to true for any container that is not configured with remote messaging destinations.

ENABLE_ACCESS_LOG

Enable logging of access messages to the standard output channel.

Logging of access messages is implemented using the following methods:

  • The JBoss EAP 6.4 OpenShift image uses a custom JBoss Web Access Log Valve.
  • The JBoss EAP for OpenShift image uses the Undertow AccessLogHandler.

Defaults to false.

INITIAL_HEAP_PERCENT

Set the initial Java heap size, as a percentage of the maximum heap size.

Example value: 0.5

JAVA_OPTS_APPEND

Server startup options.

Example value: -Dfoo=bar

JBOSS_MODULES_SYSTEM_PKGS_APPEND

A comma-separated list of package names that will be appended to the JBOSS_MODULES_SYSTEM_PKGS environment variable.

Example value: org.jboss.byteman

JGROUPS_CLUSTER_PASSWORD

Password used to authenticate the node so it is allowed to join the JGroups cluster. Required when using the ASYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, authentication is disabled, cluster communication is not encrypted, and a warning is issued. Optional when using the SYM_ENCRYPT JGroups cluster traffic encryption protocol.

Example value: mypassword

JGROUPS_ENCRYPT_KEYSTORE

Name of the keystore file within the secret specified via the JGROUPS_ENCRYPT_SECRET variable, when using the SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued.

Example value: jgroups.jceks

JGROUPS_ENCRYPT_KEYSTORE_DIR

Directory path of the keystore file within the secret specified via the JGROUPS_ENCRYPT_SECRET variable, when using the SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued.

Example value: /etc/jgroups-encrypt-secret-volume

JGROUPS_ENCRYPT_NAME

Name associated with the server’s certificate, when using the SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued.

Example value: jgroups

JGROUPS_ENCRYPT_PASSWORD

Password used to access the keystore and the certificate, when using the SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued.

Example value: mypassword

JGROUPS_ENCRYPT_PROTOCOL

JGroups protocol to use for encryption of cluster traffic. Can be either SYM_ENCRYPT or ASYM_ENCRYPT.

Defaults to SYM_ENCRYPT.

Example value: ASYM_ENCRYPT

JGROUPS_ENCRYPT_SECRET

Name of the secret that contains the JGroups keystore file used for securing the JGroups communications when using the SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued.

Example value: eap7-app-secret

JGROUPS_PING_PROTOCOL

JGroups protocol to use for node discovery. Can be either dns.DNS_PING or kubernetes.KUBE_PING.

MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION

For backwards compatibility, set to true to use MyQueue and MyTopic as physical destination name defaults instead of queue/MyQueue and topic/MyTopic.

OPENSHIFT_DNS_PING_SERVICE_NAME

Name of the service exposing the ping port on the servers for the DNS discovery mechanism.

Example value: eap-app-ping

OPENSHIFT_DNS_PING_SERVICE_PORT

The port number of the ping port for the DNS discovery mechanism. If not specified, an attempt is made to discover the port number from the SRV records for the service, otherwise the default 8888 is used.

Defaults to 8888.

OPENSHIFT_KUBE_PING_LABELS

Clustering labels selector for the Kubernetes discovery mechanism.

Example value: app=eap-app

OPENSHIFT_KUBE_PING_NAMESPACE

Clustering project namespace for the Kubernetes discovery mechanism.

Example value: myproject

SCRIPT_DEBUG

If set to true, ensures that the Bash scripts are executed with the -x option, printing the commands and their arguments as they are executed.
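
These variables are typically supplied as env entries in the container definitions of a template or deployment configuration. As a sketch, they can also be applied to an existing deployment with the OpenShift CLI; the deployment configuration name eap-app and the values below are assumptions:

oc set env dc/eap-app SCRIPT_DEBUG=true AB_PROMETHEUS_ENABLE=true JAVA_OPTS_APPEND="-Dfoo=bar"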

9.4. Application Templates

Table 9.3. Application Templates

Variable Name | Description

AUTO_DEPLOY_EXPLODED

Controls whether exploded deployment content should be automatically deployed.

Example value: false

9.5. Exposed Ports

Table 9.4. Exposed Ports

Port Number | Description

8443 | HTTPS
8778 | Jolokia Monitoring

9.6. Datasources

Datasources are automatically created based on the value of some of the environment variables.

The most important environment variable is DB_SERVICE_PREFIX_MAPPING, as it defines JNDI mappings for the datasources. The allowed value for this variable is a comma-separated list of POOLNAME-DATABASETYPE=PREFIX triplets, where:

  • POOLNAME is used as the pool-name in the datasource.
  • DATABASETYPE is the database driver to use.
  • PREFIX is the prefix used in the names of environment variables that are used to configure the datasource.

9.6.1. JNDI Mappings for Datasources

For each POOLNAME-DATABASETYPE=PREFIX triplet defined in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script creates a separate datasource when the image is started.

Note

The first part (before the equal sign) of the DB_SERVICE_PREFIX_MAPPING should be lowercase.

The DATABASETYPE determines the driver for the datasource.

For more information about configuring a driver, see Modules, Drivers, and Generic Deployments. The JDK 8 image has drivers for postgresql and mysql configured by default.

Warning

Do not use any special characters for the POOLNAME parameter.

Database drivers

Support for using the Red Hat-provided internal datasource drivers with the JBoss EAP for OpenShift image is now deprecated. Red Hat recommends that you use JDBC drivers obtained from your database vendor for your JBoss EAP applications.

The following internal datasources are no longer provided with the JBoss EAP for OpenShift image:

  • MySQL
  • PostgreSQL

For more information about installing drivers, see Modules, Drivers, and Generic Deployments.

For more information on configuring JDBC drivers with JBoss EAP, see JDBC drivers in the JBoss EAP Configuration Guide.

Note that you can also create a custom layer to install these drivers and datasources if you want to add them to a provisioned server.

9.6.1.1. Datasource Configuration Environment Variables

To configure other datasource properties, use the following environment variables.

Important

Be sure to replace the values for POOLNAME, DATABASETYPE, and PREFIX in the following variable names with the appropriate values. These replaceable values are described in this section and in the Datasources section.

Variable Name | Description

POOLNAME_DATABASETYPE_SERVICE_HOST

Defines the database server’s host name or IP address to be used in the datasource’s connection-url property.

Example value: 192.168.1.3

POOLNAME_DATABASETYPE_SERVICE_PORT

Defines the database server’s port for the datasource.

Example value: 5432

PREFIX_BACKGROUND_VALIDATION

When set to true, database connections are validated periodically in a background thread prior to use. Defaults to false, meaning the validate-on-match method is enabled by default instead.

PREFIX_BACKGROUND_VALIDATION_MILLIS

Specifies the frequency of the validation, in milliseconds, when the background-validation database connection validation mechanism is enabled (the PREFIX_BACKGROUND_VALIDATION variable is set to true). Defaults to 10000.

PREFIX_CONNECTION_CHECKER

Specifies a connection checker class that is used to validate connections for the particular database in use.

Example value: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker

PREFIX_DATABASE

Defines the database name for the datasource.

Example value: myDatabase

PREFIX_DRIVER

Defines the Java database driver for the datasource.

Example value: postgresql

PREFIX_EXCEPTION_SORTER

Specifies the exception sorter class that is used to properly detect and clean up after fatal database connection exceptions.

Example value: org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter

PREFIX_JNDI

Defines the JNDI name for the datasource. Defaults to java:jboss/datasources/POOLNAME_DATABASETYPE, where POOLNAME and DATABASETYPE are taken from the triplet described above. This setting is useful if you want to override the default generated JNDI name.

Example value: java:jboss/datasources/test-postgresql

PREFIX_JTA

Defines the Jakarta Transactions option for the non-XA datasource. XA datasources are already Jakarta Transactions capable by default.

Defaults to true.

PREFIX_MAX_POOL_SIZE

Defines the maximum pool size option for the datasource.

Example value: 20

PREFIX_MIN_POOL_SIZE

Defines the minimum pool size option for the datasource.

Example value: 1

PREFIX_NONXA

Defines the datasource as a non-XA datasource. Defaults to false.

PREFIX_PASSWORD

Defines the password for the datasource.

Example value: password

PREFIX_TX_ISOLATION

Defines the java.sql.Connection transaction isolation level for the datasource.

Example value: TRANSACTION_READ_UNCOMMITTED

PREFIX_URL

Defines the connection URL for the datasource.

Example value: jdbc:postgresql://localhost:5432/postgresdb

PREFIX_USERNAME

Defines the username for the datasource.

Example value: admin

When running this image in OpenShift, the POOLNAME_DATABASETYPE_SERVICE_HOST and POOLNAME_DATABASETYPE_SERVICE_PORT environment variables are set up automatically from the database service definition in the OpenShift application template, while the others are configured in the template directly as env entries in container definitions under each pod template.

9.6.1.2. Examples

These examples show how the value of the DB_SERVICE_PREFIX_MAPPING environment variable influences datasource creation.

9.6.1.2.1. Single Mapping

Consider the value test-postgresql=TEST.

This creates a datasource named java:jboss/datasources/test_postgresql. Additionally, all the required settings, such as the password and user name, are expected to be provided as environment variables with the TEST_ prefix, for example TEST_USERNAME and TEST_PASSWORD.
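
A minimal sketch of supplying this mapping together with the prefixed configuration variables, assuming a deployment configuration named eap-app and example values:

oc set env dc/eap-app \
 DB_SERVICE_PREFIX_MAPPING=test-postgresql=TEST \
 TEST_DATABASE=myDatabase \
 TEST_USERNAME=admin \
 TEST_PASSWORD=password \
 TEST_POSTGRESQL_SERVICE_HOST=192.168.1.3 \
 TEST_POSTGRESQL_SERVICE_PORT=5432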

9.6.1.2.2. Multiple Mappings

You can specify multiple datasource mappings.

Note

Always separate multiple datasource mappings with a comma.

Consider the following value for the DB_SERVICE_PREFIX_MAPPING environment variable: cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL.

This creates the following two datasources:

  1. java:jboss/datasources/test_mysql
  2. java:jboss/datasources/cloud_postgresql

You can then use the TEST_MYSQL prefix to configure the user name and password for the MySQL datasource, for example TEST_MYSQL_USERNAME. For the PostgreSQL datasource, use the CLOUD_ prefix, for example CLOUD_USERNAME.

9.7. Clustering

9.7.1. Configuring a JGroups Discovery Mechanism

To enable JBoss EAP clustering on OpenShift, configure the JGroups protocol stack in your JBoss EAP configuration to use either the kubernetes.KUBE_PING or the dns.DNS_PING discovery mechanism.

Although you can use a custom standalone-openshift.xml configuration file, it is recommended that you use environment variables to configure JGroups in your image build.

The instructions below use environment variables to configure the discovery mechanism for the JBoss EAP for OpenShift image.

Important

If you use one of the available application templates to deploy an application on top of the JBoss EAP for OpenShift image, the default discovery mechanism is dns.DNS_PING.

The dns.DNS_PING and kubernetes.KUBE_PING discovery mechanisms are not compatible with each other. It is not possible to form a supercluster out of two independent child clusters, with one using the dns.DNS_PING mechanism for discovery and the other using the kubernetes.KUBE_PING mechanism. Similarly, when performing a rolling upgrade, the discovery mechanism needs to be identical for both the source and the target clusters.

9.7.1.1. Configuring KUBE_PING

To use the KUBE_PING JGroups discovery mechanism:

  1. The JGroups protocol stack must be configured to use KUBE_PING as the discovery mechanism.

    You can do this by setting the JGROUPS_PING_PROTOCOL environment variable to kubernetes.KUBE_PING:

    JGROUPS_PING_PROTOCOL=kubernetes.KUBE_PING
  2. The KUBERNETES_NAMESPACE environment variable must be set to your OpenShift project name. If not set, the server behaves as a single-node cluster (a "cluster of one"). For example:

    KUBERNETES_NAMESPACE=PROJECT_NAME
  3. The KUBERNETES_LABELS environment variable should be set. This should match the label set at the service level. If not set, pods outside of your application (albeit in your namespace) will try to join. For example:

    KUBERNETES_LABELS=application=APP_NAME
  4. Authorization must be granted to the service account that the pod is running under so that it can access the Kubernetes REST API. This is done using the OpenShift CLI. The following example uses the default service account in the current project’s namespace:

    oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)

    Using the eap-service-account in the project namespace:

    oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)
    Note

    See Prepare OpenShift for Application Deployment for more information on adding policies to service accounts.

9.7.1.2. Configuring DNS_PING

To use the DNS_PING JGroups discovery mechanism:

  1. The JGroups protocol stack must be configured to use DNS_PING as the discovery mechanism.

    You can do this by setting the JGROUPS_PING_PROTOCOL environment variable to dns.DNS_PING:

    JGROUPS_PING_PROTOCOL=dns.DNS_PING
  2. The OPENSHIFT_DNS_PING_SERVICE_NAME environment variable must be set to the name of the ping service for the cluster.

    OPENSHIFT_DNS_PING_SERVICE_NAME=PING_SERVICE_NAME
  3. The OPENSHIFT_DNS_PING_SERVICE_PORT environment variable should be set to the port number on which the ping service is exposed. The DNS_PING protocol attempts to discern the port from the SRV records, otherwise it defaults to 8888.

    OPENSHIFT_DNS_PING_SERVICE_PORT=PING_PORT
  4. A ping service which exposes the ping port must be defined. This service should be headless (ClusterIP=None) and must have the following:

    1. The port must be named.
    2. The service must have both the service.alpha.kubernetes.io/tolerate-unready-endpoints annotation and the publishNotReadyAddresses property set to true.

      Note
      • Use both the service.alpha.kubernetes.io/tolerate-unready-endpoints and the publishNotReadyAddresses properties to ensure that the ping service works in both the older and newer OpenShift releases.
      • Omitting these properties results in each node forming its own "cluster of one" during startup. Each node then merges its cluster into the other nodes' clusters after startup, because the other nodes are not detected until after they have started.
      kind: Service
      apiVersion: v1
      spec:
          publishNotReadyAddresses: true
          clusterIP: None
          ports:
          - name: ping
            port: 8888
          selector:
              deploymentConfig: eap-app
      metadata:
          name: eap-app-ping
          annotations:
              service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
              description: "The JGroups ping port for clustering."
Note

DNS_PING does not require any modifications to the service account and works using the default permissions.

9.7.2. Configuring JGroups to Encrypt Cluster Traffic

To encrypt cluster traffic for JBoss EAP on OpenShift, you must configure the JGroups protocol stack in your JBoss EAP configuration to use either the SYM_ENCRYPT or ASYM_ENCRYPT protocol.

Although you can use a custom standalone-openshift.xml configuration file, it is recommended that you use environment variables to configure JGroups in your image build.

The instructions below use environment variables to configure the protocol for cluster traffic encryption for the JBoss EAP for OpenShift image.

Important

The SYM_ENCRYPT and ASYM_ENCRYPT protocols are not compatible with each other. It is not possible to form a supercluster out of two independent child clusters, with one using the SYM_ENCRYPT protocol for the encryption of cluster traffic and the other using the ASYM_ENCRYPT protocol. Similarly, when performing a rolling upgrade, the protocol needs to be identical for both the source and the target clusters.

9.7.2.1. Configuring SYM_ENCRYPT

To use the SYM_ENCRYPT protocol to encrypt JGroups cluster traffic:

  1. The JGroups protocol stack must be configured to use SYM_ENCRYPT as the encryption protocol.

    You can do this by setting the JGROUPS_ENCRYPT_PROTOCOL environment variable to SYM_ENCRYPT:

    JGROUPS_ENCRYPT_PROTOCOL=SYM_ENCRYPT
  2. The JGROUPS_ENCRYPT_SECRET environment variable must be set to the name of the secret containing the JGroups keystore file used for securing the JGroups communications. If not set, cluster communication is not encrypted and a warning is issued. For example:

    JGROUPS_ENCRYPT_SECRET=eap7-app-secret
  3. The JGROUPS_ENCRYPT_KEYSTORE_DIR environment variable must be set to the directory path of the keystore file within the secret specified via the JGROUPS_ENCRYPT_SECRET variable. If not set, cluster communication is not encrypted and a warning is issued. For example:

    JGROUPS_ENCRYPT_KEYSTORE_DIR=/etc/jgroups-encrypt-secret-volume
  4. The JGROUPS_ENCRYPT_KEYSTORE environment variable must be set to the name of the keystore file within the secret specified via the JGROUPS_ENCRYPT_SECRET variable. If not set, cluster communication is not encrypted and a warning is issued. For example:

    JGROUPS_ENCRYPT_KEYSTORE=jgroups.jceks
  5. The JGROUPS_ENCRYPT_NAME environment variable must be set to the name associated with the server’s certificate. If not set, cluster communication is not encrypted and a warning is issued. For example:

    JGROUPS_ENCRYPT_NAME=jgroups
  6. The JGROUPS_ENCRYPT_PASSWORD environment variable must be set to the password used to access the keystore and the certificate. If not set, cluster communication is not encrypted and a warning is issued. For example:

    JGROUPS_ENCRYPT_PASSWORD=mypassword
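
Putting these steps together, the variables can be supplied in the same -e style shown for ASYM_ENCRYPT below, here using the example values from the steps above:

-e JGROUPS_ENCRYPT_PROTOCOL="SYM_ENCRYPT" \
-e JGROUPS_ENCRYPT_SECRET="eap7-app-secret" \
-e JGROUPS_ENCRYPT_KEYSTORE_DIR="/etc/jgroups-encrypt-secret-volume" \
-e JGROUPS_ENCRYPT_KEYSTORE="jgroups.jceks" \
-e JGROUPS_ENCRYPT_NAME="jgroups" \
-e JGROUPS_ENCRYPT_PASSWORD="mypassword"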

9.7.2.2. Configuring ASYM_ENCRYPT

Note

JBoss EAP 7.3 includes a new version of the ASYM_ENCRYPT protocol. The previous version of the protocol is deprecated. If you specify the JGROUPS_CLUSTER_PASSWORD environment variable, the deprecated version of the protocol is used and a warning is printed in the pod log.

To use the ASYM_ENCRYPT protocol to encrypt JGroups cluster traffic, specify ASYM_ENCRYPT as the encryption protocol, and configure it to use a keystore configured in the elytron subsystem.

-e JGROUPS_ENCRYPT_PROTOCOL="ASYM_ENCRYPT" \
-e JGROUPS_ENCRYPT_SECRET="encrypt_secret" \
-e JGROUPS_ENCRYPT_NAME="encrypt_name" \
-e JGROUPS_ENCRYPT_PASSWORD="encrypt_password" \
-e JGROUPS_ENCRYPT_KEYSTORE="encrypt_keystore" \
-e JGROUPS_CLUSTER_PASSWORD="cluster_password"

9.8. Health Checks

The JBoss EAP for OpenShift image utilizes the liveness and readiness probes included in OpenShift by default. In addition, this image includes Eclipse MicroProfile Health, as discussed in the Configuration Guide.

The following table demonstrates the values necessary for these health checks to pass. If the status is anything other than the values found below, the check fails and the container is restarted according to the image’s restart policy.

Table 9.5. Liveness and Readiness Checks

Performed Test | Liveness | Readiness

Server Status | Any status | Running
Boot Errors | None | None
Deployment Status [a] | N/A or no failed entries | N/A or no failed entries
Eclipse MicroProfile Health [b] | N/A or UP | N/A or UP

[a] N/A is only a valid state when no deployments are present.
[b] N/A is only a valid state when the microprofile-health-smallrye subsystem has been disabled.
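
The application templates wire these checks to probe scripts shipped in the image. If you need to configure the probes yourself, the following sketch uses the OpenShift CLI; the deployment configuration name eap-app is an assumption, and the script paths are the ones conventionally shipped in the JBoss EAP for OpenShift image:

oc set probe dc/eap-app --liveness --initial-delay-seconds=60 -- /bin/bash -c /opt/eap/bin/livenessProbe.sh
oc set probe dc/eap-app --readiness -- /bin/bash -c /opt/eap/bin/readinessProbe.sh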

9.9. Messaging

9.9.1. Configuring External Red Hat AMQ Brokers

You can configure the JBoss EAP for OpenShift image with environment variables to connect to external Red Hat AMQ brokers.

Example OpenShift Application Definition

The following example uses a template to create a JBoss EAP application connected to an external Red Hat AMQ 7 broker.

Example: JDK 8

oc new-app eap73-amq-s2i \
-p APPLICATION_NAME=eap73-mq \
-p MQ_USERNAME=MY_USERNAME \
-p MQ_PASSWORD=MY_PASSWORD

Example: JDK 11

oc new-app eap73-openjdk11-amq-s2i \
-p APPLICATION_NAME=eap73-mq \
-p MQ_USERNAME=MY_USERNAME \
-p MQ_PASSWORD=MY_PASSWORD

Important

The template used in this example provides valid default values for the required parameters. If you do not use a template and provide your own parameters, be aware that the MQ_SERVICE_PREFIX_MAPPING value must match the APPLICATION_NAME value with "-amq7=MQ" appended.

9.10. Security Domains

To configure a new Security Domain, the user must define the SECDOMAIN_NAME environment variable.

This results in the creation of a security domain named after the value of the environment variable. The user may also define the following environment variables to customize the domain:

Table 9.6. Security Domains

Variable name | Description

SECDOMAIN_NAME

Defines an additional security domain.

Example value: myDomain

SECDOMAIN_PASSWORD_STACKING

If defined, the password-stacking module option is enabled and set to the value useFirstPass.

Example value: true

SECDOMAIN_LOGIN_MODULE

The login module to be used.

Defaults to UsersRoles.

SECDOMAIN_USERS_PROPERTIES

The name of the properties file containing user definitions.

Defaults to users.properties.

SECDOMAIN_ROLES_PROPERTIES

The name of the properties file containing role definitions.

Defaults to roles.properties.
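
For example, the following sketch defines a custom domain backed by custom property files; the deployment configuration name and the file names are assumptions:

oc set env dc/eap-app \
 SECDOMAIN_NAME=myDomain \
 SECDOMAIN_PASSWORD_STACKING=true \
 SECDOMAIN_USERS_PROPERTIES=custom-users.properties \
 SECDOMAIN_ROLES_PROPERTIES=custom-roles.properties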

9.11. HTTPS Environment Variables

Variable name | Description

HTTPS_NAME

If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE, enables HTTPS and sets the SSL name.

This should be the value specified as the alias name of your keystore if you created it with the keytool -genkey command.

Example value: example.com

HTTPS_PASSWORD

If defined along with HTTPS_NAME and HTTPS_KEYSTORE, enables HTTPS and sets the SSL key password.

Example value: passw0rd

HTTPS_KEYSTORE

If defined along with HTTPS_PASSWORD and HTTPS_NAME, enables HTTPS and sets the SSL certificate key file to a relative path under EAP_HOME/standalone/configuration.

Example value: ssl.key
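
As a sketch, a keystore can be generated with keytool and referenced through these variables. The alias, password, and file name are assumptions, and the keystore file itself must be made available under EAP_HOME/standalone/configuration, for example via a mounted secret:

keytool -genkey -keyalg RSA -alias example.com -keystore ssl.key -validity 365 -keysize 2048

oc set env dc/eap-app \
 HTTPS_NAME=example.com \
 HTTPS_PASSWORD=passw0rd \
 HTTPS_KEYSTORE=ssl.key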

9.12. Administration Environment Variables

Table 9.7. Administration Environment Variables

Variable name | Description

ADMIN_USERNAME

If both this and ADMIN_PASSWORD are defined, this value is used for the JBoss EAP management user name.

Example value: eapadmin

ADMIN_PASSWORD

The password for the specified ADMIN_USERNAME.

Example value: passw0rd

9.13. S2I

The image includes S2I scripts and Maven.

Maven is currently only supported as a build tool for applications that are intended to be deployed on JBoss EAP-based containers (or related/descendant images) on OpenShift.

Only WAR deployments are supported at this time.

9.13.1. Custom Configuration

It is possible to add custom configuration files for the image. All files put into the configuration/ directory of the source repository are copied into EAP_HOME/standalone/configuration/. For example, to override the default configuration used in the image, add a custom standalone-openshift.xml to the configuration/ directory. See the example for such a deployment.

9.13.1.1. Custom Modules

It is possible to add custom modules. All files from the modules/ directory are copied into EAP_HOME/modules/. See the example for such a deployment.

9.13.2. Deployment Artifacts

By default, artifacts from the source target directory are deployed. To deploy from different directories, set the ARTIFACT_DIR environment variable in the BuildConfig definition. ARTIFACT_DIR is a comma-delimited list. For example: ARTIFACT_DIR=app1/target,app2/target,app3/target

9.13.3. Artifact Repository Mirrors

A repository in Maven holds build artifacts and dependencies of various types, for example, all of the project JARs, library JARs, plug-ins, or any other project-specific artifacts. It also specifies locations from which to download artifacts while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom mirror repository.

Benefits of using a mirror are:

  • Availability of a synchronized mirror, which is geographically closer and faster.
  • Ability to have greater control over the repository content.
  • Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories.
  • Improved build times.

Often, a repository manager can serve as a local cache for a mirror. Assuming that the repository manager is already deployed and reachable externally at https://10.0.0.1:8443/repository/internal/, the S2I build can use this manager by supplying the MAVEN_MIRROR_URL environment variable to the build configuration of the application as follows:

  1. Identify the name of the build configuration to which to apply the MAVEN_MIRROR_URL variable.

    oc get bc -o name
    buildconfig/eap
  2. Update the build configuration of eap with the MAVEN_MIRROR_URL environment variable.

    oc env bc/eap MAVEN_MIRROR_URL="https://10.0.0.1:8443/repository/internal/"
    buildconfig "eap" updated
  3. Verify the setting.

    oc env bc/eap --list
    # buildconfigs eap
    MAVEN_MIRROR_URL=https://10.0.0.1:8443/repository/internal/
  4. Schedule a new build of the application (see the example after the following note).
Note

During application build, you will notice that Maven dependencies are pulled from the repository manager, instead of the default public repositories. Also, after the build is finished, you will see that the mirror is filled with all the dependencies that were retrieved and used during the build.
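
To trigger the new build from step 4, run, for example:

oc start-build eap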

9.13.3.1. Secure Artifact Repository Mirror URLs

To prevent "man-in-the-middle" attacks through the Maven repository, JBoss EAP requires the use of secure URLs for artifact repository mirror URLs.

The URL should specify a secure protocol ("https") and a secure port.

By default, if you specify an insecure URL, an error is returned. You can override this behavior using the -Dinsecure.repositories=WARN property.

9.13.4. Scripts

run
This script uses the openshift-launch.sh script that configures and starts JBoss EAP with the standalone-openshift.xml configuration.
assemble
This script uses Maven to build the source, create a package (WAR), and move it to the EAP_HOME/standalone/deployments directory.

9.13.5. Custom Scripts

You can add custom scripts to run when starting a pod, before JBoss EAP is started.

Any script that is valid to run when starting a pod can be added, including JBoss EAP management CLI scripts.

Two options are available for including scripts when starting JBoss EAP from an image:

  • Mount a configmap to be executed as postconfigure.sh
  • Add an install.sh script in the nominated installation directory

9.13.5.1. Mounting a configmap to execute custom scripts

Mount a configmap when you want to mount a custom script at runtime to an existing image (in other words, an image that has already been built).

To mount a configmap:

  1. Create a configmap with content you want to include in the postconfigure.sh.

    For example, create a directory called extensions in the project root directory to include the scripts postconfigure.sh and extensions.cli and run the following command:

    $ oc create configmap jboss-cli --from-file=postconfigure.sh=extensions/postconfigure.sh --from-file=extensions.cli=extensions/extensions.cli
  2. Mount the configmap into the pods via the deployment configuration (dc).

    $ oc set volume dc/eap-app --add --name=jboss-cli -m /opt/eap/extensions -t configmap --configmap-name=jboss-cli --default-mode='0755' --overwrite

Example postconfigure.sh

#!/usr/bin/env bash
set -x
echo "Executing postconfigure.sh"
$JBOSS_HOME/bin/jboss-cli.sh --file=$JBOSS_HOME/extensions/extensions.cli

Example extensions.cli

embed-server --std-out=echo  --server-config=standalone-openshift.xml
:whoami
quit

9.13.5.2. Using install.sh to execute custom scripts

Use install.sh when you want to include the script as part of the image when it is built.

To execute custom scripts using install.sh:

  1. In the git repository of the project that will be used during s2i build, create a directory called .s2i.
  2. Inside the .s2i directory, add a file called environment, with the following content:

    $ cat .s2i/environment
    CUSTOM_INSTALL_DIRECTORIES=extensions
  3. Create a directory called extensions.
  4. In the extensions directory, create the file postconfigure.sh with contents similar to the following (replace placeholder code with appropriate code for your environment):

    $ cat extensions/postconfigure.sh
    #!/usr/bin/env bash
    echo "Executing patch.cli"
    $JBOSS_HOME/bin/jboss-cli.sh --file=$JBOSS_HOME/extensions/some-cli-example.cli
  5. In the extensions directory, create the file install.sh with contents similar to the following (replace placeholder code with appropriate code for your environment):

    $ cat extensions/install.sh
    #!/usr/bin/env bash
    set -x
    echo "Running $PWD/install.sh"
    injected_dir=$1
    # copy any needed files into the target build.
    cp -rf ${injected_dir} $JBOSS_HOME/extensions

9.13.6. Environment Variables

You can influence the way the build is executed by supplying environment variables to the s2i build command. The environment variables that can be supplied are:

Table 9.8. s2i Environment Variables

Variable name | Description

ARTIFACT_DIR

The .war, .ear, and .jar files from this directory will be copied into the deployments/ directory.

Example value: target

ENABLE_GENERATE_DEFAULT_DATASOURCE

Optional. When included with the value true, the server is provisioned with the default datasource. Otherwise, the default datasource is not included.

GALLEON_PROVISION_DEFAULT_FAT_SERVER

Optional. When included with the value true, and no galleon layers have been set, a default JBoss EAP server is provisioned.

GALLEON_PROVISION_LAYERS

Optional. Instructs the S2I process to provision the specified layers. The value is a comma-separated list of layers to provision, including one base layer and any number of decorator layers.

Example value: jaxrs, sso

HTTP_PROXY_HOST

Host name or IP address of an HTTP proxy for Maven to use.

Example value: 192.168.1.1

HTTP_PROXY_PORT

TCP port of an HTTP proxy for Maven to use.

Example value: 8080

HTTP_PROXY_USERNAME

If supplied along with HTTP_PROXY_PASSWORD, the credentials are used for the HTTP proxy.

Example value: myusername

HTTP_PROXY_PASSWORD

If supplied along with HTTP_PROXY_USERNAME, the credentials are used for the HTTP proxy.

Example value: mypassword

HTTP_PROXY_NONPROXYHOSTS

If supplied, a configured HTTP proxy will ignore these hosts.

Example value: some.example.org|*.example.net

MAVEN_ARGS

Overrides the arguments supplied to Maven during build.

Example value: -e -Popenshift -DskipTests -Dcom.redhat.xpaas.repo.redhatga package

MAVEN_ARGS_APPEND

Appends user arguments supplied to Maven during build.

Example value: -Dfoo=bar

MAVEN_MIRROR_URL

URL of a Maven Mirror/repository manager to configure.

Example value: https://10.0.0.1:8443/repository/internal/

Note that the specified URL should be secure. For details see Section 9.13.3.1, “Secure Artifact Repository Mirror URLs”.

MAVEN_CLEAR_REPO

Optionally clear the local Maven repository after the build.

If the server present in the image is strongly coupled to the local cache, the cache is not deleted and a warning is printed.

Example value: true

APP_DATADIR

If defined, the directory in the source from which data files are copied.

Example value: mydata

DATA_DIR

Directory in the image where data from $APP_DATADIR will be copied.

Example value: EAP_HOME/data
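
As a sketch, these variables can be set on the build configuration before starting a build; the build configuration name eap and the values are assumptions:

oc set env bc/eap ARTIFACT_DIR=app1/target,app2/target MAVEN_ARGS_APPEND="-Dfoo=bar"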

Note

For more information, see Build and Run a Java Application on the JBoss EAP for OpenShift Image, which uses Maven and the S2I scripts included in the JBoss EAP for OpenShift image.

9.14. Single Sign-On image

This image includes support for Red Hat Single Sign-On-enabled applications.

For more information on deploying the Red Hat Single Sign-On for OpenShift image with the JBoss EAP for OpenShift image, see Deploy the Red Hat Single Sign-On-enabled JBoss EAP Image in the Red Hat Single Sign-On for OpenShift guide.

Table 9.9. Single Sign-On environment variables

Variable name | Description

SSO_URL

URL of the Single Sign-On server.

SSO_REALM

Single Sign-On realm for the deployed applications.

SSO_PUBLIC_KEY

Public key of the Single Sign-On realm. This field is optional, but if omitted it can leave the applications vulnerable to man-in-the-middle attacks.

SSO_USERNAME

Single Sign-On user required to access the Single Sign-On REST API.

Example value: mySsoUser

SSO_PASSWORD

Password for the Single Sign-On user defined by the SSO_USERNAME variable.

Example value: 6fedmL3P

SSO_SAML_KEYSTORE

Keystore location for SAML. Defaults to /etc/sso-saml-secret-volume/keystore.jks.

SSO_SAML_KEYSTORE_PASSWORD

Keystore password for SAML. Defaults to mykeystorepass.

SSO_SAML_CERTIFICATE_NAME

Alias for keys/certificate to use for SAML. Defaults to jboss.

SSO_BEARER_ONLY

Single Sign-On client access type. (Optional)

Example value: true

SSO_CLIENT

Path for Single Sign-On redirects back to the application. Defaults to match module-name.

SSO_ENABLE_CORS

If true, enable CORS for Single Sign-On applications. (Optional)

SSO_SECRET

The Single Sign-On client secret for confidential access.

Example value: KZ1QyIq4

SSO_DISABLE_SSL_CERTIFICATE_VALIDATION

If true, the SSL/TLS communication between JBoss EAP and the Red Hat Single Sign-On server is insecure; for example, certificate validation is disabled with curl. Not set by default.

Example value: true

9.15. Transaction Recovery

When a cluster is scaled down, it is possible for transaction branches to be in doubt. There is a Technology Preview automated recovery pod that is meant to complete these branches, but there are rare scenarios, such as a network split, where the recovery may fail. In these cases, manual transaction recovery might be necessary.

Important

The Automated Transaction Recovery feature is only supported for OpenShift 3 and the feature is provided as Technology Preview only.

For OpenShift 4 you can use the EAP Operator to safely recover transactions. See EAP Operator for Safe Transaction Recovery.

9.15.1. Unsupported Transaction Recovery Scenarios

  • JTS transactions

    Because the network endpoint of the parent is encoded in recovery coordinator IORs, recovery cannot work reliably if either the child or parent node recovers with either a new IP address, or if it is intended to be accessed using a virtualized IP address.

  • XTS transactions

    XTS does not work in a clustered scenario for recovery purposes. See JBTM-2742 for details.

  • Transactions propagated over JBoss Remoting are unsupported with OpenShift 3.
Note

Transactions propagated over JBoss Remoting are supported with OpenShift 4 and the EAP operator.

  • Transactions propagated over XATerminator

    Because the EIS is intended to be connected to a single instance of a Java EE application server, there are no well-defined ways to couple these processes.

9.15.2. Manual Transaction Recovery Process

The goal of the following procedure is to find and manually resolve in-doubt branches in cases where automated recovery has failed.

9.15.2.1. Caveats

This procedure only describes how to manually recover transactions that were wholly self-contained within a single JVM. The procedure does not describe how to recover JTA transactions that have been propagated to other JVMs.

Important

There are various network partition scenarios in which OpenShift might start multiple instances of the same pod with the same IP address and same node name and where, due to the partition, the old pod is still running. During manual recovery, this might result in a situation where you might be connected to a pod that has a stale view of the object store. If you think you are in this scenario, it is recommended that all JBoss EAP pods be shut down to ensure that none of the resource managers or object stores are in use.

When you enlist a resource in an XA transaction, it is your responsibility to ensure that each resource type is supported for recovery. For example, it is known that PostgreSQL and MySQL are well-behaved with respect to recovery, but for others, such as A-MQ and JDV resource managers, you should check the documentation of the specific OpenShift release.

The deployment must use a JDBC object store.

Important

The transaction manager relies on the uniqueness of node identifiers. The maximum byte length of an XID is set by the XA specification and cannot be changed. Due to the data that the JBoss EAP for OpenShift image must include in the XID, this leaves room for 23 bytes in the node identifier.

OpenShift coerces the node identifier to fit this 23 byte limit:

  • For all node names, even those under 23 bytes, the - (dash) character is stripped out.
  • If the name is still over 23 bytes, characters are truncated from the beginning of the name until length of the name is within the 23 byte limit.

However, this process might impact the uniqueness of the identifier. For example, the names aaa123456789012345678m0jwh and bbb123456789012345678m0jwh are both truncated to 123456789012345678m0jwh, which breaks the expected uniqueness of the names. In another example, this-pod-is-m0jwh and thispod-is-m0jwh are both truncated to thispodism0jwh, again breaking the uniqueness of the names.

It is your responsibility to ensure that the node names you configure are unique, keeping in mind the above truncation process.
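
A minimal Bash sketch of this coercion rule, useful for checking your own names before deployment (strip dashes, then keep at most the trailing 23 characters):

coerce_node_id() {
  local name="${1//-/}"              # strip all dashes
  if [ "${#name}" -gt 23 ]; then
    name="${name: -23}"              # keep only the trailing 23 characters
  fi
  echo "$name"
}

coerce_node_id this-pod-is-m0jwh            # prints thispodism0jwh
coerce_node_id aaa123456789012345678m0jwh   # prints 123456789012345678m0jwh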

9.15.2.2. Prerequisite

It is assumed the OpenShift instance has been configured with a JDBC store, and that the store tables are partitioned using a table prefix corresponding to the pod name. This should be automatic whenever a JBoss EAP deployment is in use. This is different from the automated recovery example, which uses a file store with split directories on a shared volume. You can verify that the JBoss EAP instance is using a JDBC object store by looking at the configuration of the transactions subsystem in a running pod:

  1. Determine if the /opt/eap/standalone/configuration/standalone-openshift.xml configuration file contains an element for the transaction subsystem:

    <subsystem xmlns="urn:jboss:domain:transactions:3.0">
  2. If the JDBC object store is in use, then there is an entry similar to the following:

    <jdbc-store datasource-jndi-name="java:jboss/datasources/jdbcstore_postgresql"/>
    Note

    The JNDI name identifies the datasource used to store the transaction logs.

9.15.2.3. Procedure

Important

The following procedure details the process of manual transaction recovery solely for datasources.

  1. Use the database vendor tooling to list the XIDs (transaction branch identifiers) for in-doubt branches. It is necessary to list XIDs for all datasources that were in use by any deployments running on the pod that failed or was scaled down. Refer to the vendor documentation for the database product in use.
  2. For each such XID, determine which pod created the transaction and check to see if that pod is still running.

    1. If it is running, then leave the branch alone.
    2. If the pod is not running, assume it was removed from the cluster and you must apply the manual resolution procedure described here. Look in the transaction log storage that was used by the failed pod to see if there is a corresponding transaction log:

      1. If there is a log, then manually commit the XID using the vendor tooling.
      2. If there is not a log, assume it is an orphaned branch and roll back the XID using the vendor tooling.

The rest of this procedure explains in detail how to carry out each of these steps.

9.15.2.3.1. Resolving In-doubt Branches

First, find all the resources that the deployment is using.

It is recommended that you do this using the JBoss EAP management CLI. Although the resources should be defined in the JBoss EAP standalone-openshift.xml configuration file, there are other ways they can be made available to the transaction subsystem within the application server. For example, this can be done using a file in a deployment, or dynamically using the management CLI at runtime.

  1. Open a terminal on a pod running a JBoss EAP instance in the cluster of the failed pod. If there is no such pod, scale up to one.
  2. Create a management user using the /opt/eap/bin/add-user.sh script.
  3. Log into the management CLI using the /opt/eap/bin/jboss-cli.sh script.
  4. List the datasources configured on the server. These are the ones that may contain in-doubt transaction branches.

    /subsystem=datasources:read-resource
    {
        "outcome" => "success",
        "result" => {
            "data-source" => {
                "ExampleDS" => undefined,
                ...
            },
            ...
        }
    }
  5. Once you have the list, find the connection URL for each of the datasources. For example:

    /subsystem=datasources/data-source=ExampleDS:read-attribute(name=connection-url)
    {
        "outcome" => "success",
        "result" => "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE",
        "response-headers" => {"process-state" => "restart-required"}
    }
  6. Connect to each datasource and list any in-doubt transaction branches.

    Note

    The table name that stores in-doubt branches will be different for each datasource vendor.

    JBoss EAP has a default SQL query tool (H2) that you can use to check each database. For example:

    java -cp /opt/eap/modules/system/layers/base/com/h2database/h2/main/h2-1.3.173.jar \
    org.h2.tools.Shell \
    -url "jdbc:postgresql://localhost:5432/postgres" \
    -user sa \
    -password sa \
    -sql "select gid from pg_prepared_xacts;"

    Alternatively, you can use the resource’s native tooling. For example, for a PostgreSQL datasource called sampledb, you can use the OpenShift client tools to remotely log in to the pod and query the in-doubt transaction table:

    $ oc rsh postgresql-2-vwf9n # rsh to the named pod
    sh-4.2$ psql sampledb
    psql (9.5.7)
    Type "help" for help.
    
    sampledb=# select gid from pg_prepared_xacts;
    131077_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGHAtanRhLWNyYXNoLXJlYy0zLXAyY2N3_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGgAAAAEAAAAA
9.15.2.3.2. Extract the Global Transaction ID and Node Identifier from Each XID

When all XIDs for in-doubt branches are identified, convert the XIDs into a format that you can compare to the logs stored in the transaction tables of the transaction manager.

For example, the following Bash script can be used to perform this conversion. Assuming that $PG_XID holds the XID from the select statement above, then the JBoss EAP transaction ID can be obtained as follows:

PG_XID="$1"
IFS='_' read -ra lines <<< "$PG_XID"
[[ "${lines[0]}" = 131077 ]] || exit 0; # this script only works for our own FORMAT ID
PG_TID=${lines[1]}

a=($(echo "$PG_TID"| base64 -d  | xxd -ps |tr -d '\n' | while read -N16 i ; do echo 0x$i ; done))
b=($(echo "$PG_TID"| base64 -d  | xxd -ps |tr -d '\n' | while read -N8 i ; do echo 0x$i ; done))
c=("${b[@]:4}") # put the last 3 32-bit hexadecimal numbers into array c
# the negative elements of c need special handling since printf below only works with positive
# hexadecimal numbers
for i in "${!c[@]}"; do
  arg=${c[$i]}
  # inspect the MSB to see if arg is negative - if so convert it from a 2's complement number
  [[ $(($arg>>31)) = 1 ]] && x=$(echo "obase=16; $(($arg - 0x100000000 ))" | bc) || x=$arg
  if [[ ${x:0:1} = \- ]] ; then # see if the first character is a minus sign
     neg[$i]="-";
     c[$i]=0x${x:1} # strip the minus sign and make it hex for use with printf below
  else
     neg[$i]=""
     c[$i]=$x
  fi
done
EAP_TID=$(printf %x:%x:${neg[0]}%x:${neg[1]}%x:${neg[2]}%x ${a[0]} ${a[1]} ${c[0]} ${c[1]} ${c[2]})

After completion, the $EAP_TID variable holds the global transaction ID of the transaction that created this XID. The node identifier of the pod that started the transaction is given by the output of the following bash command:

echo "$PG_TID"| base64 -d | tail -c +29
Note

The node identifier starts from the 29th character of the PostgreSQL global transaction ID field.

  • If this pod is still running, then leave this in-doubt branch alone since the transaction is still in flight.
  • If this pod is not running, then you need to search the relevant transaction log storage for the transaction log. The log storage is located in a JDBC table, which is named following the os<node-identifier>jbosststxtable pattern.

    • If there is no such table, leave the branch alone as it is owned by some other transaction manager. The URL for the datasource containing this table is defined in the transaction subsystem description shown below.
    • If there is such a table, look for an entry that matches the global transaction ID.

      • If there is an entry in the table that matches the global transaction ID, then the in-doubt branch needs to be committed using the datasource vendor tooling as described below.
      • If there is no such entry, then the branch is an orphan and can safely be rolled back.

An example of how to commit an in-doubt PostgreSQL branch is shown below:

$ oc rsh postgresql-2-vwf9n
sh-4.2$ psql sampledb
psql (9.5.7)
Type "help" for help.
sampledb=# commit prepared '131077_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGHAtanRhLWNyYXNoLXJlYy0zLXAyY2N3_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGgAAAAEAAAAA';
Important

Repeat this procedure for all datasources and in-doubt branches.

9.15.2.3.3. Obtain the List of Node Identifiers of All Running JBoss EAP Instances in Any Cluster that Can Contact the Resource Managers

Node identifiers are configured to match the pod name. You can obtain the pod names in use with the oc command. Use the following command to list the running pods:

$ oc get pods | grep Running
eap-manual-tx-recovery-app-4-26p4r   1/1       Running     0          23m
postgresql-2-vwf9n                   1/1       Running     0          41m

For each running pod, look in the output of the pod’s log and obtain the node name. For example, for the first pod shown in the above output, use the following command:

$ oc logs eap-manual-tx-recovery-app-4-26p4r | grep "jboss.node.name" | head -1
jboss.node.name = tx-recovery-app-4-26p4r
Important

The JBoss node name identifier is always truncated to a maximum length of 23 characters by removing characters from the beginning and retaining the trailing characters.

9.15.2.3.4. Find the Transaction Logs
  1. The transaction logs reside in a JDBC-backed object store. The JNDI name of this store is defined in the transaction subsystem definition of the JBoss EAP configuration file.
  2. Look in the configuration file to find the datasource definition corresponding to the above JNDI name.
  3. Use the JNDI name to derive the connection URL.
  4. You can use the URL to connect to the database and issue a select query on the relevant in-doubt transaction table.

    Alternatively, if you know which pod the database is running on, and you know the name of the database, it might be easier to open an OpenShift remote shell into the pod and use the database tooling directly.

    For example, if the JDBC store is hosted by a PostgreSQL database called sampledb running on pod postgresql-2-vwf9n, then you can find the transaction logs using the following commands:

    Note

    The ostxrecoveryapp426p4rjbosststxtable table name listed in the following command has been chosen because it follows the pattern for JDBC table names holding the log storage entries. In your environment the table name will have a similar form:

    • Starting with the os prefix.
    • The middle part is derived from the JBoss node name above, with any "-" (dash) characters removed.
    • Finally, the jbosststxtable suffix is appended to create the final name of the table.
    $ oc rsh postgresql-2-vwf9n
    sh-4.2$ psql sampledb
    psql (9.5.7)
    Type "help" for help.
    
    sampledb=# select uidstring from ostxrecoveryapp426p4rjbosststxtable where TYPENAME='StateManager/BasicAction/TwoPhaseCoordinator/AtomicAction';
                  uidstring
     -------------------------------------
     0:ffff0a81009d:33789827:5a68b2bf:40
     (1 row)
9.15.2.3.5. Cleaning Up the Transaction Logs for Reconciled In-doubt Branches
Warning

Do not delete the log unless you are certain that there are no remaining in-doubt branches.

When all the branches for a given transaction are complete, and all potential resources managers have been checked, including A-MQ and JDV, it is safe to delete the transaction log.

Issue the following command, specifying the transaction log to be removed by using the appropriate uidstring:

DELETE FROM ostxrecoveryapp426p4rjbosststxtable where uidstring = UIDSTRING
Important

If you do not delete the log, then completed transactions which failed after prepare, but which have now been resolved, will never be removed from the transaction log storage. The consequence of this is that unnecessary storage is used and future manual reconciliation will be more difficult.

9.16. Included JBoss Modules

The table below lists the JBoss Modules included in the JBoss EAP for OpenShift image.

Table 9.10. Included JBoss Modules

JBoss Module

org.jboss.as.clustering.common

org.jboss.as.clustering.jgroups

org.jboss.as.ee

org.jboss.logmanager.ext

org.jgroups

org.openshift.ping

net.oauth.core

9.17. EAP Operator: API Information

The EAP operator introduces the following APIs:

9.17.1. WildFlyServer

WildFlyServer defines a custom JBoss EAP resource.

Table 9.11. WildFlyServer

Field | Description | Scheme | Required

metadata | Standard object’s metadata | ObjectMeta v1 meta | false
spec | Specification of the desired behaviour of the JBoss EAP deployment | WildFlyServerSpec | true
status | Most recent observed status of the JBoss EAP deployment. Read-only. | WildFlyServerStatus | false

9.17.2. WildFlyServerList

WildFlyServerList defines a list of JBoss EAP deployments.

Table 9.12. WildFlyServerList

Field | Description | Scheme | Required

metadata | Standard list’s metadata | metav1.ListMeta | false
items | List of WildFlyServer | WildFlyServer | true

9.17.3. WildFlyServerSpec

WildFlyServerSpec is a specification of the desired behavior of the JBoss EAP resource.

It uses a StatefulSet with a pod spec that mounts the volume specified by storage on /opt/jboss/wildfly/standalone/data.

Table 9.13. WildFlyServerSpec

Field | Description | Scheme | Required

applicationImage | Name of the application image to be deployed | string | false
replicas | The desired number of replicas for the application | int32 | true
standaloneConfigMap | Spec to specify how a standalone configuration can be read from a ConfigMap | StandaloneConfigMapSpec | false
storage | Storage spec to specify how storage should be used. If omitted, an EmptyDir is used (that does not persist data across pod restart) | StorageSpec | false
serviceAccountName | Name of the ServiceAccount to use to run the JBoss EAP pods | string | false
envFrom | List of environment variables present in the containers from configMap or secret | corev1.EnvFromSource | false
env | List of environment variables present in the containers | corev1.EnvVar | false
secrets | List of secret names to mount as volumes in the containers. Each secret is mounted as a read-only volume at /etc/secrets/<secret name> | string | false
configMaps | List of ConfigMap names to mount as volumes in the containers. Each ConfigMap is mounted as a read-only volume under /etc/configmaps/<config map name> | string | false
disableHTTPRoute | Disables the creation of a route to the HTTP port of the application service (false if omitted) | boolean | false
sessionAffinity | Whether connections from the same client IP are passed to the same JBoss EAP instance/pod each time (false if omitted) | boolean | false
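
For illustration, a minimal custom resource using some of these fields might be applied as follows. This is a sketch only: the apiVersion and the image name are assumptions, so consult the CRD installed with your operator for the exact schema.

oc apply -f - <<'EOF'
apiVersion: wildfly.org/v1alpha1   # assumption: check the installed CRD
kind: WildFlyServer
metadata:
  name: eap-app
spec:
  applicationImage: "quay.io/example/eap-app:latest"   # assumption: your application image
  replicas: 2
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 1Gi
EOF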

9.17.4. StorageSpec

StorageSpec defines the configured storage for a WildFlyServer resource. If neither an emptyDir nor a volumeClaimTemplate is defined, a default EmptyDir is used.

The EAP Operator configures the StatefulSet using information from this StorageSpec to mount a volume dedicated to the standalone/data directory used by JBoss EAP to persist its own data (for example, the transaction log). If an EmptyDir is used, the data does not survive a pod restart. If the application deployed on JBoss EAP relies on transactions, specify a volumeClaimTemplate so that the same persistent volume can be reused upon pod restarts.

Table 9.14. StorageSpec

Field | Description | Scheme | Required

emptyDir | EmptyDirVolumeSource to be used by the JBoss EAP StatefulSet | corev1.EmptyDirVolumeSource | false
volumeClaimTemplate | A PersistentVolumeClaim spec to configure resource requirements to store the JBoss EAP standalone data directory. The name of the template is derived from the WildFlyServer name. The corresponding volume is mounted in ReadWriteOnce access mode. | corev1.PersistentVolumeClaim | false

9.17.5. StandaloneConfigMapSpec

StandaloneConfigMapSpec defines how JBoss EAP standalone configuration can be read from a ConfigMap. If omitted, JBoss EAP uses its standalone.xml configuration from its image.

Table 9.15. StandaloneConfigMapSpec

Field | Description | Scheme | Required

name | Name of the ConfigMap containing the standalone configuration XML file | string | true
key | Key of the ConfigMap whose value is the standalone configuration XML file. If omitted, the spec finds the standalone.xml key | string | false

9.17.6. WildFlyServerStatus

WildFlyServerStatus is the most recent observed status of the JBoss EAP deployment. Read-only.

Table 9.16. WildFlyServerStatus

Field | Description | Scheme | Required

replicas | The actual number of replicas for the application | int32 | true
hosts | Hosts that route to the application HTTP service | string | true
pods | Status of the pods | PodStatus | true
scalingdownPods | Number of pods that are under the scale-down cleaning process | int32 | true

9.17.7. PodStatus

PodStatus is the most recent observed status of a pod running the JBoss EAP application.

Table 9.17. PodStatus

Field | Description | Scheme | Required

name | Name of the pod | string | true
podIP | IP address allocated to the pod | string | true
state | State of the pod in the scale-down process. The state is ACTIVE by default, which means it serves requests | string | false




