Red Hat JBoss Enterprise Application Platform for OpenShift

Red Hat JBoss Middleware for OpenShift 3

Guide to developing with Red Hat JBoss Enterprise Application Platform for OpenShift

Red Hat JBoss Middleware for OpenShift Documentation Team


Guide to using Red Hat JBoss Enterprise Application Platform for OpenShift

Chapter 1. Introduction

1.1. What is JBoss Enterprise Application Platform (JBoss EAP)?

Red Hat JBoss Enterprise Application Platform 7.0 (JBoss EAP 7) is an application server that works as a middleware platform, is built on open standards, and is compliant with the Java EE 7 specification.

It is based on WildFly 10, and provides preconfigured options for features such as high-availability clustering, messaging, and distributed caching.

With JBoss EAP you can develop, deploy, and run applications using the various APIs and services that JBoss EAP provides. JBoss EAP includes a modular structure that allows you to enable services only when required, which results in improved startup speed. The web-based management console and management command line interface (CLI) make editing XML configuration files unnecessary and add the ability to script and automate tasks. In addition, JBoss EAP includes APIs and development frameworks for quickly developing secure and scalable Java EE applications. JBoss EAP 7 is a certified implementation of the Java EE 7 full and web profile specifications.

1.2. How Does JBoss EAP Work on OpenShift?

Red Hat offers a containerized image for the Red Hat JBoss Enterprise Application Platform that is designed for use with OpenShift. Using this image, developers can quickly and easily build, scale, and test applications that are deployed across hybrid environments.

1.3. Comparison: JBoss EAP and EAP for OpenShift

There are some notable differences when comparing the JBoss EAP product with the available EAP for OpenShift image. The following table describes these differences and notes which features are included or supported in the current version of EAP for OpenShift.

Table 1.1. Differences between JBoss EAP and EAP for OpenShift


JBoss EAP Management Console

Not included

The JBoss EAP Management Console is not included in this release of EAP for OpenShift.

Domain mode

Not supported

Although domain mode is not supported, OpenShift manages the creation and distribution of applications in the containers.

Default root page

Disabled

The default root page is disabled, but you can deploy your own application to the root context as ROOT.war.

Remote messaging

Supported

A-MQ for inter-pod and remote messaging is supported. HornetQ is only supported for intra-pod messaging and is only enabled when A-MQ is absent. EAP for OpenShift 7 includes Artemis as a replacement for HornetQ.

1.4. Comparison: EAP for OpenShift 6.4 and 7.0

Red Hat offers two EAP for OpenShift images. The first is based on JBoss EAP 6.4 and the second is based on JBoss EAP 7. There are several differences between the two images:

JBoss Web is replaced by Undertow

  • JBoss EAP 6.4 image uses JBoss Web.
  • JBoss EAP 7 image uses Undertow instead of JBoss Web. This change only affects users implementing custom JBoss Web Valves in their applications. Affected users must refer to the Red Hat JBoss EAP 7 documentation for details about migrating JBoss EAP Web Valve handlers.

HornetQ is replaced by Artemis

  • The JBoss EAP 6.4 image only uses HornetQ for intra-pod messaging when A-MQ is absent.
  • The EAP for OpenShift 7 image uses Artemis instead of HornetQ. This change resulted in renaming the HORNETQ_QUEUES and HORNETQ_TOPICS environment variables to MQ_QUEUES and MQ_TOPICS, respectively. For complete instructions on migrating applications from JBoss EAP 6.4 to 7, see the JBoss EAP 7 Migration Guide.

1.5. Version Compatibility and Support

EAP for OpenShift is updated frequently. Therefore, it is important to understand which versions of the images are compatible with which versions of OpenShift. Not all images are compatible with all OpenShift 3.x versions. Visit the Red Hat Customer Portal and see the OpenShift and Atomic Platform Tested Integrations page for more information on version compatibility and support.

1.6. Persistent Templates

The EAP database templates, which deploy EAP and database pods, have both ephemeral and persistent variations. For example, for an EAP application backed by a MongoDB database, there are eap70-mongodb-s2i and eap70-mongodb-persistent-s2i templates.

Persistent templates include an environment variable to provision a persistent volume claim, which will bind with an available persistent volume to be used as a storage volume for the EAP for OpenShift deployment. Information, such as timer schema, log handling, or data updates, is stored on the storage volume, rather than in ephemeral container memory. This information will persist if the pod goes down for any reason, such as project upgrade, deployment rollback, or unexpected error.

Without a persistent storage volume for the deployment, this information is stored in the container memory only, and is lost if the pod goes down for any reason.

For example, an EE timer backed by persistent storage continues to run if the pod is restarted; any events triggered by the timer are enacted once the application is running again.
Conversely, if the EE timer runs in container memory, the timer status is lost if the pod restarts, and the timer starts from the beginning when the pod is running again.

Chapter 2. Installation and Configuration

2.1. Overview

Before installing EAP for OpenShift on your OpenShift instance, you must first determine whether you are installing EAP for OpenShift in a production or a non-production environment. Production environments with general public access require Secure Sockets Layer (SSL) encryption for network communication, also known as an HTTPS connection. In this case you must use a signed certificate from a Certificate Authority (CA).

However, if you are installing EAP for OpenShift for demonstration purposes, proof-of-concept (POC) designs, or environments with internal access only, unencrypted and insecure communication may be sufficient. The instructions referenced here describe how to create the required keystore for EAP for OpenShift with a self-signed or a purchased SSL certificate.


Using a self-signed SSL certificate to create a keystore is not intended for production environments. For production environments, or wherever SSL-encrypted communication is required, you must use an SSL certificate that is purchased from a verified CA.

2.2. Key Terms

The following table describes the various terms that are used within the context of this topic.

Table 2.1. Terminology used in this topic

Key term | Description

SSL

Secure Sockets Layer (SSL) encrypts network traffic between the client and the EAP web server, providing an HTTPS connection between them.

HTTPS

HTTPS is a protocol that provides an SSL-encrypted connection between a client and a server.

Keystore

A Java keystore is a repository to store SSL/TLS certificates and distribute them to applications for encrypted communication.

Secret

A secret contains the Java keystore that is passed to EAP for OpenShift, along with a password to access it. The secret is then used in scripts to configure HTTPS access.

2.3. Initial Setup

The instructions in this guide follow on from and assume an OpenShift instance similar to that created in the OpenShift Primer.

2.4. Getting Started

After you have completed the Section 2.3, “Initial Setup” instructions, this topic helps you get started with EAP for OpenShift by performing the required preliminary steps before you can install the image on OpenShift. This process consists of the following steps:

  • Step 1: Create project
  • Step 2: Create service account
  • Step 3: Create keystore from SSL certificate
  • Step 4: Create secret from keystore
  • Step 5: Add secret to service account
  • Step 6: Create and deploy EAP application

The following instructions describe how to perform each step.

Step 1: Create a new project in OpenShift

A project allows a group of users to organize and manage content separately from other groups. Create a project in OpenShift with the following command.

$ oc new-project <project-name>

You can then make this new project the current project with the following command:

$ oc project <project-name>

Step 2: Create an EAP service account in your project

Service accounts are API objects that exist within each project. Create a service account named eap-service-account in the OpenShift project that you created in step 1. For the EAP 7 image, specify eap7-service-account as the service account name.

$ oc create serviceaccount eap-service-account -n <project-name>

After creating the service account, configure the access permissions for it with the following command, specifying the correct name depending on the EAP image version.

$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)

The service account that you create must be configured with the correct permissions, including the ability to view pods in Kubernetes. This is required in order for clustering to work. You can view the top of the log files to see whether the correct service account permissions have been configured.

Step 3: Create a keystore from SSL certificate

EAP for OpenShift requires a keystore to be imported to properly install and configure the image on your OpenShift instance. Note that self-signed certificates do not provide secure communication and are intended for internal testing purposes.


For production environments Red Hat recommends that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS).

See Generate a SSL Encryption Key and Certificate for more information on how to create a keystore with self-signed or purchased SSL certificates.

Step 4: Create a secret from the keystore

Next, create a secret from the keystore that you created in step 3 with the following command.

$ oc secrets new <secret-name> <keystore-filename>.jks

Step 5: Add the secret to your service account

Now add the secret created in step 4 to the eap-service-account that was created in step 2. You can do this with the following command.

$ oc secrets add serviceaccount/eap-service-account secret/<secret-name>

Step 6: Create and deploy the EAP application

You can now create an EAP application using the defined image, or you can use the basic S2I template.

To create an EAP application using the defined image, run the following command.

$ oc new-app <jboss-eap-7/eap70-openshift>

Alternatively, you can create an EAP application using the basic S2I template with the following command.

$ oc new-app <eap7-basic-s2i>

2.5. Configuring EAP for OpenShift

The recommended method to run and configure EAP for OpenShift is to use the OpenShift S2I process together with the application template parameters and environment variables.


The variable EAP_HOME is used to denote the path to the JBoss EAP installation. Replace this variable with the actual path to your JBoss EAP installation.

The S2I process for EAP for OpenShift works as follows:

  1. If a pom.xml file is present in the source repository, a Maven build process is triggered that uses the contents of the $MAVEN_ARGS environment variable. Although you can specify arguments or options with the $MAVEN_ARGS environment variable, Red Hat recommends that you use the $MAVEN_ARGS_APPEND environment variable instead. The $MAVEN_ARGS_APPEND variable takes the default arguments from $MAVEN_ARGS and appends the options from $MAVEN_ARGS_APPEND to them. By default, the OpenShift profile uses the Maven package goal, which includes system properties for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo). The results of a successful Maven build are copied to EAP_HOME/standalone/deployments. This includes all JAR, WAR, and EAR files from the build output directory specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target directory.

To use Maven behind a proxy on the EAP for OpenShift image, set the $HTTP_PROXY_HOST and $HTTP_PROXY_PORT environment variables. Optionally, you can also set the $HTTP_PROXY_USERNAME, $HTTP_PROXY_PASSWORD, and $HTTP_PROXY_NONPROXYHOSTS variables.

  2. Any JAR, WAR, and EAR files in the deployments source repository directory are copied to EAP_HOME/standalone/deployments.
  3. All files in the configuration source repository directory are copied to EAP_HOME/standalone/configuration. If you want to use a custom JBoss EAP configuration file, name it standalone-openshift.xml.
  4. All files in the modules source repository directory are copied to EAP_HOME/modules.

See the Artifact Repository Mirrors section for additional guidance on how to instruct the S2I process to utilize the custom Maven artifacts repository mirror.
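The interaction between $MAVEN_ARGS and $MAVEN_ARGS_APPEND described above can be sketched in shell. The default argument string below is an illustrative assumption, not the image's exact default:

```shell
# Sketch of how the final Maven invocation is assembled during the S2I build.
# The fallback MAVEN_ARGS value below is an assumption for illustration only.
MAVEN_ARGS="${MAVEN_ARGS:--Popenshift -DskipTests -Dcom.redhat.xpaas.repo package}"
MAVEN_ARGS_APPEND="-Dmy.custom.property=value"   # hypothetical user-supplied extras

# MAVEN_ARGS_APPEND takes the default arguments and appends options to them.
FINAL_MAVEN_ARGS="$MAVEN_ARGS $MAVEN_ARGS_APPEND"
echo "mvn $FINAL_MAVEN_ARGS"
```

Setting only $MAVEN_ARGS_APPEND preserves the image defaults, which is why it is the recommended variable for customization.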

2.6. Build Extensions and Project Artifacts

The EAP for OpenShift image extends database support in OpenShift using various artifacts. These artifacts are included in the built image through different mechanisms:

  • S2I artifacts that are injected into the image during the S2I process, and
  • Runtime artifacts from environment files provided through the OpenShift Secret mechanism.

Build Extensions Process

2.6.1. S2I Artifacts

The S2I artifacts include modules, drivers, and additional generic deployments that provide the necessary configuration infrastructure required for the deployment. This configuration is built into the image during the S2I process so that only the datasources and associated resource adapters need to be configured at runtime.

Refer to the Artifact Repository Mirrors section for additional guidance on how to instruct the S2I process to utilize a custom Maven artifact repository mirror.

Modules, Drivers, and Generic Deployments

There are three options for including these S2I artifacts in the EAP for OpenShift image:

  1. Include the artifact in the application source deployment directory. The artifact is downloaded during the build and injected into the image. This is similar to deploying an application on the EAP for OpenShift image.
  2. Include the CUSTOM_INSTALL_DIRECTORIES environment variable, a comma-separated list of directories used for installation and configuration of artifacts for the image during the S2I process. There are two methods for including this information in the S2I process:
    2a) An install script in the nominated installation directory. The install script executes during the S2I process. Script example:

    source /usr/local/s2i/
    install_deployments ${injected_dir}/injected-deployments.war
    install_modules ${injected_dir}/modules
    configure_drivers ${injected_dir}/drivers.env

    The install script is responsible for customizing the base image using the API provided by the sourced shell script, which contains the functions used to install and configure the modules, drivers, and generic deployments.

    Functions provided:

    • install_modules
    • configure_drivers
    • install_deployments


      A module is a logical grouping of classes used for class loading and dependency management. Modules are defined in the EAP_HOME/modules/ directory of the application server. Each module exists as a subdirectory, for example EAP_HOME/modules/org/apache/. Each module directory then contains a slot subdirectory, which defaults to main and contains the module.xml configuration file and any required JAR files.

      module.xml example:

      <?xml version="1.0" encoding="UTF-8"?>
      <module xmlns="urn:jboss:module:1.0" name="org.apache.derby">
          <resources>
              <resource-root path="derby-"/>
              <resource-root path="derbyclient-"/>
          </resources>
          <dependencies>
              <module name="javax.api"/>
              <module name="javax.transaction.api"/>
          </dependencies>
      </module>

      The install_modules function copies the respective JAR files to the modules directory in EAP, along with the module.xml file.


      Drivers are installed as modules. The driver is then configured by the configure_drivers function, the configuration properties for which are defined in a runtime artifact environment file.

      drivers.env example:
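      A minimal sketch of what such a file might contain for the Derby driver module shown above. The property names and values here are illustrative assumptions, not a documented contract:

      #DRIVER
      DRIVERS=DERBY
      DERBY_DRIVER_NAME=derby
      DERBY_DRIVER_MODULE=org.apache.derby
      DERBY_DRIVER_CLASS=org.apache.derby.jdbc.EmbeddedDriver
      DERBY_XA_DATASOURCE_CLASS=org.apache.derby.jdbc.EmbeddedXADataSource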


      Generic Deployments

      Deployable archive files, such as JARs, WARs, RARs, or EARs, can be deployed using the install_deployments function supplied by the API contained in the EAP for OpenShift image.

      2b) If the CUSTOM_INSTALL_DIRECTORIES environment variable has been declared but no scripts are found in the custom installation directories, the following artifact directories will be copied to their respective destinations in the built image:
      - modules/* copied to $JBOSS_HOME/modules/system/layers/openshift
      - configuration/* copied to $JBOSS_HOME/standalone/configuration
      - deployments/* copied to $JBOSS_HOME/standalone/deployments
      This is a basic configuration approach compared to the alternative, and requires the artifacts to be structured appropriately.
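The fallback copy behavior described in 2b) can be simulated locally. The destination directories follow the documentation above; the temporary paths and file names are illustrative stand-ins:

```shell
# Simulate the no-install-script fallback copy in scratch directories.
# SRC plays the role of a custom installation directory; JBOSS_HOME plays
# the role of the EAP install root inside the image.
SRC=$(mktemp -d)
JBOSS_HOME=$(mktemp -d)

mkdir -p "$SRC/modules/org/example/main" "$SRC/configuration" "$SRC/deployments"
touch "$SRC/deployments/app.war" "$SRC/configuration/standalone-openshift.xml"

mkdir -p "$JBOSS_HOME/modules/system/layers/openshift" \
         "$JBOSS_HOME/standalone/configuration" \
         "$JBOSS_HOME/standalone/deployments"

# Copy each artifact directory to its documented destination.
cp -r "$SRC/modules/." "$JBOSS_HOME/modules/system/layers/openshift/"
cp -r "$SRC/configuration/." "$JBOSS_HOME/standalone/configuration/"
cp -r "$SRC/deployments/." "$JBOSS_HOME/standalone/deployments/"

ls "$JBOSS_HOME/standalone/deployments"
```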

2.6.2. Runtime Artifacts

Datasources

There are three types of datasources:
  1. Default internal datasources. These are PostgreSQL, MySQL, and MongoDB. These datasources are available on OpenShift by default through the Red Hat Registry and do not require additional environment files to be configured. Set the DB_SERVICE_PREFIX_MAPPING environment variable to the name of the OpenShift service for the database to be discovered and used as a datasource.
  2. Other internal datasources. These are datasources not available by default through the Red Hat Registry but run on OpenShift. Configuration of these datasources is provided by environment files added to OpenShift Secrets.
  3. External datasources that are not run on OpenShift. Configuration of external datasources is provided by environment files added to OpenShift Secrets.

Datasource environment file example:

# derby datasource
DATASOURCES=ACCOUNTS
ACCOUNTS_DATABASE=accounts
ACCOUNTS_USERNAME=derby
ACCOUNTS_PASSWORD=derby

# Connection info for xa datasource
ACCOUNTS_XA_CONNECTION_PROPERTY_DatabaseName=accounts

# _HOST and _PORT are required, but not used
ACCOUNTS_SERVICE_HOST=dummy
ACCOUNTS_SERVICE_PORT=1527

The DATASOURCES property is a comma-separated list of datasource property prefixes. These prefixes are then prepended to all properties for that datasource. Multiple datasources can then be included in a single environment file. Alternatively, each datasource can be provided in separate environment files.

Datasources contain two types of properties: connection pool-specific properties and database driver-specific properties. Database driver-specific properties use the generic XA_CONNECTION_PROPERTY, because the driver itself is configured as a driver S2I artifact. The suffix of the driver property is specific to the particular driver for the datasource.
In the above example, ACCOUNTS is the datasource prefix, XA_CONNECTION_PROPERTY is the generic driver property, and DatabaseName is the property specific to the driver.
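The prefix convention can be illustrated with shell variable indirection. The lookup code below is a sketch of the idea using the ACCOUNTS prefix, not the image's actual launch script:

```shell
# Resolve prefixed datasource properties, following the ACCOUNTS example.
# The property values are illustrative.
ACCOUNTS_USERNAME=derby
ACCOUNTS_PASSWORD=derby
ACCOUNTS_DATABASE=accounts

prefix=ACCOUNTS
for prop in USERNAME PASSWORD DATABASE; do
  var="${prefix}_${prop}"
  echo "$prop=${!var}"      # bash indirect expansion
done
```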

The datasources environment files are added to the OpenShift Secret for the project. These environment files are then called within the template using the ENV_FILES environment property, the value of which is a comma-separated list of fully qualified environment files.

For example:

{
    "Name": "ENV_FILES",
    "Value": "/etc/extensions/datasources1.env,/etc/extensions/datasources2.env"
}

Resource Adapters

Configuration of resource adapters is provided by environment files added to OpenShift Secrets.

Table 2.2. Resource Adapter Properties



The identifier of the resource adapter as specified in the server configuration file.


The resource adapter archive.


The slot subdirectory, which contains the module.xml configuration file and any required JAR files.


The JBoss Module ID where the object factory Java class can be loaded from.


The fully qualified class name of a managed connection factory or admin object.


The JNDI name for the connection factory.


Directory where the data files are stored.


Set AllowParentPaths to false to disallow '..' in paths. This prevents requesting files that are not contained in the parent directory.


The maximum number of connections for a pool. No more connections will be created in each sub-pool.


The minimum number of connections for a pool.


Specifies if the pool should be prefilled. Changing this value requires a server restart.


How the pool should be flushed in case of an error. Valid values are: FailingConnectionOnly (default), IdleConnections, and EntirePool.

The RESOURCE_ADAPTERS property is a comma-separated list of resource adapter property prefixes. These prefixes are then prepended to all properties for that resource adapter. Multiple resource adapters can then be included in a single environment file. Alternatively, each resource adapter can be provided in separate environment files.

Resource adapter environment file example:


The resource adapter environment files are added to the OpenShift Secret for the project namespace. These environment files are then called within the template using the ENV_FILES environment property, the value of which is a comma-separated list of fully qualified environment files.

For example:

{
    "Name": "ENV_FILES",
    "Value": "/etc/extensions/resourceadapter1.env,/etc/extensions/resourceadapter2.env"
}

Chapter 3. Tutorials

3.1. Example Workflow: Using Maven to build and run a Java EE 7 application on EAP for OpenShift Image

This tutorial focuses on building and running a Java EE 7 application on OpenShift using the EAP for OpenShift image. The kitchensink quickstart example is used here, which demonstrates a Java EE 7 web-enabled database application using JSF, CDI, EJB, JPA and Bean Validation. See JBoss EAP Quickstarts for more information.


The kitchensink quickstart uses a lightweight, relational example datasource that is intended for examples only. It is not robust or scalable, is not supported, and should not be used in a production environment.

3.1.1. Prepare for Deployment

  1. Log in to the OpenShift instance by running the following command and providing credentials.

    $ oc login
  2. Create a new project.

    $ oc new-project eap-demo
  3. Create a service account to be used for this deployment.

    $ oc create serviceaccount eap-service-account
  4. Add the view role to the service account. This enables the service account to view all the resources in the eap-demo namespace, which is necessary for managing the cluster.

    $ oc policy add-role-to-user view system:serviceaccount:eap-demo:eap-service-account
  5. Generate a self-signed certificate keystore. This example uses keytool, a utility included with the Java Development Kit, to generate dummy credentials for use with the keystore:

    $ keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -validity 360 -keysize 2048

    OpenShift does not permit login authentication from self-signed certificates. For demonstration purposes, this example uses openssl to generate a CA certificate to sign the SSL keystore and create a truststore. This truststore is also included in the creation of the secret, and specified in the SSO template.


    For production environments, it is recommended that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS).

  6. Use the generated keystore file to create the secret.

    $ oc secrets new eap-app-secret keystore.jks
  7. Add the secret to the service account created earlier.

    $ oc secrets link eap-service-account eap-app-secret

3.1.2. Deployment

  1. Create a new application using the EAP for OpenShift image and Java source code.

    $ oc new-app jboss-eap70-openshift~ --context-dir=kitchensink
  2. Retrieve the name of the build config.

    $ oc get bc -o name
  3. View the Maven build logs for the example repository by running the following command:

    $ oc logs -f buildconfig/jboss-eap-quickstarts

3.1.3. Post Deployment

  1. Get the service name.

    $ oc get service
  2. Expose the service as a route to be able to use it from the browser.

    $ oc expose service/jboss-eap-quickstarts --port=8080
  3. Get the route.

    $ oc get route
  4. Access the application in your browser using the URL (value of HOST/PORT field from previous command output).
  5. Optionally, you can also scale up the application instance by running the following command:

    $ oc scale dc eap-demo --replicas=3

Using the Kitchensink Application

  1. Navigate to the service address for the kitchensink application (value of the HOST/PORT field from oc get route command output). The title of the page reads Welcome to JBoss!.
  2. Use the Member Registration section to add members to the database. The application applies some constraints to the Name:, Email:, and Phone #: fields. Once registration is complete, the new member is assigned an Id and appears in the Members table.
  3. Click the /rest/members link beneath the table to display the REST API response information for the registered members.

You can close the browser and re-open it later, or open the application in a different browser, and the member data is retained as long as the pod remains active.

Chapter 4. Reference Information


The content in this section is derived from the engineering documentation for this image. It is provided for reference, as it can be useful for development and testing purposes beyond the scope of the product documentation.

4.1. Information Environment Variables

The following information environment variables are designed to convey information about the image and should not be modified by the user:

Table 4.1. Information Environment Variables

Variable Name | Description | Example Value











Image name



Image release label



Image version




org.jboss.logmanager, jdk.nashorn.api


Provides OpenShift S2I support for jee project types.


4.2. Configuration Environment Variables

Configuration environment variables are designed to conveniently adjust the image without requiring a rebuild, and should be set by the user as desired.

Table 4.2. Configuration Environment Variables

Variable name | Description | Value


Switch on client authentication for OpenShift TLS communication. The value of this parameter can be a relative distinguished name which must be contained in a presented client’s certificate. Enabling this parameter will automatically switch Jolokia into https communication mode. The default CA cert is set to /var/run/secrets/



If set, uses this file (including path) as the Jolokia JVM agent properties, as described in Jolokia’s reference manual, and the rest of the settings in this document are ignored. If not set, a properties file is created in /opt/jolokia/etc/ using the settings defined in this document.



Enable Jolokia discovery. The default value is false.



Host address to bind to. The default value is


Switch on secure communication with https. By default, self-signed server certificates are generated if no serverCert configuration is given in AB_JOLOKIA_OPTS.



Agent ID to use ($HOSTNAME by default, which is the container id).



If set, disables activation of Jolokia (i.e., echoes an empty value). By default, Jolokia is enabled.



Additional options to be appended to the agent configuration. They should be given in the format key=value,key=value,…



Password for basic authentication. By default authentication is switched off.



Determines whether a random AB_JOLOKIA_PASSWORD should be generated. Set to true to generate a random password. The generated value is saved in a file in /opt/jolokia/etc/.



Port to listen to. Defaults to 8778.



User for basic authentication. Defaults to jolokia.



If set to any non-zero-length value, the image prevents shutdown on the TERM signal and requires the shutdown command to be executed through jboss-cli.



Set the maximum Java heap size, as a percentage of available container memory.



A list of comma-separated directories used for installation and configuration of artifacts for the image during the S2I process.



Specify the default JNDI binding for the JMS connection factory (jms-connection-factory='java:jboss/DefaultJMSConnectionFactory').



Set the initial Java heap size, as a percentage of the maximum heap size.



Server startup options.



Comma-separated list of package names that will be appended to the JBOSS_MODULES_SYSTEM_PKGS environment variable.



For backwards compatibility, set to true to use MyQueue and MyTopic as physical destination name defaults instead of queue/MyQueue and topic/MyTopic.



Clustering labels selector.



Clustering project namespace.



If set to true, ensures that the bash scripts are executed with the -x option, printing the commands and their arguments as they are executed.



Other environment variables not listed above that can influence the product can be found in the JBoss EAP documentation.

4.3. Application Templates

Variable name | Description | Value


Controls whether exploded deployment content should be automatically deployed.


4.4. Exposed Ports

Table 4.3. Exposed Ports

Port Number | Description




8778

Jolokia Monitoring

4.5. Datasources

Datasources are automatically created based on the value of some of the environment variables.

The most important environment variable is DB_SERVICE_PREFIX_MAPPING, as it defines JNDI mappings for the datasources. The allowed value for this variable is a comma-separated list of <name>-<database_type>=<PREFIX> triplets, where:

  • name is used as the pool-name in the data source,
  • database_type is the database driver to use, and
  • PREFIX is the prefix used in the names of environment variables that are used to configure the datasource.
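The triplet format above can be parsed as follows. This is an illustrative sketch of the derivation, not the image's actual launch script:

```shell
# Parse DB_SERVICE_PREFIX_MAPPING into pool name, driver type, and prefix.
DB_SERVICE_PREFIX_MAPPING="cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL"

IFS=',' read -ra mappings <<< "$DB_SERVICE_PREFIX_MAPPING"
for m in "${mappings[@]}"; do
  service="${m%%=*}"        # e.g. cloud-postgresql
  prefix="${m#*=}"          # e.g. CLOUD
  name="${service%-*}"      # pool-name: cloud
  db_type="${service##*-}"  # driver: postgresql
  echo "java:jboss/datasources/${name}_${db_type} (prefix: $prefix)"
done
```

This mirrors the default JNDI naming convention, java:jboss/datasources/<name>_<database_type>.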

4.5.1. JNDI mappings for datasources

For each <name>-<database_type>=PREFIX triplet in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script, which is executed when running the image, creates a separate datasource.


The first part (before the equal sign) of the DB_SERVICE_PREFIX_MAPPING should be lowercase.

The <database_type> determines the driver for the datasource. Currently, only postgresql and mysql are supported.


Do not use any special characters for the <name> parameter.

Database drivers

Every image contains deployed Java drivers for the MySQL, PostgreSQL, and MongoDB databases. Datasources are generated only for MySQL and PostgreSQL databases.


For the MongoDB database, no JNDI mappings are created because MongoDB is not a SQL database.

Datasource configuration environment variables

To configure other datasource properties, use the following environment variables:

Variable name | Description | Example value


Defines the database server’s host name or IP to be used in the datasource’s connection-url property.


Defines the database server’s port for the datasource.



Defines the JNDI name for the datasource. Defaults to java:jboss/datasources/<name>_<database_type>, where name and database_type are taken from the triplet described above. This setting is useful if you want to override the default generated JNDI name.



Defines the username for the datasource.



Defines the password for the datasource.



Defines the database name for the datasource.



Defines the java.sql.Connection transaction isolation level for the datasource.



Defines the minimum pool size option for the datasource.



Defines the maximum pool size option for the datasource.


When running this image in OpenShift, the <NAME>_<DATABASE_TYPE>_SERVICE_HOST and <NAME>_<DATABASE_TYPE>_SERVICE_PORT environment variables are set up automatically from the database service definition in the OpenShift application template, while the others are configured in the template directly (as env entries in container definitions under each pod template).

Examples

These examples show how the value of the DB_SERVICE_PREFIX_MAPPING environment variable influences datasource creation.

Single mapping

Consider the value test-postgresql=TEST.

This creates a datasource named java:jboss/datasources/test_postgresql. Additionally, all the required settings, such as the username and password, are expected to be provided as environment variables with the TEST_ prefix, for example TEST_USERNAME and TEST_PASSWORD.

Multiple mappings

You can specify multiple database mappings.


Always separate multiple datasource mappings with a comma.

Consider the following value for the DB_SERVICE_PREFIX_MAPPING environment variable: cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL.

This creates two datasources:

  1. java:jboss/datasources/test_mysql, and
  2. java:jboss/datasources/cloud_postgresql.

You can then use the TEST_MYSQL_ prefix to configure the MySQL datasource (username, password, and so on), for example TEST_MYSQL_USERNAME. For the PostgreSQL datasource, use the CLOUD_ prefix, for example CLOUD_USERNAME.
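As a sketch of the mapping rules above, the following bash fragment (illustrative only, not the image's actual startup code) shows how a DB_SERVICE_PREFIX_MAPPING value breaks down into datasource JNDI names and environment variable prefixes:

```shell
# Illustrative parsing of DB_SERVICE_PREFIX_MAPPING; not the image's actual code.
mapping="cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL"

IFS=',' read -ra triplets <<< "$mapping"
for triplet in "${triplets[@]}"; do
  service="${triplet%%=*}"   # e.g. cloud-postgresql (the OpenShift service name)
  prefix="${triplet##*=}"    # e.g. CLOUD (prefix for USERNAME, PASSWORD, ...)
  name="${service%-*}"       # e.g. cloud
  db_type="${service##*-}"   # e.g. postgresql
  echo "java:jboss/datasources/${name}_${db_type} configured via ${prefix}_* variables"
done
```

Running this prints one line per mapping, giving the JNDI name and the variable prefix from the multiple-mapping example above.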

4.6. Clustering

Clustering is achieved through one of two discovery mechanisms: Kubernetes or DNS. This is done by configuring the JGroups protocol stack in standalone-openshift.xml with either the <openshift.KUBE_PING/> or <openshift.DNS_PING/> element. Out of the box, KUBE_PING is the preconfigured and supported protocol.

For KUBE_PING to work, however, the following steps must be taken:

  1. The OPENSHIFT_KUBE_PING_NAMESPACE environment variable must be set. If not set, the server behaves as a single-node cluster (a "cluster of one").

  2. The OPENSHIFT_KUBE_PING_LABELS environment variable should be set and should match the labels set at the service level. If not set, pods outside your application (albeit in your namespace) will attempt to join.

  3. Authorization must be granted to the service account the pod is running under so that it is allowed to access the Kubernetes REST API. This is done on the command line.
    For example:
    Using the default service account in the myproject namespace:

    oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)

    Using the eap-service-account in the project namespace:

    oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)

    See Section 2.4, “Getting Started” for more information on adding policies to service accounts.
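To illustrate steps 1 and 2, the two environment variables can be supplied through oc env. The deployment configuration name eap-app and the label value below are placeholders; adjust them to match your application:

```shell
# Placeholder names: adjust dc/eap-app and the label to match your application.
oc env dc/eap-app \
  OPENSHIFT_KUBE_PING_NAMESPACE=$(oc project -q) \
  OPENSHIFT_KUBE_PING_LABELS="application=eap-app"
```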

4.7. Security Domains

To configure a new security domain, the user must define the SECDOMAIN_NAME environment variable.

This results in the creation of a security domain named after the value of that environment variable. The user may also define the following environment variables to customize the domain:

SECDOMAIN_NAME
    Define in order to enable the definition of an additional security domain.

SECDOMAIN_PASSWORD_STACKING
    If defined, the password-stacking module option is enabled and set to the value useFirstPass.

SECDOMAIN_LOGIN_MODULE
    The login module to be used. Defaults to UsersRoles.

SECDOMAIN_USERS_PROPERTIES
    The name of the properties file containing user definitions. Defaults to users.properties.

SECDOMAIN_ROLES_PROPERTIES
    The name of the properties file containing role definitions. Defaults to roles.properties.
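For instance, the security domain variables can be supplied to a deployment with oc env. The deployment configuration name eap-app, the domain name, and the properties file names below are placeholders for your application:

```shell
# Placeholder names: adjust dc/eap-app, the domain, and file names for your application.
oc env dc/eap-app \
  SECDOMAIN_NAME=myDomain \
  SECDOMAIN_USERS_PROPERTIES=users.properties \
  SECDOMAIN_ROLES_PROPERTIES=roles.properties
```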

4.8. HTTPS

4.8.1. Environment variables

HTTPS_NAME
    If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE, enables HTTPS and sets the SSL name.

HTTPS_PASSWORD
    If defined along with HTTPS_NAME and HTTPS_KEYSTORE, enables HTTPS and sets the SSL key password.

HTTPS_KEYSTORE
    If defined along with HTTPS_PASSWORD and HTTPS_NAME, enables HTTPS and sets the SSL certificate key file to a relative path under $JBOSS_HOME/standalone/configuration.

4.9. Administration

4.9.1. Environment Variables

ADMIN_USERNAME
    If both this and ADMIN_PASSWORD are defined, used as the user name for the EAP management port.

ADMIN_PASSWORD
    If defined, an admin user is created for accessing the management port, with this value as the password.

4.10. S2I

The image includes S2I scripts and Maven.

Maven is currently the only supported build tool for applications intended to be deployed on JBoss EAP-based containers (or related/descendant images) on OpenShift.

Only WAR deployments are supported at this time.

4.10.1. Custom configuration

It is possible to add custom configuration files to the image. All files placed in the configuration/ directory of the source repository are copied into $JBOSS_HOME/standalone/configuration/. For example, to override the default configuration used in the image, add a custom standalone-openshift.xml to the configuration/ directory. See the example for such a deployment.

Custom modules

It is possible to add custom modules. All files from the modules/ directory of the source repository are copied into $JBOSS_HOME/modules/. See the example for such a deployment.
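The expected source layout for these customizations can be sketched as follows. The directory myapp and the module path com/example are hypothetical; only the configuration/ and modules/ directory names come from the description above:

```shell
# Create the hypothetical layout: configuration/ and modules/ at the source root.
mkdir -p myapp/configuration myapp/modules/com/example/main
touch myapp/configuration/standalone-openshift.xml  # copied to $JBOSS_HOME/standalone/configuration/
touch myapp/modules/com/example/main/module.xml     # copied to $JBOSS_HOME/modules/
find myapp -type f | sort
```

The final command lists the two files that the S2I build would copy into the image.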

4.10.2. Deployment Artifacts

By default, artifacts from the source target directory are deployed. To deploy from different directories, set the ARTIFACT_DIR environment variable in the BuildConfig definition. ARTIFACT_DIR is a comma-delimited list, for example: ARTIFACT_DIR=app1/target,app2/target,app3/target
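A small bash sketch (illustrative only, not the image's build script) of how a comma-delimited ARTIFACT_DIR value is split into individual directories:

```shell
# Illustrative: split ARTIFACT_DIR the way a comma-delimited list is consumed.
ARTIFACT_DIR="app1/target,app2/target,app3/target"
IFS=',' read -ra artifact_dirs <<< "$ARTIFACT_DIR"
for dir in "${artifact_dirs[@]}"; do
  echo "copying deployables from: $dir"
done
```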

4.10.3. Artifact Repository Mirrors

A repository in Maven holds build artifacts and dependencies of various types (all the project JARs, library JARs, plugins, or any other project-specific artifacts). It also specifies locations from which to download artifacts while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom repository (mirror).

Benefits of using a mirror are:

  • Availability of a synchronized mirror, which is geographically closer and faster.
  • Ability to have greater control over the repository content.
  • Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories.
  • Improved build times.

Often, a repository manager can serve as a local cache for a mirror. Assuming that the repository manager is already deployed and reachable externally, the S2I build can use this manager by supplying the MAVEN_MIRROR_URL environment variable to the build configuration of the application as follows:

  1. Identify the name of the build configuration against which to apply the MAVEN_MIRROR_URL variable:

    oc get bc -o name
  2. Update the build configuration of eap with the MAVEN_MIRROR_URL environment variable:

    oc env bc/eap MAVEN_MIRROR_URL=""
    buildconfig "eap" updated
  3. Verify the setting:

    oc env bc/eap --list
    # buildconfigs eap
  4. Schedule a new build of the application.

During the application build, you will notice that Maven dependencies are pulled from the repository manager instead of the default public repositories. After the build finishes, you will also see that the mirror contains all the dependencies that were retrieved and used during the build.

4.10.4. Scripts

run
    This script uses the script that configures and starts JBoss EAP with the standalone-openshift.xml configuration.

assemble
    This script uses Maven to build the source, create a package (.war), and move it to the $JBOSS_HOME/standalone/deployments directory.

4.10.5. Environment variables

You can influence the way the build is executed by supplying environment variables to the s2i build command. The environment variables that can be supplied are:

ARTIFACT_DIR
    .war, .ear, and .jar files from this directory will be copied into the deployments directory.

HTTP_PROXY_HOST
    Host name or IP address of an HTTP proxy for Maven to use.

HTTP_PROXY_PORT
    TCP port of an HTTP proxy for Maven to use.

HTTP_PROXY_USERNAME
    If supplied with HTTP_PROXY_PASSWORD, use credentials for the HTTP proxy.

HTTP_PROXY_PASSWORD
    If supplied with HTTP_PROXY_USERNAME, use credentials for the HTTP proxy.

HTTP_PROXY_NONPROXYHOSTS
    If supplied, a configured HTTP proxy will ignore these hosts.

MAVEN_ARGS
    Overrides the arguments supplied to Maven during the build. Example value: -e -Popenshift -DskipTests -Dcom.redhat.xpaas.repo.redhatga package

MAVEN_ARGS_APPEND
    Appends user arguments to those supplied to Maven during the build.

MAVEN_MIRROR_URL
    URL of a Maven mirror/repository manager to configure.

MAVEN_CLEAR_REPO
    Optionally clear the local Maven repository after the build.

APP_DATADIR
    If defined, the directory in the source from which data files are copied.

DATA_DIR
    The directory in the image into which data from $APP_DATADIR will be copied.



See the Example Workflow: Using Maven to build and run a Java EE 7 application section, which uses Maven and the S2I scripts included in the EAP for OpenShift image.

4.11. SSO

This image contains support for Red Hat SSO-enabled applications.


See Red Hat SSO for OpenShift Image documentation for more information on how to deploy the Red Hat SSO for OpenShift image with the EAP for OpenShift image.

4.11.1. Environment variables

SSO_URL
    URL of the SSO server.

SSO_REALM
    SSO realm for the deployed application(s).

SSO_PUBLIC_KEY
    Public key of the SSO realm. This field is optional, but omitting it can leave the applications vulnerable to man-in-the-middle attacks.

SSO_USERNAME
    SSO user required to access the SSO REST API.

SSO_PASSWORD
    Password for SSO_USERNAME.

SSO_SAML_KEYSTORE
    Keystore location for SAML.

SSO_SAML_KEYSTORE_PASSWORD
    Keystore password for SAML.

SSO_SAML_CERTIFICATE_NAME
    Alias for the keys/certificate to use for SAML.

SSO_BEARER_ONLY
    Optional. SSO client access type.




Path for SSO redirects back to the application. Defaults to match module-name.



SSO_ENABLE_CORS
    Optionally enable CORS for SSO applications.

SSO_SECRET
    The SSO client secret for confidential access.




If true, SSL communication between EAP and the SSO server will be secure (that is, certificate validation is enabled with curl).



4.12. Included JBoss Modules

The table below lists included JBoss modules in the EAP for OpenShift image.

Table 4.4. Included JBoss Modules

JBoss Module







Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.