Red Hat JBoss Web Server for OpenShift

Red Hat JBoss Web Server 5.2

Installing and using Red Hat JBoss Web Server for OpenShift

Red Hat Customer Content Services

Abstract

Guide to using Red Hat JBoss Web Server for OpenShift

Chapter 1. Introduction

1.1. What is Red Hat JBoss Web Server for OpenShift?

The Apache Tomcat 9 component of Red Hat JBoss Web Server (JWS) 5.2 is available as a containerized image designed for OpenShift. Developers can use this image to build, scale, and test Java web applications for deployment across hybrid cloud environments.

Chapter 2. Before You Begin

2.1. The difference between Red Hat JBoss Web Server and JWS for OpenShift

The differences between the JWS for OpenShift images and the regular release of JWS are:

  • The location of JWS_HOME/tomcat<version>/ inside a JWS for OpenShift image is: /opt/webserver/.
  • The JWS for OpenShift images do not contain Apache HTTP Server. All load balancing is handled by the OpenShift router, not Apache HTTP Server mod_cluster or mod_jk connectors.

Documentation for JWS functionality not specific to JWS for OpenShift images is found in the Red Hat JBoss Web Server documentation.

2.2. Version compatibility and support

See the xPaaS table on the OpenShift Container Platform Tested Integrations page for details about OpenShift image version compatibility.

Important

The 5.2 version of JWS for OpenShift images and application templates should be used for deploying new applications.

The 5.1 version of JWS for OpenShift images and application templates is deprecated and no longer receives updates.

2.3. Health checks for Red Hat container images

All container images available for OpenShift have a health rating associated with them. You can find the health rating for Red Hat JBoss Web Server by navigating to the catalog of container images, searching for JBoss Web Server, and selecting the 5.2 version.

For more information on how OpenShift containers can be tested for liveness and readiness, see the OpenShift documentation.
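If you want OpenShift to check a deployed application for liveness and readiness, you can attach probes to its deployment configuration with oc set probe. The following is a minimal sketch, assuming a deployment configuration named <app_name> and an application that answers HTTP requests on port 8080 at the root context:

# add a readiness probe that performs an HTTP GET against the application
$ oc set probe dc/<app_name> --readiness --get-url=http://:8080/

# add a liveness probe against the same endpoint
$ oc set probe dc/<app_name> --liveness --get-url=http://:8080/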

Chapter 3. Get Started

3.1. Initial setup

The instructions in this guide follow on from the OpenShift Primer and assume either a supported OpenShift configuration or a non-production OpenShift instance like the one described in the Primer.

The JWS for OpenShift images are automatically created during the installation of OpenShift, along with the other default image streams and templates.

Note

The JWS for OpenShift application templates are distributed for Tomcat 9.
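To confirm that the default resources are present, you can list the JWS image streams and templates in the openshift namespace. This is a quick check; the exact names depend on the installed version:

$ oc get imagestreams -n openshift | grep jboss-webserver
$ oc get templates -n openshift | grep jboss-webserver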

3.2. Using the JWS for OpenShift Source-to-Image (S2I) process

To run and configure the JWS for OpenShift images, use the OpenShift S2I process with the application template parameters and environment variables.

The S2I process for the JWS for OpenShift images works as follows:

  • If there is a Maven settings.xml file in the configuration/ source directory, it is moved to $HOME/.m2/ of the new image.

    See the Apache Maven Project website for more information on Maven and the Maven settings.xml file.

  • If there is a pom.xml file in the source repository, a Maven build is triggered using the contents of the $MAVEN_ARGS environment variable.

    By default, the package goal is used with the openshift profile, including the arguments for skipping tests (-DskipTests) and enabling the Red Hat GA repository (-Dcom.redhat.xpaas.repo.redhatga).

  • The results of a successful Maven build are copied to /opt/webserver/webapps/. This includes all WAR files from the source directory specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target/ directory.

    Use the MAVEN_ARGS_APPEND environment variable to modify the Maven arguments.

  • All WAR files from the deployments/ source directory are copied to /opt/webserver/webapps/.
  • All files in the configuration/ source directory are copied to /opt/webserver/conf/ (excluding the Maven settings.xml file).
  • All files in the lib/ source directory are copied to /opt/webserver/lib/.

    Note

    If you want to use custom Tomcat configuration files, the file names should be the same as for a normal Tomcat installation. For example, context.xml and server.xml.

See the Artifact Repository Mirrors section for guidance on configuring the S2I process to use a custom Maven artifacts repository mirror.
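Putting these rules together, a source repository prepared for the S2I process might be laid out as in the following sketch. The directory names come from the list above; the application and file names (ROOT.war, extra-library.jar) are hypothetical, and only the directories your application needs have to be present:

<application_source>/
    pom.xml               # triggers a Maven build; WAR files from $ARTIFACT_DIR are copied to /opt/webserver/webapps/
    configuration/
        settings.xml      # moved to $HOME/.m2/ of the new image
        context.xml       # remaining files are copied to /opt/webserver/conf/
    deployments/
        ROOT.war          # copied to /opt/webserver/webapps/
    lib/
        extra-library.jar # copied to /opt/webserver/lib/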

3.2.1. Create a JWS for OpenShift application using existing Maven binaries

Existing applications are deployed on OpenShift using the oc start-build command.

Prerequisite: An existing .war, .ear, or .jar of the application to deploy on JWS for OpenShift.

  1. Prepare the directory structure on the local file system.

    Create a source directory containing any content required by your application that is not included in the binary (if required, see Using the JWS for OpenShift Source-to-Image (S2I) process), then create a deployments/ subdirectory:

    $ mkdir -p <build_dir>/deployments
  2. Copy the binaries (.war, .ear, .jar) to deployments/:

    $ cp /path/to/binary/<filenames_with_extensions> <build_dir>/deployments/
    Note

    Application archives in the deployments/ subdirectory of the source directory are copied to the $JWS_HOME/webapps/ directory of the image being built on OpenShift. For the application to deploy, the directory hierarchy containing the web application data must be structured correctly (see Section 3.2, “Using the JWS for OpenShift Source-to-Image (S2I) process”).

  3. Log in to the OpenShift instance:

    $ oc login <url>
  4. Create a new project if required:

    $ oc new-project <project-name>
  5. Identify the JWS for OpenShift image stream to use for your application with oc get is -n openshift:

    $ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '
    
    jboss-webserver50-tomcat9-openshift
    Note

    The option -n openshift specifies the project to use. oc get is -n openshift retrieves (get) the image stream resources (is) from the openshift project.

  6. Create the new build configuration, specifying image stream and application name:

    $ oc new-build --binary=true \
     --image-stream=jboss-webserver50-tomcat9-openshift \
     --name=<my-jws-on-openshift-app>
  7. Instruct OpenShift to use the source directory created previously for binary input of the OpenShift image build:

    $ oc start-build <my-jws-on-openshift-app> --from-dir=./<build_dir> --follow
  8. Create a new OpenShift application based on the image:

    $ oc new-app <my-jws-on-openshift-app>
  9. Expose the service to make the application accessible to users:

    # to check the name of the service to expose
    $ oc get svc -o name
    
    service/<my-jws-on-openshift-app>
    
    # to expose the service
    $ oc expose svc/my-jws-on-openshift-app
    
    route "my-jws-on-openshift-app" exposed
  10. Retrieve the address of the exposed route:

    oc get routes --no-headers -o custom-columns='host:spec.host' my-jws-on-openshift-app
  11. Access the application in your browser: http://<address_of_exposed_route>/<my-war-ear-jar-filename-without-extension>
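    You can also verify the deployment from the command line. This is a sketch, with the route address and archive name taken from the previous steps:

    $ curl http://<address_of_exposed_route>/<my-war-ear-jar-filename-without-extension>/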

3.2.2. Example: Creating a JWS for OpenShift application using existing Maven binaries

The example below deploys the tomcat-websocket-chat quickstart by following the procedure from Section 3.2.1, “Create a JWS for OpenShift application using existing Maven binaries”.

3.2.2.1. Prerequisites:

  1. Get the WAR application archive or build the application locally.

    • Clone the source code:

      $ git clone https://github.com/jboss-openshift/openshift-quickstarts.git
    • Configure the Red Hat JBoss Middleware Maven Repository

    • Build the application:

      $ cd openshift-quickstarts/tomcat-websocket-chat/
      $ mvn clean package
      
      [INFO] Scanning for projects...
      [INFO]
      [INFO] ------------------------------------------------------------------------
      [INFO] Building Tomcat websocket example 1.2.0.Final
      [INFO] ------------------------------------------------------------------------
      ...
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESS
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 01:28 min
      [INFO] Finished at: 2018-01-16T15:59:16+10:00
      [INFO] Final Memory: 19M/271M
      [INFO] ------------------------------------------------------------------------
  2. Prepare the directory structure on the local file system.

    Create the source directory for the binary build on your local file system and the deployments/ subdirectory. Copy the WAR archive to deployments/:

    [tomcat-websocket-chat]$ ls
    
    pom.xml  README.md  src/  target/
    $ mkdir -p ocp/deployments
    $ cp target/websocket-chat.war ocp/deployments/

3.2.2.2. To set up the example application on OpenShift

  1. Log in to the OpenShift instance:

    $ oc login <url>
  2. Create a new project if required:

    $ oc new-project jws-bin-demo
  3. Identify the JWS for OpenShift image stream to use for your application with oc get is -n openshift:

    $ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '
    
    jboss-webserver50-tomcat9-openshift
  4. Create the new build configuration, specifying image stream and application name:

    $ oc new-build --binary=true \
     --image-stream=jboss-webserver50-tomcat9-openshift \
     --name=jws-wsch-app
    
    --> Found image 8c3b85b (4 weeks old) in image stream "openshift/jboss-webserver50-tomcat9-openshift" under tag "latest" for "jboss-webserver50-tomcat9-openshift"
    
        JBoss Web Server 5.0
        --------------------
        Platform for building and running web applications on JBoss Web Server 5.0 - Tomcat v9
    
        Tags: builder, java, tomcat9
    
        * A source build using binary input will be created
          * The resulting image will be pushed to image stream "jws-wsch-app:latest"
          * A binary build was created, use 'start-build --from-dir' to trigger a new build
    
    --> Creating resources with label build=jws-wsch-app ...
        imagestream "jws-wsch-app" created
        buildconfig "jws-wsch-app" created
    --> Success
  5. Start the binary build, instructing OpenShift to use the source directory as binary input for the OpenShift image build:

    $ oc start-build jws-wsch-app --from-dir=./ocp --follow
    
    Uploading directory "ocp" as binary input for the build ...
    build "jws-wsch-app-1" started
    Receiving source from STDIN as archive ...
    
    Copying all deployments war artifacts from /home/jboss/source/deployments directory into /opt/webserver/webapps for later deployment...
    '/home/jboss/source/deployments/websocket-chat.war' -> '/opt/webserver/webapps/websocket-chat.war'
    
    
    Pushing image 172.30.202.111:5000/jws-bin-demo/jws-wsch-app:latest ...
    Pushed 0/7 layers, 7% complete
    Pushed 1/7 layers, 14% complete
    Pushed 2/7 layers, 29% complete
    Pushed 3/7 layers, 49% complete
    Pushed 4/7 layers, 62% complete
    Pushed 5/7 layers, 92% complete
    Pushed 6/7 layers, 100% complete
    Pushed 7/7 layers, 100% complete
    Push successful
  6. Create a new OpenShift application based on the image:

    $ oc new-app jws-wsch-app
    
    --> Found image e5f3a6b (About a minute old) in image stream "jws-bin-demo/jws-wsch-app" under tag "latest" for "jws-wsch-app"
    
        JBoss Web Server 5.0
        --------------------
        Platform for building and running web applications on JBoss Web Server 5.0 - Tomcat v9
    
        Tags: builder, java, tomcat9
    
        * This image will be deployed in deployment config "jws-wsch-app"
        * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service "jws-wsch-app"
          * Other containers can access this service through the hostname "jws-wsch-app"
    
    --> Creating resources ...
        deploymentconfig "jws-wsch-app" created
        service "jws-wsch-app" created
    --> Success
        Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
         'oc expose svc/jws-wsch-app'
        Run 'oc status' to view your app.
  7. Expose the service to make the application accessible to users:

    # to check the name of the service to expose
    $ oc get svc -o name
    
    service/jws-wsch-app
    
    # to expose the service
    $ oc expose svc/jws-wsch-app
    
    route "jws-wsch-app" exposed
  8. Retrieve the address of the exposed route:

    oc get routes --no-headers -o custom-columns='host:spec.host' jws-wsch-app
  9. Access the application in your browser: http://<address_of_exposed_route>/websocket-chat

3.2.3. Create a JWS for OpenShift application from source code

For detailed instructions on creating new OpenShift applications from source code, see OpenShift.com - Creating an Application From Source Code.

Note

Before proceeding, ensure that the application's data is structured correctly (see Section 3.2, “Using the JWS for OpenShift Source-to-Image (S2I) process”).

  1. Log in to the OpenShift instance:

    $ oc login <url>
  2. Create a new project if required:

    $ oc new-project <project-name>
  3. Identify the JWS for OpenShift image stream to use for your application with oc get is -n openshift:

    $ oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '
    
    jboss-webserver50-tomcat9-openshift
  4. To create the new OpenShift application from source code using Red Hat JBoss Web Server for OpenShift images, use the --image-stream option:

    $ oc new-app \
     <source_code_location> \
     --image-stream=jboss-webserver50-tomcat9-openshift \
     --name=<openshift_application_name>

    For example:

    $ oc new-app \
     https://github.com/jboss-openshift/openshift-quickstarts.git#master \
     --image-stream=jboss-webserver50-tomcat9-openshift \
     --context-dir='tomcat-websocket-chat' \
     --name=jws-wsch-app

    The source code is added to the image and compiled. The build configuration and services are also created.

  5. To expose the application:

    # to check the name of the service to expose
    $ oc get svc -o name
    
    service/<openshift_application_name>
    
    # to expose the service
    $ oc expose svc/<openshift_application_name>
    
    route "<openshift_application_name>" exposed
  6. To retrieve the address of the exposed route:

    oc get routes --no-headers -o custom-columns='host:spec.host' <openshift_application_name>
  7. To access the application in your browser: http://<address_of_exposed_route>/<java_application_name>

Chapter 4. Reference

4.1. Source-to-Image (S2I)

The Red Hat JBoss Web Server for OpenShift image includes S2I scripts and Maven.

4.1.1. Using Maven artifact repository mirrors with JWS for OpenShift

A Maven repository holds build artifacts and dependencies, such as project JARs, library JARs, plugins, or any other project-specific artifacts. It also defines locations to download artifacts from while performing the S2I build. Along with the Maven Central Repository, some organizations also deploy a local custom repository (mirror).

Benefits of using a local mirror are:

  • Availability of a synchronized mirror, which is geographically closer and faster.
  • Greater control over the repository content.
  • Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories.
  • Improved build times.

A Maven repository manager can serve as local cache to a mirror. Assuming that the repository manager is already deployed and reachable externally at http://10.0.0.1:8080/repository/internal/, the S2I build can use this repository. To use an internal Maven repository, add the MAVEN_MIRROR_URL environment variable to the build configuration of the application.

For a new build configuration, use the --build-env option with oc new-app or oc new-build:

$ oc new-app \
 https://github.com/jboss-openshift/openshift-quickstarts.git#master \
 --image-stream=jboss-webserver50-tomcat9-openshift \
 --context-dir='tomcat-websocket-chat' \
 --build-env MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/ \
 --name=jws-wsch-app

For an existing build configuration:

  1. Identify the build configuration which requires the MAVEN_MIRROR_URL variable:

    $ oc get bc -o name
    
    buildconfig/jws
  2. Add the MAVEN_MIRROR_URL environment variable to buildconfig/jws:

    $ oc env bc/jws MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/"
    
    buildconfig "jws" updated
  3. Verify the build configuration has updated:

    $ oc env bc/jws --list
    
    # buildconfigs jws
    MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/
  4. Schedule a new build of the application using oc start-build (see the example after the following note).
Note

During the application build, Maven dependencies are downloaded from the repository manager instead of the default public repositories. Once the build has finished, the mirror contains all the dependencies retrieved and used during the build.
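For example, to trigger a new build of the buildconfig/jws configuration identified above and follow its progress on the console (a minimal sketch using the build configuration name from the previous steps):

$ oc start-build jws --follow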

4.1.2. Scripts included on the Red Hat JBoss Web Server for OpenShift image

  • run: Runs Catalina (Tomcat).
  • assemble: Uses Maven to build the source, create the package (.war), and move it to the $JWS_HOME/webapps directory.

4.1.3. JWS for OpenShift datasources

There are 3 types of data sources:

  1. Default Internal Datasources: These are PostgreSQL, MySQL, and MongoDB. These datasources are available on OpenShift by default through the Red Hat Registry and do not require additional environment files to be configured for image streams. To make a database discoverable and used as a datasource, set the DB_SERVICE_PREFIX_MAPPING environment variable to the name of the OpenShift service.
  2. Other Internal Datasources: These are datasources not available by default through the Red Hat Registry but run on OpenShift. Configuration of these datasources is provided by environment files added to OpenShift Secrets.
  3. External Datasources: Datasources that do not run on OpenShift. Configuration of external datasources is provided by environment files added to OpenShift Secrets.

The datasources environment files are added to the OpenShift Secret for the project. These environment files are then called within the template using the ENV_FILES environment property.

Datasources are automatically created based on the value of certain environment variables. The most important environment variable is DB_SERVICE_PREFIX_MAPPING, which defines JNDI mappings for the datasources. The allowed value for this variable is a comma-separated list of POOLNAME-DATABASETYPE=PREFIX triplets, where:

  • POOLNAME is used as the pool-name in the datasource.
  • DATABASETYPE is the database driver to use.
  • PREFIX is the prefix used in the names of environment variables that are used to configure the datasource.

For each POOLNAME-DATABASETYPE=PREFIX triplet defined in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script creates a separate datasource when the image is run.
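As an illustration only, the following sketch adds a hypothetical triplet to the deployment configuration of an application. It defines a pool named demo backed by a PostgreSQL database, with DEMO used as the prefix for the datasource configuration environment variables; the pool, prefix, and application names are placeholders, not values taken from this guide:

$ oc set env dc/<application_name> DB_SERVICE_PREFIX_MAPPING=demo-postgresql=DEMO
# further datasource settings (credentials, database name, and so on) are then supplied
# through environment variables that start with the DEMO prefix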

For a full listing of datasource configuration environment variables, see the Datasource Configuration Environment Variables list.

4.1.4. JWS for OpenShift compatible environment variables

The build configuration can be modified by adding environment variables to the Source-to-Image build command (see Section 4.1.1, “Using Maven artifact repository mirrors with JWS for OpenShift”). The valid environment variables for the Red Hat JBoss Web Server for OpenShift images are:

Variable Name | Display Name | Description | Example Value
ARTIFACT_DIR | N/A | .war, .ear, and .jar files from this directory are copied into the deployments directory | target
APPLICATION_NAME | Application Name | The name for the application | jws-app
CONTEXT_DIR | Context Directory | Path within the Git project to build; empty for the root project directory | tomcat-websocket-chat
GITHUB_WEBHOOK_SECRET | GitHub Webhook Secret | GitHub trigger secret | Expression from: [a-zA-Z0-9]{8}
GENERIC_WEBHOOK_SECRET | Generic Webhook Secret | Generic build trigger secret | Expression from: [a-zA-Z0-9]{8}
HOSTNAME_HTTP | Custom HTTP Route Hostname | Custom hostname for the HTTP service route; leave blank for the default hostname | <application-name>-<project>.<default-domain-suffix>
HOSTNAME_HTTPS | Custom HTTPS Route Hostname | Custom hostname for the HTTPS service route; leave blank for the default hostname | <application-name>-<project>.<default-domain-suffix>
IMAGE_STREAM_NAMESPACE | Imagestream Namespace | Namespace in which the ImageStreams for Red Hat Middleware images are installed | openshift
JWS_HTTPS_SECRET | Secret Name | The name of the secret containing the certificate files | jws-app-secret
JWS_HTTPS_CERTIFICATE | Certificate Name | The name of the certificate file within the secret | server.crt
JWS_HTTPS_CERTIFICATE_KEY | Certificate Key Name | The name of the certificate key file within the secret | server.key
JWS_HTTPS_CERTIFICATE_PASSWORD | Certificate Password | The certificate password | P5ssw0rd
JWS_ADMIN_USERNAME | JWS Admin Username | JWS Admin account username | ADMIN
JWS_ADMIN_PASSWORD | JWS Admin Password | JWS Admin account password | P5sw0rd
SOURCE_REPOSITORY_URL | Git Repository URL | Git source URI for the application | https://github.com/jboss-openshift/openshift-quickstarts.git
SOURCE_REPOSITORY_REFERENCE | Git Reference | Git branch/tag reference | 1.2
MAVEN_MIRROR_URL | Maven Mirror URL | URL of a Maven mirror/repository manager to configure | http://10.0.0.1:8080/repository/internal/
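Most of these variables correspond to application template parameters (the Display Name column). As a sketch, they can be supplied with the -p option when creating an application from a template; the template name below is a placeholder for whichever JWS for OpenShift application template you use:

$ oc new-app --template=<jws_template_name> \
 -p APPLICATION_NAME=jws-app \
 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-openshift/openshift-quickstarts.git \
 -p SOURCE_REPOSITORY_REFERENCE=master \
 -p CONTEXT_DIR=tomcat-websocket-chat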

4.2. Valves on JWS for OpenShift

4.2.1. JWS for OpenShift compatible environmental variables (valve component)

You can define the following environment variables to insert the valve component into the request processing pipeline for the associated Catalina container.

Variable Name | Description | Example Value | Default Value
ENABLE_ACCESS_LOG | Enable the Access Log Valve to log access messages to the standard output channel | true | false
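For example, to enable the Access Log Valve for a deployed application, you can set the variable on its deployment configuration (a sketch, assuming a deployment configuration named <application_name>):

$ oc set env dc/<application_name> ENABLE_ACCESS_LOG=true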

4.3. Checking Logs

To view the OpenShift logs or the logs provided by a running container’s console:

$ oc logs -f <pod_name> -c <container_name>

Access logs are stored in /opt/webserver/logs/.
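Because the access logs are written inside the container, you can also inspect them through a remote shell. This is a sketch; the pod name comes from oc get pods, and the log file name depends on your configuration:

# list the log files of a running pod
$ oc rsh <pod_name> ls /opt/webserver/logs/

# follow a specific log file
$ oc rsh <pod_name> tail -f /opt/webserver/logs/<log_file_name>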

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.