WildFly Swarm Runtime Guide

Red Hat OpenShift Application Runtimes 1

For Use with Red Hat OpenShift Application Runtimes

Red Hat Customer Content Services

Abstract

This guide provides details on using the WildFly Swarm runtime with Red Hat OpenShift Application Runtimes.

Preface

This guide covers concepts as well as practical details needed by developers to use the WildFly Swarm runtime.

Chapter 1. Runtime Details

WildFly Swarm deconstructs the features in Red Hat JBoss EAP and allows them to be selectively reconstructed based on the needs of your application. This allows you to create microservices that run on a just-enough-appserver that supports the exact subset of APIs you need. Check out Additional Resources for further reading on WildFly Swarm.

The WildFly Swarm runtime enables you to run WildFly Swarm applications and services in OpenShift while providing all the advantages and conveniences of the OpenShift platform such as rolling updates, service discovery, and canary deployments. OpenShift also makes it easier for your applications to implement common microservice patterns such as ConfigMap, Health Check, Circuit Breaker, and Failover.

WildFly Swarm has a product version of its runtime that runs on OpenShift and is provided as part of a Red Hat subscription.

Chapter 2. Building your application

This section contains information about configuring the WildFly Swarm runtime.

2.1. Maven Plugin

WildFly Swarm provides a Maven plugin to accomplish most of the work of building uberjar packages.

General Usage

The WildFly Swarm Maven plugin is used like any other Maven plugin, that is, by editing the pom.xml file in your application and adding a <plugin> section:

<plugin>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>wildfly-swarm-plugin</artifactId>
  <version>${version.wildfly-swarm}</version>
  <executions>
    ...
  </executions>
  <configuration>
    ...
  </configuration>
</plugin>

2.1.1. Goals

The WildFly Swarm Maven plugin provides several goals:

package
Creates the executable package (see Section 3.2, “Creating an Uberjar”).
run
Executes your application in the Maven process. The application is stopped if the Maven build is interrupted, for example when you press Ctrl + C.
start and multistart
Executes your application in a forked process. Generally, it is only useful for running integration tests using a plugin, such as the maven-failsafe-plugin. The multistart variant allows starting multiple WildFly Swarm–built applications using Maven GAVs to support complex testing scenarios.
stop

Stops any previously started applications.

Note

The stop goal can only stop applications that were started in the same Maven execution.
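
For example, with the plugin configured in your pom.xml as shown above, you can package and run your application from the command line:

$ mvn package
$ mvn wildfly-swarm:run

The first command builds the uberjar as part of the Maven package phase; the second runs your application in the Maven process using the run goal.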

2.1.2. Configuration

The plugin accepts the following options:

bundleDependencies (property: swarm.bundleDependencies)
If true, dependencies are included in the -swarm.jar file. Otherwise, they are resolved from $M2_REPO or the network at runtime. Default: true. Used by: package.

debug (property: none)
The port to use for debugging. If set, the swarm process suspends on start and opens a debugger on this port. Used by: run, start.

environment (property: none)
A properties-style list of environment variables to use when executing the application. Used by: multistart, run, start.

environmentFile (property: swarm.environmentFile)
A .properties file with environment variables to use when executing the application. Used by: multistart, run, start.

fractionDetectMode (property: swarm.fractionDetectMode)
The mode of fraction detection. The available options are:

  • when_missing: Runs only when no WildFly Swarm dependencies are found.
  • force: Always runs, and merges any detected fractions with the existing dependencies. Existing dependencies take precedence.
  • never: Disables fraction detection.

Default: when_missing. Used by: package, run, start.

fractions (property: none)
A list of extra fractions to include when auto-detection is used. It is useful for fractions that cannot be detected or for user-provided fractions. The format for specifying a fraction can be one of:

  • group:name:version
  • name:version
  • name

If no group is provided, org.wildfly.swarm is assumed. If no version is provided, the version is taken from the WildFly Swarm BOM for the version of the plugin you are using. Used by: package, run, start.

jvmArguments (property: swarm.jvmArguments)
A list of <jvmArgument> elements specifying additional JVM arguments (such as -Xmx32m). Used by: multistart, run, start.

modules (property: none)
Paths to directories containing additional module definitions. Used by: package, run, start.

processes (property: none)
Application configurations to start (see multistart). Used by: multistart.

properties (property: none)
See Section 2.1.2.1, “Properties”. Used by: package, run, start.

propertiesFile (property: swarm.propertiesFile)
See Section 2.1.2.1, “Properties”. Used by: package, run, start.

stderrFile (property: swarm.stderr)
A file path where the stderr output is stored instead of being sent to the stderr of the launching process. Used by: run, start.

stdoutFile (property: swarm.stdout)
A file path where the stdout output is stored instead of being sent to the stdout of the launching process. Used by: run, start.

useUberJar (property: swarm.useUberJar)
If true, the -swarm.jar file in ${project.build.directory} is used. This JAR is not created automatically, so make sure you execute the package goal first. Default: false. Used by: run, start.
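
For example, a minimal sketch of a <configuration> block combining a few of these options might look as follows; the port number and heap size are placeholder values:

<configuration>
  <!-- Suspend on start and listen for a debugger on this port -->
  <debug>5005</debug>
  <!-- Additional JVM arguments for the application process -->
  <jvmArguments>
    <jvmArgument>-Xmx128m</jvmArgument>
  </jvmArguments>
</configuration>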

2.1.2.1. Properties

Properties can be used to configure execution and affect the packaging or running of your application.

If you add a <properties> or <propertiesFile> section to the <configuration> of the plugin, the properties are used when executing your application using the mvn wildfly-swarm:run command. In addition to that, the same properties are added to your myapp-swarm.jar file to affect subsequent executions of the uberjar. Any properties loaded from the <propertiesFile> override identically-named properties in the <properties> section.

Any properties added to the uberjar can be overridden at runtime using the traditional -Dname=value mechanism of the java binary, or using the YAML-based configuration files.

Only the following properties are added to the uberjar at package time:

  • The properties specified outside of the <properties> section or the <propertiesFile>, whose name starts with one of the following:

    • jboss.
    • wildfly.
    • swarm.
    • maven.
  • The properties that override a property specified in the <properties> section or the <propertiesFile>.
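
For example, consider a hypothetical build invocation; the property names are purely illustrative:

$ mvn package -Dswarm.bind.address=127.0.0.1 -Dmy.custom.property=example

Following the rules above, swarm.bind.address is added to the uberjar because its name starts with swarm., while my.custom.property is added only if it overrides a property declared in the <properties> section or the <propertiesFile>.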

2.2. Fractions

WildFly Swarm is defined by an unbounded set of capabilities. Each piece of functionality is called a fraction. Some fractions provide only access to APIs, such as JAX-RS or CDI; other fractions provide higher-level capabilities, such as integration with RHSSO (Keycloak).

The typical method for consuming WildFly Swarm fractions is through Maven coordinates, which you add to the pom.xml file in your application. The functionality the fraction provides is then packaged with your application (see Section 3.2, “Creating an Uberjar”).

To enable easier consumption of WildFly Swarm fractions, a bill of materials (BOM) is available. For more information, see Section 2.3, “Using a BOM”.

2.2.1. Auto-detecting fractions

Migrating existing legacy applications to benefit from WildFly Swarm is simple when using fraction auto-detection. If you enable the WildFly Swarm Maven plugin in your application, WildFly Swarm detects which APIs you use, and includes the appropriate fractions at build time.

Note

By default, WildFly Swarm auto-detects fractions only if you do not specify any fractions explicitly. This behavior is controlled by the fractionDetectMode property. For more information, see the Maven plugin configuration reference.

For example, consider a pom.xml that already specifies the API .jar file for a specification such as JAX-RS:

<dependencies>
    <dependency>
      <groupId>org.jboss.spec.javax.ws.rs</groupId>
      <artifactId>jboss-jaxrs-api_2.0_spec</artifactId>
      <version>${version.jaxrs-api}</version>
      <scope>provided</scope>
    </dependency>
</dependencies>

WildFly Swarm then includes the jaxrs fraction during the build automatically.

Prerequisites

  • An existing Maven-based application with a pom.xml file.

Procedure

  1. Add the wildfly-swarm-plugin to your pom.xml in a <plugin> block, with an <execution> specifying the package goal.

    <plugins>
      <plugin>
        <groupId>org.wildfly.swarm</groupId>
        <artifactId>wildfly-swarm-plugin</artifactId>
        <version>${version.wildfly-swarm}</version>
        <executions>
          <execution>
            <id>package</id>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  2. Perform a normal Maven build:

    $ mvn package
  3. Execute the resulting uberjar:

    $ java -jar ./target/myapp-swarm.jar

2.2.2. Using explicit fractions

When writing your application from scratch, ensure it compiles correctly and uses the correct version of APIs by explicitly selecting which fractions are packaged with it.

Prerequisites

  • A Maven-based application with a pom.xml file.

Procedure

  1. Add the BOM to your pom.xml. For more information, see Section 2.3, “Using a BOM”.
  2. Add the WildFly Swarm Maven plugin to your pom.xml. For more information, see Section 3.2, “Creating an Uberjar”.
  3. Add one or more dependencies on WildFly Swarm fractions to the pom.xml file:

    <dependencies>
      <dependency>
        <groupId>org.wildfly.swarm</groupId>
        <artifactId>jaxrs</artifactId>
      </dependency>
    </dependencies>
  4. Perform a normal Maven build:

    $ mvn package
  5. Execute the resulting uberjar:

    $ java -jar ./target/myapp-swarm.jar

2.3. Using a BOM

To explicitly specify the WildFly Swarm fractions your application uses instead of relying on auto-detection, you can use one of the BOMs (bills of materials) that WildFly Swarm provides, which saves you from having to track and update Maven artifact versions in several places.

2.3.1. Available BOMs

You can use the following Maven BOMs:

bom
All fractions available in the product.
bom-certified
All community fractions that have been certified against the product. Any fraction used from bom-certified is unsupported.

Prerequisites

  • A Maven-based application with a pom.xml file.

Procedure

  1. Include a bom artifact in your pom.xml.

    Tracking the current version of WildFly Swarm through a property in your pom.xml is recommended.

    <properties>
      <version.wildfly-swarm>7.0.0.redhat-8</version.wildfly-swarm>
    </properties>

    Import BOMs in the <dependencyManagement> section. Specify the <type>pom</type> and <scope>import</scope>.

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.wildfly.swarm</groupId>
          <artifactId>bom</artifactId>
          <version>${version.wildfly-swarm}</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
    </dependencyManagement>

    In the example above, the bom artifact is imported to ensure that only stable fractions are available.

    By including the BOMs of your choice in the <dependencyManagement> section, you have:

    • Provided version-management for any WildFly Swarm artifacts you subsequently choose to use.
    • Provided support to your IDE for auto-completing known artifacts when you edit the pom.xml file of your application.
  2. Include WildFly Swarm dependencies.

    Even though you imported the WildFly Swarm BOMs in the <dependencyManagement> section, your application still has no dependencies on WildFly Swarm artifacts.

    To include WildFly Swarm artifact dependencies based on the capabilities your application uses, enter the relevant artifacts as <dependency> elements:

    Note

    You do not have to specify the version of the artifacts because the BOM imported in <dependencyManagement> handles that.

    <dependencies>
      <dependency>
        <groupId>org.wildfly.swarm</groupId>
        <artifactId>jaxrs</artifactId>
      </dependency>
      <dependency>
        <groupId>org.wildfly.swarm</groupId>
        <artifactId>datasources</artifactId>
      </dependency>
    </dependencies>

    In the example above, explicit dependencies are included on the jaxrs and datasources fractions, which transitively include other fractions, for example undertow.

2.3.2. Caveats

One shortcoming of importing a Maven BOM is that it does not provide configuration at the <pluginManagement> level. When you use the WildFly Swarm Maven plugin, you must therefore specify the version of the plugin yourself.

Thanks to the property you use in your pom.xml file, you can easily ensure that your plugin usage matches the release of WildFly Swarm that you are targeting with the BOM import.

<plugins>
  <plugin>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>wildfly-swarm-plugin</artifactId>
    <version>${version.wildfly-swarm}</version>
      ...
  </plugin>
</plugins>

2.4. Configuring a WildFly Swarm application

You can configure numerous options with applications built with WildFly Swarm. For all options, reasonable defaults are already applied, so you do not have to change any options unless you explicitly want to.

This reference is a complete list of all configurable items, grouped by the fraction that introduces them. Only the items related to the fractions that your application uses are relevant to you.

2.4.1. Configuring an application using properties

Here, configuration items are presented using dotted notation and are suitable for use as Java property names, which your application consumes either through explicit settings in the Maven plugin configuration or on the command line when your application is executed.

Any property that has the KEY parameter in its name indicates that you must supply a key or identifier in that segment of the name.

Example 2.1. Using configuration items with the KEY parameter

The configuration item documented as swarm.undertow.servers.KEY.default-host indicates that the configuration applies to a particular named server.

In practical usage, the property would be, for example, swarm.undertow.servers.default.default-host for a server known as default.
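
Continuing the example, such an item can be set on the command line like any other property; the host value below is only an illustration:

$ java -Dswarm.undertow.servers.default.default-host=localhost -jar myapp-swarm.jar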

Maven plugin properties

If you want to set explicit configuration values as defaults through the Maven plugin, add a <properties> section to the <configuration> block of the plugin in the pom.xml file in your application.

Example 2.2. Setting a configuration item through the Maven Plugin

An example Maven plugin configuration for the swarm.bind.address item:

<build>
  <plugins>
    <plugin>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>wildfly-swarm-plugin</artifactId>
      <version>{version}</version>
      <configuration>
        <properties>
          <swarm.bind.address>127.0.0.1</swarm.bind.address>
          <java.net.preferIPv4Stack>true</java.net.preferIPv4Stack>
        </properties>
      </configuration>
    </plugin>
  </plugins>
</build>
Note

It is not recommended to set properties using the Maven plugin to control the configuration in the long term. Instead, use the YAML method.

Setting properties using the Command Line

Setting properties on the command line is useful for temporarily changing a configuration item for a single execution of your application. You can customize an environment-specific setting or experiment with configuration items before setting them in a YAML configuration file.

To use a property on the command line, pass it as a command-line parameter to the Java binary:

$ java -Dswarm.bind.address=127.0.0.1 -jar myapp-swarm.jar

2.4.2. Configuring an application using YAML

YAML is the preferred method for long-term configuration of your application. In addition to that, the YAML strategy provides grouping of environment-specific configurations, which you can selectively enable when executing the application.

The General YAML format

The configuration names correspond to the YAML configuration structure.

Example 2.3. YAML configuration

For example, the item documented as swarm.undertow.servers.KEY.default-host translates to the following YAML structure, substituting the KEY segment with the default identifier:

swarm:
  undertow:
    servers:
      default:
        default-host: <myhost>
Default YAML Files

If the original .war file with your application contains a file named project-defaults.yml, that file represents the defaults applied over the absolute defaults that WildFly Swarm provides.
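
For example, a minimal project-defaults.yml that only sets the bind address (the value is an illustration) would look like this:

swarm:
  bind:
    address: 127.0.0.1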

In addition to the project-defaults.yml file, you can provide specific configuration files using the -S <name> command-line option. The specified files are loaded, in the order you provided them, before project-defaults.yml. A name provided in the -S <name> argument specifies the project-<name>.yml file on your classpath.

Example 2.4. Specifying configuration files on the command line

Consider the following application execution:

$ java -jar myapp-swarm.jar -Stesting -Scloud

The following YAML files are loaded, in this order. The first file containing a given configuration item takes precedence over others:

  1. project-testing.yml
  2. project-cloud.yml
  3. project-defaults.yml
Non-default YAML Files

If you want to reference a YAML file that exists outside of your application, use the -s <path> command-line option.

Both the -s <path> and -S <name> command-line options can be used at the same time, but files specified using the -s <path> option take precedence over YAML files contained in your application.

Example 2.5. Specifying configuration files inside and outside of the application

Consider the following application execution:

$ java -jar myapp-swarm.jar -s/home/app/openshift.yml -Scloud -Stesting

The following YAML files are loaded, in this order:

  1. /home/app/openshift.yml
  2. project-cloud.yml
  3. project-testing.yml
  4. project-defaults.yml

The same order of preference is applied even if you invoke the application as follows:

$ java -jar myapp-swarm.jar -Scloud -Stesting -s/home/app/openshift.yml

Chapter 3. Packaging your application

This section contains information about packaging your WildFly Swarm–based application for deployment and execution.

3.1. Packaging Types

When using WildFly Swarm, there are the following ways to package your runtime and application, depending on how you intend to use and deploy it:

3.1.1. Uberjar

An uberjar is a single Java .jar file that includes everything you need to execute your application: both the runtime components you have selected (you can think of them as the application server) and the application components (your .war file).

An uberjar is useful for many continuous integration and continuous deployment (CI/CD) pipeline styles, in which a single executable binary artifact is produced and moved through the testing, validation, and production environments in your organization.

The names of the uberjars that WildFly Swarm produces include the name of your application and the -swarm.jar suffix.

An uberjar can be executed like any executable JAR:

$ java -jar myapp-swarm.jar

3.2. Creating an Uberjar

One method of packaging an application for execution with WildFly Swarm is as an uberjar.

Prerequisites

  • A Maven-based application with a pom.xml file.

Procedure

  1. Add the wildfly-swarm-plugin to your pom.xml in a <plugin> block, with an <execution> specifying the package goal.

    <plugins>
      <plugin>
        <groupId>org.wildfly.swarm</groupId>
        <artifactId>wildfly-swarm-plugin</artifactId>
        <version>${version.wildfly-swarm}</version>
        <executions>
          <execution>
            <id>package</id>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  2. Perform a normal Maven build:

    $ mvn package
  3. Execute the resulting uberjar:

    $ java -jar ./target/myapp-swarm.jar

Chapter 4. Missions

Missions are working applications that showcase different fundamental pieces of building cloud native applications and services, such as:

  • Creating REST APIs
  • Interoperating with a database
  • Implementing the Health Check pattern

Missions can be used as:

  • A proof of technology demonstration.
  • A teaching tool, or even a sandbox for understanding how to develop applications for your project.
  • A starting point that you can update or extend for your own use case.

A booster is the implementation of a mission in a specific runtime. Boosters are preconfigured, functioning applications based on a mission that demonstrate a fundamental aspect of modern application development running in an environment similar to production.

Note

Each mission has different boosters that show how to implement the same mission in different runtimes. For example, the REST API Level 0 mission has a Spring Boot booster, an Eclipse Vert.x booster, and a WildFly Swarm booster.

4.1. REST API Level 0 Mission - WildFly Swarm Booster

Mission proficiency level: Foundational.

The REST API Level 0 Mission provides a basic example of mapping business operations to a remote procedure call endpoint over HTTP using a REST framework. This corresponds to Level 0 in the Richardson Maturity Model. Creating an HTTP endpoint using REST and its underlying principles to define your API enables you to quickly prototype and design your API in a flexible manner. More background information on REST is available in Section 4.1.6, “REST Resources”.

This is an introduction to the mechanics of interacting with a remote service using the HTTP protocol. Specifically, this booster is an application that allows a user to:

  • Execute an HTTP GET request on the api/greeting endpoint.
  • Receive a response in JSON format with a payload consisting of the Hello, World! String.
  • Execute an HTTP GET request on the api/greeting endpoint while passing in a String argument. This uses the name request parameter in the query string.
  • Receive a response in JSON format with a payload of Hello, $name! with $name replaced by the value of the name parameter passed into the request.
Note

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, follow the instructions in the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Red Hat OpenShift Application Runtimes guide.

Table 4.1. Design Tradeoffs

ProsCons
  • Fast prototyping
  • Flexible API Design
  • HTTP endpoints allow clients to be language agnostic
  • As an application or service matures, the REST API Level 0 approach may not scale well to properly support a clean API design or use cases involving database interactions. Any operations involving shared, mutable state must be integrated with an appropriate backing datastore. All requests handled by an API designed in this manner will be scoped only to the container servicing the request. Therefore there is no guarantee that subsequent requests will be served by the same container.

4.1.1. Building and Deploying the REST API Level 0 Booster to OpenShift Online

You have two options for executing the REST API Level 0 booster on OpenShift Online:

There is no functional difference between these methods; choose the one you prefer.

4.1.1.1. Deploying the Booster Using developers.redhat.com/launch

Prerequisites

Procedure

  • Navigate to the OpenShift Online URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.1.1.2. Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites

Procedure

  1. Navigate to the OpenShift Online URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login … command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your OpenShift Online account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.1.1.3. Deploying the REST API Level 0 Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.1.2. Building and Deploying the REST API Level 0 Booster to Single-node OpenShift Cluster

You have two options for executing the REST API Level 0 booster locally on Single-node OpenShift Cluster:

There is no functional difference between these methods; choose the one you prefer.

4.1.2.1. Getting the Fabric8 Launcher Tool URL and Credentials on Single-node OpenShift Cluster

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. These data are provided by the Single-node OpenShift Cluster binary when you start it.

Prerequisites

Procedure

  1. Navigate to the console where you started Single-node OpenShift Cluster.
  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup

    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin

4.1.2.2. Deploying the Booster Using the Fabric8 Launcher Tool

Prerequisites

Procedure

  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.1.2.3. Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites

Procedure

  1. Navigate to the Single-node OpenShift Cluster URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login … command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your Single-node OpenShift Cluster account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.1.2.4. Deploying the REST API Level 0 Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.1.3. Building and Deploying the REST API Level 0 Booster to OpenShift Container Platform

The process of building and deploying boosters to OpenShift Container Platform is similar to OpenShift Online:

Prerequisites

Procedure

4.1.4. Interacting with the Unmodified WildFly Swarm Booster

The booster provides a default HTTP endpoint that accepts GET requests.

  1. Use curl to execute a GET request against the booster. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello, World!"}
  2. Use curl to execute a GET request with the name URL parameter against the booster. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting?name=Sarah
    {"content":"Hello, Sarah!"}
Note

From a browser you can also use a form provided by the booster to perform these same interactions. The form is located at the root of the project http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME.

4.1.5. Running the REST API Level 0 Booster Integration Tests on WildFly Swarm

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,
  • execute the individual tests on that instance,
  • remove all instances of the application from the project when the testing is done.
Warning

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites

  • the oc client authenticated
  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

4.1.6. REST Resources

More background and related information on REST can be found here:

4.2. ConfigMap Mission - WildFly Swarm Booster

Mission proficiency level: Foundational.

The ConfigMap Mission provides a basic example of using a ConfigMap to externalize configuration. This mission shows you how to:

  • Set up and configure a ConfigMap.
  • Use the configuration provided by the ConfigMap within an application.
  • Deploy changes to the ConfigMap configuration of running applications.

About ConfigMap

ConfigMap is an object used by OpenShift to inject configuration data as simple key and value pairs into one or more Linux containers while keeping the containers agnostic of OpenShift. You can create a ConfigMap object in a variety of different ways, including using a YAML file, and inject it into the Linux container. You can find more information about ConfigMap in the OpenShift documentation.
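
For illustration, a ConfigMap similar to the one used by this booster can be described in a YAML manifest such as the following sketch; the key and message mirror the app-config.yml file shown later in this section:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app-config.yml: |-
    greeting:
      message: Hello %s from a ConfigMap!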

Why ConfigMap is Important

It is important for an application’s configuration to be externalized and separate from its code. This allows for the application’s configuration to change as it moves through different environments while leaving the code unchanged. This also keeps sensitive or internal information out of your codebase and version control. Many languages and application servers provide environment variables to support externalizing an application’s configuration. Microservices and Linux containers increase the complexity of this by adding pods, or groups of containers representing a deployment, and polyglot environments. ConfigMaps enable application configuration to be externalized and used in individual Linux containers and pods in a language agnostic way. ConfigMaps also allow sets of configuration data to be easily grouped and scaled, which enables you to configure an arbitrarily large number of environments beyond the basic Dev, Stage, and Production.

Note

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, follow the instructions in the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Red Hat OpenShift Application Runtimes guide.

Table 4.2. Design Tradeoffs

ProsCons
  • Configuration is separate from deployments
  • Can be updated independently
  • Can be shared across services
  • Configuration is separate from deployments
  • Has to be maintained separately
  • Requires coordination beyond the scope of a service

4.2.1. Building and Deploying the ConfigMap Booster to OpenShift Online

You have two options for executing the ConfigMap booster on OpenShift Online:

There is no functional difference between these methods; choose the one you prefer.

4.2.1.1. Deploying the Booster Using developers.redhat.com/launch

Prerequisites

Procedure

  • Navigate to the OpenShift Online URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.2.1.2. Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites

Procedure

  1. Navigate to the OpenShift Online URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login … command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your OpenShift Online account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.2.1.3. Deploying the ConfigMap Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Deploy your ConfigMap configuration to OpenShift using app-config.yml in the root of the booster.

    $ oc create configmap app-config --from-file=app-config.yml
  4. Verify your ConfigMap configuration has been deployed.

    $ oc get configmap app-config -o yaml
    
    apiVersion: v1
    data:
      app-config.yml: |-
        greeting:
          message: Hello %s from a ConfigMap!
    ...
  5. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift -DskipTests

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  6. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                                       READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  7. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.2.2. Building and Deploying the ConfigMap Booster to Single-node OpenShift Cluster

You have two options for executing the ConfigMap booster locally on Single-node OpenShift Cluster:

There is no functional difference between these methods; choose the one you prefer.

4.2.2.1. Getting the Fabric8 Launcher Tool URL and Credentials on Single-node OpenShift Cluster

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. These data are provided by the Single-node OpenShift Cluster binary when you start it.

Prerequisites

Procedure

  1. Navigate to the console where you started Single-node OpenShift Cluster.
  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup

    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin

4.2.2.2. Deploying the Booster Using the Fabric8 Launcher Tool

Prerequisites

Procedure

  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.2.2.3. Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites

Procedure

  1. Navigate to the Single-node OpenShift Cluster URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login … command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your Single-node OpenShift Cluster account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.2.2.4. Deploying the ConfigMap Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Deploy your ConfigMap configuration to OpenShift using app-config.yml in the root of the booster.

    $ oc create configmap app-config --from-file=app-config.yml
  4. Verify your ConfigMap configuration has been deployed.

    $ oc get configmap app-config -o yaml
    
    apiVersion: v1
    data:
      app-config.yml: |-
        greeting:
          message: Hello %s from a ConfigMap!
    ...
  5. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift -DskipTests

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  6. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                                       READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  7. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.2.3. Building and Deploying the ConfigMap Booster to OpenShift Container Platform

The process of building and deploying boosters to OpenShift Container Platform is similar to OpenShift Online:

Prerequisites

Procedure

4.2.4. Interacting with the Unmodified WildFly Swarm Booster

The booster provides a default HTTP endpoint that accepts GET requests.

  1. Use curl to execute a GET request against the booster. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello World from a ConfigMap!"}
  2. Update the deployed ConfigMap configuration.

    $ oc edit configmap app-config

    Change the value for the greeting.message key to Bonjour %s from a ConfigMap! and save the file. After you save this, the changes will be propagated to your OpenShift instance.

  3. Roll out the new version of your application so the ConfigMap configuration changes are picked up.

    $ oc rollout latest dc/MY_APP_NAME
  4. Check the status of your booster and ensure your new pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa       1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build   0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started.

  5. Execute a GET request using curl against the booster with the updated ConfigMap configuration to see your updated greeting. You can also do this from your browser using the web form provided by the application.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Bonjour World from a ConfigMap!"}

4.2.5. Running the ConfigMap Booster Integration Tests on WildFly Swarm

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,
  • execute the individual tests on that instance,
  • remove all instances of the application from the project when the testing is done.
Warning

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites

  • the oc client authenticated
  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

4.2.6. ConfigMap Resources

More background and related information on ConfigMap can be found here:

4.3. Relational Database Backend Mission - WildFly Swarm Booster

Important

This booster is not currently available on OpenShift Online Starter. You can still run it using a Single-node OpenShift Cluster. You can also use a manual workflow to deploy this booster to OpenShift Online Pro and OpenShift Container Platform.

Mission proficiency level: Foundational.

The Relational Database Backend booster expands on the REST API Level 0 booster to provide a basic example of performing create, read, update and delete (CRUD) operations on a PostgreSQL database using a simple HTTP API. CRUD operations are the four basic functions of persistent storage, widely used when developing an HTTP API dealing with a database.

The booster also demonstrates the ability of the HTTP application to locate and connect to a database in OpenShift. Each runtime determines in an opinionated manner how to implement the connectivity solution that is best suited in the given case. The runtime can choose between using JDBC, JPA, or access ORM APIs directly.

The booster application exposes an HTTP API, which provides endpoints that allow you to manipulate data by performing CRUD operations over HTTP. The CRUD operations are mapped to HTTP verbs. The API uses JSON formatting to receive requests and return responses to the user. The user can also use a UI provided by the booster to interact with the application. Specifically, this booster provides an application that allows you to:

  • Navigate to the application web interface in your browser. This exposes a simple website allowing you to perform CRUD operations on the data in the my_data database.
  • Execute an HTTP GET request on the api/fruits endpoint.
  • Receive a response formatted as a JSON array containing the list of all fruits in the database.
  • Execute an HTTP GET request on the api/fruits/* endpoint while passing in a valid item ID as an argument.
  • Receive a response in JSON format containing the name of the fruit with the given ID. If no item matches the specified ID, the call results in an HTTP error 404.
  • Execute an HTTP POST request on the api/fruits endpoint passing in a valid name value to create a new entry in the database.
  • Execute an HTTP PUT request on the api/fruits/* endpoint passing in a valid ID and a name as an argument. This updates the name of the item with the given ID to match the name specified in your request.
  • Execute an HTTP DELETE request on the api/fruits/* endpoint, passing in a valid ID as an argument. This removes the item with the specified ID from the database and returns an HTTP code 204 (No Content) as a response. If you pass in an invalid ID, the call results in an HTTP error 404.

This booster also contains a set of automated integration tests that can be used to verify that the application is fully integrated with the database.

This booster does not showcase a fully matured RESTful model (level 3), but it does use compatible HTTP verbs and status, following the recommended HTTP API practices.

Note

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, follow the instructions in the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Red Hat OpenShift Application Runtimes guide.

Table 4.3. Design Tradeoffs

ProsCons
  • Each runtime decides how the database interactions are implemented: one can use JDBC while another uses JPA or accesses ORM APIs directly, whichever suits it best.
  • Each runtime decides how the schema is going to be created.
  • The example PostgreSQL database provided with the Relational Database Backend mission is not backed up with persistent storage. Changes to the database are lost if you stop or redeploy the database pod. To use an external database with your mission’s pod in order to preserve changes, see the Integrating External Services chapter of the OpenShift Documentation. It is also possible to set up persistent storage with database containers on OpenShift. For more details about using persistent storage with OpenShift and containers, see the Persistent Storage, Managing Volumes and Persistent Volumes chapters of the OpenShift Documentation.

4.3.1. Building and Deploying the Relational Database Backend Booster to OpenShift Online

You have two options for executing the Relational Database Backend booster on OpenShift Online:

There is no functional difference between these methods; choose the one you prefer.

4.3.1.1. Deploying the Booster Using developers.redhat.com/launch

Prerequisites

Procedure

  • Navigate to the OpenShift Online URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.3.1.2. Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites

Procedure

  1. Navigate to the OpenShift Online URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login … command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your OpenShift Online account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.3.1.3. Deploying the Relational Database Backend Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Deploy the PostgreSQL database to OpenShift.

    $ oc new-app -e POSTGRESQL_USER=luke -ePOSTGRESQL_PASSWORD=secret -ePOSTGRESQL_DATABASE=my_data openshift/postgresql-92-centos7 --name=my-database
  4. Check the status of your database and ensure the pod is running.

    $ oc get pods -w
    my-database-1-aaaaa   1/1       Running   0         45s
    my-database-1-deploy   0/1       Completed   0         53s

    Your my-database-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  5. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  6. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa       1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build   0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  7. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                     PATH      SERVICES             PORT      TERMINATION
    MY_APP_NAME   MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME   8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.3.2. Building and Deploying the Relational Database Backend Booster to Single-node OpenShift Cluster

You have two options for executing the Relational Database Backend booster locally on Single-node OpenShift Cluster:

There is no functional difference between these methods; choose the one you prefer.

4.3.2.1. Getting the Fabric8 Launcher Tool URL and Credentials on Single-node OpenShift Cluster

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. These data are provided by the Single-node OpenShift Cluster binary when you start it.

Prerequisites

Procedure

  1. Navigate to the console where you started Single-node OpenShift Cluster.
  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup

    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin

4.3.2.2. Deploying the Booster Using the Fabric8 Launcher Tool

Prerequisites

Procedure

  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.3.2.3. Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites

Procedure

  1. Navigate to the Single-node OpenShift Cluster URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login … command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your Single-node OpenShift Cluster account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.3.2.4. Deploying the Relational Database Backend Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Deploy the PostgreSQL database to OpenShift.

    $ oc new-app -e POSTGRESQL_USER=luke -ePOSTGRESQL_PASSWORD=secret -ePOSTGRESQL_DATABASE=my_data openshift/postgresql-92-centos7 --name=my-database
  4. Check the status of your database and ensure the pod is running.

    $ oc get pods -w
    my-database-1-aaaaa   1/1       Running   0         45s
    my-database-1-deploy   0/1       Completed   0         53s

    Your my-database-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  5. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  6. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa       1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build   0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running and should be indicated as ready once it is fully deployed and started.

  7. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                     PATH      SERVICES             PORT      TERMINATION
    MY_APP_NAME   MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME   8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.3.3. Building and Deploying the Relational Database Backend Booster to OpenShift Container Platform

The process of building and deploying boosters to OpenShift Container Platform is similar to OpenShift Online:

Prerequisites

Procedure

4.3.4. Interacting with the Application API

  1. Once the application is running, you can access it using the application URL. To obtain the URL, execute the following command:

    $ oc get route MY_APP_NAME
    NAME                 HOST/PORT                                         PATH      SERVICES             PORT      TERMINATION
    MY_APP_NAME           MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME              MY_APP_NAME           8080
  2. To access the web interface of the database application, navigate to the application URL in your browser:

    http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME

    Alternatively, you can make requests directly on the /api/fruits/* endpoint using curl:

    List all entries in the database:

    curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits

    [ {
      "id" : 1,
      "name" : "Cherry"
    }, {
      "id" : 2,
      "name" : "Apple"
    }, {
      "id" : 3,
      "name" : "Banana"
    } ]

    Retrieve an entry with a specific ID:

    curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits/3

    {
      "id" : 3,
      "name" : "Banana"
    }

    Create a new entry:

    curl -H "Content-Type: application/json" -X POST -d '{"name":"pear"}'  http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits

    {
      "id" : 4,
      "name" : "pear"
    }

    Update an entry:

    curl -H "Content-Type: application/json" -X PUT -d '{"name":"pineapple"}'  http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits/1

    {
      "id" : 1,
      "name" : "pineapple"
    }

    Delete an entry:

    curl -X DELETE http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/fruits/1

If you receive an HTTP 503 error code in response to these commands, the application is not ready yet.
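
For reference, the shape of a service behind these endpoints can be sketched as a JAX-RS resource backed by a JPA entity. The following is a minimal, hypothetical sketch only, not the booster's actual source; the Fruit entity and the FruitResource class name are illustrative assumptions.

// Hypothetical sketch of a JAX-RS + JPA resource backing the /api/fruits
// endpoints exercised by the curl commands above. Not the booster's source.
package org.example.fruits;

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;

@Entity
class Fruit {
    @Id
    @GeneratedValue
    public Integer id;
    public String name;
}

@Path("/fruits")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class FruitResource {

    @PersistenceContext
    EntityManager em;

    @GET
    public List<Fruit> listAll() {
        // Backs "curl .../api/fruits"
        return em.createQuery("SELECT f FROM Fruit f", Fruit.class).getResultList();
    }

    @GET
    @Path("/{id}")
    public Fruit get(@PathParam("id") Integer id) {
        // Backs "curl .../api/fruits/3"
        return em.find(Fruit.class, id);
    }

    @POST
    @Transactional
    public Fruit create(Fruit fruit) {
        // Backs the POST example; the database assigns the new id.
        em.persist(fruit);
        return fruit;
    }

    @PUT
    @Path("/{id}")
    @Transactional
    public Fruit update(@PathParam("id") Integer id, Fruit update) {
        // Backs the PUT example.
        Fruit fruit = em.find(Fruit.class, id);
        fruit.name = update.name;
        return fruit;
    }

    @DELETE
    @Path("/{id}")
    @Transactional
    public void delete(@PathParam("id") Integer id) {
        // Backs the DELETE example.
        em.remove(em.find(Fruit.class, id));
    }
}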

4.3.5. Running the Relational Database Backend Booster Integration Tests on WildFly Swarm

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,
  • execute the individual tests on that instance,
  • remove all instances of the application from the project when the testing is done.
Warning

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites

  • the oc client authenticated
  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

4.3.6. Relational Database Resources

More background and related information on running relational databases in OpenShift, CRUD, HTTP API and REST can be found here:

4.4. Health Check Mission - WildFly Swarm Booster

Mission proficiency level: Foundational.

When you deploy an application, it is important to know if it is available and if it can start handling incoming requests. Implementing the health check pattern allows you to monitor the health of an application, that is, whether it is available and able to service requests.

In order to understand the health check pattern, you need to first understand the following concepts:

Liveness
Liveness defines whether an application is running or not. Sometimes a running application moves into an unresponsive or stopped state and needs to be restarted. Checking for liveness helps determine whether or not an application needs to be restarted.
Readiness
Readiness defines whether a running application can service requests. Sometimes a running application moves into an error or broken state where it can no longer service requests. Checking readiness helps determine whether or not requests should continue to be routed to that application.
Fail-over
Fail-over enables failures in servicing requests to be handled gracefully. If an application fails to service a request, that request and future requests can then fail-over or be routed to another application, which is usually a redundant copy of that same application.
Resilience and Stability
Resilience and Stability enable failures in servicing requests to be handled gracefully. If an application fails to service a request due to connection loss, in a resilient system that request can be retried after the connection is re-established.
Probe
A probe is a Kubernetes action that periodically performs diagnostics on a running container.

The purpose of this use case is to demonstrate the health check pattern through the use of probing. Probing is used to report the liveness and readiness of an application. In this use case, you configure an application with an HTTP health endpoint against which the probes issue HTTP requests. If the container is alive, according to the liveness probe on the health HTTP endpoint, the management platform receives a 200 return code and no further action is required. If the health HTTP endpoint does not return a response, for example if the JVM is no longer running or a thread is blocked, then the application is not considered alive according to the liveness probe. In that case, the platform kills the pod corresponding to that application and creates a new pod to restart the application.

This use case also allows you to demonstrate and use a readiness probe. In cases where the application is running but is unable to handle requests, such as when the application returns an HTTP 503 response code during restart, it is not considered ready according to the readiness probe. If the application is not considered ready, requests are not routed to it until it is considered ready again.
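
As an illustration of what such an HTTP health endpoint can look like, the following is a minimal, hypothetical JAX-RS sketch. It is not the booster's actual source; the class name, path, and the markUnhealthy() helper are illustrative assumptions.

// Hypothetical JAX-RS health endpoint of the kind a liveness or readiness
// probe could poll over HTTP. Not the booster's actual source.
package org.example.health;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/health")
public class HealthEndpoint {

    // Flipped to false to simulate a failure, for example by a "stop" endpoint.
    private static volatile boolean healthy = true;

    @GET
    public Response check() {
        // A 200 response tells the probe the container is alive and ready.
        // A 503 response causes the readiness probe to stop routing traffic
        // and, for a liveness probe, eventually triggers a pod restart.
        return healthy
                ? Response.ok("{\"status\":\"UP\"}").build()
                : Response.status(Response.Status.SERVICE_UNAVAILABLE).build();
    }

    public static void markUnhealthy() {
        healthy = false;
    }
}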

Note

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, follow the instructions in the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Red Hat OpenShift Application Runtimes guide.

4.4.1. Building and Deploying the Health Check Booster to OpenShift Online

You have two options for executing the Health Check booster on OpenShift Online:

There is no functional difference between these methods; choose the one you prefer.

4.4.1.1. Deploying the Booster Using developers.redhat.com/launch

Prerequisites

Procedure

  • Navigate to the OpenShift Online URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.4.1.2. Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites

Procedure

  1. Navigate to the OpenShift Online URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your OpenShift Online account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.4.1.3. Deploying the Health Check Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started. You should also wait for your pod to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.4.2. Building and Deploying the Health Check Booster to Single-node OpenShift Cluster

You have two options for executing the Health Check booster locally on Single-node OpenShift Cluster:

There is no functional difference between these methods; choose the one you prefer.

4.4.2.1. Getting the Fabric8 Launcher Tool URL and Credentials on Single-node OpenShift Cluster

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided by the Single-node OpenShift Cluster binary when you start it.

Prerequisites

Procedure

  1. Navigate to the console where you started Single-node OpenShift Cluster.
  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup

    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin

4.4.2.2. Deploying the Booster Using the Fabric8 Launcher Tool

Prerequisites

Procedure

  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.4.2.3. Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites

Procedure

  1. Navigate to the Single-node OpenShift Cluster URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your Single-node OpenShift Cluster account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.4.2.4. Deploying the Health Check Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-1-aaaaa               1/1       Running     0          58s
    MY_APP_NAME-s2i-1-build           0/1       Completed   0          2m

    Your MY_APP_NAME-1-aaaaa pod should have a status of Running once it is fully deployed and started. You should also wait for your pod to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME         MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME      MY_APP_NAME      8080

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.4.3. Building and Deploying the Health Check Booster to OpenShift Container Platform

The process of building and deploying boosters to OpenShift Container Platform is similar to the process for OpenShift Online:

Prerequisites

Procedure

4.4.4. Interacting with the Unmodified WildFly Swarm Booster

Once you have the WildFly Swarm booster deployed, you have a service called MY_APP_NAME running that exposes two REST endpoints: /api/greeting, which returns a name as a String, and /api/stop, which forces the service to become unresponsive as a means to simulate a failure.

The following steps demonstrate the basic command-line interactions you can use to verify the availability of the service and to simulate a failure that triggers the OpenShift self-healing capabilities on the service. Alternatively, you can use the service web interface to perform these steps (see step 4).

  1. Use curl to execute a GET request against the MY_APP_NAME service. You can also use a browser to do this.

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello, World!"}
  2. To simulate an internal service failure and trigger the OpenShift self-healing capabilities, invoke the /api/stop endpoint and then verify the availability of the /api/greeting endpoint shortly afterwards. It should return an HTTP status 503:

    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/stop
    
    (followed by)
    
    $ curl http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME/api/greeting
    
    <html>
      <head><title>Error</title></head>
      <body>503 - Service Unavailable</body>
    </html>
  3. While performing these operations, you can use the OpenShift console or the oc client tools to watch the self-healing capabilities in action. Use oc get pods -w to continuously watch the service state from another terminal window. You should see that the number of pods in a READY state moves to zero (0/1) and, after a while (approximately 30 seconds to 1 minute), back up to one (1/1). In addition, the RESTARTS count increases every time you kill the service:

    $ oc get pods -w
    NAME                           READY     STATUS    RESTARTS   AGE
    MY_APP_NAME-1-26iy7   0/1       Running   5          18m
    MY_APP_NAME-1-26iy7   1/1       Running   5         19m
  4. As an alternative to interacting through the terminal window, you can use the web interface provided by the service to invoke the different methods and watch the service move through the life cycle phases:

    http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME
  5. Optional: Use the web console to view the log output generated by the application at each stage of the self-healing process.

    1. Navigate to your project.
    2. On the sidebar, click on Monitoring.
    3. In the upper right-hand corner of the screen, click on Events to display the log messages.
    4. Optional: Click View Details to display a detailed view of the Event log.

The health check application generates the following messages:

Message: Unhealthy
Status: Readiness probe failed. This message is expected and indicates that the simulated failure of the /api/greeting endpoint has been detected and the self-healing process has started.

Message: Killing
Status: The unavailable Docker container running the service is being killed before being re-created.

Message: Pulling
Status: The latest version of the Docker image is being downloaded to re-create the container.

Message: Pulled
Status: The Docker image was downloaded successfully.

Message: Created
Status: The Docker container has been created successfully.

Message: Started
Status: The Docker container is ready to handle requests.

4.4.5. Running the Health Check Booster Integration Tests on WildFly Swarm

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,
  • execute the individual tests on that instance,
  • remove all instances of the application from the project when the testing is done.
Warning

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites

  • the oc client authenticated
  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

4.4.6. Health Check Resources

More background and related information on health checking can be found here:

4.5. Circuit Breaker Mission - WildFly Swarm Booster

Important

This booster is not currently available on OpenShift Online Starter. You can still run it using a Single-node OpenShift Cluster. You can also use a manual workflow to deploy this booster to OpenShift Online Pro and OpenShift Container Platform.

Mission proficiency level: Foundational.

The Circuit Breaker Mission demonstrates a generic pattern for reporting the failure of a service and then limiting access to the failed service until it becomes available to handle requests. This helps prevent cascading failure in other services that depend on the failed services for functionality.

This mission shows you how to implement a Circuit Breaker and Fallback pattern in your services.

4.5.1. About Circuit Breaker

The Circuit Breaker is a pattern intended to mitigate the impact of network failure and high latency on service architectures where services synchronously invoke other services. In such cases, if one of the services becomes unavailable due to network failure or incurs unusually high latency values due to overwhelming traffic, other services attempting to call its endpoint may end up exhausting critical resources in an attempt to reach it, rendering themselves unusable. This condition is also known as cascading failure and can render the entire microservice architecture unusable.

Essentially, the Circuit Breaker acts as a proxy between a protected function and a remote function, and it monitors the remote calls for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error or a predefined fallback response, without the protected call being made at all. The Circuit Breaker usually also contains an error reporting mechanism that notifies you when the Circuit Breaker trips.

4.5.2. Why Circuit Breaker is Important

In an architecture where multiple services depend on each other for functionality, a failure in one service can rapidly propagate to its dependent services, causing the entire architecture to collapse. Implementing a Circuit Breaker pattern helps prevent this. With the Circuit Breaker pattern implemented, a service client invokes a remote service endpoint via a proxy at regular intervals. If the calls to the remote service endpoint fail repeatedly and consistently, the Circuit Breaker trips, making all calls to the service fail immediately for a set timeout period and return a predefined fallback response. When the timeout period expires, a limited number of test calls are allowed to pass through to the remote service to determine whether it has healed or remains unavailable. If the test calls fail, the Circuit Breaker keeps the service unavailable and keeps returning the fallback responses to incoming calls. If the test calls succeed, the Circuit Breaker closes, fully enabling traffic to reach the remote service again.
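
The following sketch illustrates this behavior using a Hystrix-style command, in the spirit of the greeting service described later in this mission. It is a minimal sketch under stated assumptions, not the booster's actual source; the NameCommand class, the name-service URL parameter, and the chosen timeout and threshold values are illustrative.

// Hypothetical Hystrix command protecting a remote call with a fallback.
// Not the booster's actual source.
package org.example.cb;

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;

public class NameCommand extends HystrixCommand<String> {

    private final String nameServiceUrl;

    public NameCommand(String nameServiceUrl) {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("NameService"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        // Calls slower than this count as failures.
                        .withExecutionTimeoutInMilliseconds(1000)
                        // Trip the breaker after enough failures in the window.
                        .withCircuitBreakerRequestVolumeThreshold(5)
                        // How long the breaker stays open before test calls.
                        .withCircuitBreakerSleepWindowInMilliseconds(5000)));
        this.nameServiceUrl = nameServiceUrl;
    }

    @Override
    protected String run() throws Exception {
        // The protected call: only executed while the circuit is closed,
        // or as a test call while it is half-open.
        HttpURLConnection conn =
                (HttpURLConnection) new URL(nameServiceUrl).openConnection();
        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in).useDelimiter("\\A")) {
            return scanner.next();
        }
    }

    @Override
    protected String getFallback() {
        // Returned immediately when the call fails, times out,
        // or the circuit is open.
        return "Fallback";
    }
}

A caller would invoke it with, for example, new NameCommand("http://MY_APP_NAME-name:8080/api/name").execute(). The timeout and threshold values relate directly to the trade-offs summarized in Table 4.4 below.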

Note

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, follow the instructions in the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Red Hat OpenShift Application Runtimes guide.

Table 4.4. Design Tradeoffs

Pros

  • Enables a service to handle the failure of other services it invokes.

Cons

  • Optimizing the timeout values can be challenging.

    • Larger-than-necessary timeout values may generate excessive latency.
    • Smaller-than-necessary timeout values may introduce false positives.

4.5.3. Building and Deploying the Circuit Breaker Booster to OpenShift Online

You have two options for executing the Circuit Breaker booster on OpenShift Online:

There is no functional difference between these methods; choose the one you prefer.

4.5.3.1. Deploying the Booster Using developers.redhat.com/launch

Prerequisites

Procedure

  • Navigate to the OpenShift Online URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.5.3.2. Authenticating the oc CLI Client

To work with boosters on OpenShift Online using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Online web interface.

Prerequisites

Procedure

  1. Navigate to the OpenShift Online URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your OpenShift Online account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.5.3.3. Deploying the Circuit Breaker Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-greeting-1-aaaaa     1/1       Running   0           17s
    MY_APP_NAME-greeting-1-deploy    0/1       Completed 0           22s
    MY_APP_NAME-name-1-aaaaa         1/1       Running   0           14s
    MY_APP_NAME-name-1-deploy        0/1       Completed 0           28s

    Both the MY_APP_NAME-greeting-1-aaaaa and MY_APP_NAME-name-1-aaaaa pods should have a status of Running once they are fully deployed and started. You should also wait for your pods to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-greeting-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME-greeting   MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME            MY_APP_NAME-greeting   8080                    None
    MY_APP_NAME-name       MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME            MY_APP_NAME-name       8080                    None

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.5.4. Building and Deploying the Circuit Breaker Booster to Single-node OpenShift Cluster

You have two options for executing the Circuit Breaker booster locally on Single-node OpenShift Cluster:

There is no functional difference between these methods; choose the one you prefer.

4.5.4.1. Getting the Fabric8 Launcher Tool URL and Credentials on Single-node OpenShift Cluster

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided by the Single-node OpenShift Cluster binary when you start it.

Prerequisites

Procedure

  1. Navigate to the console where you started Single-node OpenShift Cluster.
  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup

    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin

4.5.4.2. Deploying the Booster Using the Fabric8 Launcher Tool

Prerequisites

Procedure

  • Navigate to the Single-node OpenShift Cluster URL in a browser and log in.
  • Follow on-screen instructions to create and launch your booster in WildFly Swarm.

4.5.4.3. Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites

Procedure

  1. Navigate to the Single-node OpenShift Cluster URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your Single-node OpenShift Cluster account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.5.4.4. Deploying the Circuit Breaker Booster using the oc CLI Client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Use Maven to start the deployment to OpenShift.

    $ mvn clean fabric8:deploy -Popenshift

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift and to start the pod.

  4. Check the status of your booster and ensure your pod is running.

    $ oc get pods -w
    NAME                             READY     STATUS      RESTARTS   AGE
    MY_APP_NAME-greeting-1-aaaaa     1/1       Running   0           17s
    MY_APP_NAME-greeting-1-deploy    0/1       Completed 0           22s
    MY_APP_NAME-name-1-aaaaa         1/1       Running   0           14s
    MY_APP_NAME-name-1-deploy        0/1       Completed 0           28s

    Both the MY_APP_NAME-greeting-1-aaaaa and MY_APP_NAME-name-1-aaaaa pods should have a status of Running once they are fully deployed and started. You should also wait for your pods to be ready before proceeding, which is shown in the READY column. For example, MY_APP_NAME-greeting-1-aaaaa is ready when the READY column is 1/1.

  5. Once your booster is deployed and started, determine its route.

    Example Route Information

    $ oc get routes
    NAME                 HOST/PORT                                                     PATH      SERVICES        PORT      TERMINATION
    MY_APP_NAME-greeting   MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME            MY_APP_NAME-greeting   8080                    None
    MY_APP_NAME-name       MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME            MY_APP_NAME-name       8080                    None

    The route information of a pod gives you the base URL which you use to access it. In the example above, you would use http://MY_APP_NAME-MY_PROJECT_NAME.OPENSHIFT_HOSTNAME as the base URL to access the application.

4.5.5. Building and Deploying the Circuit Breaker Booster to OpenShift Container Platform

The process of building and deploying boosters to OpenShift Container Platform is similar to the process for OpenShift Online:

Prerequisites

Procedure

4.5.6. Interacting with the Unmodified WildFly Swarm Circuit Breaker Booster

Once you have the WildFly Swarm booster deployed, you have the following services running:

MY_APP_NAME-name

Exposes the following endpoints:

  • the /api/name endpoint, which returns a name when this service is working, and an error when this service is set up to demonstrate failure.
  • the /api/state endpoint, which controls the behavior of the /api/name endpoint and determines whether the service works correctly or demonstrates failure.
MY_APP_NAME-greeting

Exposes the following endpoints:

  • the /api/greeting endpoint that you can call to get a personalized greeting response.

    When you call the /api/greeting endpoint, it issues a call against the /api/name endpoint of the MY_APP_NAME-name service as part of processing your request. The call made against the /api/name endpoint is protected by the Circuit Breaker.

    If the remote endpoint is available, the name service responds with an HTTP code 200 (OK) and you receive the following greeting from the /api/greeting endpoint:

    {"content":"Hello, World!"}

    If the remote endpoint is unavailable, the name service responds with an HTTP code 500 (Internal server error) and you receive a predefined fallback response from the /api/greeting endpoint:

    {"content":"Hello, Fallback!"}
  • the /api/cb-state endpoint, which returns the state of the Circuit Breaker. The state can be:

    • open: the circuit breaker is preventing requests from reaching the failed service,
    • closed: the circuit breaker is allowing requests to reach the service.

The following steps demonstrate how to verify the availability of the service, simulate a failure and receive a fallback response.

  1. Use curl to execute a GET request against the MY_APP_NAME-greeting service. You can also use the Invoke button in the web interface to do this.

    $ curl http://MY_APP_NAME-greeting-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/greeting
    {"content":"Hello, World!"}
  2. To simulate the failure of the MY_APP_NAME-name service you can:

    • use the Toggle button in the web interface.
    • scale the number of replicas of the pod running the MY_APP_NAME-name service down to 0.
    • execute an HTTP PUT request against the /api/state endpoint of the MY_APP_NAME-name service to set its state to fail.

      $ curl -X PUT -H "Content-Type: application/json" -d '{"state": "fail"}' http://MY_APP_NAME-name-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/state
  3. Invoke the /api/greeting endpoint. When several requests on the /api/name endpoint fail:

    1. the Circuit Breaker opens,
    2. the state indicator in the web interface changes from CLOSED to OPEN,
    3. the Circuit Breaker issues a fallback response when you invoke the /api/greeting endpoint:

      $ curl http://MY_APP_NAME-greeting-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/greeting
      {"content":"Hello, Fallback!"}
  4. Restore the MY_APP_NAME-name service to availability. To do this you can:

    • use the Toggle button in the web interface.
    • scale the number of replicas of the pod running the MY_APP_NAME-name service back up to 1.
    • execute an HTTP PUT request against the /api/state endpoint of the MY_APP_NAME-name service to set its state back to ok.

      $ curl -X PUT -H "Content-Type: application/json" -d '{"state": "ok"}' http://MY_APP_NAME-name-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/state
  5. Invoke the /api/greeting endpoint again. When several requests on the /api/name endpoint succeed:

    1. the Circuit Breaker closes,
    2. the state indicator in the web interface changes from OPEN to CLOSED,
    3. the Circuit Breaker returns the Hello, World! greeting when you invoke the /api/greeting endpoint:

      $ curl http://MY_APP_NAME-greeting-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/greeting
      {"content":"Hello, World!"}

4.5.7. Running the Circuit Breaker Booster Integration Tests on WildFly Swarm

This booster includes a self-contained set of integration tests. When run inside an OpenShift project, the tests:

  • deploy a test instance of the application to the project,
  • execute the individual tests on that instance,
  • remove all instances of the application from the project when the testing is done.
Warning

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

Prerequisites

  • the oc client authenticated
  • an empty project in your OpenShift namespace

Procedure

Execute the following command to run the integration tests:

$ mvn clean verify -Popenshift,openshift-it

4.5.8. Using Hystrix Dashboard to Monitor the Circuit Breaker

Hystrix Dashboard lets you easily monitor the health of your services in real time by aggregating Hystrix metrics data from an event stream and displaying them on one screen. For more detail, see the Hystrix Dashboard wiki page.

Note

You must have the Circuit Breaker booster application deployed before proceeding with the steps below.

  1. Log in to your Single-node OpenShift Cluster.

    $ oc login OPENSHIFT_URL --token=MYTOKEN
  2. To access the Web console, use your browser to navigate to your Single-node OpenShift Cluster URL.
  3. Navigate to the project that contains your Circuit Breaker application.

    $ oc project MY_PROJECT_NAME
  4. Import the YAML template for the Hystrix Dashboard application. You can do this by clicking Add to Project, then selecting the Import YAML / JSON tab, and copying the contents of the YAML file into the text box. Alternatively, you can execute the following command.

    $ oc create -f https://raw.githubusercontent.com/snowdrop/openshift-templates/master/hystrix-dashboard/hystrix-dashboard.yml
  5. Click the Create button to create the Hystrix Dashboard application based on the template. Alternatively, you can execute the following command.

    $ oc new-app --template=hystrix-dashboard
  6. Wait for the pod containing Hystrix Dashboard to deploy.
  7. Obtain the route of your Hystrix Dashboard application.

    $ oc get route hystrix-dashboard
    NAME                HOST/PORT                                                    PATH      SERVICES            PORT      TERMINATION   WILDCARD
    hystrix-dashboard   hystrix-dashboard-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME                 hystrix-dashboard   <all>                   None
  8. To access the Dashboard, open the Dashboard application route URL in your browser. Alternatively, you can navigate to the Overview screen in the Web console and click the route URL in the header above the pod containing your Hystrix Dashboard application.
  9. To use the Dashboard to monitor the MY_APP_NAME-greeting service, replace the default event stream address with the following address and click the Monitor Stream button.

    http://MY_APP_NAME-greeting/hystrix.stream

4.5.9. Circuit Breaker Resources

Follow the links below for more background information on the design principles behind the Circuit Breaker pattern:

4.6. Secured Mission - WildFly Swarm Booster

Important

This booster is not currently available on OpenShift Online Starter. You can still run it using a Single-node OpenShift Cluster. You can also use a manual workflow to deploy this booster to OpenShift Online Pro and OpenShift Container Platform.

Mission proficiency level: Advanced.

The Secured booster expands on the REST API Level 0 booster by securing a REST endpoint using Red Hat SSO. Red Hat SSO implements the OpenID Connect protocol, which is an extension of the OAuth 2.0 specification, and uses it to issue access tokens that provide clients with various access rights to secured resources. Securing an application with SSO enables you to add security to your applications while centralizing the security configuration.

Important

While this mission comes with Red Hat SSO pre-configured for demonstration purposes, it does not explain its principles, usage, or configuration. Before using this mission, ensure that you are familiar with the basic concepts related to Red Hat SSO.

Note

To view the source code and README file of this booster, download and extract the ZIP file with the booster. To get the download link of the ZIP file, follow the instructions in the Creating and Deploying a Booster Using OpenShift Online chapter of the Getting Started with Red Hat OpenShift Application Runtimes guide.

4.6.1. Configuring Your Single-node OpenShift Cluster

You can use your Single-node OpenShift Cluster to run your secured booster, but you need to update it from the default configuration.

  1. Before you can use your Single-node OpenShift Cluster, you need to have it installed, configured, and running. You can find details on installing a Single-node OpenShift Cluster for your platform in Install and Configure the Fabric8 Launcher Tool.

    Note

    You only need to do this once for any of the secured booster missions. They can share the same RH SSO/Single-node OpenShift Cluster setup.

  2. The SSO booster currently only works with the CentOS base image. To start a Single-node OpenShift Cluster with that image, you need to delete the current Single-node OpenShift Cluster configuration and restart it with a new ISO.

    Important

    You must use a different boot ISO image than the default in order for RH SSO to start up. If you have already run the minishift command with a different memory setting and iso-url value, you need to stop it and completely delete the ~/.minishift directory before running the following startup sequence.

    $ minishift delete
    $ rm -r ~/.minishift
    $ minishift start --memory=6144 --iso-url=centos

4.6.2. Project Structure

The SSO booster project contains:

  • the sources for the Greeting service, which is the service we are going to secure
  • a template file (service.sso.yaml) to stand up the SSO server
  • the Keycloak adapter configuration to secure the service

4.6.3. Standing up Red Hat SSO

The service.sso.yaml file contains all the OpenShift configuration items needed to stand up a pre-configured Red Hat SSO server. The SSO server configuration has been simplified for the sake of this exercise and provides an out-of-the-box configuration with pre-configured users and security settings.

Important

It is not recommended to use this SSO configuration in production. Specifically, the simplifications made to the booster security configuration impact the ability to use it in a production environment.

Table 4.5. SSO Booster Simplifications

Change: The default configuration includes both public and private keys in the YAML configuration files.
Reason: We did this so that the end user can deploy the Red Hat SSO module and have it in a usable state without needing to know the internals or how to configure Red Hat SSO.
Recommendation: In production, it is recommended not to store any private key under source control; it should be added by the server administrator.

Change: The configured clients accept any callback URL.
Reason: To avoid having a custom configuration per runtime, we skip the verification of the callback URL as mandated by the OAuth 2.0 specification.
Recommendation: An application-specific callback URL should be provided with a valid domain name.

Change: Clients do not require SSL/TLS and the secured applications are not exposed over HTTPS.
Reason: The boosters are simplified by not requiring certificates generated for each runtime.
Recommendation: In production, a secured application should use HTTPS rather than plain HTTP.

Change: The token timeout has been increased to 10 minutes from the default of 1 minute.
Reason: This provides a better user experience when working with the command-line examples.
Recommendation: From a security perspective, this extends the window an attacker would have to guess the access token. It is recommended to keep this window short, as a shorter window makes it much harder for a potential attacker to guess the current token.

4.6.4. Red Hat SSO Realm Model

The master realm is used to secure this booster. There are two pre-configured application client definitions that provide a model for command line clients and the secured REST endpoint.

There are also two pre-configured users in the Red Hat SSO master realm that can be used to validate various authentication and authorization outcomes: admin and alice.

4.6.4.1. Red Hat SSO Users

The realm model for the secured boosters includes two users:

admin
The admin user has a password of admin and is the realm administrator. This user has full access to the Red Hat SSO administration console, but none of the role mappings that are required to access the secured endpoints. You can use this user to illustrate the behavior of an authenticated, but unauthorized user.
alice

The alice user has a password of password and is the canonical application user. This user will demonstrate successful authenticated and authorized access to the secured endpoints. An example representation of the role mappings is provided in this decoded JWT bearer token:

{
  "jti": "0073cfaa-7ed6-4326-ac07-c108d34b4f82",
  "exp": 1510162193,
  "nbf": 0,
  "iat": 1510161593,
  "iss": "https://secure-sso-sso.LOCAL_OPENSHIFT_HOSTNAME/auth/realms/master", 1
  "aud": "demoapp",
  "sub": "c0175ccb-0892-4b31-829f-dda873815fe8",
  "typ": "Bearer",
  "azp": "demoapp",
  "nonce": "90ff5d1a-ba44-45ae-a413-50b08bf4a242",
  "auth_time": 1510161591,
  "session_state": "98efb95a-b355-43d1-996b-0abcb1304352",
  "acr": "1",
  "client_session": "5962112c-2b19-461e-8aac-84ab512d2a01",
  "allowed-origins": [
    "*"
  ],
  "realm_access": {
    "roles": [ 2
      "booster-admin"
    ]
  },
  "resource_access": { 3
    "secured-booster-endpoint": {
      "roles": [
        "booster-admin" 4
      ]
    },
    "account": {
      "roles": [
        "manage-account",
        "view-profile"
      ]
    }
  },
  "name": "Alice InChains",
  "preferred_username": "alice", 5
  "given_name": "Alice",
  "family_name": "InChains",
  "email": "alice@keycloak.org"
}
1
The iss field corresponds to the Red Hat SSO realm instance URL that issues the token. This must be configured in the secured endpoint deployments in order for the token to be verified.
2
The roles object provides the roles that have been granted to the user at the global realm level. In this case alice has been granted the booster-admin role. We will see that the secured endpoint will look to the realm level for authorized roles.
3
The resource_access object contains resource specific role grants. Under this object you will find an object for each of the secured endpoints.
4
The resource_access.secured-booster-endpoint.roles object contains the roles granted to alice for the secured-booster-endpoint resource.
5
The preferred_username field provides the username that was used to generate the access token.

4.6.4.2. The Application Clients

The OAuth 2.0 specification allows you to define a role for application clients that access secured resources on behalf of resource owners. The master realm has the following application clients defined:

demoapp
This is a confidential type client with a client secret that is used to obtain an access token that contains grants for the alice user which enable alice to access the WildFly Swarm, Eclipse Vert.x and Spring Boot based REST booster deployments.
secured-booster-endpoint
The secured-booster-endpoint is a bearer-only type of client that requires a booster-admin role for accessing the associated resources, specifically the Greeting service.

4.6.4.3. SSO Adapter

The SSO adapter is the client-side component (the client of the SSO server) that enforces security on the web resources. In this specific case, it is the Greeting service.

In WildFly Swarm, the security configuration breaks down into two notable assets:

  • The web.xml configuration to enact the security for the service
  • The keycloak.json configuration for the Keycloak adapter

Enacting Security using web.xml

<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <security-constraint>
    <web-resource-collection>
      <url-pattern>/api/greeting</url-pattern> 1
    </web-resource-collection>
    <auth-constraint>
      <role-name>booster-admin</role-name> 2
    </auth-constraint>
  </security-constraint>

  <login-config>
    <auth-method>KEYCLOAK</auth-method> 3
  </login-config>

  <security-role>
    <role-name>booster-admin</role-name>
  </security-role>
</web-app>

1
The web context that is to be secured.
2
The role needed to access the endpoint.
3
Uses Keycloak as the security provider.

Enacting Security in Keycloak Adapter using keycloak.json

{
  "realm": "master", 1
  "resource": "secured-booster-endpoint", 2
  "realm-public-key": "...", 3
  "auth-server-url": "${sso.auth.server.url}", 4
  "ssl-required": "external",
  "disable-trust-manager": true,
  "bearer-only": true, 5
  "use-resource-role-mappings": true
}

1
The security realm to be used.
2
The actual Keycloak client configuration.
3
PEM format of the realm public key. You can obtain this from the administration console.
4
The address of the Red Hat SSO server (Interpolation at build time).
5
If enabled, the adapter does not attempt to authenticate users, but only verifies bearer tokens.

The web.xml file enables Keycloak and enforces protection of the Greeting service web resource endpoint. The keycloak.json file configures the security adapter to interact with Red Hat SSO.
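
Because the security constraint and the Keycloak adapter handle authentication and authorization before a request reaches the application code, the protected resource itself can stay simple. The following is a minimal, hypothetical sketch of such a resource, not the booster's actual source; the class name and response format are illustrative assumptions.

// Hypothetical JAX-RS resource behind the secured /api/greeting endpoint.
// Security is enforced by web.xml and the Keycloak adapter, not by this code.
package org.example.secured;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.SecurityContext;

@Path("/greeting")
public class GreetingResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String greeting(@Context SecurityContext security) {
        // By the time this method runs, the Keycloak adapter has already
        // verified the bearer token and the booster-admin role.
        String user = security.getUserPrincipal() != null
                ? security.getUserPrincipal().getName()
                : "World";
        return "{\"content\": \"Hello, " + user + "!\"}";
    }
}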

4.6.5. Building and deploying the Secured booster to Single-node OpenShift Cluster

4.6.5.1. Getting the Fabric8 Launcher Tool URL and Credentials on Single-node OpenShift Cluster

You need the Fabric8 Launcher tool URL and user credentials to create and deploy boosters on Single-node OpenShift Cluster. This information is provided by the Single-node OpenShift Cluster binary when you start it.

Prerequisites

Procedure

  1. Navigate to the console where you started Single-node OpenShift Cluster.
  2. Check the console output for the URL and user credentials you can use to access the running Fabric8 Launcher:

    Example Console Output from a Single-node OpenShift Cluster Startup

    ...
    -- Removing temporary directory ... OK
    -- Server Information ...
       OpenShift server started.
       The server is accessible via web console at:
           https://192.168.42.152:8443
    
       You are logged in as:
           User:     developer
           Password: developer
    
       To login as administrator:
           oc login -u system:admin

4.6.5.2. Creating the Secured Booster Using Fabric8 Launcher

Prerequisites

Procedure

  • Navigate to the Fabric8 Launcher URL in a browser and log in.
  • Follow the on-screen instructions to create your booster in WildFly Swarm. When asked about which deployment type, select I will build and run locally.
  • Follow on-screen instructions.

    When done, click the Download as ZIP file button and store the file on your hard drive.

4.6.5.3. Authenticating the oc CLI Client

To work with boosters on Single-node OpenShift Cluster using the oc command-line client, you need to authenticate the client using the token provided by the Single-node OpenShift Cluster web interface.

Prerequisites

Procedure

  1. Navigate to the Single-node OpenShift Cluster URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your Single-node OpenShift Cluster account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.6.5.4. Deploying the Secured booster using the oc CLI client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Deploy the Red Hat SSO server using the service.sso.yaml file from your booster ZIP file:

    $ oc create -f service.sso.yaml
  4. Use Maven to start the deployment to Single-node OpenShift Cluster.

    $ mvn clean fabric8:deploy -Popenshift -DskipTests \
          -DSSO_AUTH_SERVER_URL=$(oc get route secure-sso -o jsonpath='{"https://"}{.spec.host}{"/auth\n"}')

    This command uses the Fabric8 Maven Plugin to launch the S2I process on Single-node OpenShift Cluster and to start the pod.

This process generates the uberjar file as well as the OpenShift resources and deploys them to the current project on your Single-node OpenShift Cluster server.

4.6.6. Building and Deploying the Secured Booster to the OpenShift Container Platform

In addition to the Single-node OpenShift Cluster, you can build and deploy the booster on OpenShift Container Platform with only minor differences. The most important difference is that you need to create the booster application on Single-node OpenShift Cluster before you can deploy it with OpenShift Container Platform.

Prerequisites

4.6.6.1. Authenticating the oc CLI Client

To work with boosters on OpenShift Container Platform using the oc command-line client, you need to authenticate the client using the token provided by the OpenShift Container Platform web interface.

Prerequisites

  • An account at OpenShift Container Platform.

Procedure

  1. Navigate to the OpenShift Container Platform URL in a browser.
  2. Click on the question mark icon in the top right-hand corner of the Web console, next to your user name.
  3. Select Command Line Tools in the drop-down menu.
  4. Find the text box that contains the oc login …​ command with the hidden token, and click the button next to it to copy its content to your clipboard.
  5. Paste the command you copied in the previous step into a terminal application to authenticate your oc CLI client with your OpenShift Container Platform account by using your authentication token.

    $ oc login OPENSHIFT_URL --token=MYTOKEN

4.6.6.2. Deploying the Secured booster using the oc CLI client

Prerequisites

Procedure

  1. Create a new project.

    $ oc new-project MY_PROJECT_NAME
  2. Navigate to the root directory of your booster.
  3. Deploy the Red Hat SSO server using the service.sso.yaml file from your booster ZIP file:

    $ oc create -f service.sso.yaml
  4. Use Maven to start the deployment to OpenShift Container Platform.

    $ mvn clean fabric8:deploy -Popenshift -DskipTests \
          -DSSO_AUTH_SERVER_URL=$(oc get route secure-sso -o jsonpath='{"https://"}{.spec.host}{"/auth\n"}')

    This command uses the Fabric8 Maven Plugin to launch the S2I process on OpenShift Container Platform and to start the pod.

This process generates the uberjar file as well as the OpenShift resources and deploys them to the current project on your OpenShift Container Platform server.

4.6.7. Interacting with the Secured Booster

The Secured booster provides a default HTTP endpoint that accepts GET requests if the caller is authenticated and authorized. The client first authenticates against the Red Hat SSO server and then performs a GET request against the Secured booster using the access token returned by the authentication step.

4.6.7.1. Getting the Secured Booster API Endpoint

When using a client to interact with the booster, you must specify the Secured booster endpoint, which is the Greeting service.

Prerequisites

  • The Secured booster deployed and running.
  • The oc client authenticated.

Procedure

  1. In a terminal application, execute the oc get routes command.

    A sample output is shown in the following table:

    Example 4.1. List of Secured endpoints

    NAME         HOST/PORT                                        PATH      SERVICES     PORT      TERMINATION
    secure-sso   secure-sso-myproject.LOCAL_OPENSHIFT_HOSTNAME              secure-sso   <all>     passthrough
    PROJECT_ID   PROJECT_ID-myproject.LOCAL_OPENSHIFT_HOSTNAME              PROJECT_ID   <all>
    sso          sso-myproject.LOCAL_OPENSHIFT_HOSTNAME                     sso          <all>

Important

PROJECT_ID is based on the name you entered when generating your booster using developers.redhat.com/launch or the Fabric8 Launcher tool.

4.6.7.2. Interacting with the Secured Booster on the Command Line

Request a token by sending an HTTP POST request to the Red Hat SSO server. In the following example, the jq CLI tool is used to extract the token value from the JSON response.

Prerequisites

Procedure

  1. Request an access token:

    The attributes are usually shared with each service and kept secret, but for demonstration purposes, they are displayed here:

    Example 4.2. Secured booster credentials

    REALM=master
    USER=alice
    PASSWORD=password
    CLIENT_ID=demoapp
    SECRET=1daa57a2-b60e-468b-a3ac-25bd2dc2eadc
    • With these credentials, use the curl command to request a token:

      $ curl -sk -X POST https://<SSO_AUTH_SERVER_URL>/auth/realms/$REALM/protocol/openid-connect/token \
        -d grant_type=password \
        -d username=$USER \
        -d password=$PASSWORD \
        -d client_id=$CLIENT_ID \
        -d client_secret=$SECRET
      Note

      The -sk option tells curl to ignore failures resulting from self-signed certificates. Do not use this option in a production environment.

      On macOS, you must have curl version 7.56.1 or greater installed. It must also be built with OpenSSL.

    • Extract the access token information, for example using the jq tool:

      $ curl ... | jq '.access_token'
      
      "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJRek1nbXhZMUhrQnpxTnR0SnkwMm5jNTNtMGNiWDQxV1hNSTU1MFo4MGVBIn0.eyJqdGkiOiI0NDA3YTliNC04YWRhLTRlMTctODQ2ZS03YjI5MjMyN2RmYTIiLCJleHAiOjE1MDc3OTM3ODcsIm5iZiI6MCwiaWF0IjoxNTA3NzkzNzI3LCJpc3MiOiJodHRwczovL3NlY3VyZS1zc28tc3NvLWRlbW8uYXBwcy5jYWZlLWJhYmUub3JnL2F1dGgvcmVhbG1zL21hc3RlciIsImF1ZCI6ImRlbW9hcHAiLCJzdWIiOiJjMDE3NWNjYi0wODkyLTRiMzEtODI5Zi1kZGE4NzM4MTVmZTgiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJkZW1vYXBwIiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiMDFjOTkzNGQtNmZmOS00NWYzLWJkNWUtMTU4NDI5ZDZjNDczIiwiYWNyIjoiMSIsImNsaWVudF9zZXNzaW9uIjoiMzM3Yzk0MTYtYTdlZS00ZWUzLThjZWQtODhlODI0MGJjNTAyIiwiYWxsb3dlZC1vcmlnaW5zIjpbIioiXSwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbImJvb3N0ZXItYWRtaW4iXX0sInJlc291cmNlX2FjY2VzcyI6eyJzZWN1cmVkLWJvb3N0ZXItZW5kcG9pbnQiOnsicm9sZXMiOlsiYm9vc3Rlci1hZG1pbiJdfSwiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsInZpZXctcHJvZmlsZSJdfX0sIm5hbWUiOiJBbGljZSBJbkNoYWlucyIsInByZWZlcnJlZF91c2VybmFtZSI6ImFsaWNlIiwiZ2l2ZW5fbmFtZSI6IkFsaWNlIiwiZmFtaWx5X25hbWUiOiJJbkNoYWlucyIsImVtYWlsIjoiYWxpY2VAa2V5Y2xvYWsub3JnIn0.mjmZe37enHpigJv0BGuIitOj-kfMLPNwYzNd3n0Ax4Nga7KpnfytGyuPSvR4KAG8rzkfBNN9klPYdy7pJEeYlfmnFUkM4EDrZYgn4qZAznP1Wzy1RfVRdUFi0-GqFTMPb37o5HRldZZ09QljX_j3GHnoMGXRtYW9RZN4eKkYkcz9hRwgfJoTy2CuwFqeJwZYUyXifrfA-JoTr0UmSUed-0NMksGrtJjjPggUGS-qOn6OgKcmN2vaVAQlxW32y53JqUXctfLQ6DhJzIMYTmOflIPy0sgG1mG7sovQhw1xTg0vTjdx8zQ-EJcexkj7IivRevRZsslKgqRFWs67jQAFQA"
      Note

      When using this result, make sure to remove the surrounding double quotes (").

  2. Invoke the Secured service. Attach the access (bearer) token to the HTTP headers:

    $ curl -v -H "Authorization: Bearer <TOKEN>" http://<SERVICE_HOST>/api/greeting
    
    {
        "content": "Hello, World!",
        "id": 2
    }

    Example 4.3. A sample GET Request Headers with an Access (Bearer) Token

    > GET /api/greeting HTTP/1.1
    > Host: <SERVICE_HOST>
    > User-Agent: curl/7.51.0
    > Accept: */*
    > Authorization: Bearer <TOKEN>
  3. Verify the signature of the access token.

    The access token is a JSON Web Token, so you can decode it using the JWT Debugger:

    1. In a web browser, navigate to the JWT Debugger website.
    2. Select RS256 from the Algorithm drop down menu.

      Note

      Make sure the web form has been updated after you made the selection, so it displays the correct RSASHA256(…​) information in the Signature section. If it has not, try switching to HS256 and then back to RS256.

    3. Paste the following content into the topmost text box of the VERIFY SIGNATURE section:

      -----BEGIN PUBLIC KEY-----
      MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoETnPmN55xBJjRzN/cs30OzJ9olkteLVNRjzdTxFOyRtS2ovDfzdhhO9XzUcTMbIsCOAZtSt8K+6yvBXypOSYvI75EUdypmkcK1KoptqY5KEBQ1KwhWuP7IWQ0fshUwD6jI1QWDfGxfM/h34FvEn/0tJ71xN2P8TI2YanwuDZgosdobx/PAvlGREBGuk4BgmexTOkAdnFxIUQcCkiEZ2C41uCrxiS4CEe5OX91aK9HKZV4ZJX6vnqMHmdDnsMdO+UFtxOBYZio+a1jP4W3d7J5fGeiOaXjQCOpivKnP2yU2DPdWmDMyVb67l8DRA+jh0OJFKZ5H2fNgE3II59vdsRwIDAQAB
      -----END PUBLIC KEY-----
      Note

      This is the master realm public key from the Red Hat SSO server deployment of the Secured booster.

    4. Paste the token from the command-line output into the Encoded box.

      The Signature Verified sign appears on the debugger page.

4.6.7.2.1. Interacting with the Secured Booster Using the Web Interface

In addition to the HTTP API, the secured endpoint also provides a web interface that you can interact with.

The following procedure is an exercise for you to see how security is enforced, how you authenticate, and how you work with the authentication token.

Prerequisites

Procedure

  1. In a web browser, navigate to the endpoint URL.
  2. Perform an unauthenticated request:

    1. Click the Invoke button.

      Figure 4.1. Unauthenticated Secured Booster Web Interface


      The service responds with an HTTP 401 Unauthorized status code.

      Figure 4.2. Unauthenticated Error Message

  3. Perform an authenticated request as a user:

    1. Click the Login button to authenticate against Red Hat SSO. You will be redirected to the SSO server.
    2. Log in as the Alice user. You will be redirected back to the web interface.

      Note

      You can see the access (bearer) token in the command line output at the bottom of the page.

      Figure 4.3. Authenticated Secured Booster Web Interface (as Alice)

    3. Click Invoke again to access the Greeting service.

      Confirm that there is no exception and that the JSON response payload is displayed. This means the service accepted your access (bearer) token and you are authorized to access the Greeting service.

      Figure 4.4. The Result of an Authenticated Greeting Request (as Alice)

    4. Log out.
  4. Perform an authenticated request as an administrator:

    1. Click the Invoke button.

      Confirm that this sends an unauthenticated request to the Greeting service.

    2. Click the Login button and log in as the admin user.

      Figure 4.5. Authenticated Secured Booster Web Interface (as admin)

  5. Click the Invoke button.

    The service responds with an HTTP 403 Forbidden status code because the admin user is not authorized to access the Greeting service.

    Figure 4.6. Unauthorized Error Message


4.6.8. Running the Secured Booster Integration Tests

Important

The keycloak-authz-client library for WildFly Swarm is provided as a Technology Preview.

Prerequisites

  • The oc client authenticated.
Procedure

Executing integration tests removes all existing instances of the booster application from the target OpenShift project. To avoid accidentally removing your booster application, ensure that you create and select a separate OpenShift project to execute the tests.

  1. In a terminal application, navigate to the directory with your project.
  2. Deploy the Red Hat SSO server:

    $ oc apply -f service.sso.yaml
  3. Wait until the Red Hat SSO server is ready. Go to the Web console or view the output of oc get pods to check if the pod is ready.
  4. Execute the integration tests. Provide the URL of the Red Hat SSO server as a parameter:

    $ mvn clean verify -Popenshift,openshift-it -DSSO_AUTH_SERVER_URL=$(oc get route secure-sso -o jsonpath='{"https://"}{.spec.host}{"/auth\n"}')
  5. Once the tests are finished, remove the Red Hat SSO server:

    $ oc delete -f service.sso.yaml

4.6.9. Secured SSO Resources

Follow the links below for additional information on the principles behind the OAuth2 specification and on securing your applications using Red Hat SSO and Keycloak:

Appendix A. The Source-to-Image (S2I) Build Process

Source-to-Image (S2I) is a build tool for generating reproducible Docker-formatted container images from online SCM repositories with application sources. With S2I builds, you can easily deliver the latest version of your application into production with shorter build times, decreased resource and network usage, improved security, and a number of other advantages. For more information, see the Source-to-Image (S2I) Build chapter of the OpenShift Container Platform documentation.

You must provide three elements to the S2I process:

  • The application sources hosted in an online SCM repository, such as GitHub.
  • The S2I scripts.
  • The Builder image, which serves as the foundation for the assembled image and provides the ecosystem in which your application runs.

The process injects your application source and dependencies into the Builder image according to instructions specified in the S2I script, and generates a Docker-formatted container image that runs the assembled application. For more information, check the S2I build requirements and build options sections of the OpenShift Container Platform documentation.
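
To make the relationship between these elements concrete, the following minimal BuildConfig sketch shows how they map onto an OpenShift S2I build definition. The repository URL, builder image stream, and application name are placeholders, not values shipped with the product; in practice your project templates or the oc new-app command generate the real definition for you.

apiVersion: v1
kind: BuildConfig
metadata:
  name: my-swarm-app
spec:
  source:
    type: Git
    git:
      uri: https://github.com/<your-org>/<your-app>.git   # application sources in an online SCM repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: <builder-image-stream>:latest               # the Builder image
  output:
    to:
      kind: ImageStreamTag
      name: my-swarm-app:latest                           # the assembled application image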

Appendix B. WildFly Swarm Fractions Reference

For information about using the configuration properties provided in WildFly Swarm fractions, see Section 2.4, “Configuring a WildFly Swarm application”.

B.1. Bean Validation

Provides class-level constraint and validation according to JSR 303.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>bean-validation</artifactId>
</dependency>

B.2. CDI

Provides Contexts and Dependency Injection (CDI) support according to JSR-299.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>cdi</artifactId>
</dependency>

Configuration

swarm.c-d-i.development-mode
Weld comes with a special mode for application development. When the development mode is enabled, certain built-in tools, which facilitate the development of CDI applications, are available. Setting this attribute to true activates the development mode.
swarm.c-d-i.non-portable-mode
If true then the non-portable mode is enabled. The non-portable mode is suggested by the specification to overcome problems with legacy applications that do not use CDI SPI properly and may be rejected by more strict validation in CDI 1.1.
swarm.c-d-i.require-bean-descriptor
If true, implicit bean archives without a bean descriptor file (beans.xml) are ignored by Weld.
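
For example, a minimal configuration sketch using these properties (the values shown are illustrative, not product defaults) could look like this:

swarm:
  c-d-i:
    development-mode: true
    require-bean-descriptor: false
    non-portable-mode: false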

B.2.1. CDI Configuration

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>cdi-config</artifactId>
</dependency>

B.3. Connector

Primarily an internal fraction used to provide support for higher-level fractions such as JCA (JSR-322).

If you require JCA support, please see the JCA fraction documentation.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>connector</artifactId>
</dependency>

B.4. Container

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>container</artifactId>
</dependency>

B.5. Datasources

Provides support for container-managed database connections.

B.5.1. Autodetectable drivers

If your application includes the appropriate vendor JDBC library in its normal dependencies, these drivers will be detected and installed by WildFly Swarm without any additional effort.

The following table lists the detectable drivers and the driver-name value to use when defining a datasource (an example dependency follows the table):

Database               driver-name
MySQL                  mysql
PostgreSQL             postgresql
H2                     h2
EnterpriseDB           edb
IBM DB2                ibmdb2
Oracle DB              oracle
Microsoft SQLServer    sqlserver
Sybase                 sybase
Teiid                  teiid
MariaDB                mariadb
Derby                  derby
Hive2                  hive2
PrestoDB               prestodb
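
For example, to have the H2 driver autodetected, you might add its JDBC library to your application dependencies; the version property below is a placeholder that you manage in your own pom.xml:

<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <version>${version.h2}</version>
</dependency>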

B.5.2. Example datasource definitions

B.5.2.1. MySQL

An example of a MySQL datasource configuration with connection information, basic security, and validation options:

swarm:
  datasources:
    data-sources:
      MyDS:
        driver-name: mysql
        connection-url: jdbc:mysql://localhost:3306/jbossdb
        user-name: admin
        password: admin
        valid-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker
        validate-on-match: true
        background-validation: false
        exception-sorter-class-name: org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter

B.5.2.2. PostgreSQL

An example of a PostgreSQL datasource configuration with connection information, basic security, and validation options:

swarm:
  datasources:
    data-sources:
      MyDS:
        driver-name: postgresql
        connection-url: jdbc:postgresql://localhost:5432/postgresdb
        user-name: admin
        password: admin
        valid-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker
        validate-on-match: true
        background-validation: false
        exception-sorter-class-name: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter

B.5.2.3. Oracle

An example of an Oracle datasource configuration with connection information, basic security, and validation options:

swarm:
  datasources:
    data-sources:
      MyDS:
        driver-name: oracle
        connection-url: jdbc:oracle:thin:@localhost:1521:XE
        user-name: admin
        password: admin
        valid-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker
        validate-on-match: true
        background-validation: false
        stale-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.oracle.OracleStaleConnectionChecker
        exception-sorter-class-name: org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>datasources</artifactId>
</dependency>

Configuration

swarm.datasources.data-sources.KEY.allocation-retry
The allocation retry element indicates the number of times that allocating a connection should be tried before throwing an exception
swarm.datasources.data-sources.KEY.allocation-retry-wait-millis
The allocation retry wait millis element specifies the amount of time, in milliseconds, to wait between retrying to allocate a connection
swarm.datasources.data-sources.KEY.allow-multiple-users
Specifies if multiple users will access the datasource through the getConnection(user, password) method and hence if the internal pool type should account for that
swarm.datasources.data-sources.KEY.background-validation
An element to specify that connections should be validated on a background thread versus being validated prior to use. Changing this value can be done only on disabled datasource, requires a server restart otherwise.
swarm.datasources.data-sources.KEY.background-validation-millis
The background-validation-millis element specifies the amount of time, in milliseconds, that background validation will run. Changing this value can be done only on disabled datasource, requires a server restart otherwise
swarm.datasources.data-sources.KEY.blocking-timeout-wait-millis
The blocking-timeout-millis element specifies the maximum time, in milliseconds, to block while waiting for a connection before throwing an exception. Note that this blocks only while waiting for locking a connection, and will never throw an exception if creating a new connection takes an inordinately long time
swarm.datasources.data-sources.KEY.capacity-decrementer-class
Class defining the policy for decrementing connections in the pool
swarm.datasources.data-sources.KEY.capacity-decrementer-properties
Properties to be injected in class defining the policy for decrementing connections in the pool
swarm.datasources.data-sources.KEY.capacity-incrementer-class
Class defining the policy for incrementing connections in the pool
swarm.datasources.data-sources.KEY.capacity-incrementer-properties
Properties to be injected in class defining the policy for incrementing connections in the pool
swarm.datasources.data-sources.KEY.check-valid-connection-sql
Specify an SQL statement to check validity of a pool connection. This may be called when managed connection is obtained from the pool
swarm.datasources.data-sources.KEY.connectable
Enable the use of CMR. This feature means that a local resource can reliably participate in an XA transaction.
swarm.datasources.data-sources.KEY.connection-listener-class
Specifies a class name extending org.jboss.jca.adapters.jdbc.spi.listener.ConnectionListener that makes it possible to listen for connection activation and passivation in order to perform actions before the connection is returned to the application or returned to the pool.
swarm.datasources.data-sources.KEY.connection-listener-property
Properties to be injected in the class specified in connection-listener-class
swarm.datasources.data-sources.KEY.connection-properties.KEY.value
Each connection-property specifies a string name/value pair with the property name coming from the name attribute and the value coming from the element content
swarm.datasources.data-sources.KEY.connection-url
The JDBC driver connection URL
swarm.datasources.data-sources.KEY.datasource-class
The fully qualified name of the JDBC datasource class
swarm.datasources.data-sources.KEY.driver-class
The fully qualified name of the JDBC driver class
swarm.datasources.data-sources.KEY.driver-name
Defines the JDBC driver the datasource should use. It is a symbolic name matching the name of the installed driver. If the driver is deployed as a JAR, the name is the name of the deployment unit
swarm.datasources.data-sources.KEY.enlistment-trace
Defines if WildFly/IronJacamar should record enlistment traces
swarm.datasources.data-sources.KEY.exception-sorter-class-name
An org.jboss.jca.adapters.jdbc.ExceptionSorter that provides an isExceptionFatal(SQLException) method to validate if an exception should broadcast an error
swarm.datasources.data-sources.KEY.exception-sorter-properties
The exception sorter properties
swarm.datasources.data-sources.KEY.flush-strategy
Specifies how the pool should be flushed in case of an error. Valid values are: FailingConnectionOnly (default), IdleConnections and EntirePool
swarm.datasources.data-sources.KEY.idle-timeout-minutes
The idle-timeout-minutes element specifies the maximum time, in minutes, a connection may be idle before being closed. The actual maximum time depends also on the IdleRemover scan time, which is half of the smallest idle-timeout-minutes value of any pool. Changing this value can be done only on disabled datasource, requires a server restart otherwise.
swarm.datasources.data-sources.KEY.initial-pool-size
The initial-pool-size element indicates the initial number of connections a pool should hold.
swarm.datasources.data-sources.KEY.jndi-name
Specifies the JNDI name for the datasource
swarm.datasources.data-sources.KEY.jta
Enable JTA integration
swarm.datasources.data-sources.KEY.max-pool-size
The max-pool-size element specifies the maximum number of connections for a pool. No more connections will be created in each sub-pool
swarm.datasources.data-sources.KEY.mcp
Defines the ManagedConnectionPool implementation, for example org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool
swarm.datasources.data-sources.KEY.min-pool-size
The min-pool-size element specifies the minimum number of connections for a pool
swarm.datasources.data-sources.KEY.new-connection-sql
Specifies an SQL statement to execute whenever a connection is added to the connection pool
swarm.datasources.data-sources.KEY.password
Specifies the password used when creating a new connection
swarm.datasources.data-sources.KEY.pool-fair
Defines whether the pool should be fair
swarm.datasources.data-sources.KEY.pool-prefill
Should the pool be prefilled. Changing this value can be done only on disabled datasource, requires a server restart otherwise.
swarm.datasources.data-sources.KEY.pool-use-strict-min
Specifies if the min-pool-size should be considered strictly
swarm.datasources.data-sources.KEY.prepared-statements-cache-size
The number of prepared statements per connection in an LRU cache
swarm.datasources.data-sources.KEY.query-timeout
Any configured query timeout in seconds. If not provided no timeout will be set
swarm.datasources.data-sources.KEY.reauth-plugin-class-name
The fully qualified class name of the reauthentication plugin implementation
swarm.datasources.data-sources.KEY.reauth-plugin-properties
The properties for the reauthentication plugin
swarm.datasources.data-sources.KEY.security-domain
Specifies the security domain which defines the javax.security.auth.Subject that are used to distinguish connections in the pool
swarm.datasources.data-sources.KEY.set-tx-query-timeout
Whether to set the query timeout based on the time remaining until transaction timeout. Any configured query timeout will be used if there is no transaction
swarm.datasources.data-sources.KEY.share-prepared-statements
Whether to share prepared statements, i.e. whether asking for same statement twice without closing uses the same underlying prepared statement
swarm.datasources.data-sources.KEY.spy
Enable spying of SQL statements
swarm.datasources.data-sources.KEY.stale-connection-checker-class-name
An org.jboss.jca.adapters.jdbc.StaleConnectionChecker that provides an isStaleConnection(SQLException) method which if it returns true will wrap the exception in an org.jboss.jca.adapters.jdbc.StaleConnectionException
swarm.datasources.data-sources.KEY.stale-connection-checker-properties
The stale connection checker properties
swarm.datasources.data-sources.KEY.statistics-enabled
Define whether runtime statistics are enabled or not.
swarm.datasources.data-sources.KEY.track-statements
Whether to check for unclosed statements when a connection is returned to the pool, result sets are closed, a statement is closed or returned to the prepared statement cache. Valid values are: "false" - do not track statements, "true" - track statements and result sets and warn when they are not closed, "nowarn" - track statements but do not warn about them being unclosed
swarm.datasources.data-sources.KEY.tracking
Defines if IronJacamar should track connection handles across transaction boundaries
swarm.datasources.data-sources.KEY.transaction-isolation
Set the java.sql.Connection transaction isolation level. Valid values are: TRANSACTION_READ_UNCOMMITTED, TRANSACTION_READ_COMMITTED, TRANSACTION_REPEATABLE_READ, TRANSACTION_SERIALIZABLE and TRANSACTION_NONE
swarm.datasources.data-sources.KEY.url-delimiter
Specifies the delimiter for URLs in connection-url for HA datasources
swarm.datasources.data-sources.KEY.url-selector-strategy-class-name
A class that implements org.jboss.jca.adapters.jdbc.URLSelectorStrategy
swarm.datasources.data-sources.KEY.use-ccm
Enable the use of a cached connection manager
swarm.datasources.data-sources.KEY.use-fast-fail
Whether to fail a connection allocation on the first try if it is invalid (true) or keep trying until the pool is exhausted of all potential connections (false)
swarm.datasources.data-sources.KEY.use-java-context
Setting this to false will bind the datasource into global JNDI
swarm.datasources.data-sources.KEY.use-try-lock
Any configured timeout for internal locks on the resource adapter objects in seconds
swarm.datasources.data-sources.KEY.user-name
Specify the user name used when creating a new connection
swarm.datasources.data-sources.KEY.valid-connection-checker-class-name
An org.jboss.jca.adapters.jdbc.ValidConnectionChecker that provides an isValidConnection(Connection) method to validate a connection. If an exception is returned that means the connection is invalid. This overrides the check-valid-connection-sql element
swarm.datasources.data-sources.KEY.valid-connection-checker-properties
The valid connection checker properties
swarm.datasources.data-sources.KEY.validate-on-match
The validate-on-match element specifies if connection validation should be done when a connection factory attempts to match a managed connection. This is typically exclusive to the use of background validation
swarm.datasources.installed-drivers
List of JDBC drivers that have been installed in the runtime
swarm.datasources.jdbc-drivers.KEY.deployment-name
The name of the deployment unit from which the driver was loaded
swarm.datasources.jdbc-drivers.KEY.driver-class-name
The fully qualified class name of the java.sql.Driver implementation
swarm.datasources.jdbc-drivers.KEY.driver-datasource-class-name
The fully qualified class name of the javax.sql.DataSource implementation
swarm.datasources.jdbc-drivers.KEY.driver-major-version
The driver’s major version number
swarm.datasources.jdbc-drivers.KEY.driver-minor-version
The driver’s minor version number
swarm.datasources.jdbc-drivers.KEY.driver-module-name
The name of the module from which the driver was loaded, if it was loaded from the module path
swarm.datasources.jdbc-drivers.KEY.driver-name
Defines the JDBC driver the datasource should use. It is a symbolic name matching the name of the installed driver. If the driver is deployed as a JAR, the name is the name of the deployment unit
swarm.datasources.jdbc-drivers.KEY.driver-xa-datasource-class-name
The fully qualified class name of the javax.sql.XADataSource implementation
swarm.datasources.jdbc-drivers.KEY.jdbc-compliant
Whether or not the driver is JDBC compliant
swarm.datasources.jdbc-drivers.KEY.module-slot
The slot of the module from which the driver was loaded, if it was loaded from the module path
swarm.datasources.jdbc-drivers.KEY.xa-datasource-class
XA datasource class
swarm.datasources.xa-data-sources.KEY.allocation-retry
The allocation retry element indicates the number of times that allocating a connection should be tried before throwing an exception
swarm.datasources.xa-data-sources.KEY.allocation-retry-wait-millis
The allocation retry wait millis element specifies the amount of time, in milliseconds, to wait between retrying to allocate a connection
swarm.datasources.xa-data-sources.KEY.allow-multiple-users
Specifies if multiple users will access the datasource through the getConnection(user, password) method and hence if the internal pool type should account for that
swarm.datasources.xa-data-sources.KEY.background-validation
An element to specify that connections should be validated on a background thread versus being validated prior to use. Changing this value can be done only on disabled datasource, requires a server restart otherwise.
swarm.datasources.xa-data-sources.KEY.background-validation-millis
The background-validation-millis element specifies the amount of time, in milliseconds, that background validation will run. Changing this value can be done only on disabled datasource, requires a server restart otherwise
swarm.datasources.xa-data-sources.KEY.blocking-timeout-wait-millis
The blocking-timeout-millis element specifies the maximum time, in milliseconds, to block while waiting for a connection before throwing an exception. Note that this blocks only while waiting for locking a connection, and will never throw an exception if creating a new connection takes an inordinately long time
swarm.datasources.xa-data-sources.KEY.capacity-decrementer-class
Class defining the policy for decrementing connections in the pool
swarm.datasources.xa-data-sources.KEY.capacity-decrementer-properties
Properties to inject in class defining the policy for decrementing connections in the pool
swarm.datasources.xa-data-sources.KEY.capacity-incrementer-class
Class defining the policy for incrementing connections in the pool
swarm.datasources.xa-data-sources.KEY.capacity-incrementer-properties
Properties to inject in class defining the policy for incrementing connections in the pool
swarm.datasources.xa-data-sources.KEY.check-valid-connection-sql
Specify an SQL statement to check validity of a pool connection. This may be called when managed connection is obtained from the pool
swarm.datasources.xa-data-sources.KEY.connectable
Enable the use of CMR for this datasource. This feature means that a local resource can reliably participate in an XA transaction.
swarm.datasources.xa-data-sources.KEY.connection-listener-class
Specifies a class name extending org.jboss.jca.adapters.jdbc.spi.listener.ConnectionListener that makes it possible to listen for connection activation and passivation in order to perform actions before the connection is returned to the application or returned to the pool.
swarm.datasources.xa-data-sources.KEY.connection-listener-property
Properties to be injected in class specified in connection-listener-class
swarm.datasources.xa-data-sources.KEY.driver-name
Defines the JDBC driver the datasource should use. It is a symbolic name matching the name of the installed driver. If the driver is deployed as a JAR, the name is the name of the deployment unit
swarm.datasources.xa-data-sources.KEY.enlistment-trace
Defines if WildFly/IronJacamar should record enlistment traces
swarm.datasources.xa-data-sources.KEY.exception-sorter-class-name
An org.jboss.jca.adapters.jdbc.ExceptionSorter that provides an isExceptionFatal(SQLException) method to validate if an exception should broadcast an error
swarm.datasources.xa-data-sources.KEY.exception-sorter-properties
The exception sorter properties
swarm.datasources.xa-data-sources.KEY.flush-strategy
Specifies how the pool should be flushed in case of an error. Valid values are: FailingConnectionOnly (default), IdleConnections and EntirePool
swarm.datasources.xa-data-sources.KEY.idle-timeout-minutes
The idle-timeout-minutes element specifies the maximum time, in minutes, a connection may be idle before being closed. The actual maximum time depends also on the IdleRemover scan time, which is half of the smallest idle-timeout-minutes value of any pool. Changing this value can be done only on disabled datasource, requires a server restart otherwise.
swarm.datasources.xa-data-sources.KEY.initial-pool-size
The initial-pool-size element indicates the initial number of connections a pool should hold.
swarm.datasources.xa-data-sources.KEY.interleaving
An element to enable interleaving for XA connections
swarm.datasources.xa-data-sources.KEY.jndi-name
Specifies the JNDI name for the datasource
swarm.datasources.xa-data-sources.KEY.max-pool-size
The max-pool-size element specifies the maximum number of connections for a pool. No more connections will be created in each sub-pool
swarm.datasources.xa-data-sources.KEY.mcp
Defines the ManagedConnectionPool implementation, for example org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool
swarm.datasources.xa-data-sources.KEY.min-pool-size
The min-pool-size element specifies the minimum number of connections for a pool
swarm.datasources.xa-data-sources.KEY.new-connection-sql
Specifies an SQL statement to execute whenever a connection is added to the connection pool
swarm.datasources.xa-data-sources.KEY.no-recovery
Specifies if the connection pool should be excluded from recovery
swarm.datasources.xa-data-sources.KEY.no-tx-separate-pool
Oracle does not like XA connections getting used both inside and outside a JTA transaction. To workaround the problem you can create separate sub-pools for the different contexts
swarm.datasources.xa-data-sources.KEY.pad-xid
Should the Xid be padded
swarm.datasources.xa-data-sources.KEY.password
Specifies the password used when creating a new connection
swarm.datasources.xa-data-sources.KEY.pool-fair
Defines whether the pool should be fair
swarm.datasources.xa-data-sources.KEY.pool-prefill
Should the pool be prefilled. Changing this value can be done only on disabled datasource, requires a server restart otherwise.
swarm.datasources.xa-data-sources.KEY.pool-use-strict-min
Specifies if the min-pool-size should be considered strictly
swarm.datasources.xa-data-sources.KEY.prepared-statements-cache-size
The number of prepared statements per connection in an LRU cache
swarm.datasources.xa-data-sources.KEY.query-timeout
Any configured query timeout in seconds. If not provided no timeout will be set
swarm.datasources.xa-data-sources.KEY.reauth-plugin-class-name
The fully qualified class name of the reauthentication plugin implementation
swarm.datasources.xa-data-sources.KEY.reauth-plugin-properties
The properties for the reauthentication plugin
swarm.datasources.xa-data-sources.KEY.recovery-password
The password used for recovery
swarm.datasources.xa-data-sources.KEY.recovery-plugin-class-name
The fully qualified class name of the recovery plugin implementation
swarm.datasources.xa-data-sources.KEY.recovery-plugin-properties
The properties for the recovery plugin
swarm.datasources.xa-data-sources.KEY.recovery-security-domain
The security domain used for recovery
swarm.datasources.xa-data-sources.KEY.recovery-username
The user name used for recovery
swarm.datasources.xa-data-sources.KEY.same-rm-override
The is-same-rm-override element allows one to unconditionally set whether the javax.transaction.xa.XAResource.isSameRM(XAResource) returns true or false
swarm.datasources.xa-data-sources.KEY.security-domain
Specifies the security domain which defines the javax.security.auth.Subject that are used to distinguish connections in the pool
swarm.datasources.xa-data-sources.KEY.set-tx-query-timeout
Whether to set the query timeout based on the time remaining until transaction timeout. Any configured query timeout will be used if there is no transaction
swarm.datasources.xa-data-sources.KEY.share-prepared-statements
Whether to share prepared statements, i.e. whether asking for same statement twice without closing uses the same underlying prepared statement
swarm.datasources.xa-data-sources.KEY.spy
Enable spying of SQL statements
swarm.datasources.xa-data-sources.KEY.stale-connection-checker-class-name
An org.jboss.jca.adapters.jdbc.StaleConnectionChecker that provides an isStaleConnection(SQLException) method which if it returns true will wrap the exception in an org.jboss.jca.adapters.jdbc.StaleConnectionException
swarm.datasources.xa-data-sources.KEY.stale-connection-checker-properties
The stale connection checker properties
swarm.datasources.xa-data-sources.KEY.statistics-enabled
Define whether runtime statistics are enabled or not.
swarm.datasources.xa-data-sources.KEY.track-statements
Whether to check for unclosed statements when a connection is returned to the pool, result sets are closed, a statement is closed or returned to the prepared statement cache. Valid values are: "false" - do not track statements, "true" - track statements and result sets and warn when they are not closed, "nowarn" - track statements but do not warn about them being unclosed
swarm.datasources.xa-data-sources.KEY.tracking
Defines if IronJacamar should track connection handles across transaction boundaries
swarm.datasources.xa-data-sources.KEY.transaction-isolation
Set the java.sql.Connection transaction isolation level. Valid values are: TRANSACTION_READ_UNCOMMITTED, TRANSACTION_READ_COMMITTED, TRANSACTION_REPEATABLE_READ, TRANSACTION_SERIALIZABLE and TRANSACTION_NONE
swarm.datasources.xa-data-sources.KEY.url-delimiter
Specifies the delimiter for URLs in connection-url for HA datasources
swarm.datasources.xa-data-sources.KEY.url-property
Specifies the property for the URL property in the xa-datasource-property values
swarm.datasources.xa-data-sources.KEY.url-selector-strategy-class-name
A class that implements org.jboss.jca.adapters.jdbc.URLSelectorStrategy
swarm.datasources.xa-data-sources.KEY.use-ccm
Enable the use of a cached connection manager
swarm.datasources.xa-data-sources.KEY.use-fast-fail
Whether to fail a connection allocation on the first try if it is invalid (true) or keep trying until the pool is exhausted of all potential connections (false)
swarm.datasources.xa-data-sources.KEY.use-java-context
Setting this to false will bind the datasource into global JNDI
swarm.datasources.xa-data-sources.KEY.use-try-lock
Any configured timeout for internal locks on the resource adapter objects in seconds
swarm.datasources.xa-data-sources.KEY.user-name
Specify the user name used when creating a new connection
swarm.datasources.xa-data-sources.KEY.valid-connection-checker-class-name
An org.jboss.jca.adapters.jdbc.ValidConnectionChecker that provides an isValidConnection(Connection) method to validate a connection. If an exception is returned that means the connection is invalid. This overrides the check-valid-connection-sql element
swarm.datasources.xa-data-sources.KEY.valid-connection-checker-properties
The valid connection checker properties
swarm.datasources.xa-data-sources.KEY.validate-on-match
The validate-on-match element specifies if connection validation should be done when a connection factory attempts to match a managed connection. This is typically exclusive to the use of background validation
swarm.datasources.xa-data-sources.KEY.wrap-xa-resource
Should the XAResource instances be wrapped in an org.jboss.tm.XAResourceWrapper instance
swarm.datasources.xa-data-sources.KEY.xa-datasource-class
The fully qualified name of the javax.sql.XADataSource implementation
swarm.datasources.xa-data-sources.KEY.xa-datasource-properties.KEY.value
Specifies a property value to assign to the XADataSource implementation class. Each property is identified by the name attribute and the property value is given by the xa-datasource-property element content. The property is mapped onto the XADataSource implementation by looking for a JavaBeans style getter method for the property name. If found, the value of the property is set using the JavaBeans setter with the element text translated to the true property type using the java.beans.PropertyEditor
swarm.datasources.xa-data-sources.KEY.xa-resource-timeout
The value is passed to XAResource.setTransactionTimeout(), in seconds. Default is zero
swarm.ds.connection.url
Default datasource connection URL
swarm.ds.name
Name of the default datasource
swarm.ds.password
Default datasource connection password
swarm.ds.username
Default datasource connection user name
swarm.jdbc.driver
Default datasource JDBC driver name
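
As a minimal sketch of the swarm.ds.* shortcut properties listed above (the datasource name, connection URL, and credentials are placeholders), the default datasource can be configured like this:

swarm:
  ds:
    name: ExampleDS
    connection:
      url: jdbc:h2:mem:exampledb
    username: sa
    password: sa
  jdbc:
    driver: h2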

B.6. EE

An internal fraction used to support other higher-level fractions.

The EE fraction does not imply the totality of Java EE support.

If you require specific Java EE technologies, address them individually, for example jaxrs, cdi, datasources, or ejb.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>ee</artifactId>
</dependency>

Configuration

swarm.e-e.annotation-property-replacement
Flag indicating whether Java EE annotations will have property replacements applied
swarm.e-e.context-services.KEY.jndi-name
The JNDI Name to lookup the context service.
swarm.e-e.context-services.KEY.use-transaction-setup-provider
Flag which indicates if the transaction setup provider should be used
swarm.e-e.default-bindings-service.context-service
The JNDI name where the default EE Context Service can be found
swarm.e-e.default-bindings-service.datasource
The JNDI name where the default EE Datasource can be found
swarm.e-e.default-bindings-service.jms-connection-factory
The JNDI name where the default EE JMS Connection Factory can be found
swarm.e-e.default-bindings-service.managed-executor-service
The JNDI name where the default EE Managed Executor Service can be found
swarm.e-e.default-bindings-service.managed-scheduled-executor-service
The JNDI name where the default EE Managed Scheduled Executor Service can be found
swarm.e-e.default-bindings-service.managed-thread-factory
The JNDI name where the default EE Managed Thread Factory can be found
swarm.e-e.ear-subdeployments-isolated
Flag indicating whether each of the subdeployments within a .ear can access classes belonging to another subdeployment within the same .ear. A value of false means the subdeployments can see classes belonging to other subdeployments within the .ear.
swarm.e-e.global-modules
A list of modules that should be made available to all deployments.
swarm.e-e.jboss-descriptor-property-replacement
Flag indicating whether JBoss specific deployment descriptors will have property replacements applied
swarm.e-e.managed-executor-services.KEY.context-service
The name of the context service to be used by the executor.
swarm.e-e.managed-executor-services.KEY.core-threads
The minimum number of threads to be used by the executor. If left undefined the default core-size is calculated based on the number of processors. A value of zero is not advised and in some cases invalid. See the queue-length attribute for details on how this value is used to determine the queuing strategy.
swarm.e-e.managed-executor-services.KEY.hung-task-threshold
The runtime, in milliseconds, for tasks to be considered hung by the managed executor service. If value is 0 tasks are never considered hung.
swarm.e-e.managed-executor-services.KEY.jndi-name
The JNDI Name to lookup the managed executor service.
swarm.e-e.managed-executor-services.KEY.keepalive-time
When the number of threads is greater than the core, this is the maximum time, in milliseconds, that excess idle threads will wait for new tasks before terminating.
swarm.e-e.managed-executor-services.KEY.long-running-tasks
Flag which hints the duration of tasks executed by the executor.
swarm.e-e.managed-executor-services.KEY.max-threads
The maximum number of threads to be used by the executor. If left undefined the value from core-size will be used. This value is ignored if an unbounded queue is used (only core-threads will be used in that case).
swarm.e-e.managed-executor-services.KEY.queue-length
The executor's task queue capacity. A length of 0 means direct hand-off and possible rejection will occur. An undefined length (the default), or Integer.MAX_VALUE, indicates that an unbounded queue should be used. All other values specify an exact queue size. If an unbounded queue or direct hand-off is used, a core-threads value greater than zero is required.
swarm.e-e.managed-executor-services.KEY.reject-policy
The policy to be applied to aborted tasks.
swarm.e-e.managed-executor-services.KEY.thread-factory
The name of the thread factory to be used by the executor.
swarm.e-e.managed-scheduled-executor-services.KEY.context-service
The name of the context service to be used by the scheduled executor.
swarm.e-e.managed-scheduled-executor-services.KEY.core-threads
The minimum number of threads to be used by the scheduled executor.
swarm.e-e.managed-scheduled-executor-services.KEY.hung-task-threshold
The runtime, in milliseconds, for tasks to be considered hung by the scheduled executor. If 0 tasks are never considered hung.
swarm.e-e.managed-scheduled-executor-services.KEY.jndi-name
The JNDI Name to lookup the managed scheduled executor service.
swarm.e-e.managed-scheduled-executor-services.KEY.keepalive-time
When the number of threads is greater than the core, this is the maximum time, in milliseconds, that excess idle threads will wait for new tasks before terminating.
swarm.e-e.managed-scheduled-executor-services.KEY.long-running-tasks
Flag which hints the duration of tasks executed by the scheduled executor.
swarm.e-e.managed-scheduled-executor-services.KEY.reject-policy
The policy to be applied to aborted tasks.
swarm.e-e.managed-scheduled-executor-services.KEY.thread-factory
The name of the thread factory to be used by the scheduled executor.
swarm.e-e.managed-thread-factories.KEY.context-service
The name of the context service to be used by the managed thread factory
swarm.e-e.managed-thread-factories.KEY.jndi-name
The JNDI Name to lookup the managed thread factory.
swarm.e-e.managed-thread-factories.KEY.priority
The priority applied to threads created by the factory
swarm.e-e.spec-descriptor-property-replacement
Flag indicating whether descriptors defined by the Java EE specification will have property replacements applied
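
For example, a managed executor service could be defined with the properties above; the service name, JNDI name, and pool sizes in this sketch are illustrative only:

swarm:
  e-e:
    managed-executor-services:
      myExecutor:
        jndi-name: java:jboss/ee/concurrency/executor/myExecutor
        core-threads: 4
        max-threads: 16
        keepalive-time: 5000
        queue-length: 50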

B.7. EJB

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>ejb</artifactId>
</dependency>

Configuration

swarm.ejb3.async-service.thread-pool-name
The name of the thread pool which handles asynchronous invocations
swarm.ejb3.caches.KEY.aliases
The aliases by which this cache may also be referenced
swarm.ejb3.caches.KEY.passivation-store
The passivation store used by this cache
swarm.ejb3.cluster-passivation-stores.KEY.bean-cache
The name of the cache used to store bean instances.
swarm.ejb3.cluster-passivation-stores.KEY.cache-container
The name of the cache container used for the bean and client-mappings caches
swarm.ejb3.cluster-passivation-stores.KEY.idle-timeout
The timeout in units specified by idle-timeout-unit, after which a bean will passivate
swarm.ejb3.cluster-passivation-stores.KEY.max-size
The maximum number of beans this cache should store before forcing old beans to passivate
swarm.ejb3.default-distinct-name
The default distinct name that is applied to every EJB deployed on this server
swarm.ejb3.default-entity-bean-instance-pool
Name of the default entity bean instance pool, which will be applicable to all entity beans, unless overridden at the deployment or bean level
swarm.ejb3.default-entity-bean-optimistic-locking
If set to true entity beans will use optimistic locking by default
swarm.ejb3.default-mdb-instance-pool
Name of the default MDB instance pool, which will be applicable to all MDBs, unless overridden at the deployment or bean level
swarm.ejb3.default-missing-method-permissions-deny-access
If this is set to true then methods on an EJB with a security domain specified or with other methods with security metadata will have an implicit @DenyAll unless other security metadata is present
swarm.ejb3.default-resource-adapter-name
Name of the default resource adapter name that will be used by MDBs, unless overridden at the deployment or bean level
swarm.ejb3.default-security-domain
The default security domain that will be used for EJBs if the bean doesn’t explicitly specify one
swarm.ejb3.default-sfsb-cache
Name of the default stateful bean cache, which will be applicable to all stateful EJBs, unless overridden at the deployment or bean level
swarm.ejb3.default-sfsb-passivation-disabled-cache
Name of the default stateful bean cache, which will be applicable to all stateful EJBs which have passivation disabled. Each deployment or EJB can optionally override this cache name.
swarm.ejb3.default-singleton-bean-access-timeout
The default access timeout for singleton beans
swarm.ejb3.default-slsb-instance-pool
Name of the default stateless bean instance pool, which will be applicable to all stateless EJBs, unless overridden at the deployment or bean level
swarm.ejb3.default-stateful-bean-access-timeout
The default access timeout for stateful beans
swarm.ejb3.enable-statistics
If set to true, enable the collection of invocation statistics.
swarm.ejb3.file-passivation-stores.KEY.idle-timeout
The timeout in units specified by idle-timeout-unit, after which a bean will passivate
swarm.ejb3.file-passivation-stores.KEY.max-size
The maximum number of beans this cache should store before forcing old beans to passivate
swarm.ejb3.iiop-service.enable-by-default
If this is true, EJBs will be exposed over IIOP by default; otherwise, it needs to be explicitly enabled in the deployment descriptor
swarm.ejb3.iiop-service.use-qualified-name
If true EJB names will be bound into the naming service with the application and module name prepended to the name (e.g. myapp/mymodule/MyEjb)
swarm.ejb3.in-vm-remote-interface-invocation-pass-by-value
If set to false, the parameters to invocations on the remote interface of an EJB are passed by reference. Otherwise, the parameters are passed by value.
swarm.ejb3.log-system-exceptions
If this is true then all EJB system (not application) exceptions will be logged. The EJB spec mandates this behaviour, however it is not recommended as it will often result in exceptions being logged twice (once by the EJB and once by the calling code)
swarm.ejb3.mdb-delivery-groups.KEY.active
Indicates if delivery for all MDBs belonging to this group is active
swarm.ejb3.passivation-stores.KEY.bean-cache
The name of the cache used to store bean instances.
swarm.ejb3.passivation-stores.KEY.cache-container
The name of the cache container used for the bean and client-mappings caches
swarm.ejb3.passivation-stores.KEY.max-size
The maximum number of beans this cache should store before forcing old beans to passivate
swarm.ejb3.remote-service.channel-creation-options.KEY.type
The type of the channel creation option
swarm.ejb3.remote-service.channel-creation-options.KEY.value
The value for the EJB remote channel creation option
swarm.ejb3.remote-service.cluster
The name of the clustered cache container which will be used to store/access the client-mappings of the EJB remoting connector’s socket-binding on each node, in the cluster
swarm.ejb3.remote-service.connector-ref
The name of the connector on which the EJB3 remoting channel is registered
swarm.ejb3.remote-service.execute-in-worker
If this is true the EJB request will be executed in the IO subsystems worker, otherwise it will dispatch to the EJB thread pool
swarm.ejb3.remote-service.thread-pool-name
The name of the thread pool that handles remote invocations
swarm.ejb3.remoting-profiles.KEY.exclude-local-receiver
If set no local receiver is used in this profile
swarm.ejb3.remoting-profiles.KEY.local-receiver-pass-by-value
If set local receiver will pass ejb beans by value
swarm.ejb3.remoting-profiles.KEY.remoting-ejb-receivers.KEY.channel-creation-options.KEY.type
The type of the channel creation option
swarm.ejb3.remoting-profiles.KEY.remoting-ejb-receivers.KEY.channel-creation-options.KEY.value
The value for the EJB remote channel creation option
swarm.ejb3.remoting-profiles.KEY.remoting-ejb-receivers.KEY.connect-timeout
Remoting ejb receiver connect timeout
swarm.ejb3.remoting-profiles.KEY.remoting-ejb-receivers.KEY.outbound-connection-ref
Name of outbound connection that will be used by the ejb receiver
swarm.ejb3.strict-max-bean-instance-pools.KEY.derive-size
Specifies if and what the max pool size should be derived from. A value of 'none', the default, indicates that the explicit value of max-pool-size should be used. A value of 'from-worker-pools' indicates that the max pool size should be derived from the size of the total threads for all worker pools configured on the system. A value of 'from-cpu-count' indicates that the max pool size should be derived from the total number of processors available on the system. Note that the computation isn’t a 1:1 mapping, the values may or may not be augmented by other factors.
swarm.ejb3.strict-max-bean-instance-pools.KEY.max-pool-size
The maximum number of bean instances that the pool can hold at a given point in time
swarm.ejb3.strict-max-bean-instance-pools.KEY.timeout
The maximum amount of time to wait for a bean instance to be available from the pool
swarm.ejb3.strict-max-bean-instance-pools.KEY.timeout-unit
The instance acquisition timeout unit
swarm.ejb3.thread-pools.KEY.active-count
The approximate number of threads that are actively executing tasks.
swarm.ejb3.thread-pools.KEY.completed-task-count
The approximate total number of tasks that have completed execution.
swarm.ejb3.thread-pools.KEY.current-thread-count
The current number of threads in the pool.
swarm.ejb3.thread-pools.KEY.keepalive-time
Used to specify the amount of time that pool threads should be kept running when idle; if not specified, threads will run until the executor is shut down.
swarm.ejb3.thread-pools.KEY.largest-thread-count
The largest number of threads that have ever simultaneously been in the pool.
swarm.ejb3.thread-pools.KEY.max-threads
The maximum thread pool size.
swarm.ejb3.thread-pools.KEY.name
The name of the thread pool.
swarm.ejb3.thread-pools.KEY.queue-size
The queue size.
swarm.ejb3.thread-pools.KEY.rejected-count
The number of tasks that have been rejected.
swarm.ejb3.thread-pools.KEY.task-count
The approximate total number of tasks that have ever been scheduled for execution.
swarm.ejb3.thread-pools.KEY.thread-factory
Specifies the name of a specific thread factory to use to create worker threads. If not defined an appropriate default thread factory will be used.
swarm.ejb3.timer-service.database-data-stores.KEY.allow-execution
If this node is allowed to execute timers. If this is false, the timers will be added to the database, and another node may execute them. Note that, depending on your refresh interval, timers added with a very short delay will not be executed until another node refreshes.
swarm.ejb3.timer-service.database-data-stores.KEY.database
The type of database that is in use. SQL can be customised per database type.
swarm.ejb3.timer-service.database-data-stores.KEY.datasource-jndi-name
The datasource that is used to persist the timers
swarm.ejb3.timer-service.database-data-stores.KEY.partition
The partition name. This should be set to a different value for every node that is sharing a database, to prevent the same timer being loaded by multiple nodes.
swarm.ejb3.timer-service.database-data-stores.KEY.refresh-interval
Interval between refreshing the current timer set against the underlying database. A low value means timers get picked up more quickly, but increases the load on the database.
swarm.ejb3.timer-service.default-data-store
The default data store used for persistent timers
swarm.ejb3.timer-service.file-data-stores.KEY.path
The directory to store persistent timer information in
swarm.ejb3.timer-service.file-data-stores.KEY.relative-to
The relative path that is used to resolve the timer data store location
swarm.ejb3.timer-service.thread-pool-name
The name of the thread pool used to run timer service invocations
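
As an illustrative sketch based on the properties above (the pool name, sizes, and timeout values are placeholders), a strict-max instance pool can be defined and made the default for stateless session beans:

swarm:
  ejb3:
    strict-max-bean-instance-pools:
      my-slsb-pool:
        max-pool-size: 64
        timeout: 5
        timeout-unit: MINUTES
    default-slsb-instance-pool: my-slsb-pool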

B.8. Hibernate Validator

Provides support and integration for applications using Hibernate Validator.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>hibernate-validator</artifactId>
</dependency>

B.9. IO

Primarily an internal fraction supporting I/O activities for higher-level fractions.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>io</artifactId>
</dependency>

Configuration

swarm.i-o.buffer-pools.KEY.buffer-size
The size of each buffer slice in bytes. If not set, an optimal value is calculated based on the available RAM in your system.
swarm.i-o.buffer-pools.KEY.buffers-per-slice
How many buffers per slice. If not set, an optimal value is calculated based on the available RAM in your system.
swarm.i-o.buffer-pools.KEY.direct-buffers
Whether the buffer pool uses direct buffers; some platforms do not support direct buffers
swarm.i-o.workers.KEY.io-threads
Specify the number of I/O threads to create for the worker. If not specified, a default will be chosen, calculated as cpuCount * 2
swarm.i-o.workers.KEY.stack-size
The stack size (in bytes) to attempt to use for worker threads.
swarm.i-o.workers.KEY.task-keepalive
Specify the number of milliseconds to keep non-core task threads alive.
swarm.i-o.workers.KEY.task-max-threads
Specify the maximum number of threads for the worker task thread pool. If not set, a default is used, calculated as cpuCount * 16
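
For example, a worker and a buffer pool could be tuned with these properties; the names and values in this sketch are illustrative only:

swarm:
  i-o:
    workers:
      default:
        io-threads: 4
        task-max-threads: 64
    buffer-pools:
      default:
        buffer-size: 16384
        direct-buffers: true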

B.10. JAX-RS

Provides support for building RESTful web services according to JSR-311.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jaxrs</artifactId>
</dependency>

Configuration

swarm.deployment.KEY.jaxrs.application-path
Set the JAX-RS application path, if the JAX-RS application class is autogenerated by Swarm
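
For example, for a deployment archive hypothetically named demo.war, the application path could be set as follows:

swarm:
  deployment:
    demo.war:
      jaxrs:
        application-path: /api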

B.10.1. JAX-RS + CDI

An internal fraction providing integration between JAX-RS and CDI.

For more information, see the JAX-RS and CDI fraction documentation.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jaxrs-cdi</artifactId>
</dependency>

B.10.2. JAX-RS + JAXB

Provides support within JAX-RS applications for the XML binding framework according to JSR-31 and JSR-222.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jaxrs-jaxb</artifactId>
</dependency>

B.10.3. JAX-RS + JSON-P

Provides support within JAX-RS applications for JSON processing according to JSR-374.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jaxrs-jsonp</artifactId>
</dependency>

B.10.4. JAX-RS + Multipart

Provides support within JAX-RS applications for MIME multipart form processing.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jaxrs-multipart</artifactId>
</dependency>

B.10.5. JAX-RS + Validator

Provides integration and support between JAX-RS applications and Hibernate Validator.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jaxrs-validator</artifactId>
</dependency>

B.11. JCA

Provides support for the Java Connector Architecture (JCA) according to JSR 322.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jca</artifactId>
</dependency>

Configuration

swarm.j-c-a.archive-validation.enabled
Specify whether archive validation is enabled
swarm.j-c-a.archive-validation.fail-on-error
Should an archive validation error report fail the deployment
swarm.j-c-a.archive-validation.fail-on-warn
Should an archive validation warning report fail the deployment
swarm.j-c-a.bean-validation.enabled
Specify whether bean validation is enabled
swarm.j-c-a.bootstrap-contexts.KEY.name
The name of the BootstrapContext
swarm.j-c-a.bootstrap-contexts.KEY.workmanager
The WorkManager instance for the BootstrapContext
swarm.j-c-a.cached-connection-manager.debug
Enable/disable debug information logging
swarm.j-c-a.cached-connection-manager.error
Enable/disable error information logging
swarm.j-c-a.cached-connection-manager.ignore-unknown-connections
Do not cache unknown connections
swarm.j-c-a.cached-connection-manager.install
Enable/disable the cached connection manager valve and interceptor
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.allow-core-timeout
Whether core threads may time out.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.core-threads
The core thread pool size which is smaller than the maximum pool size. If undefined, the core thread pool size is the same as the maximum thread pool size.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.current-thread-count
The current number of threads in the pool.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.handoff-executor
An executor to delegate tasks to in the event that a task cannot be accepted. If not specified, tasks that cannot be accepted will be silently discarded.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.keepalive-time
Used to specify the amount of time that pool threads should be kept running when idle; if not specified, threads will run until the executor is shut down.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.largest-thread-count
The largest number of threads that have ever simultaneously been in the pool.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.max-threads
The maximum thread pool size.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.name
The name of the thread pool.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.queue-length
The queue length.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.queue-size
The queue size.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.rejected-count
The number of tasks that have been passed to the handoff-executor (if one is specified) or discarded.
swarm.j-c-a.distributed-workmanagers.KEY.long-running-threads.KEY.thread-factory
Specifies the name of a specific thread factory to use to create worker threads. If not defined an appropriate default thread factory will be used.
swarm.j-c-a.distributed-workmanagers.KEY.name
The name of the DistributedWorkManager
swarm.j-c-a.distributed-workmanagers.KEY.policy
The policy decides when to redistribute a Work instance
swarm.j-c-a.distributed-workmanagers.KEY.policy-options
List of policy’s options key/value pairs
swarm.j-c-a.distributed-workmanagers.KEY.selector
The selector decides to which nodes in the network the Work instance is redistributed
swarm.j-c-a.distributed-workmanagers.KEY.selector-options
List of selector’s options key/value pairs
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.allow-core-timeout
Whether core threads may time out.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.core-threads
The core thread pool size which is smaller than the maximum pool size. If undefined, the core thread pool size is the same as the maximum thread pool size.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.current-thread-count
The current number of threads in the pool.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.handoff-executor
An executor to delegate tasks to in the event that a task cannot be accepted. If not specified, tasks that cannot be accepted will be silently discarded.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.keepalive-time
Used to specify the amount of time that pool threads should be kept running when idle; if not specified, threads will run until the executor is shut down.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.largest-thread-count
The largest number of threads that have ever simultaneously been in the pool.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.max-threads
The maximum thread pool size.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.name
The name of the thread pool.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.queue-length
The queue length.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.queue-size
The queue size.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.rejected-count
The number of tasks that have been passed to the handoff-executor (if one is specified) or discarded.
swarm.j-c-a.distributed-workmanagers.KEY.short-running-threads.KEY.thread-factory
Specifies the name of a specific thread factory to use to create worker threads. If not defined an appropriate default thread factory will be used.
swarm.j-c-a.tracer.enabled
Specify whether tracer is enabled
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.allow-core-timeout
Whether core threads may time out.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.core-threads
The core thread pool size which is smaller than the maximum pool size. If undefined, the core thread pool size is the same as the maximum thread pool size.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.current-thread-count
The current number of threads in the pool.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.handoff-executor
An executor to delegate tasks to in the event that a task cannot be accepted. If not specified, tasks that cannot be accepted will be silently discarded.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.keepalive-time
Used to specify the amount of time that pool threads should be kept running when idle; if not specified, threads will run until the executor is shut down.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.largest-thread-count
The largest number of threads that have ever simultaneously been in the pool.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.max-threads
The maximum thread pool size.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.name
The name of the thread pool.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.queue-length
The queue length.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.queue-size
The queue size.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.rejected-count
The number of tasks that have been passed to the handoff-executor (if one is specified) or discarded.
swarm.j-c-a.workmanagers.KEY.long-running-threads.KEY.thread-factory
Specifies the name of a specific thread factory to use to create worker threads. If not defined an appropriate default thread factory will be used.
swarm.j-c-a.workmanagers.KEY.name
The name of the WorkManager
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.allow-core-timeout
Whether core threads may time out.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.core-threads
The core thread pool size which is smaller than the maximum pool size. If undefined, the core thread pool size is the same as the maximum thread pool size.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.current-thread-count
The current number of threads in the pool.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.handoff-executor
An executor to delegate tasks to in the event that a task cannot be accepted. If not specified, tasks that cannot be accepted will be silently discarded.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.keepalive-time
Used to specify the amount of time that pool threads should be kept running when idle; if not specified, threads will run until the executor is shut down.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.largest-thread-count
The largest number of threads that have ever simultaneously been in the pool.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.max-threads
The maximum thread pool size.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.name
The name of the thread pool.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.queue-length
The queue length.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.queue-size
The queue size.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.rejected-count
The number of tasks that have been passed to the handoff-executor (if one is specified) or discarded.
swarm.j-c-a.workmanagers.KEY.short-running-threads.KEY.thread-factory
Specifies the name of a specific thread factory to use to create worker threads. If not defined an appropriate default thread factory will be used.
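
Any of the properties above can typically be supplied as JVM system properties when the uberjar is started. A minimal sketch, assuming a hypothetical uberjar named myapp-swarm.jar, that disables archive validation:

java -Dswarm.j-c-a.archive-validation.enabled=false -jar myapp-swarm.jar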

B.12. JMX

Provides support for Java Management Extensions (JMX) according to JSR-3.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jmx</artifactId>
</dependency>

Configuration

swarm.j-m-x.audit-log-configuration.enabled
Whether audit logging is enabled.
swarm.j-m-x.audit-log-configuration.log-boot
Whether operations should be logged on server boot.
swarm.j-m-x.audit-log-configuration.log-read-only
Whether operations that do not modify the configuration or any runtime services should be logged.
swarm.j-m-x.expression-expose-model.domain-name
The domain name to use for the 'expression' model controller JMX facade in the MBeanServer.
swarm.j-m-x.jmx-remoting-connector.use-management-endpoint
If true the connector will use the management endpoint, otherwise it will use the remoting subsystem one
swarm.j-m-x.non-core-mbean-sensitivity
Whether or not non-core MBeans, that is, MBeans not coming from the model controller, should be considered sensitive.
swarm.j-m-x.resolved-expose-model.domain-name
The domain name to use for the 'resolved' model controller JMX facade in the MBeanServer.
swarm.j-m-x.resolved-expose-model.proper-property-format
If false, PROPERTY type attributes are represented as a DMR string (the legacy behaviour). If true, PROPERTY type attributes are represented by a composite type where the key is a string, and the value has the same type as the property in the underlying model.
swarm.j-m-x.show-model
Alias for the existence of the 'resolved' model controller jmx facade. When writing, if set to 'true' it will add the 'resolved' model controller jmx facade resource with the default domain name.
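
As with the other fractions, these properties can typically be passed as JVM system properties. A minimal sketch, assuming a hypothetical uberjar named myapp-swarm.jar, that adds the 'resolved' model controller JMX facade with the default domain name:

java -Dswarm.j-m-x.show-model=true -jar myapp-swarm.jar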

B.13. JPA

Provides support for the Java Persistence API according to JSR-220.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jpa</artifactId>
</dependency>

Configuration

swarm.j-p-a.default-datasource
The name of the default global datasource.
swarm.j-p-a.default-extended-persistence-inheritance
Controls how JPA extended persistence context (XPC) inheritance is performed. 'DEEP' shares the extended persistence context at the top bean level. 'SHALLOW' shares the extended persistence context only with the parent bean (never with sibling beans).
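
For example, the default datasource can typically be selected through a JVM system property. A minimal sketch, where both the uberjar name myapp-swarm.jar and the datasource name ExampleDS are placeholders:

java -Dswarm.j-p-a.default-datasource=ExampleDS -jar myapp-swarm.jar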

B.14. JSF

Provides support for JavaServer Faces according to JSR-344.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jsf</artifactId>
</dependency>

Configuration

swarm.j-s-f.default-jsf-impl-slot
Default JSF implementation slot
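
As a sketch only, this property can typically be passed as a JVM system property; the slot name main and the uberjar name myapp-swarm.jar are illustrative placeholders:

java -Dswarm.j-s-f.default-jsf-impl-slot=main -jar myapp-swarm.jar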

B.15. JSON-P

Provides support for JSON Processing according to JSR-353.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jsonp</artifactId>
</dependency>

B.16. Keycloak

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>keycloak</artifactId>
</dependency>

Configuration

swarm.keycloak.json.path
Path to keycloak.json configuration
swarm.keycloak.realms.KEY.allow-any-hostname
SSL Setting
swarm.keycloak.realms.KEY.always-refresh-token
Refresh token on every single web request
swarm.keycloak.realms.KEY.auth-server-url
Base URL of the Realm Auth Server
swarm.keycloak.realms.KEY.auth-server-url-for-backend-requests
URL to use to make background calls to auth server
swarm.keycloak.realms.KEY.client-key-password
n/a
swarm.keycloak.realms.KEY.client-keystore
n/a
swarm.keycloak.realms.KEY.client-keystore-password
n/a
swarm.keycloak.realms.KEY.connection-pool-size
Connection pool size for the client used by the adapter
swarm.keycloak.realms.KEY.cors-allowed-headers
CORS allowed headers
swarm.keycloak.realms.KEY.cors-allowed-methods
CORS allowed methods
swarm.keycloak.realms.KEY.cors-max-age
CORS max-age header
swarm.keycloak.realms.KEY.disable-trust-manager
Adapter will not use a trust manager when making adapter HTTPS requests
swarm.keycloak.realms.KEY.enable-cors
Enable Keycloak CORS support
swarm.keycloak.realms.KEY.expose-token
Enable secure URL that exposes access token
swarm.keycloak.realms.KEY.principal-attribute
token attribute to use to set Principal name
swarm.keycloak.realms.KEY.realm-public-key
Public key of the realm
swarm.keycloak.realms.KEY.register-node-at-startup
Cluster setting
swarm.keycloak.realms.KEY.register-node-period
how often to re-register node
swarm.keycloak.realms.KEY.ssl-required
Specify if SSL is required (valid values are all, external and none)
swarm.keycloak.realms.KEY.token-store
cookie or session storage for auth session data
swarm.keycloak.realms.KEY.truststore
Truststore used for adapter client HTTPS requests
swarm.keycloak.realms.KEY.truststore-password
Password of the Truststore
swarm.keycloak.secure-deployments.KEY.allow-any-hostname
SSL Setting
swarm.keycloak.secure-deployments.KEY.always-refresh-token
Refresh token on every single web request
swarm.keycloak.secure-deployments.KEY.auth-server-url
Base URL of the Realm Auth Server
swarm.keycloak.secure-deployments.KEY.auth-server-url-for-backend-requests
URL to use to make background calls to auth server
swarm.keycloak.secure-deployments.KEY.bearer-only
Bearer Token Auth only
swarm.keycloak.secure-deployments.KEY.client-key-password
n/a
swarm.keycloak.secure-deployments.KEY.client-keystore
n/a
swarm.keycloak.secure-deployments.KEY.client-keystore-password
n/a
swarm.keycloak.secure-deployments.KEY.connection-pool-size
Connection pool size for the client used by the adapter
swarm.keycloak.secure-deployments.KEY.cors-allowed-headers
CORS allowed headers
swarm.keycloak.secure-deployments.KEY.cors-allowed-methods
CORS allowed methods
swarm.keycloak.secure-deployments.KEY.cors-max-age
CORS max-age header
swarm.keycloak.secure-deployments.KEY.credentials.KEY.value
Credential value
swarm.keycloak.secure-deployments.KEY.disable-trust-manager
Adapter will not use a trust manager when making adapter HTTPS requests
swarm.keycloak.secure-deployments.KEY.enable-basic-auth
Enable Basic Authentication
swarm.keycloak.secure-deployments.KEY.enable-cors
Enable Keycloak CORS support
swarm.keycloak.secure-deployments.KEY.expose-token
Enable secure URL that exposes access token
swarm.keycloak.secure-deployments.KEY.min-time-between-jwks-requests
If the adapter encounters a token signed by an unknown public key, it tries to download a new public key from the Keycloak server. However, it does not attempt the download if it already tried within the last 'min-time-between-jwks-requests' seconds
swarm.keycloak.secure-deployments.KEY.principal-attribute
token attribute to use to set Principal name
swarm.keycloak.secure-deployments.KEY.public-client
Public client
swarm.keycloak.secure-deployments.KEY.realm
Keycloak realm
swarm.keycloak.secure-deployments.KEY.realm-public-key
Public key of the realm
swarm.keycloak.secure-deployments.KEY.register-node-at-startup
Cluster setting
swarm.keycloak.secure-deployments.KEY.register-node-period
how often to re-register node
swarm.keycloak.secure-deployments.KEY.resource
Application name
swarm.keycloak.secure-deployments.KEY.ssl-required
Specify if SSL is required (valid values are all, external and none)
swarm.keycloak.secure-deployments.KEY.token-minimum-time-to-live
The adapter will refresh the token if the current token is expired OR will expire in 'token-minimum-time-to-live' seconds or less
swarm.keycloak.secure-deployments.KEY.token-store
cookie or session storage for auth session data
swarm.keycloak.secure-deployments.KEY.truststore
Truststore used for adapter client HTTPS requests
swarm.keycloak.secure-deployments.KEY.truststore-password
Password of the Truststore
swarm.keycloak.secure-deployments.KEY.turn-off-change-session-id-on-login
By default, the session ID is changed on a successful login. Set this to true to disable that behavior
swarm.keycloak.secure-deployments.KEY.use-resource-role-mappings
Use resource level permissions from token
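
For example, the location of the adapter configuration can typically be supplied as a JVM system property. A minimal sketch, where the path and the uberjar name myapp-swarm.jar are placeholders:

java -Dswarm.keycloak.json.path=/path/to/keycloak.json -jar myapp-swarm.jar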

B.17. Logging

Provides facilities to configure logging categories, levels and handlers.

When specifying log levels through properties, note that logger categories contain dots, so the category must be enclosed in square brackets, for example swarm.logging.loggers.[com.mycorp.logger].level.
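
For example, a category-specific log level can typically be set as a JVM system property when starting the uberjar. A minimal sketch, where the uberjar name myapp-swarm.jar and the DEBUG level are placeholders; the argument is quoted so the shell does not interpret the square brackets:

java "-Dswarm.logging.loggers.[com.mycorp.logger].level=DEBUG" -jar myapp-swarm.jar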

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>logging</artifactId>
</dependency>

Configuration

swarm.logging.add-logging-api-dependencies
Indicates whether or not logging API dependencies should be added to deployments during the deployment process. A value of true will add the dependencies to the deployment. A value of false will prevent the deployment from being processed for logging API dependencies.
swarm.logging.async-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.async-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.async-handlers.KEY.level
The log level specifying which message levels will be logged by this handler. Message levels lower than this value will be discarded.
swarm.logging.async-handlers.KEY.overflow-action
Specify what action to take when the queue overflows. The valid options are 'block' and 'discard'
swarm.logging.async-handlers.KEY.queue-length
The queue length to use before flushing writes
swarm.logging.async-handlers.KEY.subhandlers
The Handlers associated with this async handler.
swarm.logging.console-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.console-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.console-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.console-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.console-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.console-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.console-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.console-handlers.KEY.target
Defines the target of the console handler. The value can be System.out, System.err or console.
swarm.logging.custom-formatters.KEY.attribute-class
The logging handler class to be used.
swarm.logging.custom-formatters.KEY.module
The module that the logging handler depends on.
swarm.logging.custom-formatters.KEY.properties
Defines the properties used for the logging handler. All properties must be accessible via a setter method.
swarm.logging.custom-handlers.KEY.attribute-class
The logging handler class to be used.
swarm.logging.custom-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.custom-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.custom-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.custom-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.custom-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.custom-handlers.KEY.module
The module that the logging handler depends on.
swarm.logging.custom-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.custom-handlers.KEY.properties
Defines the properties used for the logging handler. All properties must be accessible via a setter method.
swarm.logging.file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.log-files.KEY.file-size
The size of the log file in bytes.
swarm.logging.log-files.KEY.last-modified-time
The date, in milliseconds, the file was last modified.
swarm.logging.log-files.KEY.last-modified-timestamp
The date, in ISO 8601 format, the file was last modified.
swarm.logging.log-files.KEY.stream
Provides the server log as a response attachment. The response result value is the unique id of the attachment.
swarm.logging.loggers.KEY.category
Specifies the category for the logger.
swarm.logging.loggers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.loggers.KEY.handlers
The handlers associated with the logger.
swarm.logging.loggers.KEY.level
The log level specifying which message levels will be logged by the logger. Message levels lower than this value will be discarded.
swarm.logging.loggers.KEY.use-parent-handlers
Specifies whether or not this logger should send its output to its parent Logger.
swarm.logging.logging-profiles.KEY.async-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.async-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.async-handlers.KEY.level
The log level specifying which message levels will be logged by this handler. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.async-handlers.KEY.overflow-action
Specify what action to take when the queue overflows. The valid options are 'block' and 'discard'
swarm.logging.logging-profiles.KEY.async-handlers.KEY.queue-length
The queue length to use before flushing writes
swarm.logging.logging-profiles.KEY.async-handlers.KEY.subhandlers
The Handlers associated with this async handler.
swarm.logging.logging-profiles.KEY.console-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.logging-profiles.KEY.console-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.console-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.logging-profiles.KEY.console-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.console-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.logging-profiles.KEY.console-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.console-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.logging-profiles.KEY.console-handlers.KEY.target
Defines the target of the console handler. The value can be System.out, System.err or console.
swarm.logging.logging-profiles.KEY.custom-formatters.KEY.attribute-class
The logging handler class to be used.
swarm.logging.logging-profiles.KEY.custom-formatters.KEY.module
The module that the logging handler depends on.
swarm.logging.logging-profiles.KEY.custom-formatters.KEY.properties
Defines the properties used for the logging handler. All properties must be accessible via a setter method.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.attribute-class
The logging handler class to be used.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.module
The module that the logging handler depends on.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.logging-profiles.KEY.custom-handlers.KEY.properties
Defines the properties used for the logging handler. All properties must be accessible via a setter method.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.logging-profiles.KEY.log-files.KEY.file-size
The size of the log file in bytes.
swarm.logging.logging-profiles.KEY.log-files.KEY.last-modified-time
The date, in milliseconds, the file was last modified.
swarm.logging.logging-profiles.KEY.log-files.KEY.last-modified-timestamp
The date, in ISO 8601 format, the file was last modified.
swarm.logging.logging-profiles.KEY.log-files.KEY.stream
Provides the server log as a response attachment. The response result value is the unique id of the attachment.
swarm.logging.logging-profiles.KEY.loggers.KEY.category
Specifies the category for the logger.
swarm.logging.logging-profiles.KEY.loggers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.loggers.KEY.handlers
The handlers associated with the logger.
swarm.logging.logging-profiles.KEY.loggers.KEY.level
The log level specifying which message levels will be logged by the logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.loggers.KEY.use-parent-handlers
Specifies whether or not this logger should send its output to its parent Logger.
swarm.logging.logging-profiles.KEY.pattern-formatters.KEY.color-map
The color-map attribute allows for a comma-delimited list of colors to be used for different levels with a pattern formatter. The format for the color mapping pattern is level-name:color-name. Valid levels: severe, fatal, error, warn, warning, info, debug, trace, config, fine, finer, finest. Valid colors: black, green, red, yellow, blue, magenta, cyan, white, brightblack, brightred, brightgreen, brightblue, brightyellow, brightmagenta, brightcyan, brightwhite
swarm.logging.logging-profiles.KEY.pattern-formatters.KEY.pattern
Defines a pattern for the formatter.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.logging-profiles.KEY.periodic-rotating-file-handlers.KEY.suffix
Set the suffix string. The string is in a format which can be understood by java.text.SimpleDateFormat. The period of the rotation is automatically calculated based on the suffix.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.max-backup-index
The maximum number of backups to keep.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.rotate-on-boot
Indicates the file should be rotated each time the file attribute is changed. This always happens at initialization time.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.rotate-size
The size at which to rotate the log file.
swarm.logging.logging-profiles.KEY.periodic-size-rotating-file-handlers.KEY.suffix
Set the suffix string. The string is in a format which can be understood by java.text.SimpleDateFormat. The period of the rotation is automatically calculated based on the suffix.
swarm.logging.logging-profiles.KEY.root-logger.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.root-logger.handlers
The handlers associated with the root logger.
swarm.logging.logging-profiles.KEY.root-logger.level
The log level specifying which message levels will be logged by the root logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.max-backup-index
The maximum number of backups to keep.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.rotate-on-boot
Indicates the file should be rotated each time the file attribute is changed. This always happens at initialization time.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.rotate-size
The size at which to rotate the log file.
swarm.logging.logging-profiles.KEY.size-rotating-file-handlers.KEY.suffix
Set the suffix string. The string is in a format which can be understood by java.text.SimpleDateFormat. The suffix does not determine when the file should be rotated.
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.app-name
The app name used when formatting the message in RFC5424 format. By default the app name is "java".
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.facility
Facility as defined by RFC-5424 (http://tools.ietf.org/html/rfc5424) and RFC-3164 (http://tools.ietf.org/html/rfc3164).
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.hostname
The name of the host the messages are being sent from. For example the name of the host the application server is running on.
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.port
The port the syslog server is listening on.
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.server-address
The address of the syslog server.
swarm.logging.logging-profiles.KEY.syslog-handlers.KEY.syslog-format
Formats the log message according to the RFC specification.
swarm.logging.pattern-formatters.KEY.color-map
The color-map attribute allows for a comma-delimited list of colors to be used for different levels with a pattern formatter. The format for the color mapping pattern is level-name:color-name. Valid levels: severe, fatal, error, warn, warning, info, debug, trace, config, fine, finer, finest. Valid colors: black, green, red, yellow, blue, magenta, cyan, white, brightblack, brightred, brightgreen, brightblue, brightyellow, brightmagenta, brightcyan, brightwhite
swarm.logging.pattern-formatters.KEY.pattern
Defines a pattern for the formatter.
swarm.logging.periodic-rotating-file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.periodic-rotating-file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.periodic-rotating-file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.periodic-rotating-file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.periodic-rotating-file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.periodic-rotating-file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.periodic-rotating-file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.periodic-rotating-file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.periodic-rotating-file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.periodic-rotating-file-handlers.KEY.suffix
Set the suffix string. The string is in a format which can be understood by java.text.SimpleDateFormat. The period of the rotation is automatically calculated based on the suffix.
swarm.logging.periodic-size-rotating-file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.periodic-size-rotating-file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.periodic-size-rotating-file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.periodic-size-rotating-file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.periodic-size-rotating-file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.periodic-size-rotating-file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.periodic-size-rotating-file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.periodic-size-rotating-file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.periodic-size-rotating-file-handlers.KEY.max-backup-index
The maximum number of backups to keep.
swarm.logging.periodic-size-rotating-file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.periodic-size-rotating-file-handlers.KEY.rotate-on-boot
Indicates the file should be rotated each time the file attribute is changed. This always happens at initialization time.
swarm.logging.periodic-size-rotating-file-handlers.KEY.rotate-size
The size at which to rotate the log file.
swarm.logging.periodic-size-rotating-file-handlers.KEY.suffix
Set the suffix string. The string is in a format which can be understood by java.text.SimpleDateFormat. The period of the rotation is automatically calculated based on the suffix.
swarm.logging.root-logger.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.root-logger.handlers
The handlers associated with the root logger.
swarm.logging.root-logger.level
The log level specifying which message levels will be logged by the root logger. Message levels lower than this value will be discarded.
swarm.logging.size-rotating-file-handlers.KEY.append
Specify whether to append to the target file.
swarm.logging.size-rotating-file-handlers.KEY.autoflush
Automatically flush after each write.
swarm.logging.size-rotating-file-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.size-rotating-file-handlers.KEY.encoding
The character encoding used by this Handler.
swarm.logging.size-rotating-file-handlers.KEY.file
The file description, consisting of the path and an optional 'relative-to' path.
swarm.logging.size-rotating-file-handlers.KEY.filter-spec
A filter expression value to define a filter. Example for a filter that does not match a pattern: not(match("JBAS.*"))
swarm.logging.size-rotating-file-handlers.KEY.formatter
Defines a pattern for the formatter.
swarm.logging.size-rotating-file-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.size-rotating-file-handlers.KEY.max-backup-index
The maximum number of backups to keep.
swarm.logging.size-rotating-file-handlers.KEY.named-formatter
The name of the defined formatter to be used on the handler.
swarm.logging.size-rotating-file-handlers.KEY.rotate-on-boot
Indicates the file should be rotated each time the file attribute is changed. This always happens at initialization time.
swarm.logging.size-rotating-file-handlers.KEY.rotate-size
The size at which to rotate the log file.
swarm.logging.size-rotating-file-handlers.KEY.suffix
Set the suffix string. The string is in a format which can be understood by java.text.SimpleDateFormat. The suffix does not determine when the file should be rotated.
swarm.logging.syslog-handlers.KEY.app-name
The app name used when formatting the message in RFC5424 format. By default the app name is "java".
swarm.logging.syslog-handlers.KEY.enabled
If set to true, the handler is enabled and functions as normal; if set to false, the handler is ignored when processing log messages.
swarm.logging.syslog-handlers.KEY.facility
Facility as defined by RFC-5424 (http://tools.ietf.org/html/rfc5424) and RFC-3164 (http://tools.ietf.org/html/rfc3164).
swarm.logging.syslog-handlers.KEY.hostname
The name of the host the messages are being sent from. For example the name of the host the application server is running on.
swarm.logging.syslog-handlers.KEY.level
The log level specifying which message levels will be logged by this logger. Message levels lower than this value will be discarded.
swarm.logging.syslog-handlers.KEY.port
The port the syslog server is listening on.
swarm.logging.syslog-handlers.KEY.server-address
The address of the syslog server.
swarm.logging.syslog-handlers.KEY.syslog-format
Formats the log message according to the RFC specification.
swarm.logging.use-deployment-logging-config
Indicates whether or not deployments should use a logging configuration file found in the deployment to configure the log manager. If set to true and a logging configuration file is found in the deployment's META-INF or WEB-INF/classes directory, the log manager is configured with those settings. If set to false, the server's logging configuration is used regardless of any logging configuration files supplied in the deployment.
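
For example, the root logger level can typically be raised through a JVM system property. A minimal sketch, where the uberjar name myapp-swarm.jar and the DEBUG level are placeholders:

java -Dswarm.logging.root-logger.level=DEBUG -jar myapp-swarm.jar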

B.18. Management

Provides the Red Hat JBoss EAP management API.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>management</artifactId>
</dependency>

Configuration

swarm.management.audit-access.audit-log-logger.enabled
Whether audit logging is enabled.
swarm.management.audit-access.audit-log-logger.log-boot
Whether operations should be logged on server boot.
swarm.management.audit-access.audit-log-logger.log-read-only
Whether operations that do not modify the configuration or any runtime services should be logged.
swarm.management.audit-access.file-handlers.KEY.disabled-due-to-failure
Whether this handler has been disabled due to logging failures.
swarm.management.audit-access.file-handlers.KEY.failure-count
The number of logging failures since the handler was initialized.
swarm.management.audit-access.file-handlers.KEY.formatter
The formatter used to format the log messages.
swarm.management.audit-access.file-handlers.KEY.max-failure-count
The maximum number of logging failures before disabling this handler.
swarm.management.audit-access.file-handlers.KEY.path
The path of the audit log file.
swarm.management.audit-access.file-handlers.KEY.relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.audit-access.in-memory-handlers.KEY.max-history
The maximum number of operation stored in history for this handler.
swarm.management.audit-access.json-formatters.KEY.compact
If true will format the JSON on one line. There may still be values containing new lines, so if having the whole record on one line is important, set escape-new-line or escape-control-characters to true.
swarm.management.audit-access.json-formatters.KEY.date-format
The date format to use, as understood by java.text.SimpleDateFormat. Will be ignored if include-date="false".
swarm.management.audit-access.json-formatters.KEY.date-separator
The separator between the date and the rest of the formatted log message. Will be ignored if include-date="false".
swarm.management.audit-access.json-formatters.KEY.escape-control-characters
If true, all control characters (ASCII entries with a decimal value < 32) are escaped with the ASCII code in octal, e.g. a new line becomes '#012'. If this is true, it will override escape-new-line="false".
swarm.management.audit-access.json-formatters.KEY.escape-new-line
If true will escape all new lines with the ascii code in octal, e.g. "#012".
swarm.management.audit-access.json-formatters.KEY.include-date
Whether or not to include the date in the formatted log record.
swarm.management.audit-access.periodic-rotating-file-handlers.KEY.disabled-due-to-failure
Whether this handler has been disabled due to logging failures.
swarm.management.audit-access.periodic-rotating-file-handlers.KEY.failure-count
The number of logging failures since the handler was initialized.
swarm.management.audit-access.periodic-rotating-file-handlers.KEY.formatter
The formatter used to format the log messages.
swarm.management.audit-access.periodic-rotating-file-handlers.KEY.max-failure-count
The maximum number of logging failures before disabling this handler.
swarm.management.audit-access.periodic-rotating-file-handlers.KEY.path
The path of the audit log file.
swarm.management.audit-access.periodic-rotating-file-handlers.KEY.relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.audit-access.periodic-rotating-file-handlers.KEY.suffix
The suffix string in a format which can be understood by java.text.SimpleDateFormat. The period of the rotation is automatically calculated based on the suffix.
swarm.management.audit-access.size-rotating-file-handlers.KEY.disabled-due-to-failure
Whether this handler has been disabled due to logging failures.
swarm.management.audit-access.size-rotating-file-handlers.KEY.failure-count
The number of logging failures since the handler was initialized.
swarm.management.audit-access.size-rotating-file-handlers.KEY.formatter
The formatter used to format the log messages.
swarm.management.audit-access.size-rotating-file-handlers.KEY.max-backup-index
The maximum number of backups to keep.
swarm.management.audit-access.size-rotating-file-handlers.KEY.max-failure-count
The maximum number of logging failures before disabling this handler.
swarm.management.audit-access.size-rotating-file-handlers.KEY.path
The path of the audit log file.
swarm.management.audit-access.size-rotating-file-handlers.KEY.relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.audit-access.size-rotating-file-handlers.KEY.rotate-size
The size at which to rotate the log file.
swarm.management.audit-access.syslog-handlers.KEY.app-name
The application name to add to the syslog records as defined in section 6.2.5 of RFC-5424. If not specified it will default to the name of the product.
swarm.management.audit-access.syslog-handlers.KEY.disabled-due-to-failure
Whether this handler has been disabled due to logging failures.
swarm.management.audit-access.syslog-handlers.KEY.facility
The facility to use for syslog logging as defined in section 6.2.1 of RFC-5424, and section 4.1.1 of RFC-3164.
swarm.management.audit-access.syslog-handlers.KEY.failure-count
The number of logging failures since the handler was initialized.
swarm.management.audit-access.syslog-handlers.KEY.formatter
The formatter used to format the log messages.
swarm.management.audit-access.syslog-handlers.KEY.max-failure-count
The maximum number of logging failures before disabling this handler.
swarm.management.audit-access.syslog-handlers.KEY.max-length
The maximum length in bytes a log message, including the header, is allowed to be. If undefined, it will default to 1024 bytes if the syslog-format is RFC3164, or 2048 bytes if the syslog-format is RFC5424.
swarm.management.audit-access.syslog-handlers.KEY.syslog-format
Whether to set the syslog format to the one specified in RFC-5424 or RFC-3164.
swarm.management.audit-access.syslog-handlers.KEY.tcp-protocol.host
The host of the syslog server for the tcp requests.
swarm.management.audit-access.syslog-handlers.KEY.tcp-protocol.message-transfer
The message transfer setting as described in section 3.4 of RFC-6587. This can either be OCTET_COUNTING as described in section 3.4.1 of RFC-6587, or NON_TRANSPARENT_FRAMING as described in section 3.4.2 of RFC-6587. See your syslog provider’s documentation for what is supported.
swarm.management.audit-access.syslog-handlers.KEY.tcp-protocol.port
The port of the syslog server for the tcp requests.
swarm.management.audit-access.syslog-handlers.KEY.tcp-protocol.reconnect-timeout
If a connection drop is detected, the number of seconds to wait before reconnecting. A negative number means don’t reconnect automatically.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.client-certificate-store-authentication.key-password
The password for the keystore key.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.client-certificate-store-authentication.keystore-password
The password for the keystore.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.client-certificate-store-authentication.keystore-path
The path of the keystore.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.client-certificate-store-authentication.keystore-relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'keystore-relative-to' is provided, the value of the 'keystore-path' attribute is treated as relative to the path specified by this attribute.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.host
The host of the syslog server for the tls over tcp requests.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.message-transfer
The message transfer setting as described in section 3.4 of RFC-6587. This can either be OCTET_COUNTING as described in section 3.4.1 of RFC-6587, or NON_TRANSPARENT_FRAMING as described in section 3.4.2 of RFC-6587. See your syslog provider’s documentation for what is supported.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.port
The port of the syslog server for the tls over tcp requests.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.reconnect-timeout
If a connection drop is detected, the number of seconds to wait before reconnecting. A negative number means don’t reconnect automatically.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.truststore-authentication.keystore-password
The password for the truststore.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.truststore-authentication.keystore-path
The path of the truststore.
swarm.management.audit-access.syslog-handlers.KEY.tls-protocol.truststore-authentication.keystore-relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'keystore-relative-to' is provided, the value of the 'keystore-path' attribute is treated as relative to the path specified by this attribute.
swarm.management.audit-access.syslog-handlers.KEY.truncate
Whether or not a message, including the header, should truncate the message if the length in bytes is greater than the maximum length. If set to false messages will be split and sent with the same header values.
swarm.management.audit-access.syslog-handlers.KEY.udp-protocol.host
The host of the syslog server for the udp requests.
swarm.management.audit-access.syslog-handlers.KEY.udp-protocol.port
The port of the syslog server for the udp requests.
swarm.management.authorization-access.all-role-names
The official names of all roles supported by the current management access control provider. This includes any standard roles as well as any user-defined roles.
swarm.management.authorization-access.application-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.address
Address pattern describing a resource or resources to which the constraint applies.
swarm.management.authorization-access.application-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.attributes
List of the names of attributes to which the constraint specifically applies.
swarm.management.authorization-access.application-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.entire-resource
True if the constraint applies to the resource as a whole; false if it only applies to one or more attributes or operations.
swarm.management.authorization-access.application-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.operations
List of the names of operations to which the constraint specifically applies.
swarm.management.authorization-access.application-classification-constraint.types.KEY.classifications.KEY.configured-application
Set to override the default as to whether the constraint is considered an application resource.
swarm.management.authorization-access.application-classification-constraint.types.KEY.classifications.KEY.default-application
Whether targets having this application type constraint are considered application resources.
swarm.management.authorization-access.permission-combination-policy
The policy for combining access control permissions when the authorization policy grants the user more than one type of permission for a given action. In the standard role based authorization policy, this would occur when a user maps to multiple roles. The 'permissive' policy means if any of the permissions allow the action, the action is allowed. The 'rejecting' policy means the existence of multiple permissions should result in an error.
swarm.management.authorization-access.provider
The provider to use for management access control decisions.
swarm.management.authorization-access.role-mappings.KEY.excludes.KEY.name
The name of the user or group being mapped.
swarm.management.authorization-access.role-mappings.KEY.excludes.KEY.realm
An optional attribute to map based on the realm used for authentication.
swarm.management.authorization-access.role-mappings.KEY.excludes.KEY.type
The type of the Principal being mapped, either 'group' or 'user'.
swarm.management.authorization-access.role-mappings.KEY.include-all
Configure if all authenticated users should be automatically assigned this role.
swarm.management.authorization-access.role-mappings.KEY.includes.KEY.name
The name of the user or group being mapped.
swarm.management.authorization-access.role-mappings.KEY.includes.KEY.realm
An optional attribute to map based on the realm used for authentication.
swarm.management.authorization-access.role-mappings.KEY.includes.KEY.type
The type of the Principal being mapped, either 'group' or 'user'.
swarm.management.authorization-access.sensitivity-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.address
Address pattern describing a resource or resources to which the constraint applies.
swarm.management.authorization-access.sensitivity-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.attributes
List of the names of attributes to which the constraint specifically applies.
swarm.management.authorization-access.sensitivity-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.entire-resource
True if the constraint applies to the resource as a whole; false if it only applies to one or more attributes or operations.
swarm.management.authorization-access.sensitivity-classification-constraint.types.KEY.classifications.KEY.applies-tos.KEY.operations
List of the names of operations to which the constraint specifically applies.
swarm.management.authorization-access.sensitivity-classification-constraint.types.KEY.classifications.KEY.configured-application
Set to override the default as to whether the constraint is considered an application resource.
swarm.management.authorization-access.sensitivity-classification-constraint.types.KEY.classifications.KEY.default-application
Whether targets having this application type constraint are considered application resources.
swarm.management.authorization-access.standard-role-names
The official names of the standard roles supported by the current management access control provider.
swarm.management.authorization-access.vault-expression-constraint.configured-requires-read
Set to override the default as to whether reading attributes containing vault expressions should be considered sensitive.
swarm.management.authorization-access.vault-expression-constraint.configured-requires-write
Set to override the default as to whether writing attributes containing vault expressions should be considered sensitive.
swarm.management.authorization-access.vault-expression-constraint.default-requires-read
Whether reading attributes containing vault expressions should be considered sensitive.
swarm.management.authorization-access.vault-expression-constraint.default-requires-write
Whether writing attributes containing vault expressions should be considered sensitive.
swarm.management.bind.interface
Interface to bind for the management ports
swarm.management.configuration-changes-service.max-history
The maximum number of configuration changes stored in history.
swarm.management.http-interface-management-interface.allowed-origins
Comma separated list of trusted Origins for sending Cross-Origin Resource Sharing requests on the management API once the user is authenticated.
swarm.management.http-interface-management-interface.console-enabled
Flag that indicates admin console is enabled
swarm.management.http-interface-management-interface.http-upgrade-enabled
Flag that indicates HTTP Upgrade is enabled, which allows HTTP requests to be upgraded to native remoting connections
swarm.management.http-interface-management-interface.sasl-protocol
The name of the protocol to be passed to the SASL mechanisms used for authentication.
swarm.management.http-interface-management-interface.secure-socket-binding
The name of the socket binding configuration to use for the HTTPS management interface’s socket.
swarm.management.http-interface-management-interface.security-realm
The security realm to use for the HTTP management interface.
swarm.management.http-interface-management-interface.server-name
The name of the server used in the initial Remoting exchange and within the SASL mechanisms.
swarm.management.http-interface-management-interface.socket-binding
The name of the socket binding configuration to use for the HTTP management interface’s socket.
swarm.management.http.disable
Flag to disable HTTP access to management interface
swarm.management.http.port
Port for HTTP access to management interface
swarm.management.https.port
Port for HTTPS access to management interface
swarm.management.ldap-connections.KEY.handles-referrals-for
List of URLs that this connection handles referrals for.
swarm.management.ldap-connections.KEY.initial-context-factory
The initial context factory to establish the LdapContext.
swarm.management.ldap-connections.KEY.properties.KEY.value
The optional value of the property.
swarm.management.ldap-connections.KEY.referrals
The referral handling mode for this connection.
swarm.management.ldap-connections.KEY.search-credential
The credential to use when connecting to perform a search.
swarm.management.ldap-connections.KEY.search-dn
The distinguished name to use when connecting to the LDAP server to perform searches.
swarm.management.ldap-connections.KEY.security-realm
The security realm to reference to obtain a configured SSLContext to use when establishing the connection.
swarm.management.ldap-connections.KEY.url
The URL to use to connect to the LDAP server.
swarm.management.management-operations-service.active-operations.KEY.access-mechanism
The mechanism used to submit a request to the server.
swarm.management.management-operations-service.active-operations.KEY.address
The address of the resource targeted by the operation. The value in the final element of the address will be '<hidden>' if the caller is not authorized to address the operation’s target resource.
swarm.management.management-operations-service.active-operations.KEY.caller-thread
The name of the thread that is executing the operation.
swarm.management.management-operations-service.active-operations.KEY.cancelled
Whether the operation has been cancelled.
swarm.management.management-operations-service.active-operations.KEY.domain-rollout
True if the operation is a subsidiary request on a domain process other than the one directly handling the original operation, executing locally as part of the rollout of the original operation across the domain.
swarm.management.management-operations-service.active-operations.KEY.domain-uuid
Identifier of an overall multi-process domain operation of which this operation is a part, or undefined if this operation is not associated with such a domain operation.
swarm.management.management-operations-service.active-operations.KEY.exclusive-running-time
Amount of time the operation has been executing with the exclusive operation execution lock held, or -1 if the operation does not hold the exclusive execution lock.
swarm.management.management-operations-service.active-operations.KEY.execution-status
The current activity of the operation.
swarm.management.management-operations-service.active-operations.KEY.operation
The name of the operation, or '<hidden>' if the caller is not authorized to address the operation’s target resource.
swarm.management.management-operations-service.active-operations.KEY.running-time
Amount of time the operation has been executing.
swarm.management.native-interface-management-interface.sasl-protocol
The name of the protocol to be passed to the SASL mechanisms used for authentication.
swarm.management.native-interface-management-interface.security-realm
The security realm to use for the native management interface.
swarm.management.native-interface-management-interface.server-name
The name of the server used in the initial Remoting exchange and within the SASL mechanisms.
swarm.management.native-interface-management-interface.socket-binding
The name of the socket binding configuration to use for the native management interface’s socket.
swarm.management.security-realms.KEY.jaas-authentication.assign-groups
Map the roles loaded by JAAS to groups.
swarm.management.security-realms.KEY.jaas-authentication.name
The name of the JAAS configuration to use.
swarm.management.security-realms.KEY.kerberos-authentication.remove-realm
After authentication, should the realm name be stripped from the user’s name.
swarm.management.security-realms.KEY.kerberos-server-identity.keytabs.KEY.debug
Should additional debug logging be enabled during TGT acquisition?
swarm.management.security-realms.KEY.kerberos-server-identity.keytabs.KEY.for-hosts
A server can be accessed using different host names; this attribute specifies which host names this keytab can be used with.
swarm.management.security-realms.KEY.kerberos-server-identity.keytabs.KEY.path
The path to the keytab.
swarm.management.security-realms.KEY.kerberos-server-identity.keytabs.KEY.relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.security-realms.KEY.ldap-authentication.advanced-filter
The fully defined filter to be used to search for the user based on their entered user ID. The filter should contain a variable in the form {0} - this will be replaced with the username supplied by the user.
swarm.management.security-realms.KEY.ldap-authentication.allow-empty-passwords
Should empty passwords be accepted from the user being authenticated.
swarm.management.security-realms.KEY.ldap-authentication.base-dn
The base distinguished name to commence the search for the user.
swarm.management.security-realms.KEY.ldap-authentication.by-access-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authentication.by-access-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authentication.by-access-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authentication.by-access-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authentication.by-search-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authentication.by-search-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authentication.by-search-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authentication.by-search-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authentication.connection
The name of the connection to use to connect to LDAP.
swarm.management.security-realms.KEY.ldap-authentication.recursive
Whether the search should be recursive.
swarm.management.security-realms.KEY.ldap-authentication.user-dn
The name of the attribute which is the user’s distinguished name.
swarm.management.security-realms.KEY.ldap-authentication.username-attribute
The name of the attribute to search for the user. This filter will then perform a simple search where the username entered by the user matches the attribute specified here.
swarm.management.security-realms.KEY.ldap-authentication.username-load
The name of the attribute that should be loaded from the authenticated user’s LDAP entry to replace the username that they supplied, e.g. to convert an e-mail address to an ID or correct the case entered.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.base-dn
The starting point of the search for the user.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-access-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-access-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-access-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-access-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-search-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-search-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-search-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.by-search-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.filter
The filter to use for the LDAP search.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.force
Authentication may have already converted the username to a distinguished name; set this to force the conversion to occur again before loading groups.
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.recursive
Should levels below the starting point be recursively searched?
swarm.management.security-realms.KEY.ldap-authorization.advanced-filter-username-to-dn.user-dn-attribute
The attribute on the user entry that contains their distinguished name.
swarm.management.security-realms.KEY.ldap-authorization.connection
The name of the connection to use to connect to LDAP.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.base-dn
The starting point of the search for the group.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-access-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-access-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-access-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-access-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-search-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-search-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-search-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.by-search-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.group-dn-attribute
The attribute on a group entry that is its distinguished name.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.group-name
An enumeration to identify if groups should be referenced using a simple name or their distinguished name.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.group-name-attribute
The attribute on a group entry that is its simple name.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.iterative
Should further searches be performed to identify groups that the groups identified are a member of?
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.prefer-original-connection
After following a referral should subsequent searches prefer the original connection or use the connection of the last referral.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.principal-attribute
The attribute on the group entry that references the principal.
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.recursive
Should levels below the starting point be recursively searched?
swarm.management.security-realms.KEY.ldap-authorization.group-to-principal-group-search.search-by
Should searches be performed using simple names or distinguished names?
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-access-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-access-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-access-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-access-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-search-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-search-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-search-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.by-search-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.group-attribute
The attribute on the principal which references the group the principal is a member of.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.group-dn-attribute
The attribute on a group entry that is its distinguished name.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.group-name
An enumeration to identify if groups should be referenced using a simple name or their distinguished name.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.group-name-attribute
The attribute on a group entry that is its simple name.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.iterative
Should further searches be performed to identify groups that the groups identified are a member of?
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.prefer-original-connection
After following a referral should subsequent searches prefer the original connection or use the connection of the last referral.
swarm.management.security-realms.KEY.ldap-authorization.principal-to-group-group-search.skip-missing-groups
If a non-existent group is referenced should it be quietly ignored.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.attribute
The attribute on the user entry that is their username.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.base-dn
The starting point of the search for the user.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-access-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-access-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-access-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-access-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-search-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-search-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-search-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.by-search-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.force
Authentication may have already converted the username to a distinguished name; set this to force the conversion to occur again before loading groups.
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.recursive
Should levels below the starting point be recursively searched?
swarm.management.security-realms.KEY.ldap-authorization.username-filter-username-to-dn.user-dn-attribute
The attribute on the user entry that contains their distinguished name.
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-access-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-access-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-access-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-access-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-search-time-cache.cache-failures
Should failures be cached?
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-search-time-cache.cache-size
The current size of the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-search-time-cache.eviction-time
The time in seconds until an entry should be evicted from the cache.
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.by-search-time-cache.max-cache-size
The maximum size of the cache before the oldest items are removed to make room for new entries.
swarm.management.security-realms.KEY.ldap-authorization.username-is-dn-username-to-dn.force
Authentication may have already converted the username to a distinguished name; set this to force the conversion to occur again before loading groups.
swarm.management.security-realms.KEY.local-authentication.allowed-users
The comma-separated list of users that will be accepted using the JBOSS-LOCAL-USER mechanism, or '*' to accept all. If specified, the default-user is always assumed to be allowed.
swarm.management.security-realms.KEY.local-authentication.default-user
The name of the default user to assume if no user is specified by the remote client.
swarm.management.security-realms.KEY.local-authentication.skip-group-loading
Disable the loading of the user’s group membership information after local authentication has been used.
swarm.management.security-realms.KEY.map-groups-to-roles
After a user’s group membership has been loaded, should a 1:1 relationship be assumed regarding group-to-role mapping.
swarm.management.security-realms.KEY.plug-in-authentication.mechanism
Allow the mechanism this plug-in is compatible with to be overridden from DIGEST.
swarm.management.security-realms.KEY.plug-in-authentication.name
The short name of the plug-in (as registered) to use.
swarm.management.security-realms.KEY.plug-in-authentication.properties.KEY.value
The optional value of the property.
swarm.management.security-realms.KEY.plug-in-authorization.name
The short name of the plug-in (as registered) to use.
swarm.management.security-realms.KEY.plug-in-authorization.properties.KEY.value
The optional value of the property.
swarm.management.security-realms.KEY.properties-authentication.path
The path of the properties file containing the users.
swarm.management.security-realms.KEY.properties-authentication.plain-text
Are the credentials within the properties file stored in plain text? If not, the credential is expected to be the hex-encoded Digest hash of 'username : realm : password'.
swarm.management.security-realms.KEY.properties-authentication.relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.security-realms.KEY.properties-authorization.path
The path of the properties file containing the users roles.
swarm.management.security-realms.KEY.properties-authorization.relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.security-realms.KEY.secret-server-identity.value
The secret/password, Base64 encoded.
swarm.management.security-realms.KEY.ssl-server-identity.alias
The alias of the entry to use from the keystore.
swarm.management.security-realms.KEY.ssl-server-identity.enabled-cipher-suites
The cipher suites that can be enabled on the underlying SSLEngine.
swarm.management.security-realms.KEY.ssl-server-identity.enabled-protocols
The protocols that can be enabled on the underlying SSLEngine.
swarm.management.security-realms.KEY.ssl-server-identity.generate-self-signed-certificate-host
If the keystore does not exist and this attribute is set then a self signed certificate will be generated for the specified host name. This is not intended for production use.
swarm.management.security-realms.KEY.ssl-server-identity.key-password
The password to obtain the key from the keystore.
swarm.management.security-realms.KEY.ssl-server-identity.keystore-password
The password to open the keystore.
swarm.management.security-realms.KEY.ssl-server-identity.keystore-path
The path of the keystore; this will be ignored if the keystore-provider is anything other than JKS.
swarm.management.security-realms.KEY.ssl-server-identity.keystore-provider
The provider for loading the keystore, defaults to JKS.
swarm.management.security-realms.KEY.ssl-server-identity.keystore-relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.security-realms.KEY.ssl-server-identity.protocol
The protocol to use when creating the SSLContext.
swarm.management.security-realms.KEY.truststore-authentication.keystore-password
The password to open the keystore.
swarm.management.security-realms.KEY.truststore-authentication.keystore-path
The path of the keystore; this will be ignored if the keystore-provider is anything other than JKS.
swarm.management.security-realms.KEY.truststore-authentication.keystore-provider
The provider for loading the keystore, defaults to JKS.
swarm.management.security-realms.KEY.truststore-authentication.keystore-relative-to
The name of another previously named path, or of one of the standard paths provided by the system. If 'relative-to' is provided, the value of the 'path' attribute is treated as relative to the path specified by this attribute.
swarm.management.security-realms.KEY.users-authentication.users.KEY.password
The user’s password.
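
These management properties, like every swarm.* property in this reference, can be supplied as -D system properties on the command line or, assuming the standard WildFly Swarm YAML configuration mechanism (for example, a project-defaults.yml file), as nested YAML keys: each dot in the property name introduces a nesting level and each KEY segment is replaced by a name of your choosing. The following sketch is illustrative only; the realm name, properties file, and port are assumptions, not defaults.

swarm:
  management:
    http:
      port: 9990                          # swarm.management.http.port (illustrative value)
    security-realms:
      ManagementRealm:                    # illustrative realm name (the KEY segment)
        properties-authentication:
          path: mgmt-users.properties     # illustrative properties file
          plain-text: false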

B.19. MicroProfile

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>microprofile</artifactId>
</dependency>

B.20. Monitor

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>monitor</artifactId>
</dependency>

Configuration

swarm.monitor.security-realm
(not yet documented)

B.21. MSC

Primarily an internal fraction providing support for the JBoss Modular Service Container (MSC). JBoss MSC provides the underpinning for wiring together all of the services that support the container and the application.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>msc</artifactId>
</dependency>

B.22. Naming

Provides support for JNDI.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>naming</artifactId>
</dependency>

Configuration

swarm.naming.bindings.KEY.attribute-class
The object factory class name for object factory bindings
swarm.naming.bindings.KEY.binding-type
The type of binding to create; may be simple, lookup, external-context or object-factory
swarm.naming.bindings.KEY.cache
If the external context should be cached
swarm.naming.bindings.KEY.environment
The environment to use on object factory instance retrieval
swarm.naming.bindings.KEY.lookup
The entry to lookup in JNDI for lookup bindings
swarm.naming.bindings.KEY.module
The module to load the object factory from for object factory bindings
swarm.naming.bindings.KEY.type
The type of the value to bind for simple bindings; this must be a primitive type
swarm.naming.bindings.KEY.value
The value to bind for simple bindings
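
Assuming the same YAML configuration mechanism sketched in the management example above, a simple JNDI binding could be declared as follows; the binding name, type, and value are illustrative.

swarm:
  naming:
    bindings:
      "java:global/env/timeout":   # illustrative JNDI name (the KEY segment)
        binding-type: simple
        type: int
        value: 30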

B.23. Remoting

Primarily an internal fraction providing remote invocation support for higher-level fractions such as EJB.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>remoting</artifactId>
</dependency>

Configuration

swarm.remoting.connectors.KEY.authentication-provider
The "authentication-provider" element contains the name of the authentication provider to use for incoming connections.
swarm.remoting.connectors.KEY.properties.KEY.value
The property value.
swarm.remoting.connectors.KEY.sasl-protocol
The protocol to pass into the SASL mechanisms used for authentication.
swarm.remoting.connectors.KEY.sasl-security.include-mechanisms
The optional nested "include-mechanisms" element contains a whitelist of allowed SASL mechanism names. No mechanisms will be allowed which are not present in this list.
swarm.remoting.connectors.KEY.sasl-security.policy-sasl-policy.forward-secrecy
The optional nested "forward-secrecy" element contains a boolean value which specifies whether mechanisms that implement forward secrecy between sessions are required. Forward secrecy means that breaking into one session will not automatically provide information for breaking into future sessions.
swarm.remoting.connectors.KEY.sasl-security.policy-sasl-policy.no-active
The optional nested "no-active" element contains a boolean value which specifies whether mechanisms susceptible to active (non-dictionary) attacks are not permitted. "false" to permit, "true" to deny.
swarm.remoting.connectors.KEY.sasl-security.policy-sasl-policy.no-anonymous
The optional nested "no-anonymous" element contains a boolean value which specifies whether mechanisms that accept anonymous login are permitted. "false" to permit, "true" to deny.
swarm.remoting.connectors.KEY.sasl-security.policy-sasl-policy.no-dictionary
The optional nested "no-dictionary" element contains a boolean value which specifies whether mechanisms susceptible to passive dictionary attacks are permitted. "false" to permit, "true" to deny.
swarm.remoting.connectors.KEY.sasl-security.policy-sasl-policy.no-plain-text
The optional nested "no-plain-text" element contains a boolean value which specifies whether mechanisms susceptible to simple plain passive attacks (e.g., "PLAIN") are not permitted. "false" to permit, "true" to deny.
swarm.remoting.connectors.KEY.sasl-security.policy-sasl-policy.pass-credentials
The optional nested "pass-credentials" element contains a boolean value which specifies whether mechanisms that pass client credentials are required.
swarm.remoting.connectors.KEY.sasl-security.properties.KEY.value
The property value.
swarm.remoting.connectors.KEY.sasl-security.qop
The optional nested "qop" element contains a list of quality-of-protection values, in decreasing order of preference.
swarm.remoting.connectors.KEY.sasl-security.reuse-session
The optional nested "reuse-session" boolean element specifies whether or not the server should attempt to reuse previously authenticated session information. The mechanism may or may not support such reuse, and other factors may also prevent it.
swarm.remoting.connectors.KEY.sasl-security.server-auth
The optional nested "server-auth" boolean element specifies whether the server should authenticate to the client. Not all mechanisms may support this setting.
swarm.remoting.connectors.KEY.sasl-security.strength
The optional nested "strength" element contains a list of cipher strength values, in decreasing order of preference.
swarm.remoting.connectors.KEY.security-realm
The associated security realm to use for authentication for this connector.
swarm.remoting.connectors.KEY.server-name
The server name to send in the initial message exchange and for SASL based authentication.
swarm.remoting.connectors.KEY.socket-binding
The name (or names) of the socket binding(s) to attach to.
swarm.remoting.endpoint-configuration.auth-realm
The authentication realm to use if no authentication CallbackHandler is specified.
swarm.remoting.endpoint-configuration.authentication-retries
Specify the number of times a client is allowed to retry authentication before closing the connection.
swarm.remoting.endpoint-configuration.authorize-id
The SASL authorization ID. Used as the authentication user name if no authentication CallbackHandler is specified and the selected SASL mechanism demands a user name.
swarm.remoting.endpoint-configuration.buffer-region-size
The size of allocated buffer regions.
swarm.remoting.endpoint-configuration.heartbeat-interval
The interval to use for connection heartbeat, in milliseconds. If the connection is idle in the outbound direction for this amount of time, a ping message will be sent, which will trigger a corresponding reply message.
swarm.remoting.endpoint-configuration.max-inbound-channels
The maximum number of inbound channels to support for a connection.
swarm.remoting.endpoint-configuration.max-inbound-message-size
The maximum inbound message size to be allowed. Messages exceeding this size will cause an exception to be thrown on the reading side as well as the writing side.
swarm.remoting.endpoint-configuration.max-inbound-messages
The maximum number of concurrent inbound messages on a channel.
swarm.remoting.endpoint-configuration.max-outbound-channels
The maximum number of outbound channels to support for a connection.
swarm.remoting.endpoint-configuration.max-outbound-message-size
The maximum outbound message size to send. No messages larger than this will be transmitted; attempting to do so will cause an exception on the writing side.
swarm.remoting.endpoint-configuration.max-outbound-messages
The maximum number of concurrent outbound messages on a channel.
swarm.remoting.endpoint-configuration.receive-buffer-size
The size of the largest buffer that this endpoint will accept over a connection.
swarm.remoting.endpoint-configuration.receive-window-size
The maximum window size of the receive direction for connection channels, in bytes.
swarm.remoting.endpoint-configuration.sasl-protocol
Where a SaslServer or SaslClient is created, the protocol specified defaults to 'remoting'; this attribute can be used to override it.
swarm.remoting.endpoint-configuration.send-buffer-size
The size of the largest buffer that this endpoint will transmit over a connection.
swarm.remoting.endpoint-configuration.server-name
The server side of the connection passes its name to the client in the initial greeting. By default the name is automatically discovered from the local address of the connection; it can be overridden using this attribute.
swarm.remoting.endpoint-configuration.transmit-window-size
The maximum window size of the transmit direction for connection channels, in bytes.
swarm.remoting.endpoint-configuration.worker
Worker to use
swarm.remoting.http-connectors.KEY.authentication-provider
The "authentication-provider" element contains the name of the authentication provider to use for incoming connections.
swarm.remoting.http-connectors.KEY.connector-ref
The name (or names) of a connector in the Undertow subsystem to connect to.
swarm.remoting.http-connectors.KEY.properties.KEY.value
The property value.
swarm.remoting.http-connectors.KEY.sasl-protocol
The protocol to pass into the SASL mechanisms used for authentication.
swarm.remoting.http-connectors.KEY.sasl-security.include-mechanisms
The optional nested "include-mechanisms" element contains a whitelist of allowed SASL mechanism names. No mechanisms will be allowed which are not present in this list.
swarm.remoting.http-connectors.KEY.sasl-security.policy-sasl-policy.forward-secrecy
The optional nested "forward-secrecy" element contains a boolean value which specifies whether mechanisms that implement forward secrecy between sessions are required. Forward secrecy means that breaking into one session will not automatically provide information for breaking into future sessions.
swarm.remoting.http-connectors.KEY.sasl-security.policy-sasl-policy.no-active
The optional nested "no-active" element contains a boolean value which specifies whether mechanisms susceptible to active (non-dictionary) attacks are not permitted. "false" to permit, "true" to deny.
swarm.remoting.http-connectors.KEY.sasl-security.policy-sasl-policy.no-anonymous
The optional nested "no-anonymous" element contains a boolean value which specifies whether mechanisms that accept anonymous login are permitted. "false" to permit, "true" to deny.
swarm.remoting.http-connectors.KEY.sasl-security.policy-sasl-policy.no-dictionary
The optional nested "no-dictionary" element contains a boolean value which specifies whether mechanisms susceptible to passive dictionary attacks are permitted. "false" to permit, "true" to deny.
swarm.remoting.http-connectors.KEY.sasl-security.policy-sasl-policy.no-plain-text
The optional nested "no-plain-text" element contains a boolean value which specifies whether mechanisms susceptible to simple plain passive attacks (e.g., "PLAIN") are not permitted. "false" to permit, "true" to deny.
swarm.remoting.http-connectors.KEY.sasl-security.policy-sasl-policy.pass-credentials
The optional nested "pass-credentials" element contains a boolean value which specifies whether mechanisms that pass client credentials are required.
swarm.remoting.http-connectors.KEY.sasl-security.properties.KEY.value
The property value.
swarm.remoting.http-connectors.KEY.sasl-security.qop
The optional nested "qop" element contains a list of quality-of-protection values, in decreasing order of preference.
swarm.remoting.http-connectors.KEY.sasl-security.reuse-session
The optional nested "reuse-session" boolean element specifies whether or not the server should attempt to reuse previously authenticated session information. The mechanism may or may not support such reuse, and other factors may also prevent it.
swarm.remoting.http-connectors.KEY.sasl-security.server-auth
The optional nested "server-auth" boolean element specifies whether the server should authenticate to the client. Not all mechanisms may support this setting.
swarm.remoting.http-connectors.KEY.sasl-security.strength
The optional nested "strength" element contains a list of cipher strength values, in decreasing order of preference.
swarm.remoting.http-connectors.KEY.security-realm
The associated security realm to use for authentication for this connector.
swarm.remoting.http-connectors.KEY.server-name
The server name to send in the initial message exchange and for SASL based authentication.
swarm.remoting.local-outbound-connections.KEY.outbound-socket-binding-ref
Name of the outbound-socket-binding which will be used to determine the destination address and port for the connection.
swarm.remoting.local-outbound-connections.KEY.properties.KEY.value
The property value.
swarm.remoting.outbound-connections.KEY.properties.KEY.value
The property value.
swarm.remoting.outbound-connections.KEY.uri
The connection URI for the outbound connection.
swarm.remoting.port
Port for legacy remoting connector
swarm.remoting.remote-outbound-connections.KEY.outbound-socket-binding-ref
Name of the outbound-socket-binding which will be used to determine the destination address and port for the connection.
swarm.remoting.remote-outbound-connections.KEY.properties.KEY.value
The property value.
swarm.remoting.remote-outbound-connections.KEY.protocol
The protocol to use for the remote connection. Defaults to http-remoting.
swarm.remoting.remote-outbound-connections.KEY.security-realm
Reference to the security realm to use to obtain the password and SSL configuration.
swarm.remoting.remote-outbound-connections.KEY.username
The user name to use when authenticating against the remote server.
swarm.remoting.required
(not yet documented)
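
For example, assuming the same YAML configuration mechanism as above, the legacy remoting port and a remote outbound connection could be sketched as follows. The connection name, socket binding reference, and user name are illustrative, and the referenced outbound socket binding must be defined separately.

swarm:
  remoting:
    port: 4447                                   # illustrative port for the legacy remoting connector
    remote-outbound-connections:
      my-connection:                             # illustrative connection name (the KEY segment)
        outbound-socket-binding-ref: remote-ejb  # must reference an existing outbound socket binding
        protocol: http-remoting
        username: appuser                        # illustrative user name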

B.24. Request Controller

Provides support for the Red Hat JBoss EAP request-controller, allowing for graceful pause/resume/shutdown of the container.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>request-controller</artifactId>
</dependency>

Configuration

swarm.request-controller.active-requests
The number of requests that are currently running in the server
swarm.request-controller.max-requests
The maximum number of all types of requests that can be running in a server at a time
swarm.request-controller.track-individual-endpoints
If this is true, requests are tracked at the endpoint level, which allows individual deployments to be suspended
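
A minimal sketch, assuming the same YAML configuration mechanism as above; the request limit is illustrative, and active-requests is a runtime metric rather than a value you set.

swarm:
  request-controller:
    max-requests: 100                  # illustrative global request limit
    track-individual-endpoints: true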

B.25. Security

Provides underlying security infrastructure to support JAAS and other security APIs.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>security</artifactId>
</dependency>

Configuration

swarm.security.classic-vault.code
Fully Qualified Name of the Security Vault Implementation.
swarm.security.classic-vault.vault-options
Security Vault options.
swarm.security.deep-copy-subject-mode
Sets the subject copy mode used by the security managers to deep copy, which copies the subject principals and credentials if they are cloneable. It should be set to true if subjects include mutable content that can be corrupted when multiple threads share the same identity and a cache flush or logout that clears the subject in one thread would otherwise affect subject references in other threads.
swarm.security.security-domains.KEY.cache-type
Adds a cache to speed up authentication checks. Allowed values are 'default' to use a simple map as the cache and 'infinispan' to use an Infinispan cache.
swarm.security.security-domains.KEY.classic-acl.acl-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.classic-acl.acl-modules.KEY.flag
The flag controls how the module participates in the overall procedure. Allowed values are requisite, required, sufficient or optional.
swarm.security.security-domains.KEY.classic-acl.acl-modules.KEY.module
Name of JBoss Module where the login module is located.
swarm.security.security-domains.KEY.classic-acl.acl-modules.KEY.module-options
List of module options containing a name/value pair.
swarm.security.security-domains.KEY.classic-audit.provider-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.classic-audit.provider-modules.KEY.module-options
List of module options containing a name/value pair.
swarm.security.security-domains.KEY.classic-authentication.login-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.classic-authentication.login-modules.KEY.flag
The flag controls how the module participates in the overall procedure. Allowed values are requisite, required, sufficient or optional.
swarm.security.security-domains.KEY.classic-authentication.login-modules.KEY.module
Name of JBoss Module where the login module is located.
swarm.security.security-domains.KEY.classic-authentication.login-modules.KEY.module-options
List of module options containing a name/value pair.
swarm.security.security-domains.KEY.classic-authorization.policy-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.classic-authorization.policy-modules.KEY.flag
The flag controls how the module participates in the overall procedure. Allowed values are requisite, required, sufficient or optional.
swarm.security.security-domains.KEY.classic-authorization.policy-modules.KEY.module
Name of JBoss Module where the login module is located.
swarm.security.security-domains.KEY.classic-authorization.policy-modules.KEY.module-options
List of module options containing a name/value pair.
swarm.security.security-domains.KEY.classic-identity-trust.trust-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.classic-identity-trust.trust-modules.KEY.flag
The flag controls how the module participates in the overall procedure. Allowed values are requisite, required, sufficient or optional.
swarm.security.security-domains.KEY.classic-identity-trust.trust-modules.KEY.module
Name of JBoss Module where the login module is located.
swarm.security.security-domains.KEY.classic-identity-trust.trust-modules.KEY.module-options
List of module options containing a name/value pair.
swarm.security.security-domains.KEY.classic-jsse.additional-properties
Additional properties that may be necessary to configure JSSE.
swarm.security.security-domains.KEY.classic-jsse.cipher-suites
Comma separated list of cipher suites to enable on SSLSockets.
swarm.security.security-domains.KEY.classic-jsse.client-alias
Preferred alias to use when the KeyManager chooses the client alias.
swarm.security.security-domains.KEY.classic-jsse.client-auth
Boolean attribute to indicate if client’s certificates should also be authenticated on the server side.
swarm.security.security-domains.KEY.classic-jsse.key-manager
JSSE Key Manager factory
swarm.security.security-domains.KEY.classic-jsse.keystore
Configures a JSSE key store
swarm.security.security-domains.KEY.classic-jsse.protocols
Comma separated list of protocols to enable on SSLSockets.
swarm.security.security-domains.KEY.classic-jsse.server-alias
Preferred alias to use when the KeyManager chooses the server alias.
swarm.security.security-domains.KEY.classic-jsse.service-auth-token
Token to retrieve PrivateKeys from the KeyStore.
swarm.security.security-domains.KEY.classic-jsse.trust-manager
JSSE Trust Manager factory
swarm.security.security-domains.KEY.classic-jsse.truststore
Configures a JSSE trust store
swarm.security.security-domains.KEY.classic-mapping.mapping-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.classic-mapping.mapping-modules.KEY.module
Name of JBoss Module where the mapping module code is located.
swarm.security.security-domains.KEY.classic-mapping.mapping-modules.KEY.module-options
List of module options containing a name/value pair.
swarm.security.security-domains.KEY.classic-mapping.mapping-modules.KEY.type
Type of mapping this module performs. Allowed values are principal, role, attribute or credential.
swarm.security.security-domains.KEY.jaspi-authentication.auth-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.jaspi-authentication.auth-modules.KEY.flag
The flag controls how the module participates in the overall procedure. Allowed values are requisite, required, sufficient or optional.
swarm.security.security-domains.KEY.jaspi-authentication.auth-modules.KEY.login-module-stack-ref
Reference to a login module stack name previously configured in the same security domain.
swarm.security.security-domains.KEY.jaspi-authentication.auth-modules.KEY.module
Name of JBoss Module where the mapping module code is located.
swarm.security.security-domains.KEY.jaspi-authentication.auth-modules.KEY.module-options
List of module options containing a name/value pair.
swarm.security.security-domains.KEY.jaspi-authentication.login-module-stacks.KEY.login-modules.KEY.code
Class name of the module to be instantiated.
swarm.security.security-domains.KEY.jaspi-authentication.login-module-stacks.KEY.login-modules.KEY.flag
The flag controls how the module participates in the overall procedure. Allowed values are requisite, required, sufficient or optional.
swarm.security.security-domains.KEY.jaspi-authentication.login-module-stacks.KEY.login-modules.KEY.module
Name of JBoss Module where the login module is located.
swarm.security.security-domains.KEY.jaspi-authentication.login-module-stacks.KEY.login-modules.KEY.module-options
List of module options containing a name/value pair.
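
Assuming the same YAML configuration mechanism as above, a security domain with classic authentication could be sketched as follows. The domain name, login module, and option values are illustrative, and module-options is shown as a simple name/value map.

swarm:
  security:
    security-domains:
      my-domain:                              # illustrative domain name (the KEY segment)
        classic-authentication:
          login-modules:
            UsersRoles:                       # illustrative login module name
              code: UsersRoles
              flag: required
              module-options:
                usersProperties: users.properties   # illustrative option values
                rolesProperties: roles.properties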

B.26. Topology

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>topology</artifactId>
</dependency>

B.26.1. OpenShift

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>topology-openshift</artifactId>
</dependency>

B.26.2. Topology UI

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>topology-webapp</artifactId>
</dependency>

Configuration

swarm.topology.web-app.expose-topology-endpoint
Flag to enable or disable the topology web endpoint
swarm.topology.web-app.proxied-service-mappings
Service name to URL path proxy mappings
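
For example, assuming the same YAML configuration mechanism as above, the topology web endpoint can be enabled with a single property:

swarm:
  topology:
    web-app:
      expose-topology-endpoint: true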

B.27. Transactions

Provides support for the Java Transaction API (JTA) according to JSR-907.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>transactions</artifactId>
</dependency>

Configuration

swarm.transactions.commit-markable-resources.KEY.batch-size
Batch size for this CMR resource
swarm.transactions.commit-markable-resources.KEY.immediate-cleanup
Immediate cleanup associated to this CMR resource
swarm.transactions.commit-markable-resources.KEY.jndi-name
JNDI name of this CMR resource
swarm.transactions.commit-markable-resources.KEY.name
Table name for storing XIDs
swarm.transactions.default-timeout
The default timeout.
swarm.transactions.enable-tsm-status
Whether the transaction status manager (TSM) service, needed for out-of-process recovery, should be provided or not.
swarm.transactions.jdbc-action-store-drop-table
Configure if jdbc action store should drop tables. Default is false. The server should be restarted for this setting to take effect.
swarm.transactions.jdbc-action-store-table-prefix
Optional prefix for the table used to write transaction logs in the configured JDBC action store. The server should be restarted for this setting to take effect.
swarm.transactions.jdbc-communication-store-drop-table
Configure if jdbc communication store should drop tables. Default is false. The server should be restarted for this setting to take effect.
swarm.transactions.jdbc-communication-store-table-prefix
Optional prefix for the table used to write transaction logs in the configured JDBC communication store. The server should be restarted for this setting to take effect.
swarm.transactions.jdbc-state-store-drop-table
Configure if jdbc state store should drop tables. Default is false. The server should be restarted for this setting to take effect.
swarm.transactions.jdbc-state-store-table-prefix
Optional prefix for the table used to write transaction logs in the configured JDBC state store. The server should be restarted for this setting to take effect.
swarm.transactions.jdbc-store-datasource
JNDI name of the non-XA datasource used. The datasource should be defined in the datasources subsystem. The server should be restarted for this setting to take effect.
swarm.transactions.journal-store-enable-async-io
Whether AsyncIO should be enabled for the journal store. Default is false. The server should be restarted for this setting to take effect.
swarm.transactions.jts
If true this enables the Java Transaction Service.
swarm.transactions.log-store.expose-all-logs
Whether to expose all logs like orphans etc. By default only a subset of transaction logs is exposed.
swarm.transactions.log-store.transactions.KEY.age-in-seconds
The time since this transaction was prepared or when the recovery system last tried to recover it.
swarm.transactions.log-store.transactions.KEY.id
The id of this transaction.
swarm.transactions.log-store.transactions.KEY.jmx-name
The JMX name of this transaction.
swarm.transactions.log-store.transactions.KEY.participants.KEY.eis-product-name
The JCA enterprise information system’s product name.
swarm.transactions.log-store.transactions.KEY.participants.KEY.eis-product-version
The JCA enterprise information system’s product version
swarm.transactions.log-store.transactions.KEY.participants.KEY.jmx-name
The JMX name of this participant.
swarm.transactions.log-store.transactions.KEY.participants.KEY.jndi-name
JNDI name of this participant.
swarm.transactions.log-store.transactions.KEY.participants.KEY.status
Reports the commitment status of this participant (can be one of Pending, Prepared, Failed, Heuristic or Readonly).
swarm.transactions.log-store.transactions.KEY.participants.KEY.type
The type name under which this record is stored.
swarm.transactions.log-store.transactions.KEY.type
The type name under which this record is stored.
swarm.transactions.log-store.type
Specifies the implementation type of the logging store.
swarm.transactions.node-identifier
Used to set the node identifier on the core environment.
swarm.transactions.number-of-aborted-transactions
The number of aborted (i.e. rolled back) transactions.
swarm.transactions.number-of-application-rollbacks
The number of transactions that have been rolled back by application request. This includes those that timeout, since the timeout behavior is considered an attribute of the application configuration.
swarm.transactions.number-of-committed-transactions
The number of committed transactions.
swarm.transactions.number-of-heuristics
The number of transactions which have terminated with heuristic outcomes.
swarm.transactions.number-of-inflight-transactions
The number of transactions that have begun but not yet terminated.
swarm.transactions.number-of-nested-transactions
The total number of nested (sub) transactions created.
swarm.transactions.number-of-resource-rollbacks
The number of transactions that rolled back due to resource (participant) failure.
swarm.transactions.number-of-timed-out-transactions
The number of transactions that have rolled back due to timeout.
swarm.transactions.number-of-transactions
The total number of transactions (top-level and nested) created
swarm.transactions.object-store-path
A relative or absolute filesystem path where the transaction manager object store should store data. By default the value is treated as relative to the path denoted by the "relative-to" attribute.
swarm.transactions.object-store-relative-to
References a global path configuration in the domain model, defaulting to the JBoss Application Server data directory (jboss.server.data.dir). The value of the "path" attribute will be treated as relative to this path. Use an empty string to disable the default behavior and force the value of the "path" attribute to be treated as an absolute path.
swarm.transactions.port
Port for transaction manager
swarm.transactions.process-id-socket-binding
The name of the socket binding configuration to use if the transaction manager should use a socket-based process id. Will be 'undefined' if 'process-id-uuid' is 'true'; otherwise must be set.
swarm.transactions.process-id-socket-max-ports
The maximum number of ports to search for an open port if the transaction manager should use a socket-based process id. If the port specified by the socket binding referenced in 'process-id-socket-binding' is occupied, the next higher port will be tried until an open port is found or the number of ports specified by this attribute have been tried. Will be 'undefined' if 'process-id-uuid' is 'true'.
swarm.transactions.process-id-uuid
Indicates whether the transaction manager should use a UUID based process id.
swarm.transactions.recovery-listener
Used to specify if the recovery system should listen on a network socket or not.
swarm.transactions.socket-binding
Used to reference the correct socket binding to use for the recovery environment.
swarm.transactions.statistics-enabled
Whether statistics should be enabled.
swarm.transactions.status-port
Status port for transaction manager
swarm.transactions.status-socket-binding
Used to reference the correct socket binding to use for the transaction status manager.
swarm.transactions.use-jdbc-store
Use the JDBC store for writing transaction logs. Set to true to enable and to false to use the default log store type. The default log store is normally one file system file per transaction log. The server should be restarted for this setting to take effect. This is an alternative to the HornetQ journal-based store.
swarm.transactions.use-journal-store
Use the journal store for writing transaction logs. Set to true to enable and to false to use the default log store type. The default log store is normally one file system file per transaction log. The server should be restarted for this setting to take effect. This is an alternative to the JDBC-based store.
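
A minimal sketch, assuming the same YAML configuration mechanism as above; the node identifier and port values are illustrative.

swarm:
  transactions:
    node-identifier: node-1    # illustrative node identifier
    port: 4712                 # illustrative transaction manager port
    status-port: 4713          # illustrative status manager port
    statistics-enabled: true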

B.28. Undertow

Provides basic HTTP support, including Java Servlets, JavaServer Pages (JSP), and JavaServer Pages Standard Tag Library (JSTL) according to JSR-340, JSR-245 and JSR-52.

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>undertow</artifactId>
</dependency>

Configuration

swarm.ajp.enable
Determine if AJP should be enabled
swarm.ajp.port
Set the port for the default AJP listener
swarm.deployment
Map of security configuration by deployment
swarm.http.port
Set the port for the default HTTP listener
swarm.https.certificate.generate
Should a self-signed certificate be generated
swarm.https.certificate.generate.host
Hostname for the generated self-signed certificate
swarm.https.keystore.embedded
Should an embedded keystore be created
swarm.https.only
Only enable the HTTPS Listener
swarm.https.port
Set the port for the default HTTPS listener
swarm.undertow.alias
Alias to the server certificate key entry in the keystore
swarm.undertow.buffer-caches.KEY.buffer-size
The size of an individual buffer, in bytes.
swarm.undertow.buffer-caches.KEY.buffers-per-region
The number of buffers in a region
swarm.undertow.buffer-caches.KEY.max-regions
The maximum number of regions
swarm.undertow.default-security-domain
The default security domain used by web deployments
swarm.undertow.default-server
The default server to use for deployments
swarm.undertow.default-servlet-container
The default servlet container to use for deployments
swarm.undertow.default-virtual-host
The default virtual host to use for deployments
swarm.undertow.filter-configuration.custom-filters.KEY.class-name
Class name of HttpHandler
swarm.undertow.filter-configuration.custom-filters.KEY.module
Module name where class can be loaded from
swarm.undertow.filter-configuration.custom-filters.KEY.parameters
Filter parameters
swarm.undertow.filter-configuration.error-pages.KEY.code
Error page code
swarm.undertow.filter-configuration.error-pages.KEY.path
Error page path
swarm.undertow.filter-configuration.expression-filters.KEY.expression
The expression that defines the filter
swarm.undertow.filter-configuration.expression-filters.KEY.module
Module to use to load the filter definitions
swarm.undertow.filter-configuration.mod-clusters.KEY.advertise-frequency
The frequency (in milliseconds) that mod-cluster advertises itself on the network
swarm.undertow.filter-configuration.mod-clusters.KEY.advertise-path
The path that mod-cluster is registered under, defaults to /
swarm.undertow.filter-configuration.mod-clusters.KEY.advertise-protocol
The protocol that is in use, defaults to HTTP
swarm.undertow.filter-configuration.mod-clusters.KEY.advertise-socket-binding
The multicast group that is used to advertise
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.max-attempts
The number of attempts to send the request to a backend server
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.aliases
The node's aliases
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.cache-connections
The number of connections to keep alive indefinitely
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.contexts.KEY.requests
The number of requests against this context
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.contexts.KEY.status
The status of this context
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.elected
The elected count
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.flush-packets
If received data should be immediately flushed
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.load
The current load of this node
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.load-balancing-group
The load balancing group this node belongs to
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.max-connections
The maximum number of connections per IO thread
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.open-connections
The current number of open connections
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.ping
The node's ping
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.queue-new-requests
Whether a request should be queued if it is received when no worker is immediately available
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.read
The number of bytes read from the node
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.request-queue-size
The size of the request queue
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.status
The current status of this node
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.timeout
The request timeout
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.ttl
The time connections will stay alive with no requests before being closed, if the number of connections is larger than cache-connections
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.uri
The URI that the load balancer uses to connect to the node
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.nodes.KEY.written
The number of bytes transferred to the node
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.sticky-session
If sticky sessions are enabled
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.sticky-session-cookie
The session cookie name
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.sticky-session-force
If this is true then an error will be returned if the request cannot be routed to the sticky node, otherwise it will be routed to another node
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.sticky-session-path
The path of the sticky session cookie
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.sticky-session-remove
Remove the session cookie if the request cannot be routed to the correct host
swarm.undertow.filter-configuration.mod-clusters.KEY.balancers.KEY.wait-worker
The number of seconds to wait for an available worker
swarm.undertow.filter-configuration.mod-clusters.KEY.broken-node-timeout
The amount of time that must elapse before a broken node is removed from the table
swarm.undertow.filter-configuration.mod-clusters.KEY.cached-connections-per-thread
The number of connections that will be kept alive indefinitely
swarm.undertow.filter-configuration.mod-clusters.KEY.connection-idle-timeout
The amount of time a connection can be idle before it will be closed. Connections will not time out once the pool size is down to the configured minimum (as configured by cached-connections-per-thread)
swarm.undertow.filter-configuration.mod-clusters.KEY.connections-per-thread
The number of connections that will be maintained to backend servers, per IO thread. Defaults to 10.
swarm.undertow.filter-configuration.mod-clusters.KEY.enable-http2
If the load balancer should attempt to upgrade back end connections to HTTP2. If HTTP2 is not supported HTTP or HTTPS will be used as normal
swarm.undertow.filter-configuration.mod-clusters.KEY.health-check-interval
The frequency of health check pings to backend nodes
swarm.undertow.filter-configuration.mod-clusters.KEY.http2-enable-push
If push should be enabled for HTTP/2 connections
swarm.undertow.filter-configuration.mod-clusters.KEY.http2-header-table-size
The size of the header table used for HPACK compression, in bytes. This amount of memory will be allocated per connection for compression. Larger values use more memory but may give better compression.
swarm.undertow.filter-configuration.mod-clusters.KEY.http2-initial-window-size
The flow control window size that controls how quickly the client can send data to the server
swarm.undertow.filter-configuration.mod-clusters.KEY.http2-max-concurrent-streams
The maximum number of HTTP/2 streams that can be active at any time on a single connection
swarm.undertow.filter-configuration.mod-clusters.KEY.http2-max-frame-size
The max HTTP/2 frame size
swarm.undertow.filter-configuration.mod-clusters.KEY.http2-max-header-list-size
The maximum size of request headers the server is prepared to accept
swarm.undertow.filter-configuration.mod-clusters.KEY.management-access-predicate
A predicate that is applied to incoming requests to determine if they can perform mod cluster management commands. Provides additional security on top of what is provided by limiting management to requests that originate from the management-socket-binding
swarm.undertow.filter-configuration.mod-clusters.KEY.management-socket-binding
The socket binding of the mod_cluster management port. When using mod_cluster two HTTP listeners should be defined, a public one to handle requests, and one bound to the internal network to handle mod cluster commands. This socket binding should correspond to the internal listener, and should not be publicly accessible
swarm.undertow.filter-configuration.mod-clusters.KEY.max-ajp-packet-size
The maximum size for AJP packets. Increasing this will allow AJP to work for requests/responses that have a large amount of headers. This is an advanced option, and must be the same between load balancers and backend servers.
swarm.undertow.filter-configuration.mod-clusters.KEY.max-request-time
The max amount of time that a request to a backend node can take before it is killed
swarm.undertow.filter-configuration.mod-clusters.KEY.request-queue-size
The number of requests that can be queued if the connection pool is full before requests are rejected with a 503
swarm.undertow.filter-configuration.mod-clusters.KEY.security-key
The security key that is used for the mod-cluster group. All members must use the same security key.
swarm.undertow.filter-configuration.mod-clusters.KEY.security-realm
The security realm that provides the SSL configuration
swarm.undertow.filter-configuration.mod-clusters.KEY.use-alias
If an alias check is performed
swarm.undertow.filter-configuration.mod-clusters.KEY.worker
The XNIO worker that is used to send the advertise notifications
swarm.undertow.filter-configuration.request-limits.KEY.max-concurrent-requests
Maximum number of concurrent requests
swarm.undertow.filter-configuration.request-limits.KEY.queue-size
Number of requests to queue before they start being rejected
swarm.undertow.filter-configuration.response-headers.KEY.header-name
Header name
swarm.undertow.filter-configuration.response-headers.KEY.header-value
Value for header
swarm.undertow.filter-configuration.rewrites.KEY.redirect
If this is true then a redirect will be done instead of a rewrite
swarm.undertow.filter-configuration.rewrites.KEY.target
The expression that defines the target. If you are redirecting to a constant target put single quotes around the value
swarm.undertow.handler-configuration.files.KEY.cache-buffer-size
Size of the buffers, in bytes.
swarm.undertow.handler-configuration.files.KEY.cache-buffers
Number of buffers
swarm.undertow.handler-configuration.files.KEY.case-sensitive
Use case sensitive file handling
swarm.undertow.handler-configuration.files.KEY.directory-listing
Enable directory listing?
swarm.undertow.handler-configuration.files.KEY.follow-symlink
Enable following symbolic links
swarm.undertow.handler-configuration.files.KEY.path
Path on filesystem from where file handler will serve resources
swarm.undertow.handler-configuration.files.KEY.safe-symlink-paths
Paths that are safe to be targets of symbolic links
swarm.undertow.handler-configuration.reverse-proxies.KEY.cached-connections-per-thread
The number of connections that will be kept alive indefinitely
swarm.undertow.handler-configuration.reverse-proxies.KEY.connection-idle-timeout
The amount of time a connection can be idle before it will be closed. Connections will not time out once the pool size is down to the configured minimum (as configured by cached-connections-per-thread)
swarm.undertow.handler-configuration.reverse-proxies.KEY.connections-per-thread
The number of connections that will be maintained to backend servers, per IO thread. Defaults to 10.
swarm.undertow.handler-configuration.reverse-proxies.KEY.hosts.KEY.instance-id
The instance id (aka JVM route) that will be used to enable sticky sessions
swarm.undertow.handler-configuration.reverse-proxies.KEY.hosts.KEY.outbound-socket-binding
Outbound socket binding for this host
swarm.undertow.handler-configuration.reverse-proxies.KEY.hosts.KEY.path
Optional path if the host is using a non-root resource
swarm.undertow.handler-configuration.reverse-proxies.KEY.hosts.KEY.scheme
What kind of scheme is used
swarm.undertow.handler-configuration.reverse-proxies.KEY.hosts.KEY.security-realm
The security realm that provides the SSL configuration for the connection to the host
swarm.undertow.handler-configuration.reverse-proxies.KEY.max-request-time
The maximum time that a proxy request can be active for, before being killed. Defaults to unlimited
swarm.undertow.handler-configuration.reverse-proxies.KEY.problem-server-retry
Time in seconds to wait before attempting to reconnect to a server that is down
swarm.undertow.handler-configuration.reverse-proxies.KEY.request-queue-size
The number of requests that can be queued if the connection pool is full before requests are rejected with a 503
swarm.undertow.handler-configuration.reverse-proxies.KEY.session-cookie-names
Comma separated list of session cookie names. Generally this will just be JSESSIONID.
swarm.undertow.instance-id
The cluster instance id
swarm.undertow.key-password
Password to the server certificate
swarm.undertow.keystore-password
Password to the server keystore
swarm.undertow.keystore-path
Path to the server keystore
swarm.undertow.servers.KEY.ajp-listeners.KEY.allow-encoded-slash
Whether encoded / characters (for example, %2F) in an incoming request are decoded.
swarm.undertow.servers.KEY.ajp-listeners.KEY.allow-equals-in-cookie-value
If this is true then Undertow will allow non-escaped equals characters in unquoted cookie values. Unquoted cookie values may not contain equals characters. If present the value ends before the equals sign. The remainder of the cookie value will be dropped.
swarm.undertow.servers.KEY.ajp-listeners.KEY.always-set-keep-alive
If this is true then a Connection: keep-alive header will be added to responses, even when it is not strictly required by the specification.
swarm.undertow.servers.KEY.ajp-listeners.KEY.buffer-pipelined-data
If we should buffer pipelined requests.
swarm.undertow.servers.KEY.ajp-listeners.KEY.buffer-pool
The listener's buffer pool
swarm.undertow.servers.KEY.ajp-listeners.KEY.bytes-received
The number of bytes that have been received by this listener
swarm.undertow.servers.KEY.ajp-listeners.KEY.bytes-sent
The number of bytes that have been sent out on this listener
swarm.undertow.servers.KEY.ajp-listeners.KEY.decode-url
If this is true then the parser will decode the URL and query parameters using the selected character encoding (UTF-8 by default). If this is false they will not be decoded. This will allow a later handler to decode them into whatever charset is desired.
swarm.undertow.servers.KEY.ajp-listeners.KEY.disallowed-methods
A comma separated list of HTTP methods that are not allowed
swarm.undertow.servers.KEY.ajp-listeners.KEY.enabled
If the listener is enabled
swarm.undertow.servers.KEY.ajp-listeners.KEY.error-count
The number of 500 responses that have been sent by this listener
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-ajp-packet-size
The maximum supported size of AJP packets. If this is modified it has to be increased on the load balancer and the backend server.
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-buffered-request-size
Maximum size of a buffered request, in bytes. Requests are not usually buffered, the most common case is when performing SSL renegotiation for a POST request, and the post data must be fully buffered in order to perform the renegotiation.
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-connections
The maximum number of concurrent connections. Only values greater than 0 are allowed. For unlimited connections simply undefine this attribute value.
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-cookies
The maximum number of cookies that will be parsed. This is used to protect against hash vulnerabilities.
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-header-size
The maximum size of an HTTP request header, in bytes.
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-headers
The maximum number of headers that will be parsed. This is used to protect against hash vulnerabilities.
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-parameters
The maximum number of parameters that will be parsed. This is used to protect against hash vulnerabilities. This applies to both query parameters, and to POST data, but is not cumulative (i.e. you can potentially have max parameters * 2 total parameters).
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-post-size
The maximum size of a post that will be accepted, in bytes.
swarm.undertow.servers.KEY.ajp-listeners.KEY.max-processing-time
The maximum processing time taken by a request on this listener
swarm.undertow.servers.KEY.ajp-listeners.KEY.no-request-timeout
The length of time in milliseconds that the connection can be idle before it is closed by the container, defaults to 60000 (one minute)
swarm.undertow.servers.KEY.ajp-listeners.KEY.processing-time
The total processing time of all requests handled by this listener
swarm.undertow.servers.KEY.ajp-listeners.KEY.read-timeout
Configure a read timeout for a socket, in milliseconds. If the given amount of time elapses without a successful read taking place, the socket’s next read will throw a {@link ReadTimeoutException}.
swarm.undertow.servers.KEY.ajp-listeners.KEY.receive-buffer
The receive buffer size, in bytes.
swarm.undertow.servers.KEY.ajp-listeners.KEY.record-request-start-time
If this is true then Undertow will record the request start time, to allow for request time to be logged. This has a small but measurable performance impact
swarm.undertow.servers.KEY.ajp-listeners.KEY.redirect-socket
If this listener supports non-SSL requests and a request is received for which a matching <security-constraint> requires SSL transport, Undertow automatically redirects the request to the socket binding port specified here.
swarm.undertow.servers.KEY.ajp-listeners.KEY.request-count
The number of requests this listener has served
swarm.undertow.servers.KEY.ajp-listeners.KEY.request-parse-timeout
The maximum amount of time (in milliseconds) that can be spent parsing the request
swarm.undertow.servers.KEY.ajp-listeners.KEY.resolve-peer-address
Enables host DNS lookup
swarm.undertow.servers.KEY.ajp-listeners.KEY.scheme
The listener scheme, can be HTTP or HTTPS. By default the scheme will be taken from the incoming AJP request.
swarm.undertow.servers.KEY.ajp-listeners.KEY.secure
If this is true then requests that originate from this listener are marked as secure, even if the request is not using HTTPS.
swarm.undertow.servers.KEY.ajp-listeners.KEY.send-buffer
The send buffer size, in bytes.
swarm.undertow.servers.KEY.ajp-listeners.KEY.socket-binding
The listener socket binding
swarm.undertow.servers.KEY.ajp-listeners.KEY.tcp-backlog
Configure a server with the specified backlog.
swarm.undertow.servers.KEY.ajp-listeners.KEY.tcp-keep-alive
Configure a channel to send TCP keep-alive messages in an implementation-dependent manner.
swarm.undertow.servers.KEY.ajp-listeners.KEY.url-charset
URL charset
swarm.undertow.servers.KEY.ajp-listeners.KEY.worker
The listener's XNIO worker
swarm.undertow.servers.KEY.ajp-listeners.KEY.write-timeout
Configure a write timeout for a socket, in milliseconds. If the given amount of time elapses without a successful write taking place, the socket’s next write will throw a {@link WriteTimeoutException}.
swarm.undertow.servers.KEY.default-host
The server's default virtual host
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.directory
Directory in which to save logs
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.extended
If the log uses the extended log file format
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.pattern
The access log pattern.
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.predicate
Predicate that determines if the request should be logged
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.prefix
Prefix for the log file name.
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.relative-to
The directory the path is relative to
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.rotate
Rotate the access log every day.
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.suffix
Suffix for the log file name.
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.use-server-log
If the log should be written to the server log, rather than a separate file. Defaults to false.
swarm.undertow.servers.KEY.hosts.KEY.access-log-setting.worker
Name of the worker to use for logging
swarm.undertow.servers.KEY.hosts.KEY.alias
Aliases for the host
swarm.undertow.servers.KEY.hosts.KEY.default-response-code
If set, this response code is sent back when the requested context does not exist on the server.
swarm.undertow.servers.KEY.hosts.KEY.default-web-module
Default web module
swarm.undertow.servers.KEY.hosts.KEY.disable-console-redirect
If set to true, the /console redirect is not enabled for this host. Defaults to false.
swarm.undertow.servers.KEY.hosts.KEY.filter-refs.KEY.predicate
Predicates provide a simple way of making a true/false decision based on an exchange. Many handlers have a requirement that they be applied conditionally, and predicates provide a general way to specify a condition.
swarm.undertow.servers.KEY.hosts.KEY.filter-refs.KEY.priority
Defines the filter order. It should be set to 1 or higher; a higher number instructs the server to include the filter earlier in the handler chain than other filters under the same context.
swarm.undertow.servers.KEY.hosts.KEY.locations.KEY.filter-refs.KEY.predicate
Predicates provide a simple way of making a true/false decision based on an exchange. Many handlers have a requirement that they be applied conditionally, and predicates provide a general way to specify a condition.
swarm.undertow.servers.KEY.hosts.KEY.locations.KEY.filter-refs.KEY.priority
Defines the filter order. It should be set to 1 or higher; a higher number instructs the server to include the filter earlier in the handler chain than other filters under the same context.
swarm.undertow.servers.KEY.hosts.KEY.locations.KEY.handler
Default handler for this location
swarm.undertow.servers.KEY.hosts.KEY.single-sign-on-setting.cookie-name
Name of the cookie
swarm.undertow.servers.KEY.hosts.KEY.single-sign-on-setting.domain
The cookie domain that will be used.
swarm.undertow.servers.KEY.hosts.KEY.single-sign-on-setting.http-only
Set Cookie httpOnly attribute.
swarm.undertow.servers.KEY.hosts.KEY.single-sign-on-setting.path
Cookie path.
swarm.undertow.servers.KEY.hosts.KEY.single-sign-on-setting.secure
Set Cookie secure attribute.
swarm.undertow.servers.KEY.http-listeners.KEY.allow-encoded-slash
Whether encoded / characters (for example, %2F) in an incoming request are decoded.
swarm.undertow.servers.KEY.http-listeners.KEY.allow-equals-in-cookie-value
If this is true then Undertow will allow non-escaped equals characters in unquoted cookie values. Unquoted cookie values may not contain equals characters. If present the value ends before the equals sign. The remainder of the cookie value will be dropped.
swarm.undertow.servers.KEY.http-listeners.KEY.always-set-keep-alive
If this is true then a Connection: keep-alive header will be added to responses, even when it is not strictly required by the specification.
swarm.undertow.servers.KEY.http-listeners.KEY.buffer-pipelined-data
If we should buffer pipelined requests.
swarm.undertow.servers.KEY.http-listeners.KEY.buffer-pool
The listener's buffer pool
swarm.undertow.servers.KEY.http-listeners.KEY.bytes-received
The number of bytes that have been received by this listener
swarm.undertow.servers.KEY.http-listeners.KEY.bytes-sent
The number of bytes that have been sent out on this listener
swarm.undertow.servers.KEY.http-listeners.KEY.certificate-forwarding
If certificate forwarding should be enabled. If this is enabled then the listener will take the certificate from the SSL_CLIENT_CERT attribute. This should only be enabled if behind a proxy, and the proxy is configured to always set these headers.
swarm.undertow.servers.KEY.http-listeners.KEY.decode-url
If this is true then the parser will decode the URL and query parameters using the selected character encoding (UTF-8 by default). If this is false they will not be decoded. This will allow a later handler to decode them into whatever charset is desired.
swarm.undertow.servers.KEY.http-listeners.KEY.disallowed-methods
A comma separated list of HTTP methods that are not allowed
swarm.undertow.servers.KEY.http-listeners.KEY.enable-http2
Enables HTTP2 support for this listener
swarm.undertow.servers.KEY.http-listeners.KEY.enabled
If the listener is enabled
swarm.undertow.servers.KEY.http-listeners.KEY.error-count
The number of 500 responses that have been sent by this listener
swarm.undertow.servers.KEY.http-listeners.KEY.http2-enable-push
If server push is enabled for this connection
swarm.undertow.servers.KEY.http-listeners.KEY.http2-header-table-size
The size of the header table used for HPACK compression, in bytes. This amount of memory will be allocated per connection for compression. Larger values use more memory but may give better compression.
swarm.undertow.servers.KEY.http-listeners.KEY.http2-initial-window-size
The flow control window size that controls how quickly the client can send data to the server
swarm.undertow.servers.KEY.http-listeners.KEY.http2-max-concurrent-streams
The maximum number of HTTP/2 streams that can be active at any time on a single connection
swarm.undertow.servers.KEY.http-listeners.KEY.http2-max-frame-size
The max HTTP/2 frame size
swarm.undertow.servers.KEY.http-listeners.KEY.http2-max-header-list-size
The maximum size of request headers the server is prepared to accept
swarm.undertow.servers.KEY.http-listeners.KEY.max-buffered-request-size
Maximum size of a buffered request, in bytes. Requests are not usually buffered, the most common case is when performing SSL renegotiation for a POST request, and the post data must be fully buffered in order to perform the renegotiation.
swarm.undertow.servers.KEY.http-listeners.KEY.max-connections
The maximum number of concurrent connections. Only values greater than 0 are allowed. For unlimited connections simply undefine this attribute value.
swarm.undertow.servers.KEY.http-listeners.KEY.max-cookies
The maximum number of cookies that will be parsed. This is used to protect against hash vulnerabilities.
swarm.undertow.servers.KEY.http-listeners.KEY.max-header-size
The maximum size of an HTTP request header, in bytes.
swarm.undertow.servers.KEY.http-listeners.KEY.max-headers
The maximum number of headers that will be parsed. This is used to protect against hash vulnerabilities.
swarm.undertow.servers.KEY.http-listeners.KEY.max-parameters
The maximum number of parameters that will be parsed. This is used to protect against hash vulnerabilities. This applies to both query parameters, and to POST data, but is not cumulative (i.e. you can potentially have max parameters * 2 total parameters).
swarm.undertow.servers.KEY.http-listeners.KEY.max-post-size
The maximum size of a post that will be accepted, in bytes.
swarm.undertow.servers.KEY.http-listeners.KEY.max-processing-time
The maximum processing time taken by a request on this listener
swarm.undertow.servers.KEY.http-listeners.KEY.no-request-timeout
The length of time in milliseconds that the connection can be idle before it is closed by the container, defaults to 60000 (one minute)
swarm.undertow.servers.KEY.http-listeners.KEY.processing-time
The total processing time of all requests handled by this listener
swarm.undertow.servers.KEY.http-listeners.KEY.proxy-address-forwarding
Enables X-Forwarded-Host and similar headers and sets the remote IP address and hostname
swarm.undertow.servers.KEY.http-listeners.KEY.read-timeout
Configure a read timeout for a socket, in milliseconds. If the given amount of time elapses without a successful read taking place, the socket’s next read will throw a {@link ReadTimeoutException}.
swarm.undertow.servers.KEY.http-listeners.KEY.receive-buffer
The receive buffer size, in bytes.
swarm.undertow.servers.KEY.http-listeners.KEY.record-request-start-time
If this is true then Undertow will record the request start time, to allow for request time to be logged. This has a small but measurable performance impact
swarm.undertow.servers.KEY.http-listeners.KEY.redirect-socket
If this listener supports non-SSL requests and a request is received for which a matching <security-constraint> requires SSL transport, Undertow automatically redirects the request to the socket binding port specified here.
swarm.undertow.servers.KEY.http-listeners.KEY.request-count
The number of requests this listener has served
swarm.undertow.servers.KEY.http-listeners.KEY.request-parse-timeout
The maximum amount of time (in milliseconds) that can be spent parsing the request
swarm.undertow.servers.KEY.http-listeners.KEY.resolve-peer-address
Enables host DNS lookup
swarm.undertow.servers.KEY.http-listeners.KEY.secure
If this is true then requests that originate from this listener are marked as secure, even if the request is not using HTTPS.
swarm.undertow.servers.KEY.http-listeners.KEY.send-buffer
The send buffer size, in bytes.
swarm.undertow.servers.KEY.http-listeners.KEY.socket-binding
The listener socket binding
swarm.undertow.servers.KEY.http-listeners.KEY.tcp-backlog
Configure a server with the specified backlog.
swarm.undertow.servers.KEY.http-listeners.KEY.tcp-keep-alive
Configure a channel to send TCP keep-alive messages in an implementation-dependent manner.
swarm.undertow.servers.KEY.http-listeners.KEY.url-charset
URL charset
swarm.undertow.servers.KEY.http-listeners.KEY.worker
The listener's XNIO worker
swarm.undertow.servers.KEY.http-listeners.KEY.write-timeout
Configure a write timeout for a socket, in milliseconds. If the given amount of time elapses without a successful write taking place, the socket’s next write will throw a {@link WriteTimeoutException}.
swarm.undertow.servers.KEY.https-listeners.KEY.allow-encoded-slash
Whether encoded / characters (for example, %2F) in an incoming request are decoded.
swarm.undertow.servers.KEY.https-listeners.KEY.allow-equals-in-cookie-value
If this is true then Undertow will allow non-escaped equals characters in unquoted cookie values. Unquoted cookie values may not contain equals characters. If present the value ends before the equals sign. The remainder of the cookie value will be dropped.
swarm.undertow.servers.KEY.https-listeners.KEY.always-set-keep-alive
If this is true then a Connection: keep-alive header will be added to responses, even when it is not strictly required by the specification.
swarm.undertow.servers.KEY.https-listeners.KEY.buffer-pipelined-data
If we should buffer pipelined requests.
swarm.undertow.servers.KEY.https-listeners.KEY.buffer-pool
The listener's buffer pool
swarm.undertow.servers.KEY.https-listeners.KEY.bytes-received
The number of bytes that have been received by this listener
swarm.undertow.servers.KEY.https-listeners.KEY.bytes-sent
The number of bytes that have been sent out on this listener
swarm.undertow.servers.KEY.https-listeners.KEY.decode-url
If this is true then the parser will decode the URL and query parameters using the selected character encoding (UTF-8 by default). If this is false they will not be decoded. This will allow a later handler to decode them into whatever charset is desired.
swarm.undertow.servers.KEY.https-listeners.KEY.disallowed-methods
A comma separated list of HTTP methods that are not allowed
swarm.undertow.servers.KEY.https-listeners.KEY.enable-http2
Enables HTTP2 support for this listener
swarm.undertow.servers.KEY.https-listeners.KEY.enabled
If the listener is enabled
swarm.undertow.servers.KEY.https-listeners.KEY.enabled-cipher-suites
Configures the enabled SSL ciphers
swarm.undertow.servers.KEY.https-listeners.KEY.enabled-protocols
Configures SSL protocols
swarm.undertow.servers.KEY.https-listeners.KEY.error-count
The number of 500 responses that have been sent by this listener
swarm.undertow.servers.KEY.https-listeners.KEY.http2-enable-push
If server push is enabled for this connection
swarm.undertow.servers.KEY.https-listeners.KEY.http2-header-table-size
The size of the header table used for HPACK compression, in bytes. This amount of memory will be allocated per connection for compression. Larger values use more memory but may give better compression.
swarm.undertow.servers.KEY.https-listeners.KEY.http2-initial-window-size
The flow control window size that controls how quickly the client can send data to the server
swarm.undertow.servers.KEY.https-listeners.KEY.http2-max-concurrent-streams
The maximum number of HTTP/2 streams that can be active at any time on a single connection
swarm.undertow.servers.KEY.https-listeners.KEY.http2-max-frame-size
The max HTTP/2 frame size
swarm.undertow.servers.KEY.https-listeners.KEY.http2-max-header-list-size
The maximum size of request headers the server is prepared to accept
swarm.undertow.servers.KEY.https-listeners.KEY.max-buffered-request-size
Maximum size of a buffered request, in bytes. Requests are not usually buffered, the most common case is when performing SSL renegotiation for a POST request, and the post data must be fully buffered in order to perform the renegotiation.
swarm.undertow.servers.KEY.https-listeners.KEY.max-connections
The maximum number of concurrent connections. Only values greater than 0 are allowed. For unlimited connections simply undefine this attribute value.
swarm.undertow.servers.KEY.https-listeners.KEY.max-cookies
The maximum number of cookies that will be parsed. This is used to protect against hash vulnerabilities.
swarm.undertow.servers.KEY.https-listeners.KEY.max-header-size
The maximum size of an HTTP request header, in bytes.
swarm.undertow.servers.KEY.https-listeners.KEY.max-headers
The maximum number of headers that will be parsed. This is used to protect against hash vulnerabilities.
swarm.undertow.servers.KEY.https-listeners.KEY.max-parameters
The maximum number of parameters that will be parsed. This is used to protect against hash vulnerabilities. This applies to both query parameters, and to POST data, but is not cumulative (i.e. you can potentially have max parameters * 2 total parameters).
swarm.undertow.servers.KEY.https-listeners.KEY.max-post-size
The maximum size of a post that will be accepted, in bytes.
swarm.undertow.servers.KEY.https-listeners.KEY.max-processing-time
The maximum processing time taken by a request on this listener
swarm.undertow.servers.KEY.https-listeners.KEY.no-request-timeout
The length of time in milliseconds that the connection can be idle before it is closed by the container, defaults to 60000 (one minute)
swarm.undertow.servers.KEY.https-listeners.KEY.processing-time
The total processing time of all requests handled by this listener
swarm.undertow.servers.KEY.https-listeners.KEY.read-timeout
Configure a read timeout for a socket, in milliseconds. If the given amount of time elapses without a successful read taking place, the socket’s next read will throw a {@link ReadTimeoutException}.
swarm.undertow.servers.KEY.https-listeners.KEY.receive-buffer
The receive buffer size, in bytes.
swarm.undertow.servers.KEY.https-listeners.KEY.record-request-start-time
If this is true then Undertow will record the request start time, to allow for request time to be logged. This has a small but measurable performance impact
swarm.undertow.servers.KEY.https-listeners.KEY.request-count
The number of requests this listener has served
swarm.undertow.servers.KEY.https-listeners.KEY.request-parse-timeout
The maximum amount of time (in milliseconds) that can be spent parsing the request
swarm.undertow.servers.KEY.https-listeners.KEY.resolve-peer-address
Enables host DNS lookup
swarm.undertow.servers.KEY.https-listeners.KEY.secure
If this is true then requests that originate from this listener are marked as secure, even if the request is not using HTTPS.
swarm.undertow.servers.KEY.https-listeners.KEY.security-realm
The listener's security realm
swarm.undertow.servers.KEY.https-listeners.KEY.send-buffer
The send buffer size, in bytes.
swarm.undertow.servers.KEY.https-listeners.KEY.socket-binding
The listener socket binding
swarm.undertow.servers.KEY.https-listeners.KEY.ssl-session-cache-size
The maximum number of active SSL sessions
swarm.undertow.servers.KEY.https-listeners.KEY.ssl-session-timeout
The timeout for SSL sessions, in seconds
swarm.undertow.servers.KEY.https-listeners.KEY.tcp-backlog
Configure a server with the specified backlog.
swarm.undertow.servers.KEY.https-listeners.KEY.tcp-keep-alive
Configure a channel to send TCP keep-alive messages in an implementation-dependent manner.
swarm.undertow.servers.KEY.https-listeners.KEY.url-charset
URL charset
swarm.undertow.servers.KEY.https-listeners.KEY.verify-client
The desired SSL client authentication mode for SSL channels
swarm.undertow.servers.KEY.https-listeners.KEY.worker
The listener's XNIO worker
swarm.undertow.servers.KEY.https-listeners.KEY.write-timeout
Configure a write timeout for a socket, in milliseconds. If the given amount of time elapses without a successful write taking place, the socket’s next write will throw a {@link WriteTimeoutException}.
swarm.undertow.servers.KEY.servlet-container
The server's default servlet container
swarm.undertow.servlet-containers.KEY.allow-non-standard-wrappers
If true then request and response wrappers that do not extend the standard wrapper classes can be used
swarm.undertow.servlet-containers.KEY.crawler-session-management-setting.session-timeout
The session timeout for sessions that are owned by crawlers
swarm.undertow.servlet-containers.KEY.crawler-session-management-setting.user-agents
Regular expression that is used to match the user agent of a crawler
swarm.undertow.servlet-containers.KEY.default-buffer-cache
The buffer cache to use for caching static resources
swarm.undertow.servlet-containers.KEY.default-encoding
Default encoding to use for all deployed applications
swarm.undertow.servlet-containers.KEY.default-session-timeout
The default session timeout (in minutes) for all applications deployed in the container.
swarm.undertow.servlet-containers.KEY.directory-listing
If directory listing should be enabled for default servlets.
swarm.undertow.servlet-containers.KEY.disable-caching-for-secured-pages
If Undertow should set headers to disable caching for secured pages. Disabling this can cause security problems, as sensitive pages may be cached by an intermediary.
swarm.undertow.servlet-containers.KEY.eager-filter-initialization
If true, Undertow calls filter init() on deployment start rather than when first requested.
swarm.undertow.servlet-containers.KEY.ignore-flush
Ignore flushes on the servlet output stream. In most cases these just hurt performance for no good reason.
swarm.undertow.servlet-containers.KEY.jsp-setting.check-interval
Check interval for JSP updates using a background thread.
swarm.undertow.servlet-containers.KEY.jsp-setting.development
Enable Development mode which enables reloading JSP on-the-fly
swarm.undertow.servlet-containers.KEY.jsp-setting.disabled
Enable the JSP container.
swarm.undertow.servlet-containers.KEY.jsp-setting.display-source-fragment
When a runtime error occurs, attempts to display corresponding JSP source fragment
swarm.undertow.servlet-containers.KEY.jsp-setting.dump-smap
Write SMAP data to a file.
swarm.undertow.servlet-containers.KEY.jsp-setting.error-on-use-bean-invalid-class-attribute
Enable errors when using a bad class in useBean.
swarm.undertow.servlet-containers.KEY.jsp-setting.generate-strings-as-char-arrays
Generate String constants as char arrays.
swarm.undertow.servlet-containers.KEY.jsp-setting.java-encoding
Specify the encoding used for Java sources.
swarm.undertow.servlet-containers.KEY.jsp-setting.keep-generated
Keep the generated Servlets.
swarm.undertow.servlet-containers.KEY.jsp-setting.mapped-file
Map to the JSP source.
swarm.undertow.servlet-containers.KEY.jsp-setting.modification-test-interval
Minimum amount of time between two tests for updates, in seconds.
swarm.undertow.servlet-containers.KEY.jsp-setting.optimize-scriptlets
If JSP scriptlets should be optimized to remove string concatenation
swarm.undertow.servlet-containers.KEY.jsp-setting.recompile-on-fail
Retry failed JSP compilations on each request.
swarm.undertow.servlet-containers.KEY.jsp-setting.scratch-dir
Specify a different work directory.
swarm.undertow.servlet-containers.KEY.jsp-setting.smap
Enable SMAP.
swarm.undertow.servlet-containers.KEY.jsp-setting.source-vm
Source VM level for compilation.
swarm.undertow.servlet-containers.KEY.jsp-setting.tag-pooling
Enable tag pooling.
swarm.undertow.servlet-containers.KEY.jsp-setting.target-vm
Target VM level for compilation.
swarm.undertow.servlet-containers.KEY.jsp-setting.trim-spaces
Trim some spaces from the generated Servlet.
swarm.undertow.servlet-containers.KEY.jsp-setting.x-powered-by
Enable advertising the JSP engine in x-powered-by.
swarm.undertow.servlet-containers.KEY.max-sessions
The maximum number of sessions that can be active at one time
swarm.undertow.servlet-containers.KEY.mime-mappings.KEY.value
The mime type for this mapping
swarm.undertow.servlet-containers.KEY.persistent-sessions-setting.path
The path to the persistent session data directory. If this is null sessions will be stored in memory
swarm.undertow.servlet-containers.KEY.persistent-sessions-setting.relative-to
The directory the path is relative to
swarm.undertow.servlet-containers.KEY.proactive-authentication
If proactive authentication should be used. If this is true a user will always be authenticated if credentials are present.
swarm.undertow.servlet-containers.KEY.session-cookie-setting.comment
Cookie comment
swarm.undertow.servlet-containers.KEY.session-cookie-setting.domain
Cookie domain
swarm.undertow.servlet-containers.KEY.session-cookie-setting.http-only
Is cookie http-only
swarm.undertow.servlet-containers.KEY.session-cookie-setting.max-age
Max age of cookie
swarm.undertow.servlet-containers.KEY.session-cookie-setting.name
Name of the cookie
swarm.undertow.servlet-containers.KEY.session-cookie-setting.secure
Is cookie secure?
swarm.undertow.servlet-containers.KEY.session-id-length
The length of the generated session ID. Longer session IDs are more secure.
swarm.undertow.servlet-containers.KEY.stack-trace-on-error
If an error page with the stack trace should be generated on error. Values are all, none and local-only
swarm.undertow.servlet-containers.KEY.use-listener-encoding
Use encoding defined on listener
swarm.undertow.servlet-containers.KEY.websockets-setting.buffer-pool
The buffer pool to use for websocket deployments
swarm.undertow.servlet-containers.KEY.websockets-setting.dispatch-to-worker
If callbacks should be dispatched to a worker thread. If this is false they will be run in the IO thread, which is faster; however, care must be taken not to perform blocking operations.
swarm.undertow.servlet-containers.KEY.websockets-setting.worker
The worker to use for websocket deployments
swarm.undertow.statistics-enabled
Configures if statistics are enabled
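
As with the transactions fraction, these Undertow properties can typically be supplied as Java system properties (for example, -Dswarm.http.port=8080) or gathered in a project-defaults.yml file, with the dotted names mapping onto nested YAML keys and each KEY placeholder replaced by the name of the resource being configured. The sketch below is illustrative only; the resource name default for the servlet container is an assumption based on a standard configuration and should be checked against your own setup.

# project-defaults.yml (illustrative sketch; names and values are assumptions)
swarm:
  http:
    port: 8080
  https:
    port: 8443
    certificate:
      generate: true
  undertow:
    servlet-containers:
      default:                       # assumed name of the default servlet container
        default-session-timeout: 30  # minutes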

B.29. Web

Provides a collection of fractions equivalent to the Web Profile:

  • Bean Validation
  • CDI
  • EJB
  • JAX-RS
    • JSON-P
    • JAXB
    • Multipart
    • Validator
  • JPA
  • JSF
  • Transactions
  • Undertow (Servlets)

Maven Coordinates

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>web</artifactId>
</dependency>

Appendix C. Additional Resources

Appendix D. Proficiency Levels

Each mission available on Fabric8 Launcher teaches you about certain topics, but requires certain minimum knowledge, which varies by mission. For clarity, the minimum requirements and concepts are organized in several proficiency levels. In addition to the levels described in this chapter, there can be additional requirements with each mission, specific to its aim or the technologies it uses.

Foundational

The missions rated at Foundational proficiency generally require no prior knowledge of the subject matter; they provide general awareness and demonstration of key elements, concepts, and terminology. There are no special requirements except those directly mentioned in the description of the mission.

Advanced

When using Advanced missions, the assumption is that you are familiar with the common concepts and terminology of the subject area of the mission in addition to Kubernetes and OpenShift. You must also be able to perform basic tasks on your own, for example configure services and applications, or administer networks. If a service is needed by the mission, but configuring it is not in the scope of the mission, the assumption is that you have the knowledge to properly configure it, and only the resulting state of the service is described in the documentation.

Expert

Expert missions require the highest level of knowledge of the subject matter. You are expected to perform many tasks based on feature-based documentation and manuals, and the documentation is aimed at the most complex scenarios.

Appendix E. Glossary

E.1. Product and Project Names

developers.redhat.com/launch
developers.redhat.com/launch is a standalone getting started experience offered by Red Hat for jumpstarting cloud-native application development on OpenShift. It provides a hassle-free way of creating functional example applications, called missions, as well as an easy way to build and deploy those missions to OpenShift.
Fabric8 Launcher
The Fabric8 Launcher is the upstream project on which developers.redhat.com/launch is based.
Single-node OpenShift Cluster
An OpenShift cluster running on your machine using Minishift.

E.2. Terms Specific to Fabric8 Launcher

Booster

A language-specific implementation of a particular mission on a particular runtime. Boosters are listed in a booster catalog.

For example, a booster is a web service with a REST API implemented using the WildFly Swarm runtime.

Booster Catalog
A Git repository that contains information about boosters.
Mission

An application specification, for example a web service with a REST API.

Missions generally do not specify which language or platform they should run on; the description only contains the intended functionality.

Runtime
A platform that executes boosters. For example, WildFly Swarm or Eclipse Vert.x.

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.