Chapter 5. Design and Development

5.1. Red Hat JBoss Core Services Apache HTTP Server 2.4

This reference architecture assumes that Red Hat JBoss Core Services Apache HTTP Server has been downloaded and installed at /opt/jbcs-httpd.

Providing two separate httpd configurations with distinct ports makes it possible to start two separate instances of the web server. The attached scripts help start and stop each instance; for example:

#!/bin/sh
export CLUSTER_NUM=1
./httpd/sbin/apachectl $@

Both scripts result in a call to the standard apachectl script but target the corresponding web server instance, so either cluster1.conf or cluster2.conf is used as the main web server configuration file. Accordingly, the error log file is created as httpd1.log or httpd2.log.

The first argument passed to either script may be start or stop, to respectively start up or shut down the web server, or any other parameter accepted by the standard apachectl script.
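As a sketch (the script and configuration file names here mirror the pattern described above but are assumptions, not taken from the attachment), a second-instance wrapper differs from the first only in the exported variable:

```shell
#!/bin/sh
# Hypothetical wrapper for the second instance; only the CLUSTER_NUM value
# differs from the first script. apachectl is assumed to select
# cluster${CLUSTER_NUM}.conf based on this variable.
export CLUSTER_NUM=2
# The real script would continue with: ./httpd/sbin/apachectl "$@"
echo "cluster${CLUSTER_NUM}.conf"
```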

The first and perhaps most important required configuration in an httpd configuration file is the path of the web server directory structure. Later references, particularly those for loading modules, are relative to this path:

ServerRoot "/opt/jbcs-httpd/httpd"

Once an instance of the web server is started, its process id is stored in a file so that it can be used to determine its status and possibly kill the process in the future:

PidFile "/opt/jbcs-httpd/httpd/run/httpd${index}.pid"

The file name is appended with 1 or 2 for the first and second instance of the web server, respectively.

This example follows a similar pattern in naming various log files:

# For a single logfile with access, agent, and referer information
# (Combined Logfile Format), use the following directive:
CustomLog logs/access_log${index} combined

# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you do define an error logfile for a <VirtualHost>
# container, that host’s errors will be logged there and not here.
ErrorLog logs/error_log${index}

The next step in configuring the web server is to set values for a large number of parameters, and load a number of common modules that may be required. In the reference architecture, these settings remain largely unchanged from the defaults that were downloaded in the archive files. One important common configuration is the document root, where the server can attempt to find and serve content to clients. Since the reference web server is exclusively used to forward requests to the application server, this parameter is pointed to an empty directory:

# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
DocumentRoot "/var/www/html"

Now that the basic modules and some of the dependencies have been configured, the next step is to load mod_cluster modules and configure them for each instance. The required modules are:

# mod_proxy_balancer should be disabled when mod_cluster is used
LoadModule cluster_slotmem_module modules/
LoadModule manager_module modules/
LoadModule proxy_cluster_module modules/
LoadModule advertise_module modules/

MemManagerFile is the base name for the file names mod_manager uses to store configuration, generate keys for shared memory, or lock files. It must be an absolute path name; the referenced directories are created if required. It is highly recommended that those files be placed on a local drive and not an NFS share. This path must be unique for each instance of the web server, so again append 1 or 2 to the file name:

MemManagerFile /var/cache/mod_cluster${index}

The final remaining configuration for mod_cluster is its listen port, which is set to 6661 for the first instance and modified to 6662 for the second web server instance. Also add rules to the module configuration to permit connections from required IP addresses, in this case nodes 1, 2 and 3 of the EAP 7 cluster, or the internal network where they reside:

<IfModule manager_module>
  Listen *:666${index}
  <VirtualHost *:666${index}>
    <Directory />
      Require ip 10
    </Directory>
    ServerAdvertise off
    <Location /mod_cluster_manager>
      SetHandler mod_cluster-manager
      Require ip 10
    </Location>
  </VirtualHost>
</IfModule>

The reference architecture does not require any additional configuration files. It is particularly important, as noted above, that conflicting modules such as mod_proxy_balancer are not loaded. If present, the httpd/conf.d/mod_cluster.conf sample configuration can introduce such a conflict, so comment out the line that loads all configuration files in that folder:

# Load config files from the config directory "/etc/httpd/conf.d".
#IncludeOptional conf.d/*.conf

Simultaneous execution of two web server instances requires that they are configured to use different HTTP and HTTPS ports. For the regular HTTP port, use 81 and 82 for the two instances, so the cluster configuration files are configured as follows:

# HTTP port
Listen 8${index}

Everything else in the web server configuration remains largely untouched. It is recommended that the parameters are reviewed in detail for a specific environment, particularly when there are changes to the reference architecture that might invalidate some of the accompanying assumptions.

This reference architecture does not make use of Secure Sockets Layer (SSL). To use SSL, appropriate security keys and certificates have to be issued and copied to the machine. To turn SSL off, the documentation recommends renaming ssl.conf. It is also possible to keep the SSL configuration file while setting SSLEngine to off:

# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.
SSLEngine off

The cluster2.conf file is almost identical to cluster1.conf, while providing a different index variable value to avoid any port or file name conflict.

5.2. PostgreSQL Database

The PostgreSQL Database Server used in this reference architecture requires very little custom configuration. No files are copied that would require review, and there is no script-based configuration that changes the setup.

To enter the interactive database management mode, enter the following command as the postgres user:

$ /usr/bin/psql -U postgres postgres

With the database server running, users can enter both SQL and PostgreSQL commands in this environment. Type help at the prompt to access some of the documentation:

psql (9.2.15)
Type "help" for help.

Type \l and press enter to see a list of the databases along with the name of each database owner:

postgres=# \l
	                          List of databases
   Name    |  Owner   | Encoding |  Collation  |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+---------------------
 eap7      | jboss    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |

To issue commands against a specific database, enter \c followed by the database name. In this case, use the eap7 database for the reference architecture:

postgres=# \c eap7
psql (9.2.15)
You are now connected to database "eap7".

To see a list of the tables created in the database, use the \dt command:

postgres-# \dt
	List of relations
 Schema |  Name  | Type  | Owner
--------+--------+-------+-------
 public | person | table | jboss
(1 row)

The results of the above \dt command differ depending on whether the EAP servers have been started, whether the cluster testing application is deployed, and whether any data has been created through JPA. To further query the table, use standard SQL commands.
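To illustrate (the person table's column names are not listed in this chapter, so the query deliberately avoids referencing any), a simple count confirms whether JPA writes reached the database:

```
eap7=# SELECT count(*) FROM person;
```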

5.3. Red Hat JBoss Enterprise Application Platform

5.3.1. Configuration Files

The CLI script in this reference architecture uses an embedded and offline domain controller for each of the domains to issue CLI instructions. The result is that the required server profiles are created and configured as appropriate for the cluster. This section reviews the various configuration files. The few manual steps involve copying the files from the master server to slave servers and adding application users, which were explained in the previous section.

This reference architecture includes two separate clusters, set up as two separate domains, where one is active and the other passive. The passive domain is a mirror image of the active one, other than minor changes to reflect the machines that are hosting it and avoid conflicting ports. The excerpts below reflect the active domain.

To start the active domain, first run the following command on node1:

# /opt/jboss-eap-7.0_active/bin/ -b -bprivate=

The system property indicates that this same server, running on node1, is the domain controller for this active domain. As such, all management users have been created on this node. This also means that the configuration for the entire domain is stored in a domain.xml file on this node. Furthermore, in the absence of a --host-config= argument, the configuration of this host is loaded from the local host.xml file. This section reviews each of these files.

To see a list of all the management users for this domain, look at the content of domain/configuration/

# Users can be added to this properties file at any time, updates after the server has started
# will be automatically detected.
# By default the properties realm expects the entries to be in the format: -
# username=HEX( MD5( username ':' realm ':' password))
# A utility script is provided which can be executed from the bin folder to add the users: -
# - Linux
# bin/
# - Windows
# bin\add-user.bat
# On start-up the server will also automatically add a user $local - this user is specifically
# for local tools running against this AS installation.
# The following illustrates how an admin user could be defined, this
# is for illustration only and does not correspond to a usable password.

Four users called admin, node1, node2 and node3 are defined in this property file. As the comments explain, the hexadecimal number provided as the value is a hash of each user's password (along with other information) used to authenticate the user. In this example, password1! is used as the password for all four users; because the hashed input includes other information, including the username itself, an identical hash is not generated for all users.
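The documented format, username=HEX( MD5( username ':' realm ':' password ) ), can be reproduced with standard tools; the realm name ManagementRealm is an assumption here, and the credentials are the samples from the text:

```shell
#!/bin/sh
# Recompute a property-file value following the documented format:
# HEX( MD5( username ':' realm ':' password ) ).
# ManagementRealm is assumed; password1! is the sample password from the text.
printf '%s' 'admin:ManagementRealm:password1!' | md5sum | cut -d' ' -f1
```

Because the username is part of the hashed input, each user's line in the properties file ends up with a different value even though the password is identical.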

The admin user has been created to allow administrative access through the management console and the CLI interface.

There is a user account configured for each node of the cluster. These accounts are used by slave hosts to connect to the domain controller.

The application users are created and stored in the same directory, under domain/configuration/. The user name and password are stored in

# Properties declaration of users for the realm 'ApplicationRealm' which is the default realm
# for application services on a new AS 7.1 installation.
# This includes the following protocols: remote ejb, remote jndi, web, remote jms
# Users can be added to this properties file at any time, updates after the server has started
# will be automatically detected.
# The format of this realm is as follows: -
# username=HEX( MD5( username ':' realm ':' password))
# …​
# The following illustrates how an admin user could be defined, this
# is for illustration only and does not correspond to a usable password.

There is only one application user, ejbcaller, and it is used to make remote calls to EJBs. The provided EJBs do not use fine-grained security, and therefore no roles need to be assigned to this user to invoke their operations. This makes the roles file rather simple:

# Properties declaration of users roles for the realm 'ApplicationRealm'.
# This includes the following protocols: remote ejb, remote jndi, web, remote jms
# …​
# The format of this file is as follows: -
# username=role1,role2,role3
# The following illustrates how an admin user could be defined.

Application users are required on each server that hosts the EJBs, so these two files are generated on all six EAP installations.

Host files

As previously mentioned, the domain controller is started with the default host configuration file, which the CLI script modifies to set the host name. Each section of this file is inspected separately.

<?xml version='1.0' encoding='UTF-8'?>
<host name="node1" xmlns="urn:jboss:domain:4.1">

The security configuration remains unchanged from its default values, but reviewing it provides a better understanding of the authentication section. As seen below under authentication, the management realm uses both the properties file previously reviewed and a local default user configuration. The local user enables silent authentication when the client is on the same machine. As such, the user node1, although created, is not necessary for the active domain; it is created for the sake of consistency across the two domains. For further information about silent authentication of a local user, refer to the product documentation.

    <security-realm name="ManagementRealm">
        <authentication>
            <local default-user="$local" skip-group-loading="true"/>
            <properties path="" relative-to="jboss.domain.config.dir"/>
        </authentication>
        <authorization map-groups-to-roles="false">
            <properties path="" relative-to="jboss.domain.config.dir"/>
        </authorization>
    </security-realm>

The rest of the security configuration simply refers to the application users, and once again, it is unchanged from the default values.

    <security-realm name="ApplicationRealm">
        <authentication>
            <local default-user="$local" allowed-users="*" skip-group-loading="true"/>
            <properties path="" relative-to="jboss.domain.config.dir"/>
        </authentication>
        <authorization>
            <properties path="" relative-to="jboss.domain.config.dir"/>
        </authorization>
    </security-realm>

Management interfaces are also configured as per the default and are not modified in the setup.

        <management-interfaces>
            <native-interface security-realm="ManagementRealm">
                <socket interface="management" port="${}"/>
            </native-interface>
            <http-interface security-realm="ManagementRealm">
                <socket interface="management" port="${}"/>
            </http-interface>
        </management-interfaces>

With node1 being the domain controller, the configuration file explicitly reflects this fact and points to <local/>. This is the default setup for the host.xml in EAP 7:

    <domain-controller>
        <local/>
    </domain-controller>


Interfaces and JVM configuration are also left unchanged.

The automated configuration scripts remove the sample servers that are configured by default in JBoss EAP 7 and instead create a new server as part of this cluster. This server is named in a way that reflects both the node on which it runs as well as the domain to which it belongs. All servers are configured to auto start when the host starts up:

<server name="node1" group="demo-cluster" auto-start="true">
    <socket-bindings port-offset="0"/>

To start the other nodes of the active domain, after the domain controller has been started, run the following commands on nodes 2 and 3:

# /opt/jboss-eap-7.0_active/bin/ -b -bprivate= -Djboss.domain.master.address= --host-config=host-slave.xml
# /opt/jboss-eap-7.0_active/bin/ -b -bprivate= -Djboss.domain.master.address= --host-config=host-slave.xml

These two nodes are configured similarly, so it suffices to look at the host configuration file for one of them:

<?xml version='1.0' encoding='UTF-8'?>
<host name="node2" xmlns="urn:jboss:domain:4.1">

Once again, the host name is modified by the CLI script. In the case of nodes 2 and 3, the name selected for the host is not only for display and informational purposes: this host name is also picked up as the user name that is used to authenticate against the domain controller. An encoding of the user's associated password is provided as the secret value of the server identity:

    <security-realm name="ManagementRealm">
        <server-identities>
            <secret value="cGFzc3dvcmQxIQ=="/>
        </server-identities>
    </security-realm>

So in this case, node2 authenticates against the domain controller running on node1 by providing node2 as its username and the above secret value, which is the Base64 encoding of its password. This value is copied from the output of the add-user script when node2 is added as a user.
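The secret here is simply the Base64 encoding of the password rather than the MD5-based value used in the properties files, which can be verified with standard tools:

```shell
# Base64-encode the sample password; the result matches the secret value above.
printf '%s' 'password1!' | base64
# → cGFzc3dvcmQxIQ==
```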

The rest of the security configuration remains unchanged. Management interface configuration also remains unchanged.

The domain controller is identified by a Java system property ( -Djboss.domain.master.address) that is passed to the node2 Java virtual machine (JVM) when starting the server:

    <domain-controller>
        <remote security-realm="ManagementRealm">
            <discovery-options>
                <static-discovery name="primary" protocol="${jboss.domain.master.protocol:remote}" host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}"/>
            </discovery-options>
        </remote>
    </domain-controller>

Automated configuration scripts remove the sample servers of JBoss EAP 7 and instead create a new server as part of this cluster:

   <server name="node2" group="demo-cluster" auto-start="true">
       <socket-bindings port-offset="0"/>
</servers>

Domain configuration file

The bulk of the configuration for the cluster and the domain is stored in its domain.xml file. This file should only be edited manually when the servers are stopped; it is best modified through the management capabilities, which include the management console, the CLI, and the various scripts and languages available on top of the CLI.

The first section of the domain configuration file lists the various extensions that are enabled for this domain. This section is left unchanged and includes all the standard available extensions:

<?xml version="1.0" ?>
<domain xmlns="urn:jboss:domain:4.1">

System properties are also left as default:

    <property name="" value="true"/>

The next section of the domain configuration is the profiles section. JBoss EAP 7 ships with four pre-configured profiles. Since this reference architecture only uses the full-ha profile, the other three profiles have been removed to make the domain.xml easier to read and maintain.

<profile name="default">
<profile name="ha">
<profile name="full">
<profile name="full-ha">

The configuration script updates parts of the full-ha profile, but most of the profile remains unchanged from the full-ha baseline. For example, the ExampleDS datasource, using an in-memory H2 database, remains intact but is not used in the cluster.

The configuration script deploys a JDBC driver for postgres, and configures a connection pool to the eap7 database using the database owner’s credentials:

<datasource jndi-name="java:jboss/datasources/ClusterDS" pool-name="eap7" use-ccm="false">
   <new-connection-sql>set datestyle = ISO, European;</new-connection-sql>

The driver element includes the name of the JAR (Java ARchive) file where the driver class can be found. This creates a dependency on the JDBC driver JAR file and assumes that it has been separately deployed. There are multiple ways to make a JDBC driver available in EAP 7. The approach here is to deploy the driver, as with any other application.
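As an illustration of that approach (the path below is hypothetical; the exact command used by the configuration script is not shown in this chapter), the driver can be deployed to the server group through the CLI:

```
deploy /path/to/postgresql-9.4.1208.jar --server-groups=demo-cluster
```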

The ExampleDS connection pool uses a different approach for its driver. The JDBC driver of the H2 Database is made available as a module. As such, it is configured under the drivers section of the domain configuration:

            <driver name="h2" module="com.h2database.h2">

After datasources, the profile configures the Java EE and EJB3 subsystems. This configuration is derived from the standard full-ha profile and unchanged.

The Infinispan subsystem is also left unchanged. This section configures several standard cache containers and defines the various available caches for each container. It also determines which cache is used by each container. Clustering takes advantage of these cache containers and as such, uses the configured cache of that container.

The cache container below is for applications that use a cluster singleton service:

<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
    <transport lock-timeout="60000"/>
    <replicated-cache name="default" mode="SYNC">
        <transaction mode="BATCH"/>

Next in the Infinispan subsystem is the web cache container. It is used to replicate HTTP session data between cluster nodes:

    <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2">
        <locking isolation="REPEATABLE_READ"/>
        <transaction mode="BATCH"/>

Next in the Infinispan subsystem is the ejb cache container, used to replicate stateful session bean data:

<cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
    <transport lock-timeout="60000"/>
    <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2">
        <locking isolation="REPEATABLE_READ"/>
        <transaction mode="BATCH"/>

Finally, the last cache container in the subsystem is the hibernate cache container, used for replicating second-level cache data. Hibernate makes use of an entity cache to cache data on each node and invalidates the corresponding entries on other nodes of the cluster when the data is modified. It also has an optional query cache:

<cache-container name="hibernate" default-cache="local-query" module="org.hibernate.infinispan">
    <transport lock-timeout="60000"/>
    <local-cache name="local-query">
        <eviction strategy="LRU" max-entries="10000"/>
        <expiration max-idle="100000"/>
    </local-cache>
    <invalidation-cache name="entity" mode="SYNC">
        <transaction mode="NON_XA"/>
        <eviction strategy="LRU" max-entries="10000"/>
        <expiration max-idle="100000"/>
    </invalidation-cache>
    <replicated-cache name="timestamps" mode="ASYNC"/>
</cache-container>

Several other subsystems are then configured in the profile, without any modification from the baseline full-ha profile:

The next subsystem configured in the profile is messaging, provided by ActiveMQ Artemis. Two JMS servers are configured using the shared-store HA policy; one is a live JMS server (named "default"):

<server name="default">
    <cluster password="password" user="admin"/>
    <shared-store-master failover-on-server-shutdown="true"/>
    <bindings-directory path="/mnt/activemq/sharefs/group-${live-group}/bindings"/>
    <journal-directory path="/mnt/activemq/sharefs/group-${live-group}/journal"/>
    <large-messages-directory path="/mnt/activemq/sharefs/group-${live-group}/largemessages"/>
    <paging-directory path="/mnt/activemq/sharefs/group-${live-group}/paging"/>
    <security-setting name="#">
        <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
    <address-setting name="#" redistribution-delay="0" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>
    <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
    <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
        <param name="batch-delay" value="50"/>
    <remote-connector name="netty" socket-binding="messaging">
        <param name="use-nio" value="true"/>
        <param name="use-nio-global-worker-pool" value="true"/>
    <in-vm-connector name="in-vm" server-id="0"/>

    <http-acceptor name="http-acceptor" http-listener="default"/>
    <http-acceptor name="http-acceptor-throughput" http-listener="default">
        <param name="batch-delay" value="50"/>
        <param name="direct-deliver" value="false"/>
    <remote-acceptor name="netty" socket-binding="messaging">
        <param name="use-nio" value="true"/>
    <in-vm-acceptor name="in-vm" server-id="0"/>
    <broadcast-group name="bg-group1" connectors="netty" jgroups-channel="activemq-cluster"/>
    <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>
    <cluster-connection name="amq-cluster" discovery-group="dg-group1" connector-name="netty" address="jms"/>
    <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
    <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
    <jms-queue name="DistributedQueue" entries="java:/queue/DistributedQueue"/>
    <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
    <connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="netty"/>
    <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>

And the other is a backup JMS server (named "backup").

<server name="backup">
    <cluster password="password" user="admin"/>
    <shared-store-slave failover-on-server-shutdown="true"/>
    <bindings-directory path="/mnt/activemq/sharefs/group-${backup-group}/bindings"/>
    <journal-directory path="/mnt/activemq/sharefs/group-${backup-group}/journal"/>
    <large-messages-directory path="/mnt/activemq/sharefs/group-${backup-group}/largemessages"/>
    <paging-directory path="/mnt/activemq/sharefs/group-${backup-group}/paging"/>
    <remote-connector name="netty-backup" socket-binding="messaging-backup"/>
    <remote-acceptor name="netty-backup" socket-binding="messaging-backup"/>
    <broadcast-group name="bg-group-backup" connectors="netty-backup" broadcast-period="1000" jgroups-channel="activemq-cluster"/>
    <discovery-group name="dg-group-backup" refresh-timeout="1000" jgroups-channel="activemq-cluster"/>
    <cluster-connection name="amq-cluster" discovery-group="dg-group-backup" connector-name="netty-backup" address="jms"/>

The live JMS server uses the shared-store-master tag and the backup JMS server uses the shared-store-slave tag. With the shared-store HA policy, when the live and backup JMS servers share the same file path, they form a live-backup group.

<shared-store-master failover-on-server-shutdown="true"/>

<shared-store-slave failover-on-server-shutdown="true"/>

To set up a 3-node JMS HA cluster, one solution is to create 3 copies of the full-ha profile, almost identical to each other but different in their messaging-activemq subsystem. This would allow the 3 profiles to be modified individually, following the diagram below, to form a 3-node JMS cluster: each profile would have 1 live JMS server and 1 backup server.

However, a simpler setup is possible that is both easier to understand and maintain. This reference architecture uses a single full-ha profile containing 2 JMS servers, one live and one backup. The same profile applies to all 3 nodes, resulting in 1 live server and 1 backup server on each node. The key point is the use of variables to ensure that, at runtime, the live and backup servers belong to different groups, so that each live server forms a live-backup pair with a backup server in a different JVM.

Note the use of variables in this reference architecture's shared-store file paths, which is what ensures that a backup JMS server is never in the same JVM as its live JMS server. With a live-backup pair within the same JVM, a failure at the JVM or OS level would cause an outage of both the live server and its backup at the same time, and the backlog of messages being processed by that server would not be picked up by any other node; this must be avoided in a true HA environment.

<bindings-directory path="/mnt/activemq/sharefs/group-${live-group}/bindings"/>

<bindings-directory path="/mnt/activemq/sharefs/group-${backup-group}/bindings"/>

These 2 variables, live-group and backup-group, are runtime values defined in the host.xml of the DMC node and the host-slave.xml of the slave nodes, inside each server tag.

<server name="node1" group="demo-cluster" auto-start="true">
     <property name="live-group" value="3"/>
     <property name="backup-group" value="4"/>

Sample values for the active domain: in this setup, the second JMS server of node1 will be the backup server for the live server of node2, the second JMS server of node2 will be the backup server for the live server of node3, and the second JMS server of node3 will be the backup server for the live server of node1.

node1 (DMC node):
     live-group: 1
     backup-group: 2
node2:
     live-group: 2
     backup-group: 3
node3:
     live-group: 3
     backup-group: 1
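The rotation in these sample values can be expressed as a quick sanity check: each node's backup-group equals the next node's live-group, i.e. backup = (live mod 3) + 1. This formula is just an observation about the sample values, not something the configuration computes:

```shell
#!/bin/sh
# Print the live/backup group assignment for the three nodes of the
# active domain, following the rotation shown above.
for live in 1 2 3; do
  backup=$(( live % 3 + 1 ))
  echo "node${live}: live-group=${live} backup-group=${backup}"
done
```

With these assignments, a group's live and backup servers never run in the same JVM.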

The cluster-connection for each JMS server uses a separately defined remote-connector configuration instead of the default http-connector, to avoid conflicts on the HTTP port.

<cluster-connection name="amq-cluster" discovery-group="dg-group1" connector-name="netty" address="jms"/>

<remote-acceptor name="netty" socket-binding="messaging"/>

<cluster-connection name="amq-cluster" discovery-group="dg-group-backup" connector-name="netty-backup" address="jms"/>

<remote-connector name="netty-backup" socket-binding="messaging-backup"/>

After messaging, the next subsystem configured in the domain configuration file is mod_cluster, which is used to communicate with the Apache HTTP Server:

<subsystem xmlns="urn:jboss:domain:modcluster:2.0">
    <mod-cluster-config advertise-socket="modcluster" proxies="mod-cluster" advertise="false" connector="ajp">
            <load-metric type="cpu"/>

The rest of the profile remains unchanged from the baseline full-ha configuration.

The domain configuration of the available interfaces is also based on the defaults provided in the full-ha configuration:

    <interface name="management"/>
    <interface name="public"/>
    <interface name="unsecure">
        <inet-address value="${jboss.bind.address.unsecure:}"/>

The four socket binding groups provided by default remain configured in the setup, but only full-ha-sockets is used; the other groups are ignored.

The full-ha-sockets group configures a series of port numbers for various protocols and services. Most of these port numbers are unchanged in the profile; only three are modified. The jgroups-mping and jgroups-udp bindings are changed for cluster communication, and the mod-cluster outbound binding affects the full-ha profile's modcluster subsystem, which is used to communicate with the Apache HTTP Server.

   <socket-binding-group name="full-ha-sockets" default-interface="public">
       <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:}" multicast-port="45700"/>
       <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:}" multicast-port="45688"/>
       <outbound-socket-binding name="mod-cluster">
      <remote-destination host="" port="6661"/>

Finally, the new server group is added and replaces the default ones:

        <server-group name="demo-cluster" profile="full-ha">
            <jvm name="default">
                <heap size="66m" max-size="512m"/>
            <socket-binding-group ref="full-ha-sockets"/>
                <deployment name="postgresql-9.4.1208.jar" runtime-name="postgresql-9.4.1208.jar"/>
                <deployment name="clusterApp.war" runtime-name="clusterApp.war"/>
                <deployment name="chat.war" runtime-name="chat.war"/>

This server group has three applications deployed:

  • The first deployment, postgresql-9.4.1208.jar, is the JDBC driver JAR file for the PostgreSQL Database. The configured datasource called ClusterDS relies on this driver and declares its dependency on it.
  • The second deployment, clusterApp.war, is a clustered application deployed by the configuration script to validate and demonstrate clustering capabilities. It includes a distributable web application, stateless and clustered stateful session beans, a JPA bean with second-level caching enabled, and MDBs to consume messages from a distributed queue.
  • The third deployment is chat.war, which is used to demonstrate the WebSocket feature of Java EE 7.
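Once the domain is running, these deployments can be listed with a standard CLI global operation:

```
/server-group=demo-cluster:read-children-names(child-type=deployment)
```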

The second, passive cluster/domain is configured through the same script with a slightly modified property file. The only properties that are changed for the second domain are as follows:

  • domainController: the IP address of the domain controller, running on node2
  • domainName: passive for the passive domain
  • offsetUnit: 1 for the passive domain
  • modClusterProxy: the host:port value of the proxy for the passive cluster
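
Under these assumptions, the overridden portion of the property file for the passive domain might look like the following sketch; the IP addresses and port below are placeholders, not values taken from this reference architecture:

```
# properties overridden for the second (passive) domain -- illustrative values
domainController=10.0.0.2
domainName=passive
offsetUnit=1
modClusterProxy=10.0.0.3:6662
```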

5.4. Configuration Scripts (CLI)

5.4.1. Overview

The management Command Line Interface (CLI) is a command line administration tool for JBoss EAP 7. The Management CLI can be used to start and stop servers, deploy and undeploy applications, configure system settings, and perform other administrative tasks.

A CLI operation request consists of three parts:

  • an address, prefixed with a slash “/”.
  • an operation name, prefixed with a colon “:”.
  • an optional set of parameters, contained within parentheses “()”.
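
Putting the three parts together, a complete operation request might look like the following; the exact resource is illustrative, using the eap7 datasource created later in this reference architecture:

```
/profile=full-ha/subsystem=datasources/data-source=eap7:read-resource(recursive=true)
```

Here `/profile=full-ha/subsystem=datasources/data-source=eap7` is the address, `:read-resource` is the operation name, and `(recursive=true)` is the parameter set.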

The configuration is presented as a hierarchical tree of addressable resources. Each resource node offers a different set of operations. The address specifies which resource node to perform the operation on. An address uses the following syntax:

/node-type=node-name


  • node-type is the resource node type. This maps to an element name in the configuration XML.
  • node-name is the resource node name. This maps to the name attribute of the element in the configuration XML.
  • Separate each level of the resource tree with a slash “/”.

Each resource may also have a number of attributes. The read-attribute operation is a global operation used to read the current runtime value of a selected attribute.
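
For example, once the demo-cluster server group described earlier has been created, its profile attribute can be read as follows:

```
/server-group=demo-cluster:read-attribute(name=profile)
```

For the configuration used in this reference architecture, this returns full-ha.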

5.4.2. Command-line interface (CLI) Scripts

This reference architecture uses CLI scripts to set up the whole domain structure.

JBoss EAP 7 introduced embed-host-controller for CLI scripts, also referred to as Offline CLI, which allows the setup of domain environments without the need to start a domain first.

There are two ways to invoke the embed-host-controller to configure the domain.

1) To set up domain.xml on DMC box:

# /opt/jboss-eap-7.0/bin/ embed-host-controller --domain-config=domain.xml --host-config=host.xml --std-out=echo

2) To set up host-slave.xml on the DMC box, so that after running the CLI, the host-slave.xml file can be manually copied to the slave boxes:

# /opt/jboss-eap-7.0/bin/ -Djboss.domain.master.address= embed-host-controller --domain-config=domain.xml --host-config=host-slave.xml --std-out=echo

Shell Scripts

For this reference architecture, the CLI is invoked through the shell script in both cases.

The script performs the following tasks in order:

  • Stop all running JBoss processes
# pkill -f 'jboss'
  • Clean up the environment by removing the CONF_HOME and server folder
  • Restore the last good configuration by copying it from CONF_BACKUP_HOME
   echo "========configuration to remove: " $CONF_HOME
   rm -rf $CONF_HOME
   if [ -d $CONF_BACKUP_HOME ]
   then
      echo "========restore from: " $CONF_BACKUP_HOME
      cp -r $CONF_BACKUP_HOME $CONF_HOME
   else
      echo "========No configuration backup at " $CONF_BACKUP_HOME
   fi
ls -l  $JBOSS_HOME/domain
echo "========clean servers folder"
rm -rf $JBOSS_HOME/domain/servers/*
  • Build the test clients and cluster application using maven
echo "========build deployments if needed"
if [ -f ${SCRIPT_DIR}/code/webapp/target/clusterApp.war ]
then
   echo "========clusterApp.war is already built"
else
   mvn -f ${SCRIPT_DIR}/code/pom.xml install
fi
  • Set up the master server’s domain.xml and host.xml by calling setup-master.cli
$JBOSS_HOME/bin/ --properties=$SCRIPT_DIR/ --file=$SCRIPT_DIR/setup-master.cli -Dhost.file=host.xml
THIS_HOSTNAME=`$JBOSS_HOME/bin/  -Djboss.domain.master.address= --commands="embed-host-controller --domain-config=domain.xml --host-config=host-slave.xml,read-attribute local-host-name"`
  • Set up the slave servers' host-slave.xml by calling setup-slave.cli
for slave in $(echo $slaves | sed "s/,/ /g")
do
        #Variable names for each node's name and identity secret:
        echo Will configure ${slave}
        #Back up host-slave as it will be overwritten by CLI:
        cp $CONF_HOME/host-slave.xml $CONF_HOME/host-slave-orig.xml

        $JBOSS_HOME/bin/ --file=$SCRIPT_DIR/setup-slave.cli -Djboss.domain.master.address= -Dhost.file=host-slave.xml$THIS_HOSTNAME${!name} -Dserver.identity=${!secret}${!liveGroup}${!backupGroup} --properties=$SCRIPT_DIR/

        #Copy final host-slave file to a directory for this node, restore the original
        mkdir $CONF_HOME/$slave
        mv $CONF_HOME/host-slave.xml $CONF_HOME/$slave/
        mv $CONF_HOME/host-slave-orig.xml $CONF_HOME/host-slave.xml
done

CLI Scripts

1) setup-master.cli

  • Remove the default server groups and the servers that belong to them, and also remove the three profiles that are not needed here.
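
Assuming the stock EAP 7 domain.xml, which ships with the main-server-group and other-server-group server groups and the default, ha, full and full-ha profiles, this step might be sketched as follows; the exact names are assumptions based on the default configuration:

```
# remove the default server groups
# (any servers in host.xml that reference these groups must be removed first)
/server-group=main-server-group:remove
/server-group=other-server-group:remove

# keep only full-ha; drop the three unused profiles
/profile=default:remove
/profile=ha:remove
/profile=full:remove
```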

  • Configure multicast address
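
The multicast address used by JGroups can be adjusted with write-attribute operations on the two bindings shown earlier; the address value below is a placeholder, not the one used by the scripts:

```
/socket-binding-group=full-ha-sockets/socket-binding=jgroups-mping:write-attribute(name=multicast-address,value=230.0.0.4)
/socket-binding-group=full-ha-sockets/socket-binding=jgroups-udp:write-attribute(name=multicast-address,value=230.0.0.4)
```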
  • Set up mod-cluster for apache
  • Create demo-cluster server group
/server-group=demo-cluster/jvm=default:add(heap-size=66m, max-heap-size=512m)
  • Add server to this server group
  • Configure host management interface
  • Add live-group and backup-group system parameters for ActiveMQ backup grouping
  • Add postgresql datasource
deploy ${env.SCRIPT_DIR}/postgresql-9.4.1208.jar  --server-groups=demo-cluster
data-source add --profile=full-ha --name=eap7 --driver-name=postgresql-9.4.1208.jar --connection-url=$psqlConnectionUrl --jndi-name=java:jboss/datasources/ClusterDS --user-name=jboss --password=password --use-ccm=false --max-pool-size=25 --blocking-timeout-wait-millis=5000 --new-connection-sql="set datestyle = ISO, European;"
holdback-batch deploy-postgresql-driver
  • Add 2 socket bindings for ActiveMQ
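
A sketch of this step, assuming the conventional netty messaging ports; the binding names and port numbers are assumptions, not taken from the scripts:

```
/socket-binding-group=full-ha-sockets/socket-binding=messaging:add(port=5445)
/socket-binding-group=full-ha-sockets/socket-binding=messaging-backup:add(port=5446)
```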
  • Set up ActiveMQ live jms server

#add shared-store-master
/profile=full-ha/subsystem=messaging-activemq/server=default/ha-policy=shared-store-master:add(failover-on-server-shutdown=true)

#change the value from 1000 (default) to 0
/profile=full-ha/subsystem=messaging-activemq/server=default/address-setting=:write-attribute(name=redistribution-delay,value=0)

#add shared files path
#add remote connector netty
#	/profile=full-ha/subsystem=messaging-activemq/server=default/remote-connector=netty:write-attribute(name=params,value={name=,value=true})
#add remote acceptor netty

#update the connectors from http-connector to netty
  • Set up ActiveMQ backup jms server
#add shared-store-slave

#add shared files path
#add remote connector netty
#add remote acceptor netty
#add broadcast-group
#add discovery-group
#add cluster-connection
  • Update the transactions subsystem’s node-identifier attribute to suppress a warning
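
A minimal sketch of this step; using the jboss.node.name expression (an assumption here, not confirmed by the scripts) keeps the identifier unique per server even though the profile is shared:

```
/profile=full-ha/subsystem=transactions:write-attribute(name=node-identifier,value="${jboss.node.name}")
```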
  • Deploy WAR files
deploy ${env.SCRIPT_DIR}/code/webapp/target/clusterApp.war  --server-groups=demo-cluster
deploy ${env.SCRIPT_DIR}/chat.war  --server-groups=demo-cluster

2) setup-slave.cli

  • Update ManagementRealm serverIdentity with password hash from
  • Set management port
  • Add the correct server and remove the two default servers
  • Add live-group and backup-group system parameters for ActiveMQ backup grouping
  • Update host name in the host-slave.xml
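
The steps above might be sketched as follows; the host name slave1, the secret value, the port and the server names are all placeholders based on the default host-slave.xml, not values from the actual script:

```
# ManagementRealm server identity (base64-encoded secret)
/host=slave1/core-service=management/security-realm=ManagementRealm/server-identity=secret:write-attribute(name=value,value="c2VjcmV0")

# management port
/host=slave1/core-service=management/management-interface=native-interface:write-attribute(name=port,value=9999)

# remove the two default servers and add the correct one
/host=slave1/server-config=server-one:remove
/host=slave1/server-config=server-two:remove
/host=slave1/server-config=slave1-server:add(group=demo-cluster)
```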