Chapter 4. Creating the Environment
4.1. Prerequisites
This reference architecture assumes a supported platform (Operating System and JDK). For further information, refer to the Red Hat Supported Configurations for EAP 7.
As mentioned before, JBoss A-MQ is used as the Event Channel origin point herein, and the documentation that follows covers setup for that choice. Other JMS servers are viable options at this time, though they are likely to be phased out in future HACEP versions. Should you choose to use an alternative, please refer to that product’s installation guide in lieu of the Event Channel setup described here.
4.2. Downloads
This document references and makes use of several files included as an accompanying attachment. Future versions of the framework can be found at Red Hat Italy’s GitHub project page; however, the attached files can be used as a reference point during the integration process. They include a copy of the application developed herein, as well as a copy of the HACEP framework source code as of the time of writing, version SNAPSHOT-1.0, which best pairs with the content of this paper:
https://access.redhat.com/node/2542881/40/0
If you do not have access to the Red Hat customer portal, please see the Comments and Feedback section to contact us for alternative methods of access to these files.
In addition to the accompanying attachment, download the following prerequisites, each of which is used later in this chapter:
- Apache ZooKeeper 3.4.8
- Red Hat JBoss A-MQ 6.2.1
- Red Hat JBoss EAP 7.0 installer
- Red Hat JBoss EAP 7.0.1 patch
- the activemq-rar.rar resource adapter archive from the Red Hat Maven repository
4.3. Installation
4.3.1. ZooKeeper Ensemble
After downloading the ZooKeeper release archive, choose a location where it will reside. It is possible to utilize a single server node to host both A-MQ and ZooKeeper; however, to better mirror a likely production environment, this document assumes separate server clusters for each technology. The root installation location used herein for ZooKeeper is /opt/zookeeper_3.4.8/.
Heap memory available to each ZooKeeper instance can be configured by creating a file in the conf directory called java.env, containing the following text:
export JVMFLAGS="-Xms2G -Xmx3G"
The data directory and client port used by ZooKeeper are likewise specified in a file within the conf directory called zoo.cfg:
zoo.cfg
clientPort=2181
dataDir=/opt/zookeeper_3.4.8/data
syncLimit=5
tickTime=2000
initLimit=10
dataLogDir=/opt/zookeeper_3.4.8/data
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
The minimum number of nodes suggested for a ZooKeeper ensemble is three instances, hence the three server entries seen above, where zk[1-3] are host entries aliasing the local network IP addresses of each ZooKeeper server. Each node within an ensemble should use an identical copy of the zoo.cfg file. Note the ports 2888 and 3888 given above, which, in addition to the clientPort of 2181, allow for quorum and leader elections and thus may require firewall configuration on your part to ensure availability. If you desire to run multiple ZooKeeper instances on a single machine, rather than on three separate servers as done herein, each server instance should have a unique assignment of these two ports (2888:3888, 2889:3889, 2890:3890, etc.).
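Each node must also identify itself to the ensemble: ZooKeeper expects a file named myid within the dataDir, containing only the number matching that node’s server.N entry in zoo.cfg. For example, on host zk1, with the dataDir configured above:
# echo "1" > /opt/zookeeper_3.4.8/data/myid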
Sample copies of both java.env and zoo.cfg have been included in the accompanying attachment file under the /config_files/zookeeper/ directory.
Once configured, the ZooKeeper instance can be started via the [install_dir]/bin directory with the following command:
# ./zkServer.sh start
You can tail the instance’s log file with the following command:
# tail -f [root_install_dir]/zookeeper.out
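Once all three instances have been started, you can verify that the ensemble has formed a quorum by checking each node’s mode from the [install_dir]/bin directory; one node should report Mode: leader, while the others report Mode: follower:
# ./zkServer.sh status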
For further information on configuring, running, and administering ZooKeeper instances and ensembles, please refer to the Apache ZooKeeper Getting Started Guide and Administrator’s Guide.
4.3.2. JBoss A-MQ Cluster
After downloading Red Hat JBoss A-MQ 6.2.1, choose a location where it will reside. The root installation location used herein for A-MQ is /opt/amq_6.2.1/.
Prior to starting the instance, further configuration is required to utilize the replicated LevelDB store strategy. Should you choose to use a different persistence strategy, now is the time to configure it as detailed in the Red Hat JBoss A-MQ Guide to Configuring Broker Persistence. To follow along with the strategy used herein, proceed as follows:
The first step is to ensure that your system has at least one user set up. Open the file [root_install]/etc/users.properties and ensure that at least an admin user is present and not commented out:
# You must have at least one users to be able to access JBoss A-MQ resources
admin=admin,admin,manager,viewer,Operator, Maintainer, Deployer, Auditor, Administrator, SuperUser
If you would like to verify the basic installation of A-MQ at this time, steps for doing so can be found in the Red Hat A-MQ Installation Guide under the heading Verifying the Installation.
Next, configure the persistence mechanism by editing the [root_install]/etc/activemq.xml file to include the <persistenceAdapter> section as shown below:
<broker brokerName="broker" persistent="true" … >
  …
  <persistenceAdapter>
    <replicatedLevelDB
      directory="/opt/amq_6.2.1/data"
      replicas="3"
      bind="tcp://192.168.2.210:61619"
      hostname="amq1"
      zkAddress="zk1:2181,zk2:2181,zk3:2181"
      zkPassword="password"
      zkPath="/opt/zookeeper_3.4.8/activemq/leveldb-stores" />
  </persistenceAdapter>
  …
</broker>
Note that the binding is unique to the network interface IP address of the specific node, while the zkAddress will be identical across all A-MQ instances. The number of replicas should reflect the number of nodes within your ZooKeeper ensemble.
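For example, the corresponding entry on the second broker node would differ only in its bind address and hostname (the interface address 192.168.2.211 is an assumption for illustration; substitute that node’s actual address):
bind="tcp://192.168.2.211:61619"
hostname="amq2"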
The last step in configuring the A-MQ instance involves editing the [root_install]/bin/setenv script file. Inside, you can specify the amount of heap space available to the instance, as well as export a JVM system property called SERIALIZABLE_PACKAGES:
#!/bin/sh
...
if [ "$KARAF_SCRIPT" = "start" ]; then
JAVA_MIN_MEM=2G
JAVA_MAX_MEM=3G
export JAVA_MIN_MEM # Minimum memory for the JVM
export JAVA_MAX_MEM # Maximum memory for the JVM
fi
...
export KARAF_OPTS="-Dorg.apache.activemq.SERIALIZABLE_PACKAGES=\"*\""
The SERIALIZABLE_PACKAGES property relates to HACEP’s usage of the ObjectMessage, which depends on Java [de]serialization to [un]marshal object payloads. Since this is generally considered unsafe, given that payloads could exploit the host system, the above option can be tuned to be more specific to those object types which we know the system will be dealing with. For more details on ObjectMessage, refer to the relevant ActiveMQ documentation. A more finely-tuned example would read as follows:
export KARAF_OPTS="-Dorg.apache.activemq.SERIALIZABLE_PACKAGES=\"it.redhat.hacep,com.redhat.refarch.hacep,your.company.model.package\""
After configuration is complete, you can start the A-MQ instance by issuing the following command from the [root_install]/bin/ directory:
# ./start
You can tail the instance’s log file with the following command:
# tail -f [root_install_dir]/data/log/amq.log
At this point, your replicated A-MQ Event Channel is now fully established and running, ready to feed events to the HACEP framework integration application.
4.3.3. EAP 7.0.1
Given that HACEP requires, at a minimum, two clustered instances, production EAP installations will likely make use of domain clustering functionality. However, given the amount of documentation necessary to effectively cover active/passive clustering appropriate for a production environment, the reader is instead encouraged to consult the Red Hat Reference Architecture for Configuring a JBoss EAP 7 Cluster when establishing production-ready environments. To maintain focus on the integration and functionality of the HACEP framework in a proof-of-concept environment, two separate EAP standalone server instances are used herein, with JGroups providing the inter-node communication necessary for HACEP to function.
4.3.3.1. Installer Script
After downloading the EAP 7 installer, execute it using the following command within the directory containing the .jar file:
java -jar jboss-eap-7.0.0-installer.jar
You will be presented with various options aimed at configuring your server instance, each followed by an opportunity to confirm your selection. The following choices should be used when installing each EAP instance:
- language: enter to select default for English or make a choice appropriate to your needs
- license: after reading the End User License Agreement, enter 1 to continue
- installation path: /opt/EAP_7/
- pack installation: enter 0 to accept the default selection of packs to be installed
- administrative user name: enter to accept the default of admin
- administrative user password: enter password! twice to confirm the entry
- runtime environment configuration: no additional options required, enter to accept the default of 0
- automatic installation script: though not necessary, if you’d like to generate an installation script for the settings chosen, you can do so at this time
After completion, you should receive a confirmation message that the installation is done:
[ Console installation done ]
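If you chose to generate an automatic installation script during the final step above, subsequent instances can be installed non-interactively by passing the generated script to the installer. A sketch, assuming the script was saved as auto.xml:
java -jar jboss-eap-7.0.0-installer.jar auto.xml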
4.3.3.2. Patch Application
With basic installation complete, start the server in standalone mode in order to apply the 7.0.1 patch file by issuing the following command:
[root_install_dir]/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
Note that in the line above, an IP binding of 0.0.0.0 is provided for both the public and management interfaces, which in most environments allows the server to listen on all available network interfaces. If you’ve chosen to run installations across various virtual machines, you may also find these parameters necessary to allow configuring the server from a host other than localhost.
Once the server has started, you can access the management UI at http://[ip_address]:9990 with the credentials supplied during installation. From the landing page, select Apply a Patch in the bottom-left to start the patching process. Provide the downloaded patch file when requested, then complete the process. Following completion, stop the standalone server.
Figure 4.1. EAP Management UI Landing Page

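Alternatively, the patch can be applied from the command line via the management CLI while the server is running; a minimal sketch, substituting the path to your downloaded patch file:
# [root_install_dir]/bin/jboss-cli.sh --connect "patch apply [path_to_patch_file].zip"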
4.3.3.3. Resource Adapter Configuration
In order to leverage EAP’s resource adapters to connect and interact with the Event Channel cluster, further configuration is required. While the basic steps are outlined below, should you encounter issues or desire further description of what the process entails, reference the Red Hat JBoss A-MQ Guide for Integrating with JBoss Enterprise Application Platform.
The first step is to edit the file [root_install]/standalone/configuration/standalone.xml as follows. Near the top of the document, add the messaging-activemq extension entry alongside the other enabled extensions, as shown below. Doing so enables the ActiveMQ Artemis messaging system embedded within EAP, which can be used as an internal messaging broker. Since we’ve chosen to utilize an external broker for the Event Channel, however, enabling this extension simply pulls in the various Java EE API dependencies required by the resource adapter we’ll be using.
<extension module="org.wildfly.extension.messaging-activemq"/>
Further down the file, amongst the other subsystem entries, supply an empty entry for the messaging-activemq subsystem as shown below:
<subsystem xmlns="urn:jboss:domain:mail:2.0">
  <mail-session name="default" jndi-name="java:jboss/mail/Default">
    <smtp-server outbound-socket-binding-ref="mail-smtp"/>
  </mail-session>
</subsystem>
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0"/>
<subsystem xmlns="urn:jboss:domain:naming:2.0">
  <remote-naming/>
</subsystem>
Next, locate the ejb3 subsystem entry and add an mdb entry for the connection pool that we’ll be using, as shown below:
<subsystem xmlns="urn:jboss:domain:ejb3:4.0">
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
    </stateless>
    <stateful default-access-timeout="5000" cache-ref="simple" passivation-disabled-cache-ref="simple"/>
    <singleton default-access-timeout="5000"/>
  </session-bean>
  <mdb>
    <resource-adapter-ref resource-adapter-name="activemq-rar.rar"/>
    <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
  </mdb>
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="slsb-strict-max-pool" derive-size="from-worker-pools" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
      <strict-max-pool name="mdb-strict-max-pool" derive-size="from-cpu-count" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
  …
</subsystem>
Finally, add an entry to the resource-adapters subsystem section detailing the system we’re utilizing to communicate with the external Event Channel cluster as shown below:
<subsystem xmlns="urn:jboss:domain:resource-adapters:4.0">
  <resource-adapters>
    <resource-adapter id="activemq-rar.rar">
      <archive>activemq-rar.rar</archive>
      <transaction-support>XATransaction</transaction-support>
      <config-property name="ServerUrl">failover:(tcp://amq1:61616,tcp://amq2:61616,tcp://amq3:61616)?jms.rmIdFromConnectionId=true&amp;maxReconnectAttempts=0</config-property>
      <config-property name="UserName">admin</config-property>
      <config-property name="Password">admin</config-property>
      <config-property name="UseInboundSession">true</config-property>
      <connection-definitions>
        <connection-definition
            class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory"
            jndi-name="java:/HacepConnectionFactory"
            enabled="true"
            pool-name="ConnectionFactory">
          <config-property name="UseInboundSession">false</config-property>
          <xa-pool>
            <min-pool-size>1</min-pool-size>
            <max-pool-size>20</max-pool-size>
            <prefill>false</prefill>
            <is-same-rm-override>false</is-same-rm-override>
          </xa-pool>
          <recovery>
            <recover-credential>
              <user-name>admin</user-name>
              <password>admin</password>
            </recover-credential>
          </recovery>
        </connection-definition>
      </connection-definitions>
      <admin-objects>
        <admin-object class-name="org.apache.activemq.command.ActiveMQQueue"
            jndi-name="java:/queue/hacep-facts"
            use-java-context="true"
            pool-name="hacep-facts">
          <config-property name="PhysicalName">hacep-facts</config-property>
        </admin-object>
      </admin-objects>
    </resource-adapter>
  </resource-adapters>
</subsystem>
Note the ServerUrl value, which takes the form of the failover: protocol, allowing us to use the full A-MQ cluster in a fault-tolerant way. The username, password, and URL should be customized to fit your requirements. The &amp; is also intentional: the ampersand must be escaped so that the URL survives XML parsing of the configuration resource. Further information about the values provided and the changes made for working with a cluster can be found in the Red Hat JBoss A-MQ Guide for Integrating with JBoss Enterprise Application Platform.
The final step in configuring the resource adapter is to provide the archive file referenced in the configuration above. The file can be downloaded from the Red Hat Maven repository. Copy the activemq-rar.rar file into the [root_install]/standalone/deployments/ directory, then start the standalone server again. If everything was configured correctly, you should see information regarding the newly established JNDI connections as part of the server startup output:
13:11:43,390 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-1) WFLYJCA0002: Bound JCA AdminObject [java:/queue/hacep-facts]
13:11:43,390 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-1) WFLYJCA0002: Bound JCA ConnectionFactory [java:/HacepConnectionFactory]
13:11:43,430 INFO [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "activemq-rar.rar" (runtime-name : "activemq-rar.rar")
4.3.3.4. Environment Configuration
Lastly, we need to edit the [root_install]/bin/standalone.conf file as shown below, ensuring that the JVM has ample heap space available, that JGroups has the configuration it needs, and that the resource adapter knows how to deal with the ActiveMQ ObjectMessage caveat seen earlier. Note that the SERIALIZABLE_PACKAGES option can be tailored as previously described.
…
#
# Specify options to pass to the Java VM.
#
if [ "x$JAVA_OPTS" = "x" ]; then
JAVA_OPTS="-Xms2G -Xmx3G -XX:MetaspaceSize=128M -XX:MaxMetaspaceSize=256m
-Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS
-Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS
-Djava.awt.headless=true"
else
echo "JAVA_OPTS already set in environment; overriding default settings with values:
$JAVA_OPTS"
fi
…
JAVA_OPTS="$JAVA_OPTS -Djgroups.tcp.address=192.168.2.200
-Dorg.apache.activemq.SERIALIZABLE_PACKAGES=\"*\""
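On the second EAP node, the same additions apply, except that -Djgroups.tcp.address must point to that node’s own interface address (192.168.2.201 is an assumption for illustration; substitute the actual address):
JAVA_OPTS="$JAVA_OPTS -Djgroups.tcp.address=192.168.2.201 -Dorg.apache.activemq.SERIALIZABLE_PACKAGES=\"*\""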
4.4. Conclusion
At this point, your environment has been fully established, configured, and made ready for deployment of any application integrating the HACEP framework. Assuming your ZooKeeper ensemble and A-MQ cluster are running and functional, you should now be ready to take advantage of the JNDI connection to the Event Channel via standard Resource lookup:
@Resource(lookup = "java:/HacepConnectionFactory")
private ConnectionFactory connectionFactory;
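As a concrete illustration, the following sketch shows a stateless session bean that sends a serializable fact to the hacep-facts queue configured earlier. The class and method names are illustrative rather than part of the framework; the JNDI names match the resource adapter configuration above, and the payload class must belong to one of the packages listed in SERIALIZABLE_PACKAGES:

import java.io.Serializable;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class FactProducer {

    // Connection factory bound by the resource adapter configured above
    @Resource(lookup = "java:/HacepConnectionFactory")
    private ConnectionFactory connectionFactory;

    // Admin object bound by the resource adapter configured above
    @Resource(lookup = "java:/queue/hacep-facts")
    private Queue factsQueue;

    // Wraps the fact in an ObjectMessage and sends it to the Event Channel
    public void send(Serializable fact) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            ObjectMessage message = session.createObjectMessage(fact);
            MessageProducer producer = session.createProducer(factsQueue);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}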
