Chapter 5. Clustering
- GIT repository: a virtual-file-system (VFS) repository that holds the business assets, so that all cluster nodes use the same repository.
- Web applications: the web applications need to be clustered so that the nodes share the same runtime data. For instructions on clustering the applications, refer to the container clustering documentation.
GIT Repository Clustering Mechanism
- Apache ZooKeeper brings all the parts together.
- Apache Helix is the cluster-management component that registers all cluster details (the cluster itself, its nodes, and its resources).
- The UberFire framework provides the backbone of the web applications.
- Setting up the cluster itself using Zookeeper and Helix
- Configuring clustering on your container (this documentation provides only clustering instructions for Red Hat JBoss EAP 6)
Clustering Maven Repositories
To keep the Maven repositories on the individual nodes synchronized, use a file synchronization tool such as rsync.
5.1. Clustering on JBoss EAP
5.1.1. Clustering using JAR Installer
Note
The JAR installer sets up a clustered environment with business-central.war, dashbuilder.war, and kie-server.war deployed. To customize the setup to fit your scenario, or to use clustering with the deployable ZIP, see Section 5.1.2, “Custom Configuration (Deployable ZIP)”. You can also find more information in the JBoss EAP documentation.
Select JDBC provider
On this screen, select the JDBC provider from the list. You need to provide the corresponding JDBC driver JAR(s) in one of these ways:
- Select one or more files on the file system.
- Provide one or more URLs; the installer downloads the files automatically.
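For reference, the module that the installer generates under $EAP_HOME/modules consists of the driver JAR plus a module.xml descriptor. A minimal sketch, assuming a PostgreSQL driver (the module name, JAR file name, and dependency list are illustrative, not what the installer necessarily produces):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative JDBC driver module descriptor under $EAP_HOME/modules;
     module name, JAR name, and dependencies are examples only. -->
<module xmlns="urn:jboss:module:1.1" name="com.postgresql">
    <resources>
        <resource-root path="postgresql-9.4-1201.jdbc41.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```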
The installer then copies the JAR(s) into the appropriate location under the $EAP_HOME/modules directory, where a corresponding module.xml file is also created automatically.
Configure Quartz connection
On the next screen, provide the connection data for the Quartz database. The installer automatically creates the Quartz definition file ($EAP_HOME/domain/configuration/quartz-definition.properties) and two Quartz datasources in the domain configuration file $EAP_HOME/domain/domain.xml. You can edit these files after the installation has finished.
Note
During the installation, Quartz DDL scripts are run on the database selected in this step. These scripts make the changes Quartz needs to operate (adding tables, and so on) and can be found in $EAP_HOME/jboss-brms-bpmsuite-6.2-supplementary-tools/ddl-scripts for reference (you do not need to modify them in any way).
- Click Next to initiate the installation.
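The generated quartz-definition.properties follows the standard Quartz clustered JDBC job-store format. A sketch of what such a file typically contains (the scheduler and datasource names are illustrative, not necessarily what the installer writes):

```
# Illustrative clustered Quartz configuration; names are examples only.
org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.jobStore.dataSource = managedDS
org.quartz.jobStore.nonManagedTXDataSource = notManagedDS
```

The two datasource entries correspond to the two Quartz datasources the installer adds to domain.xml; `isClustered = true` is what lets the Quartz instances on the individual nodes coordinate through the shared database.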
Important
When using the JAR installer, the war archives are automatically created from the applications residing in $EAP_HOME/standalone/deployments/. That means additional space is necessary, as the applications exist in both uncompressed and compressed state in storage during the installation.
Three ZooKeeper instances are automatically created in $EAP_HOME/jboss-brms-bpmsuite-6.2-supplementary-tools/ (directory names zookeeper-one, zookeeper-two, and zookeeper-three).
In the directory $EAP_HOME/jboss-brms-bpmsuite-6.2-supplementary-tools/helix-core, you can find the default Helix configuration and the scripts that launch the cluster: startCluster.sh for UNIX and startCluster.bat for Windows.
After the installation finishes, DO NOT select to run the server immediately. You first need to start the cluster by moving to the directory $EAP_HOME/jboss-brms-bpmsuite-6.2-supplementary-tools/helix-core and executing the aforementioned launch script:
On UNIX systems: ./startCluster.sh
On Windows: ./startCluster.bat
This script launches the Helix cluster and the ZooKeeper instances. Only after that, start the EAP server in domain mode by moving to the directory $EAP_HOME/bin and running:
On UNIX systems: ./domain.sh
On Windows: ./domain.bat
5.1.2. Custom Configuration (Deployable ZIP)
- Configure ZooKeeper and Helix according to Section 5.2.1, “Setting up a Cluster”.
- Configure the individual server nodes in the main-server-group element in the $EAP_HOME/domain/configuration/host.xml file with the properties defined in Table 5.1, “Cluster node properties”.
Note that when configuring a JBoss EAP cluster with ZooKeeper, the number of JBoss EAP nodes may differ from the number of ZooKeeper nodes (keeping in mind that ZooKeeper should have an odd number of nodes). However, having the same node count for both ZooKeeper and JBoss EAP is considered best practice.
Table 5.1. Cluster node properties
Property name | Value | Description
org.uberfire.nio.git.dir | /home/jbrm/node[N]/repo | GIT (VFS) repository location on node[N]
jboss.node.name | nodeOne | Node name, unique within the cluster
org.uberfire.cluster.id | brms-cluster | Helix cluster name
org.uberfire.cluster.zk | server1:2181 | ZooKeeper location
org.uberfire.cluster.local.id | nodeOne_12345 | Unique ID of the Helix cluster node. Note that ":" is replaced with "_".
org.uberfire.cluster.vfs.lock | vfs-repo | Name of the resource defined on the Helix cluster
org.uberfire.nio.git.daemon.port | 9418 | Port used by the VFS repository to accept client connections. The port must be unique for each cluster member.
org.uberfire.metadata.index.dir | /home/jbrm/node[N]/index | Location where the search index is created (maintained by Apache Lucene)
org.uberfire.nio.git.ssh.port | 8003 | Unique port number for SSH access to the GIT repository for a cluster running on physical machines
org.uberfire.nio.git.daemon.host | nodeOne | Name of the daemon host machine in a physical cluster
org.uberfire.nio.git.ssh.host | nodeOne | Name of the SSH host machine in a physical cluster
org.uberfire.nio.git.ssh.hostport and org.uberfire.nio.git.daemon.hostport | 8003 and 9418 | In a virtualized environment, the outside ports to be used
Example 5.1. Cluster nodeOne configuration
<system-properties>
  <property name="org.uberfire.nio.git.dir" value="/tmp/brms/nodeone" boot-time="false"/>
  <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
  <property name="org.uberfire.cluster.id" value="brms-cluster" boot-time="false"/>
  <property name="org.uberfire.cluster.zk" value="server1:2181,server2:2181,server3:2181" boot-time="false"/>
  <property name="org.uberfire.cluster.local.id" value="nodeOne_12345" boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
  <property name="org.uberfire.metadata.index.dir" value="/tmp/jbrm/nodeone" boot-time="false"/>
  <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
  <property name="org.uberfire.nio.git.ssh.port" value="8003" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.host" value="nodeOne" />
  <property name="org.uberfire.nio.git.ssh.host" value="nodeOne" />
  <property name="org.uberfire.nio.git.ssh.hostport" value="8003" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.hostport" value="9418" boot-time="false"/>
</system-properties>
Example 5.2. Cluster nodeTwo configuration
<system-properties>
  <property name="org.uberfire.nio.git.dir" value="/tmp/brms/nodetwo" boot-time="false"/>
  <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
  <property name="org.uberfire.cluster.id" value="brms-cluster" boot-time="false"/>
  <property name="org.uberfire.cluster.zk" value="server1:2181,server2:2181,server3:2181" boot-time="false"/>
  <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346" boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
  <property name="org.uberfire.metadata.index.dir" value="/tmp/jbrm/nodetwo" boot-time="false"/>
  <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
  <property name="org.uberfire.nio.git.ssh.port" value="8003" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.host" value="nodeTwo" />
  <property name="org.uberfire.nio.git.ssh.host" value="nodeTwo" />
  <property name="org.uberfire.nio.git.ssh.hostport" value="8003" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.hostport" value="9418" boot-time="false"/>
</system-properties>
Example 5.3. Cluster nodeThree configuration
<system-properties>
  <property name="org.uberfire.nio.git.dir" value="/tmp/brms/nodethree" boot-time="false"/>
  <property name="jboss.node.name" value="nodeThree" boot-time="false"/>
  <property name="org.uberfire.cluster.id" value="brms-cluster" boot-time="false"/>
  <property name="org.uberfire.cluster.zk" value="server1:2181,server2:2181,server3:2181" boot-time="false"/>
  <property name="org.uberfire.cluster.local.id" value="nodeThree_12347" boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
  <property name="org.uberfire.metadata.index.dir" value="/tmp/jbrm/nodethree" boot-time="false"/>
  <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodethree" boot-time="false"/>
  <property name="org.uberfire.nio.git.ssh.port" value="8003" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.host" value="nodeThree" />
  <property name="org.uberfire.nio.git.ssh.host" value="nodeThree" />
  <property name="org.uberfire.nio.git.ssh.hostport" value="8003" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.hostport" value="9418" boot-time="false"/>
</system-properties>
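Before starting the domain, you can check that the ZooKeeper ensemble referenced by org.uberfire.cluster.zk and the Git daemon and SSH ports are reachable. A minimal sketch of such a session, using the illustrative host names from the examples (a healthy ZooKeeper server answers the standard "ruok" four-letter command with "imok"):

```
$ echo ruok | nc server1 2181
imok
$ nc -z nodeOne 9418 && echo "Git daemon on nodeOne is reachable"
Git daemon on nodeOne is reachable
$ nc -z nodeOne 8003 && echo "SSH access on nodeOne is reachable"
SSH access on nodeOne is reachable
```

Repeat the checks for each ZooKeeper server and each cluster node before proceeding.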
- Add management users as instructed in the Administration and Configuration Guide for Red Hat JBoss EAP, and application users as instructed in the Red Hat JBoss BRMS Administration and Configuration Guide.
- Move to the directory $EAP_HOME/bin and start the application server in domain mode:
On UNIX systems: ./domain.sh
On Windows: ./domain.bat
- Check that the nodes are available.
- Log on as the management user to the Administration console of your domain and add the new deployments using the Runtime view of the console. Once a deployment is added to the domain, assign it to the correct server group (main-server-group).
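The same deployment can also be performed from the JBoss EAP management CLI instead of the Administration console. A minimal sketch of such a session; the WAR path and the controller address are illustrative:

```
$ $EAP_HOME/bin/jboss-cli.sh --connect --controller=localhost:9999
[domain@localhost:9999 /] deploy /path/to/business-central.war --server-groups=main-server-group
```

Assigning the deployment to main-server-group makes it available on every server in that group, which is what keeps the cluster members consistent.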
Note
A deployment unit is available only on the cluster node where it has been deployed. Any request targeting this deployment unit sent to a different cluster member fails with DeploymentNotFoundException.
