6.5. Clustering on Red Hat JBoss EAP

The JAR installer provides a sample setup for installing Red Hat JBoss BPM Suite in clustered mode. You can configure clustering with the deployable ZIP for Red Hat JBoss EAP as well.

6.5.1. Clustering Using the JAR Installer

Note

The JAR installer provides a sample setup only. You must adjust the configuration to suit your project's needs.

Using the JAR installer, described in Section 3.1, “Red Hat JBoss BPM Suite Installer Installation”, you can set up a basic clustering configuration of Red Hat JBoss BPM Suite.

The automatic configuration creates:

  • A ZooKeeper ensemble with three ZooKeeper nodes
  • A Helix cluster
  • Two Quartz data sources (one managed, one unmanaged)

This Red Hat JBoss BPM Suite setup consists of two EAP nodes that share a Maven repository, use Quartz for coordinating timed tasks, and have business-central.war, dashbuilder.war, and kie-server.war deployed. To customize the setup to fit your scenario, or to use clustering with the deployable ZIP, see Section 6.5.4, “Custom Configuration (Deployable ZIP)” and the clustering documentation of your container.

Follow the installation process described in Section 3.1.1, “Installing Red Hat JBoss BPM Suite Using Installer”.

  1. In Configure runtime environment, select Install clustered configuration and click Next.
  2. Select the JDBC vendor for your database.
  3. Provide the corresponding driver JAR(s):

    • Select one or more files on the filesystem.
    • Provide one or more URLs. The installer downloads the files automatically.

    The installer copies the JAR(s) into EAP_HOME/modules and creates the corresponding module.xml file.
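
    For illustration, a generated module.xml for a PostgreSQL driver typically resembles the following sketch; the module name and JAR file name depend on the driver you provided:

    <?xml version="1.0" encoding="UTF-8"?>
    <module xmlns="urn:jboss:module:1.1" name="org.postgresql">
      <!-- The driver JAR copied by the installer -->
      <resources>
        <resource-root path="postgresql-9.4.1212.jar"/>
      </resources>
      <!-- Standard dependencies for JDBC driver modules -->
      <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
      </dependencies>
    </module>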

    Figure 6.2. JDBC Driver Setup
  4. Enter the URL, username, and password for accessing the database used by Quartz.

    The installer creates:

    • The Quartz definition file in EAP_HOME/domain/configuration/quartz-definition.properties
    • Two Quartz data sources in EAP_HOME/domain/domain.xml

      Edit the domain.xml file to customize the setup.

      Note

      During the installation, the Quartz DDL scripts are run on the database selected in this step. The scripts make the changes needed for Quartz to operate, such as adding tables. You can view the scripts in EAP_HOME/jboss-brms-bpmsuite-6.3-supplementary-tools/ddl-scripts. No modifications should be necessary.
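
      If you need to apply the scripts manually, for example against a freshly created database, you can run them with your database's client tools. A sketch for PostgreSQL (the script file name is illustrative):

      psql -h localhost -U bpms -d jbpm -f quartz_tables_postgres.sql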

      Figure 6.3. Quartz Database Configuration
  5. Click Next to initiate the installation.

    Important

    When using the JAR installer, the WAR archives are automatically created from the applications in EAP_HOME/standalone/deployments/. Additional disk space is therefore necessary during the installation, because the applications exist in both compressed and uncompressed form.

    Three ZooKeeper instances are created in EAP_HOME/jboss-brms-bpmsuite-6.3-supplementary-tools/ in directories zookeeper-one, zookeeper-two, and zookeeper-three.
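
    For reference, the ensemble definition in each instance's conf/zoo.cfg resembles the following sketch; each instance uses its own clientPort and dataDir, and the ports and paths shown are illustrative:

    # Basic timing and data directory settings for this instance
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=EAP_HOME/jboss-brms-bpmsuite-6.3-supplementary-tools/zookeeper-one/data
    clientPort=2181
    # Ensemble member list shared by all three instances
    server.1=localhost:2888:3888
    server.2=localhost:2889:3889
    server.3=localhost:2890:3890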

After the installation finishes, do not start the server from the installer. To make Apache Helix aware of the cluster nodes and the Apache ZooKeeper instances, and to start the cluster:

  1. Change into EAP_HOME/jboss-brms-bpmsuite-6.3-supplementary-tools/helix-core.
  2. Execute the launch script:

    On UNIX systems:

    ./startCluster.sh

    On Windows:

    startCluster.bat
  3. Change into EAP_HOME/bin.
  4. Execute the following script to start Red Hat JBoss EAP:

    On UNIX systems:

    ./domain.sh

    On Windows:

    domain.bat

6.5.2. Starting a Cluster

The startCluster.sh script in EAP_HOME/jboss-brms-bpmsuite-6.3-supplementary-tools/helix-core initializes and starts the cluster. Once the cluster is initialized, further invocations of startCluster.sh result in errors. If you installed the Red Hat JBoss BPM Suite cluster with the installer:

  • ZOOKEEPER_HOME is located in EAP_HOME/jboss-brms-bpmsuite-6.3-supplementary-tools/zookeeper-NUMBER
  • HELIX_HOME is located in EAP_HOME/jboss-brms-bpmsuite-6.3-supplementary-tools/helix-core

To start a cluster:

  1. Start all your ZooKeeper servers, for example:

    On UNIX systems:

    ./ZOOKEEPER_HOME_ONE/bin/zkServer.sh start &
    ./ZOOKEEPER_HOME_TWO/bin/zkServer.sh start &
    ./ZOOKEEPER_HOME_THREE/bin/zkServer.sh start &

    On Windows:

    ZOOKEEPER_HOME_ONE/bin/zkServer.cmd start
    ZOOKEEPER_HOME_TWO/bin/zkServer.cmd start
    ZOOKEEPER_HOME_THREE/bin/zkServer.cmd start
  2. Make the Helix Controller aware of the ZooKeeper instance(s). For example:

    ./HELIX_HOME/bin/run-helix-controller.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --cluster bpms-cluster 2>&1 > /tmp/controller.log &
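
    Optionally, verify that each ZooKeeper server is running before proceeding, for example:

    ./ZOOKEEPER_HOME_ONE/bin/zkServer.sh status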
  3. Change into EAP_HOME/bin and start Red Hat JBoss EAP:

    On UNIX systems:

    ./domain.sh

    On Windows:

    domain.bat
  4. You can now access your Red Hat JBoss BPM Suite nodes. For example, if you created the Red Hat JBoss BPM Suite cluster by using the installer, you can access your nodes at:

    http://localhost:8080/business-central
    http://localhost:8230/business-central

6.5.3. Stopping a Cluster

To stop your cluster, stop the components in the reverse order of starting them:

  1. Stop the instance of Red Hat JBoss EAP, or the container you are using.
  2. Stop the Helix Controller process.

    On UNIX systems, find the PID of the process:

    ps aux | grep HelixControllerMain

    Once you have the PID, terminate the process:

    kill -15 <pid of HelixControllerMain>
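
    Alternatively, on systems that provide pkill, you can match the process by name in one step:

    pkill -15 -f HelixControllerMain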

    On Windows, use the Task Manager to stop the process.

  3. Stop the ZooKeeper server(s). For each server instance, execute:

    On UNIX systems:

    ./ZOOKEEPER_HOME_ONE/bin/zkServer.sh stop
    ./ZOOKEEPER_HOME_TWO/bin/zkServer.sh stop
    ./ZOOKEEPER_HOME_THREE/bin/zkServer.sh stop

    On Windows:

    ZOOKEEPER_HOME_ONE/bin/zkServer.cmd stop
    ZOOKEEPER_HOME_TWO/bin/zkServer.cmd stop
    ZOOKEEPER_HOME_THREE/bin/zkServer.cmd stop

6.5.4. Custom Configuration (Deployable ZIP)

When using Red Hat JBoss EAP clustering, a single Red Hat JBoss EAP domain controller exists, and the other Red Hat JBoss EAP slaves connect to it as management users. You can deploy Business Central and dashbuilder as a management user on the domain controller, and the WAR deployments are then distributed to the other members of the Red Hat JBoss EAP cluster.

To configure clustering on Red Hat JBoss EAP 6, do the following:

  1. Configure ZooKeeper and Helix according to Section 6.6.1, “Setting a Cluster”.
  2. Configure Quartz according to Section 6.6.3, “Setting Quartz”.
  3. Install the JDBC driver. See the Install a JDBC Driver with the Management Console chapter of the Red Hat JBoss EAP Administration and Configuration Guide.
  4. Configure the data source for the server. Based on the mode you use, open domain.xml or standalone.xml, located at EAP_HOME/MODE/configuration.
  5. Locate the full profile, and do the following:

    1. Add the definition of the main data source used by Red Hat JBoss BPM Suite.

      Example 6.1. PostgreSQL Data Source Defined as Main Red Hat JBoss BPM Suite Data Source

      <datasource jndi-name="java:jboss/datasources/psbpmsDS"
                  pool-name="postgresDS" enabled="true" use-java-context="true">
        <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
        <driver>postgres</driver>
        <security>
          <user-name>bpms</user-name>
          <password>bpms</password>
        </security>
      </datasource>
    2. Add the definition of the data source for the Quartz service.

      Example 6.2. PostgreSQL Data Source Defined as Quartz Data Source

      <datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
                  pool-name="quartzNotManagedDS" enabled="true" use-java-context="true">
        <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
        <driver>postgres</driver>
        <security>
          <user-name>bpms</user-name>
          <password>bpms</password>
        </security>
      </datasource>
    3. Define the data source driver.

      Example 6.3. PostgreSQL Driver Definition

      <driver name="postgres" module="org.postgresql">
        <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
      </driver>
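
    Alternatively, you can create the same resources with the jboss-cli management tool instead of editing the XML by hand. A sketch for domain mode, using the names from the examples above:

    ./jboss-cli.sh --connect
    # Register the JDBC driver on the full profile
    /profile=full/subsystem=datasources/jdbc-driver=postgres:add(driver-name=postgres,driver-module-name=org.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
    # Create and enable the main Red Hat JBoss BPM Suite data source
    /profile=full/subsystem=datasources/data-source=postgresDS:add(jndi-name=java:jboss/datasources/psbpmsDS,connection-url=jdbc:postgresql://localhost:5432/jbpm,driver-name=postgres,user-name=bpms,password=bpms)
    /profile=full/subsystem=datasources/data-source=postgresDS:enable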
  6. Configure individual server nodes in the main-server-group element in the EAP_HOME/domain/configuration/host.xml file with the properties listed in Cluster Node Properties below.

    When configuring a Red Hat JBoss EAP cluster with Apache ZooKeeper, you can have a different number of Red Hat JBoss EAP nodes than Apache ZooKeeper nodes. However, having the same node count for both ZooKeeper and Red Hat JBoss EAP is considered best practice.

    Cluster Node Properties

    jboss.node.name
      Description: A node name unique in a Red Hat JBoss BPM Suite cluster.
      Values: String
      Default: N/A

    org.uberfire.cluster.id
      Description: The name of the Helix cluster, for example kie-cluster. You must set this property to the same value as defined in the Helix Controller.
      Values: String
      Default: N/A

    org.uberfire.cluster.local.id
      Description: The unique ID of the Helix cluster node. Note that ':' is replaced with '_', for example node1_12345.
      Values: String
      Default: N/A

    org.uberfire.cluster.vfs.lock
      Description: The name of the resource defined on the Helix cluster, for example kie-vfs.
      Values: String
      Default: N/A

    org.uberfire.cluster.zk
      Description: The location of the ZooKeeper servers.
      Values: String of the form host1:port1,host2:port2,host3:port3,…
      Default: N/A

    org.uberfire.metadata.index.dir
      Description: The location of the .index directory, which Apache Lucene uses when indexing and searching.
      Values: Path
      Default: Current working directory

    org.uberfire.nio.git.daemon.host
      Description: If the Git daemon is enabled, it uses this property as the localhost identifier.
      Values: URL
      Default: localhost

    org.uberfire.nio.git.daemon.hostport
      Description: When running in a virtualized environment, the host's outside port number for the Git daemon.
      Values: Port number
      Default: 9418

    org.uberfire.nio.git.daemon.port
      Description: If the Git daemon is enabled, it uses this property as the port number.
      Values: Port number
      Default: 9418

    org.uberfire.nio.git.dir
      Description: The location of the .niogit directory. Change the value, for example, for backup purposes.
      Values: Path
      Default: Current working directory

    org.uberfire.nio.git.ssh.host
      Description: If the SSH daemon is enabled, it uses this property as the localhost identifier.
      Values: URL
      Default: localhost

    org.uberfire.nio.git.ssh.hostport
      Description: When running in a virtualized environment, the host's outside port number for the SSH daemon.
      Values: Port number
      Default: 8003

    org.uberfire.nio.git.ssh.port
      Description: If the SSH daemon is enabled, it uses this property as the port number.
      Values: Port number
      Default: 8001

    Example 6.4. Cluster nodeOne Configuration

    <system-properties>
      <property name="org.uberfire.nio.git.dir" value="/tmp/bpms/nodeone"
                boot-time="false"/>
      <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
      <property name="org.uberfire.cluster.id" value="bpms-cluster" boot-time="false"/>
      <property name="org.uberfire.cluster.zk"
                value="server1:2181,server2:2182,server3:2183" boot-time="false"/>
      <property name="org.uberfire.cluster.local.id" value="nodeOne_12345"
                boot-time="false"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.host" value="nodeOne"/>
      <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.hostport" value="9418"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8003" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.hostport" value="8003" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.host" value="nodeOne"/>
      <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodeone"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodeone"
                boot-time="false"/>
      <property name="org.quartz.properties"
                value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    </system-properties>

    Example 6.5. Cluster nodeTwo Configuration

    <system-properties>
      <property name="org.uberfire.nio.git.dir" value="/tmp/bpms/nodetwo"
                boot-time="false"/>
      <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
      <property name="org.uberfire.cluster.id" value="bpms-cluster" boot-time="false"/>
      <property name="org.uberfire.cluster.zk"
                value="server1:2181,server2:2182,server3:2183" boot-time="false"/>
      <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346"
                boot-time="false"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.host" value="nodeTwo" />
      <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.hostport" value="9419"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8004" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.hostport" value="8004" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.host" value="nodeTwo" />
      <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodetwo"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodetwo"
                boot-time="false"/>
      <property name="org.quartz.properties"
                value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    </system-properties>

    Example 6.6. Cluster nodeThree Configuration

    <system-properties>
      <property name="org.uberfire.nio.git.dir" value="/tmp/bpms/nodethree"
                boot-time="false"/>
      <property name="jboss.node.name" value="nodeThree" boot-time="false"/>
      <property name="org.uberfire.cluster.id" value="bpms-cluster" boot-time="false"/>
      <property name="org.uberfire.cluster.zk"
                value="server1:2181,server2:2182,server3:2183" boot-time="false"/>
      <property name="org.uberfire.cluster.local.id" value="nodeThree_12347"
                boot-time="false"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.host" value="nodeThree" />
      <property name="org.uberfire.nio.git.daemon.port" value="9420" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.hostport" value="9420"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8005" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.hostport" value="8005" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.host" value="nodeThree" />
      <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodethree"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodethree"
                boot-time="false"/>
      <property name="org.quartz.properties"
                value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    </system-properties>
  7. Add management users as instructed in the Administration and Configuration Guide for Red Hat JBoss EAP, and application users as instructed in the Red Hat JBoss BPM Suite Administration and Configuration Guide.
  8. Change to EAP_HOME/bin and start the application server in domain mode:

    On UNIX systems:

    ./domain.sh

    On Windows:

    ./domain.bat
  9. Check that the nodes are available.

Deploy the Business Central application to your servers:

  1. Change the predefined persistence of the application to the required database (PostgreSQL in this example). In persistence.xml, change the following:

    1. The jta-data-source name to the data source defined on the application server (java:jboss/datasources/psbpmsDS).
    2. The Hibernate dialect to match the data source dialect (org.hibernate.dialect.PostgreSQLDialect).
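
    After the change, the relevant part of persistence.xml resembles the following sketch; the persistence unit name may differ in your version, and unrelated elements are omitted:

    <persistence-unit name="org.jbpm.domain" transaction-type="JTA">
      ...
      <jta-data-source>java:jboss/datasources/psbpmsDS</jta-data-source>
      <properties>
        <property name="hibernate.dialect"
                  value="org.hibernate.dialect.PostgreSQLDialect"/>
        ...
      </properties>
    </persistence-unit>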
  2. Log in to the Administration Console of your domain as the management user and add the new deployments using the Runtime view of the console. Once a deployment is added to the domain, assign it to the correct server group (main-server-group).
Note

It is important that users explicitly check deployment unit readiness on every cluster member.

When a deployment unit is created on a cluster node, it takes some time before it is distributed among all cluster members. Deployment status can be checked using the UI or the REST API; however, if the query goes to the node where the deployment was originally issued, the answer is 'deployed' even if other members have not yet received the unit. Any request targeting this deployment unit that is sent to a different cluster member fails with DeploymentNotFoundException.
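
One way to perform this check is to query the deployment REST endpoint on each cluster member in turn. A sketch, assuming direct access to the individual nodes; substitute your host names, credentials, and deployment unit ID:

curl -u USER:PASSWORD http://nodeOne:8080/business-central/rest/deployment/org.example:project:1.0
curl -u USER:PASSWORD http://nodeTwo:8230/business-central/rest/deployment/org.example:project:1.0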

6.5.5. Clustering the Intelligent Process Server

The Intelligent Process Server is a lightweight and scalable component. Clustering it provides many benefits. For example:

  • You can partition your resources based on deployed containers.
  • You can scale individual instances independently from each other.
  • You can distribute the cluster across a network and manage it with a single controller.

    • The controller can be clustered into a ZooKeeper ensemble.
  • No further components are required.

The basic runtime cluster consists of:

  • Multiple Red Hat JBoss EAP instances with Intelligent Process Server
  • A controller instance with Business Central
Figure: Intelligent Process Server Cluster Architecture

This section describes how to start an Intelligent Process Server cluster on Red Hat JBoss EAP 6.4.

Creating an Intelligent Process Server Cluster

  1. Change into CONTROLLER_HOME/bin.
  2. Add a user with the kie-server role:

    $ ./add-user.sh -a --user kieserver --password kieserver1! --role kie-server
  3. Start your controller:

    $ ./standalone.sh
  4. Change into SERVER_1_HOME.
  5. Deploy kie-server.war. Clustered servers do not need business-central.war or other applications.
  6. See the <servers> part of the following host.xml as an example of required properties:

    <server name="server-one" group="main-server-group">
     <system-properties>
      <property name="org.kie.server.location" value="http://localhost:8180/kie-server/services/rest/server"></property> 1
      <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"></property> 2
      <property name="org.kie.server.controller.user" value="kieserver"></property> 3
      <property name="org.kie.server.controller.pwd" value="kieserver1!"></property> 4
      <property name="org.kie.server.id" value="HR"></property> 5
     </system-properties>
     <socket-bindings port-offset="100"/>
    </server>
    
    <server name="server-two" group="main-server-group" auto-start="true">
     <system-properties>
      <property name="org.kie.server.location" value="http://localhost:8230/kie-server/services/rest/server"></property>
      <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"></property>
      <property name="org.kie.server.controller.user" value="kieserver"></property>
      <property name="org.kie.server.controller.pwd" value="kieserver1!"></property>
      <property name="org.kie.server.id" value="HR"></property>
     </system-properties>
     <socket-bindings port-offset="150"/>
    </server>
    1. org.kie.server.location: URL of the server instance.
    2. org.kie.server.controller: Comma-separated list of the controller URL(s).
    3. org.kie.server.controller.user: Username you created for controller authentication. Defaults to kieserver.
    4. org.kie.server.controller.pwd: Password for controller authentication. Defaults to kieserver1!.
    5. org.kie.server.id: Server identifier that corresponds to the template ID defined by the controller instance. Give the same ID to multiple server instances that represent one template.

    The example above is defined for Red Hat JBoss EAP domain mode. For a full list of bootstrap switches, see the Intelligent Process Server Setup section of the Red Hat JBoss BPM Suite User Guide.

  7. Repeat the previous step for as many servers as you need. To start Red Hat JBoss EAP in domain mode, execute:

    $ SERVER_HOME/bin/domain.sh

After connecting the servers to your controller, check the controller log:

13:54:40,315 INFO  [org.kie.server.controller.impl.KieServerControllerImpl] (http-localhost/127.0.0.1:8080-1) Server http://localhost:8180/kie-server/services/rest/server connected to controller
13:54:40,331 INFO  [org.kie.server.controller.impl.KieServerControllerImpl] (http-localhost/127.0.0.1:8080-2) Server http://localhost:8230/kie-server/services/rest/server connected to controller
13:54:40,348 INFO  [org.kie.server.controller.rest.RestKieServerControllerImpl] (http-localhost/127.0.0.1:8080-1) Server with id 'HR' connected
13:54:40,348 INFO  [org.kie.server.controller.rest.RestKieServerControllerImpl] (http-localhost/127.0.0.1:8080-2) Server with id 'HR' connected
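
You can also query each Intelligent Process Server instance directly; the server info REST endpoint reports the server ID and capabilities (credentials as created earlier):

curl -u kieserver:kieserver1! http://localhost:8180/kie-server/services/rest/server
curl -u kieserver:kieserver1! http://localhost:8230/kie-server/services/rest/server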

Alternatively, to verify the connections in the controller's Business Central:

  1. Log into the controller Business Central.
  2. Click Deploy → Rule Deployments.
  3. View the remote servers connected to each template.