Chapter 5. Testing Installation

5.1. Starting Server

Note

If you installed Red Hat JBoss BPM Suite using the generic deployable package on Red Hat Java Web Server, Section 2.3, “Generic Deployable Bundle Installation” contains the instructions for starting the server. You can ignore the following discussion.

Once the Red Hat JBoss BPM Suite server is installed on Red Hat JBoss EAP, you can run it either in the standalone or the domain mode.

5.1.1. Standalone Mode

Note

If you chose the deployable ZIP package for Red Hat JBoss EAP, the configuration steps are described in Section 2.2, “Installing Red Hat JBoss BPM Suite on Red Hat JBoss Enterprise Application Platform”.

The default startup script, standalone.sh, is optimized for performance. To run your server in the performance mode, do the following:

  1. On the command line, change to the EAP_HOME/bin/ directory.
  2. In a Unix environment, run:

    ./standalone.sh

    In a Windows environment, run:

    standalone.bat

5.1.2. Domain Mode

If you used the JAR installer, referenced in Section 2.1, “Red Hat JBoss BPM Suite Installer Installation”, Red Hat JBoss BPM Suite is already configured for running in the domain mode.

Note

If you chose the deployable ZIP package for Red Hat JBoss EAP, the configuration steps for domain mode are described in Section 2.2, “Installing Red Hat JBoss BPM Suite on Red Hat JBoss Enterprise Application Platform”.

To start Red Hat JBoss BPM Suite in the domain mode, perform the following steps:

  1. On the command line, change to the EAP_HOME/bin/ directory.
  2. In a Unix environment, run:

    ./domain.sh

    In a Windows environment, run:

    domain.bat

5.2. Enabling the Security Manager

Red Hat JBoss BPM Suite ships with a standard security policy, located in the kie.policy file. The location of this file varies depending on your distribution. To use the Kie policy with the Java Security Manager, the application server must have its security manager activated. For Red Hat JBoss EAP 6.x or later, activate the security manager by specifying a valid security.policy file in the java.security.policy property and a valid kie.policy file in the kie.security.policy property.

This applies to all containers, even when using the rule and process engine in embedded mode.
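For example, an embedded application started directly with the java command can enable the security manager through the same properties. This is a minimal sketch; the application JAR name and the policy file paths are placeholders that you must adapt to your environment:

# Sketch only: enable the Java Security Manager for an embedded application.
# The paths and myapp.jar are placeholders.
java -Djava.security.manager \
     -Djava.security.policy=/path/to/security.policy \
     -Dkie.security.policy=/path/to/kie.policy \
     -jar myapp.jar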

Note

If you installed Red Hat JBoss BPM Suite using the installer, an option to apply the security policy is given to you at the time of installation. Applying the security policy using the installer will modify the standalone.conf file to include the security.policy and kie.policy security policies in the JBOSS_HOME/bin folder. These policies will be enabled at runtime using standalone.sh.
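The effect of this change is, in essence, a set of additional JVM options in standalone.conf. The following is only a sketch of what such options can look like; the exact lines written by the installer may differ, and the policy file paths are assumed to be relative to the JBOSS_HOME/bin directory:

# Assumed sketch; verify against the standalone.conf generated by the installer.
JAVA_OPTS="$JAVA_OPTS -Djava.security.manager -Djava.security.policy=security.policy -Dkie.security.policy=kie.policy"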

Enabling Security Manager in Red Hat JBoss EAP 6

Red Hat JBoss BPM Suite provides standalone-secure.sh, a separate script that is optimized for security. The script applies a security policy by default that protects against a known security vulnerability.

The standalone-secure.sh script is only available when using the Red Hat JBoss EAP Deployable package.

Important

It is recommended to use the standalone-secure.sh script in production environments.

Be aware that using a security manager imposes a significant performance penalty. Weigh the tradeoff between security and performance according to your individual circumstances. See Section 5.2.1, “Java Security Manager and Performance Management”.

To run your server in the secure mode, do the following:

  1. On the command line, change to the EAP_HOME/bin/ directory.
  2. In a Unix environment, run:

    ./standalone-secure.sh

    In a Windows environment, run:

    standalone-secure.bat

Enabling Security Manager in Red Hat JBoss EAP 7

If you are using Red Hat JBoss EAP in version 7, the standalone-secure.sh script is no longer available. To enable the security manager, start the server with the -secmgr and -Dkie.security.policy=./kie.policy flags. For example:

./standalone.sh -secmgr -Dkie.security.policy=./kie.policy

For further information about Java Security Manager in Red Hat JBoss EAP 7, see chapter Java Security Manager of Red Hat JBoss Enterprise Application Platform: How to Configure Server Security.

Enabling Security Manager for an embedded application

Complete the following procedure to enable Security Manager for an embedded application running on Red Hat JBoss EAP 7.

  1. Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options:

    • Product: Red Hat JBoss BPM Suite
    • Version: 6.4
  2. Download the Red Hat JBoss BPM Suite 6.4.0 Deployable for EAP 7 (jboss-bpmsuite-6.4.0.GA-deployable-eap7.x.zip) file.
  3. Extract the jboss-bpmsuite-6.4.0.GA-deployable-eap7.x.zip file to a temporary directory. In the following commands this directory is called TEMP_DIR.
  4. Back up your EAP 7 installation, referred to as EAP_HOME in the following commands.
  5. Copy the contents of TEMP_DIR/bin to EAP_HOME/bin and merge the files if prompted.
  6. Edit the kie.policy file as required for your application. In the following example, replace AllPermission with the security policy that you require:

    grant {
    permission java.security.AllPermission;
    };
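For illustration only, a more restrictive policy grants an explicit set of permissions instead of AllPermission. The permissions below are examples, not a recommendation; the set your rules and processes actually need depends on your application:

grant {
  // Illustrative, application-specific permissions; adjust to your needs.
  permission java.util.PropertyPermission "*", "read";
  permission java.lang.RuntimePermission "accessDeclaredMembers";
};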

5.2.1. Java Security Manager and Performance Management

As noted earlier, enabling the Java Security Manager (JSM) to sandbox the evaluation of MVEL scripts in Red Hat JBoss BPM Suite introduces a performance hit in high load environments. Environments and performance markers must be kept in mind when deploying a Red Hat JBoss BPM Suite application. Use the following guidelines to deploy secure and high performance Red Hat JBoss BPM Suite applications.

  • In high load environments where performance is critical, it is recommended to deploy only applications that have been developed on other systems and properly reviewed. It is also recommended not to create any users with the analyst role on such systems. If these safeguards are followed, it is safe to leave JSM disabled on these systems so that it does not introduce any performance degradation.
  • In testing and development environments without high loads, or in environments where rule and process authoring is exposed to external networks, it is recommended to enable JSM in order to achieve the security benefits of properly sandboxed MVEL evaluation.

Allowing users with the analyst role to log in to the Business Central console with JSM disabled is not secure and not recommended.

5.3. Logging into Business Central

Log into Business Central after the server has successfully started.

  1. Navigate to http://localhost:8080/business-central in a web browser. If the user interface has been configured to run from a domain name, replace localhost with the domain name, for example http://www.example.com:8080/business-central.
  2. Log in with the user credentials that were created during installation. For example, user: helloworlduser and password: Helloworld@123.
Troubleshooting
Loading…​ screen does not disappear

When you log into Business Central, the Loading…​ screen might not disappear. This can be caused by your firewall interfering with the Server-Sent Events (SSE) used by Business Central.

To work around the problem, disable SSE usage in Business Central:

  1. Create an ErraiService.properties file, which contains: errai.bus.enable_sse_support=false.
  2. Copy the file to INSTALL_PATH/standalone/deployments/business-central.war/WEB-INF/classes/.
  3. Redeploy business-central.war.
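The following is a minimal sketch of this workaround on a UNIX system, assuming INSTALL_PATH is the root of your Red Hat JBoss BPM Suite installation:

# Create the properties file that disables Server Sent Events support.
echo "errai.bus.enable_sse_support=false" > ErraiService.properties

# Copy it into the exploded business-central.war deployment, then redeploy.
cp ErraiService.properties INSTALL_PATH/standalone/deployments/business-central.war/WEB-INF/classes/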

You can create two types of Red Hat JBoss BPM Suite clusters:

Design-Time Clustering

Allows you to share assets in the Git repository, such as processes, rules, and data objects, with all the Red Hat JBoss BPM Suite nodes in your cluster. Use design-time clustering if you need to avoid a single point of failure and require high availability during the development process. Design-time clustering makes use of Apache Helix and Apache ZooKeeper.

Design-time clustering is not required for runtime execution.

Runtime Clustering
Allows you to use the clustering capabilities of your container, such as Red Hat JBoss EAP. Runtime clustering does not require you to manage any Apache Helix or Apache ZooKeeper nodes. Quartz Enterprise Job Scheduler is supported if you use timers in your application.
Note

If you use WebSphere Application Server, the Quartz setup is not necessary. Instead, use clustered EJB Timers. For more information, see the How to setup BPM Suite Timers to work in Websphere Application Server clustering support article.

You can cluster the following components of Red Hat JBoss BPM Suite:

  • Design-time cluster

    • Git repository: virtual-file-system (VFS) repository that holds the business assets.
  • Runtime cluster

    • Intelligent Process Server, or web applications: the web application nodes must share runtime data.

      For instructions on clustering the Intelligent Process Server, see Section 5.5.5, “Clustering the Intelligent Process Server”, or the clustering documentation of your container.

    • Back-end database: database with the state data, such as process instances, KIE sessions, history log, and similar.

5.4. Git Repository Clustering Mechanism

To cluster the Git repository, Red Hat JBoss BPM Suite uses:

Apache Helix

Provides cluster management functionality that allows you to synchronize and replicate data among the nodes in your cluster. Apache Helix cluster is managed by Apache ZooKeeper. With Apache Helix, you can define a cluster, add nodes to the cluster, remove nodes from the cluster, and perform other cluster-management tasks.

Additional information:

  • Apache Helix needs to be configured on a single node only. The configuration is then stored and distributed by ZooKeeper.
  • Apache Helix cluster is administered by the helix-admin.sh script. See Apache Helix documentation for the list of commands as well as alternative ways of managing Apache Helix cluster.
  • Apache Helix cluster needs exactly one controller, which must be aware of all the nodes. See Apache Helix controller documentation and Apache Helix architecture documentation.
Apache ZooKeeper

Allows you to synchronize and replicate data from the Apache Helix cluster. An Apache ZooKeeper cluster is known as an ensemble and requires a majority of the servers to be functional for the service to be available.

However, an ensemble is not required for any type of clustering. Only a single instance of ZooKeeper is required to allow Red Hat JBoss BPM Suite to replicate its data; the ZooKeeper ensemble serves to provide redundancy and protect against the failure of ZooKeeper itself.


The relationship between Apache Helix and Apache ZooKeeper:

Figure 5.1. Schema of Red Hat JBoss BPM Suite Cluster


A typical clustering setup involves the following:

  1. Configuring the cluster using Apache ZooKeeper and Apache Helix. This is required only for design-time clustering.
  2. Configuring the back-end database with Quartz tables. This is required only for processes with timers.
  3. Configuring clustering on your container. The Red Hat JBoss BPM Suite Installation Guide provides clustering instructions only for Red Hat JBoss EAP 6.

Clustering Maven Repositories

Various Business Central operations publish JAR files to Business Central’s internal Maven repository.

This repository exists on the application server file system as regular files and is not cluster aware. The folder is not synchronized across the nodes in the cluster, so it must be synchronized using an external tool such as rsync.

An alternative to the use of an external synchronization tool is to set the system property org.guvnor.m2repo.dir on each cluster node to point to a SAN or NAS. In such case, clustering of the Maven repository folder is not needed.
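For example, when running in domain mode, you can set the property in the <system-properties> section of host.xml on each node, in the same way as the other cluster node properties shown later in this chapter. The path below is a placeholder for your shared SAN or NAS mount:

<!-- Placeholder path; point it at storage shared by all cluster nodes. -->
<property name="org.guvnor.m2repo.dir" value="/mnt/shared/bpms-maven-repo" boot-time="false"/>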

5.5. Clustering on Red Hat JBoss EAP

The JAR installer provides a sample clustered setup of Red Hat JBoss BPM Suite. You can also configure clustering manually with the deployable ZIP for Red Hat JBoss EAP.

5.5.1. Clustering Using the JAR Installer

Note

The JAR installer provides a sample setup only. You must adjust the configuration to suit your project’s needs.

Using the JAR installer, described in Section 2.1, “Red Hat JBoss BPM Suite Installer Installation”, you can set up a basic clustering configuration of Red Hat JBoss BPM Suite.

The automatic configuration creates:

  • ZooKeeper ensemble with three ZooKeeper nodes
  • A Helix cluster
  • Two Quartz data sources (one managed, one unmanaged)

This Red Hat JBoss BPM Suite setup consists of two EAP nodes that share a Maven repository, use Quartz for coordinating timed tasks, and have business-central.war, dashbuilder.war, and kie-server.war deployed. To customize the setup to fit your scenario, or to use clustering with the deployable ZIP, see Section 5.5.4, “Custom Configuration (Deployable ZIP)” and the clustering documentation of your container.

Follow the installation process described in Section 2.1.1, “Installing Red Hat JBoss BPM Suite Using Installer”.

  1. In Configure runtime environment, select Install clustered configuration and click Next.
  2. Select the JDBC vendor for your database.
  3. Provide the corresponding driver JAR(s):

    • Select one or more files on the filesystem.
    • Provide one or more URLs. The installer downloads the files automatically.

    The installer copies the JAR(s) into EAP_HOME/modules and creates the corresponding module.xml file.

    Figure 5.2. JDBC Driver Setup

    Configure JDBC provider and drivers
  4. Enter the URL, username, and password for accessing the database used by Quartz.

    The installer creates:

    • The Quartz definition file in EAP_HOME/domain/configuration/quartz-definition.properties
    • Two Quartz data sources in EAP_HOME/domain/domain.xml

      Edit the domain.xml file to customize the setup.

      Note

      During the installation, Quartz DDL scripts will be run on the database selected in this step. The scripts make changes needed for Quartz to operate, such as adding tables. You can view the scripts in EAP_HOME/jboss-brms-bpmsuite-6.4-supplementary-tools/ddl-scripts. No modifications should be necessary.

      Figure 5.3. Quartz Database Configuration

  5. Click Next to initiate the installation.

    Important

    When using the JAR installer, the WAR archives are automatically created from the applications in EAP_HOME/standalone/deployments/. This means that additional disk space is necessary during the installation, because the applications exist in both compressed and uncompressed form.

    Three ZooKeeper instances are created in EAP_HOME/jboss-brms-bpmsuite-6.4-supplementary-tools/ in directories zookeeper-one, zookeeper-two, and zookeeper-three.

After the installation finishes, do not start the server from the installer. To make Apache Helix aware of the cluster nodes and the Apache ZooKeeper instances, and to start the cluster:

  1. Change into EAP_HOME/jboss-brms-bpmsuite-6.4-supplementary-tools/helix-core.
  2. Execute the launch script:

    On UNIX systems:

    ./startCluster.sh

    On Windows:

    startCluster.bat
  3. Change into EAP_HOME/bin.
  4. Execute the following script to start Red Hat JBoss EAP:

    On UNIX systems:

    ./domain.sh

    On Windows:

    domain.bat

5.5.2. Starting a Cluster

The startCluster.sh script in EAP_HOME/jboss-brms-bpmsuite-6.4-supplementary-tools/helix-core initializes and starts the cluster. Once initialized, further usage of startCluster.sh results in errors. If you installed Red Hat JBoss BPM Suite cluster with the installer:

  • ZOOKEEPER_HOME is located in EAP_HOME/jboss-brms-bpmsuite-6.4-supplementary-tools/zookeeper-NUMBER
  • HELIX_HOME is located in EAP_HOME/jboss-brms-bpmsuite-6.4-supplementary-tools/helix-core

To start a cluster:

  1. Start all your ZooKeeper servers, for example:

    On UNIX systems:

    ./ZOOKEEPER_HOME_ONE/bin/zkServer.sh start &
    ./ZOOKEEPER_HOME_TWO/bin/zkServer.sh start &
    ./ZOOKEEPER_HOME_THREE/bin/zkServer.sh start &

    On Windows:

    ZOOKEEPER_HOME_ONE/bin/zkServer.cmd start
    ZOOKEEPER_HOME_TWO/bin/zkServer.cmd start
    ZOOKEEPER_HOME_THREE/bin/zkServer.cmd start
  2. Make the Helix Controller aware of the ZooKeeper instance(s). For example:

    ./HELIX_HOME/bin/run-helix-controller.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --cluster bpms-cluster > /tmp/controller.log 2>&1 &
  3. Change into EAP_HOME/bin and start Red Hat JBoss EAP:

    On UNIX systems:

    ./domain.sh

    On Windows:

    domain.bat
  4. You can access your Red Hat JBoss BPM Suite nodes. For example, if you created Red Hat JBoss BPM Suite cluster by using the installer, you can access your nodes at:

    localhost:8080/business-central
    localhost:8230/business-central

5.5.3. Stopping a Cluster

To stop your cluster, stop the components in the reverse order in which you started them:

  1. Stop the instance of Red Hat JBoss EAP, or the container you are using.
  2. Stop the Helix Controller process.

    On UNIX systems, find the PID of the process:

    ps aux|grep HelixControllerMain

    Once you have the PID, terminate the process:

    kill -15 <pid of HelixControllerMain>

    On Windows, use the Task Manager to stop the process.

  3. Stop the ZooKeeper server(s). For each server instance, execute:

    On UNIX systems:

    ./ZOOKEEPER_HOME_ONE/bin/zkServer.sh stop
    ./ZOOKEEPER_HOME_TWO/bin/zkServer.sh stop
    ./ZOOKEEPER_HOME_THREE/bin/zkServer.sh stop

    On Windows:

    ZOOKEEPER_HOME_ONE/bin/zkServer.cmd stop
    ZOOKEEPER_HOME_TWO/bin/zkServer.cmd stop
    ZOOKEEPER_HOME_THREE/bin/zkServer.cmd stop

5.5.4. Custom Configuration (Deployable ZIP)

When using Red Hat JBoss EAP clustering, a single Red Hat JBoss EAP domain controller exists with other Red Hat JBoss EAP slaves connecting to it as management users. You can deploy Business Central and dashbuilder as a management user on a domain controller, and the WAR deployments will be distributed to other members of the Red Hat JBoss EAP cluster.

To configure clustering on Red Hat JBoss EAP 6, do the following:

  1. Configure ZooKeeper and Helix according to Section 5.6.1, “Setting a Cluster”.
  2. Configure Quartz according to Section 5.6.3, “Setting Quartz”.
  3. Install the JDBC driver. See the Install a JDBC Driver with the Management Console chapter of the Red Hat JBoss EAP Administration and Configuration Guide.
  4. Configure the data source for the server. Based on the mode you use, open domain.xml or standalone.xml, located at EAP_HOME/MODE/configuration.
  5. Locate the full profile, and do the following:

    1. Add the definition of the main data source used by Red Hat JBoss BPM Suite.

      Example 5.1. PostgreSQL Data Source Defined as Main Red Hat JBoss BPM Suite Data Source

      <datasource jndi-name="java:jboss/datasources/psbpmsDS"
                  pool-name="postgresDS" enabled="true" use-java-context="true">
        <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
        <driver>postgres</driver>
        <security>
          <user-name>bpms</user-name>
          <password>bpms</password>
        </security>
      </datasource>
    2. Add the definition of the data source for the Quartz service.

      Example 5.2. PostgreSQL Data Source Defined as Quartz Data Source

      <datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
                  pool-name="quartzNotManagedDS" enabled="true" use-java-context="true">
        <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
        <driver>postgres</driver>
        <security>
          <user-name>bpms</user-name>
          <password>bpms</password>
        </security>
      </datasource>
    3. Define the data source driver.

      Example 5.3. PostgreSQL Driver Definition

      <driver name="postgres" module="org.postgresql">
        <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
      </driver>
    4. If you are deploying Red Hat JBoss BPM Suite on Red Hat JBoss EAP 7.0, ensure that the data sources contain schemas. To create the data source schemas, you can use the DDL scripts located in jboss-bpmsuite-brms-6.4-supplementary-tools.zip. If your data source does not contain schemas, ensure your nodes start one at a time.

      Additionally, when deploying on Red Hat JBoss EAP 7.0, open EAP_HOME/domain/business-central.war/WEB-INF/classes/META-INF/persistence.xml and change the property hibernate.hbm2ddl.auto="update" to hibernate.hbm2ddl.auto="none".

  6. Configure individual server nodes that belong to the main-server-group in the EAP_HOME/domain/configuration/host.xml file with properties defined in Cluster Node Properties.

    When configuring a Red Hat JBoss EAP cluster with Apache ZooKeeper, a different number of Red Hat JBoss EAP nodes than Apache ZooKeeper nodes is possible. However, having the same node count for both ZooKeeper and Red Hat JBoss EAP is considered best practice.

    Cluster Node Properties

    jboss.node.name
      A node name unique in a Red Hat JBoss BPM Suite cluster.
      Values: String. Default: N/A.

    org.uberfire.cluster.id
      The name of the Helix cluster, for example: kie-cluster. You must set this property to the same value as defined in the Helix Controller.
      Values: String. Default: N/A.

    org.uberfire.cluster.local.id
      The unique ID of the Helix cluster node. Note that ':' is replaced with '_', for example node1_12345.
      Values: String. Default: N/A.

    org.uberfire.cluster.vfs.lock
      The name of the resource defined on the Helix cluster, for example: kie-vfs.
      Values: String. Default: N/A.

    org.uberfire.cluster.zk
      The location of the ZooKeeper servers.
      Values: String of the form host1:port1,host2:port2,host3:port3,…​ Default: N/A.

    org.uberfire.metadata.index.dir
      The location of the .index directory, which Apache Lucene uses when indexing and searching.
      Values: Path. Default: current working directory.

    org.uberfire.nio.git.daemon.host
      If the Git daemon is enabled, it uses this property as the localhost identifier.
      Values: URL. Default: localhost.

    org.uberfire.nio.git.daemon.hostport
      When running in a virtualized environment, the host’s outside port number for the Git daemon.
      Values: Port number. Default: 9418.

    org.uberfire.nio.git.daemon.port
      If the Git daemon is enabled, it uses this property as the port number.
      Values: Port number. Default: 9418.

    org.uberfire.nio.git.dir
      The location of the .niogit directory. Change the value, for example, for backup purposes.
      Values: Path. Default: current working directory.

    org.uberfire.nio.git.ssh.host
      If the SSH daemon is enabled, it uses this property as the localhost identifier.
      Values: URL. Default: localhost.

    org.uberfire.nio.git.ssh.hostport
      When running in a virtualized environment, the host’s outside port number for the SSH daemon.
      Values: Port number. Default: 8003.

    org.uberfire.nio.git.ssh.port
      If the SSH daemon is enabled, it uses this property as the port number.
      Values: Port number. Default: 8001.

    Example 5.4. Cluster nodeOne Configuration

    <system-properties>
      <property name="org.uberfire.nio.git.dir" value="/tmp/bpms/nodeone"
                boot-time="false"/>
      <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
      <property name="org.uberfire.cluster.id" value="bpms-cluster" boot-time="false"/>
      <property name="org.uberfire.cluster.zk"
                value="server1:2181,server2:2182,server3:2183" boot-time="false"/>
      <property name="org.uberfire.cluster.local.id" value="nodeOne_12345"
                boot-time="false"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.host" value="nodeOne"/>
      <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.hostport" value="9418"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8003" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.hostport" value="8003" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.host" value="nodeOne"/>
      <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodeone"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodeone"
                boot-time="false"/>
      <property name="org.quartz.properties"
                value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    </system-properties>

    Example 5.5. Cluster nodeTwo Configuration

    <system-properties>
      <property name="org.uberfire.nio.git.dir" value="/tmp/bpms/nodetwo"
                boot-time="false"/>
      <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
      <property name="org.uberfire.cluster.id" value="bpms-cluster" boot-time="false"/>
      <property name="org.uberfire.cluster.zk"
                value="server1:2181,server2:2182,server3:2183" boot-time="false"/>
      <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346"
                boot-time="false"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.host" value="nodeTwo" />
      <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.hostport" value="9419"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8004" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.hostport" value="8004" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.host" value="nodeTwo" />
      <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodetwo"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodetwo"
                boot-time="false"/>
      <property name="org.quartz.properties"
                value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    </system-properties>

    Example 5.6. Cluster nodeThree Configuration

    <system-properties>
      <property name="org.uberfire.nio.git.dir" value="/tmp/bpms/nodethree"
                boot-time="false"/>
      <property name="jboss.node.name" value="nodeThree" boot-time="false"/>
      <property name="org.uberfire.cluster.id" value="bpms-cluster" boot-time="false"/>
      <property name="org.uberfire.cluster.zk"
                value="server1:2181,server2:2182,server3:2183" boot-time="false"/>
      <property name="org.uberfire.cluster.local.id" value="nodeThree_12347"
                boot-time="false"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.host" value="nodeThree" />
      <property name="org.uberfire.nio.git.daemon.port" value="9420" boot-time="false"/>
      <property name="org.uberfire.nio.git.daemon.hostport" value="9420"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8005" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.hostport" value="8005" boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.host" value="nodeThree" />
      <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodethree"
                boot-time="false"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/tmp/jbpm/nodethree"
                boot-time="false"/>
      <property name="org.quartz.properties"
                value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    </system-properties>
  7. Add management users as instructed in the Administration and Configuration Guide for Red Hat JBoss EAP and application users as instructed in Red Hat JBoss BPM Suite Administration and Configuration Guide.
  8. Change to EAP_HOME/bin and start the application server in domain mode:

    On UNIX systems:

    ./domain.sh

    On Windows:

    domain.bat
  9. Check that the nodes are available.

Deploy the Business Central application to your servers:

  1. Change the predefined persistence of the application to the required database (PostgreSQL in this example). In persistence.xml, change the following (see the example fragment at the end of this section):

    1. The jta-data-source name to the data source defined on the application server (java:jboss/datasources/psbpmsDS).
    2. The Hibernate dialect to match the data source dialect (org.hibernate.dialect.PostgreSQLDialect).
  2. Log in to the Administration console of your domain as the management user, and add the new deployments using the Runtime view of the console. Once the deployment is added to the domain, assign it to the correct server group (main-server-group).
Note

It is important that users explicitly check deployment unit readiness on every cluster member.

When a deployment unit is created on a cluster node, it takes some time before it is distributed to all cluster members. You can check the deployment status using the UI or the REST API; however, if the query goes to the node where the deployment was originally issued, the answer is deployed even though other members might not have received the unit yet. Any request targeting this deployment unit that is sent to a different cluster member fails with DeploymentNotFoundException.
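For reference, the persistence.xml changes from step 1 of the previous procedure look similar to the following sketch. Only the two changed values are shown, and the persistence unit name is an assumption that may differ in your version; the rest of the file stays unchanged:

<persistence-unit name="org.jbpm.domain" transaction-type="JTA">
  <!-- Data source defined on the application server. -->
  <jta-data-source>java:jboss/datasources/psbpmsDS</jta-data-source>
  <properties>
    <!-- Dialect matching the PostgreSQL data source. -->
    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
    <!-- Other properties unchanged. -->
  </properties>
</persistence-unit>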

5.5.5. Clustering the Intelligent Process Server

The Intelligent Process Server is a lightweight and scalable component. Clustering it provides many benefits. For example:

  • You can partition your resources based on deployed containers.
  • You can scale individual instances independently from each other.
  • You can distribute the cluster across a network and manage it with a single controller.

    • The controller can be clustered into a ZooKeeper ensemble.
  • No further components are required.

The basic runtime cluster consists of:

  • Multiple Red Hat JBoss EAP instances with Intelligent Process Server
  • A controller instance with Business Central

This section describes how to start an Intelligent Process Server cluster on Red Hat JBoss EAP 6.4.

Creating an Intelligent Process Server Cluster

  1. Change into CONTROLLER_HOME/bin.
  2. Add a user with the kie-server role:

    $ ./add-user.sh -a --user kieserver --password kieserver1! --role kie-server
  3. Start your controller:

    $ ./standalone.sh
  4. Change into SERVER_1_HOME.
  5. Deploy kie-server.war. Clustered servers do not need business-central.war or other applications.
  6. See the <servers> part of the following host.xml as an example of required properties:

    <server name="server-one" group="main-server-group">
     <system-properties>
      <property name="org.kie.server.location" value="http://localhost:8180/kie-server/services/rest/server"></property> 1
      <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"></property> 2
      <property name="org.kie.server.controller.user" value="kieserver"></property> 3
      <property name="org.kie.server.controller.pwd" value="kieserver1!"></property> 4
      <property name="org.kie.server.id" value="HR"></property> 5
     </system-properties>
     <socket-bindings port-offset="100"/>
    </server>
    
    <server name="server-two" group="main-server-group" auto-start="true">
     <system-properties>
      <property name="org.kie.server.location" value="http://localhost:8230/kie-server/services/rest/server"></property>
      <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"></property>
      <property name="org.kie.server.controller.user" value="kieserver"></property>
      <property name="org.kie.server.controller.pwd" value="kieserver1!"></property>
      <property name="org.kie.server.id" value="HR"></property>
     </system-properties>
     <socket-bindings port-offset="150"/>
    </server>
    1. org.kie.server.location: URL of the server instance.
    2. org.kie.server.controller: Comma-separated list of the controller URL(s).
    3. org.kie.server.controller.user: Username you created for controller authentication. Uses kieserver by default.
    4. org.kie.server.controller.pwd: Password for controller authentication. Uses kieserver1! by default.
    5. org.kie.server.id: Server identifier that corresponds to template ID defined by the controller instance. Give the same ID to multiple server instances that represent one template.

    The example above is defined for Red Hat JBoss EAP domain mode. For a full list of bootstrap switches, see the Bootstrap Switches section of the Red Hat JBoss BPM Suite Administration and Configuration Guide.

  7. Repeat the previous step for as many servers as you need. To start Red Hat JBoss EAP in the domain mode, execute:

    $ ./SERVER_HOME/bin/domain.sh

After connecting the servers to your controller, check the controller log:

13:54:40,315 INFO  [org.kie.server.controller.impl.KieServerControllerImpl] (http-localhost/127.0.0.1:8080-1) Server http://localhost:8180/kie-server/services/rest/server connected to controller
13:54:40,331 INFO  [org.kie.server.controller.impl.KieServerControllerImpl] (http-localhost/127.0.0.1:8080-2) Server http://localhost:8230/kie-server/services/rest/server connected to controller
13:54:40,348 INFO  [org.kie.server.controller.rest.RestKieServerControllerImpl] (http-localhost/127.0.0.1:8080-1) Server with id 'HR' connected
13:54:40,348 INFO  [org.kie.server.controller.rest.RestKieServerControllerImpl] (http-localhost/127.0.0.1:8080-2) Server with id 'HR' connected
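You can also query each Intelligent Process Server instance directly at the URL configured in org.kie.server.location, using the kie-server user created earlier. This is a sketch rather than part of the original setup; adjust the URL and credentials to your environment:

# Query the first server instance; the password is quoted to avoid shell history expansion.
curl -u 'kieserver:kieserver1!' http://localhost:8180/kie-server/services/rest/server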

Alternatively, to verify the connections in the controller’s Business Central:

  1. Log into the controller Business Central.
  2. Click Deploy → Execution Servers.
  3. View the remote servers connected to each template.

5.6. Generic Bundle Clustering

5.6.1. Setting a Cluster

Note

If you do not use Business Central, skip this section.

To cluster your Git (VFS) repository in Business Central:

  1. Download the jboss-bpmsuite-brms-VERSION-supplementary-tools.zip, which contains Apache ZooKeeper, Apache Helix, and Quartz DDL scripts.
  2. Unzip the archive: the ZooKeeper directory (ZOOKEEPER_HOME) and the Helix directory (HELIX_HOME) are created.
  3. Configure Apache ZooKeeper:

    1. In the ZooKeeper directory, change to conf and execute:

      cp zoo_sample.cfg zoo.cfg
    2. Edit zoo.cfg:

      # The directory where the snapshot is stored.
      dataDir=$ZOOKEEPER_HOME/data/
      
      # The port at which the clients connect.
      clientPort=2181
      
      # Defining ZooKeeper ensemble.
      # server.{ZooKeeperNodeID}={server}:{port:range}
      server.1=localhost:2888:3888
      server.2=localhost:2889:3889
      server.3=localhost:2890:3890
      Note

      Multiple ZooKeeper nodes are not required for clustering.

      Make sure the dataDir location exists and is accessible.

    3. Assign a node ID to each member that will run ZooKeeper. For example, use 1, 2, and 3 for node 1, node 2 and node 3 respectively.

      The ZooKeeper node ID is specified in a file named myid under the ZooKeeper data directory on each node. For example, on node 1, execute:

      echo "1" > /zookeeper/data/myid
  4. Provide further ZooKeeper configuration if necessary.
  5. Change to ZOOKEEPER_HOME/bin/ and start ZooKeeper:

    ./zkServer.sh start

    You can check the ZooKeeper log in the ZOOKEEPER_HOME/bin/zookeeper.out file. Check this log to ensure that the ensemble (cluster) is formed successfully. One of the nodes should be elected as leader with the other two nodes following it.

  6. Once the ZooKeeper ensemble is started, configure and start Helix. Helix needs to be configured from a single node only. The configuration is then stored by the ZooKeeper ensemble and shared as appropriate.

    Configure the cluster with the ZooKeeper server as the master of the configuration:

    1. Create the cluster by providing the ZooKeeper host and port (a comma-separated list of host:port pairs if you run multiple ZooKeeper servers):

      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addCluster <clustername>
    2. Add your nodes to the cluster:

      HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addNode <clustername> <name_uniqueID>

      Example 5.7. Adding Three Cluster Nodes

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeOne:12345
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeTwo:12346
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeThree:12347
  7. Add resources to the cluster.

    helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT  --addResource <clustername> <resourceName> <numPartitions> <stateModelName>

    Learn more about state machine configuration at Helix Tutorial: State Machine Configuration.

    Example 5.8. Adding vfs-repo as Resource

    ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addResource bpms-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
  8. Rebalance the cluster with the three nodes.

    helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --rebalance <clustername> <resourcename> <replicas>

    Learn more about rebalancing at Helix Tutorial: Rebalancing Algorithms.

    Example 5.9. Rebalancing bpms-cluster

    ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --rebalance bpms-cluster vfs-repo 3

    In this command, 3 stands for three bpms-cluster nodes.

  9. Start the Helix controller in all the nodes in the cluster.

    Example 5.10. Starting Helix Controller

    ./run-helix-controller.sh --zkSvr server1:2181,server2:2182,server3:2183 --cluster bpms-cluster > ./controller.log 2>&1 &
Note

If you decide to cluster ZooKeeper, use an odd number of instances so that the ensemble can recover from failures: after a failure, the remaining nodes must still be able to form a majority. For example, a cluster of five ZooKeeper nodes can withstand the loss of two nodes and still recover fully. A single ZooKeeper instance is also possible and replication still works, but there is no way to recover if that single instance fails.

5.6.2. Starting and Stopping a Cluster

To start your cluster, see Section 5.5.2, “Starting a Cluster”. To stop your cluster, see Section 5.5.3, “Stopping a Cluster”.

5.6.3. Setting Quartz

Note

If you are not using Quartz (timers) in your business processes, or if you are not using the Intelligent Process Server, skip this section. If you want to replicate timers in your business process, use the Quartz component.

Before you can configure the database on your application server, you must prepare it for Quartz: create the Quartz tables, which hold the timer data, and create the Quartz definition file.

To configure Quartz:

  1. Configure the database. Make sure to use one of the supported non-JTA data sources. Since Quartz needs a non-JTA data source, you cannot use the Business Central data source. In the example code, PostgreSQL with the user bpms and password bpms is used. The database must be connected to your application server.
  2. Create Quartz tables on your database to allow timer events synchronization. To do so, use the DDL script for your database, which is available in the extracted supplementary ZIP archive in QUARTZ_HOME/docs/dbTables. For an example of running the PostgreSQL script, see the sketch after this procedure.
  3. Create the Quartz configuration file quartz-definition.properties in the JBOSS_HOME/MODE/configuration/ directory and define the Quartz properties.

    Example 5.11. Quartz Configuration File for PostgreSQL Database

    #============================================================================
    # Configure Main Scheduler Properties
    #============================================================================
    
    org.quartz.scheduler.instanceName = jBPMClusteredScheduler
    org.quartz.scheduler.instanceId = AUTO
    
    #============================================================================
    # Configure ThreadPool
    #============================================================================
    
    org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
    org.quartz.threadPool.threadCount = 5
    org.quartz.threadPool.threadPriority = 5
    
    #============================================================================
    # Configure JobStore
    #============================================================================
    
    org.quartz.jobStore.misfireThreshold = 60000
    
    org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
    org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
    org.quartz.jobStore.useProperties=false
    org.quartz.jobStore.dataSource=managedDS
    org.quartz.jobStore.nonManagedTXDataSource=notManagedDS
    org.quartz.jobStore.tablePrefix=QRTZ_
    org.quartz.jobStore.isClustered=true
    org.quartz.jobStore.clusterCheckinInterval = 20000
    
    #============================================================================
    # Configure Datasources
    #============================================================================
    org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psbpmsDS
    org.quartz.dataSource.notManagedDS.jndiURL=jboss/datasources/quartzNotManagedDS

    Note the two data sources configured at the very end of the file; they accommodate the managed and non-managed Quartz data sources.

    Note

    For Microsoft SQL Server, add the acquireTriggersWithinLock property to the quartz-definition.properties file:

    org.quartz.jobStore.acquireTriggersWithinLock=true

    Cluster Node Check Interval

    The recommended interval for cluster discovery is 20 seconds; it is set in the org.quartz.jobStore.clusterCheckinInterval property of the quartz-definition.properties file. Consider the performance impact for your setup and modify the setting as necessary.

    The org.quartz.jobStore.driverDelegateClass property defines the database dialect. If you use Oracle, set it to org.quartz.impl.jdbcjobstore.oracle.OracleDelegate.

  4. Provide the absolute path to your quartz-definition.properties file in the org.quartz.properties system property. For further details, see the Cluster Node Properties in Section 5.5.4, “Custom Configuration (Deployable ZIP)”.
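As an example of step 2 of this procedure, the PostgreSQL DDL script can be run with the psql client. The script name below follows the usual Quartz layout but is an assumption; verify it against your extracted supplementary archive. The database name and user match the earlier data source examples:

# Run the Quartz DDL script against the jbpm database as the bpms user.
psql -U bpms -d jbpm -f QUARTZ_HOME/docs/dbTables/tables_postgres.sql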

Note: To configure the number of retries and delay for the Quartz trigger, you can update the following system properties:

  • org.jbpm.timer.quartz.retries (default value is 5)
  • org.jbpm.timer.quartz.delay in milliseconds (default value is 1000)
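For example, in domain mode you can set these properties in the <system-properties> section of host.xml on each node, next to the org.quartz.properties entry shown earlier. The values below are the defaults listed above:

<property name="org.jbpm.timer.quartz.retries" value="5" boot-time="false"/>
<property name="org.jbpm.timer.quartz.delay" value="1000" boot-time="false"/>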