6.6. Generic Bundle Clustering

6.6.1. Setting a Cluster

Note

If you do not use Business Central, skip this section.

To cluster your Git (VFS) repository in Business Central:

  1. Download the jboss-bpmsuite-brms-VERSION-supplementary-tools.zip, which contains Apache ZooKeeper, Apache Helix, and Quartz DDL scripts.
  2. Unzip the archive: the ZooKeeper directory (ZOOKEEPER_HOME), the Helix directory (HELIX_HOME), and the Quartz directory (QUARTZ_HOME) are created.
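
    For example, to extract the archive into the current directory (VERSION stands for your product version):

    unzip jboss-bpmsuite-brms-VERSION-supplementary-tools.zip
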
  3. Configure Apache ZooKeeper:

    1. In the ZooKeeper directory, change to conf and execute:

      cp zoo_sample.cfg zoo.cfg
    2. Edit zoo.cfg:

      # The directory where the snapshot is stored.
      dataDir=$ZOOKEEPER_HOME/data/
      
      # The port at which the clients connect.
      clientPort=2181
      
      # Defining ZooKeeper ensemble.
      # server.{ZooKeeperNodeID}={server}:{port:range}
      server.1=localhost:2888:3888
      server.2=localhost:2889:3889
      server.3=localhost:2890:3890

      Note

      Multiple ZooKeeper nodes are not required for clustering; the server.N lines above define an optional ZooKeeper ensemble, and a single ZooKeeper server is sufficient.

      Make sure the dataDir location exists and is accessible.
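
      For example, to create the data directory referenced by dataDir above:

      mkdir -p $ZOOKEEPER_HOME/data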

    3. Assign a node ID to each member that will run ZooKeeper. For example, use 1, 2, and 3 for node 1, node 2, and node 3, respectively.

      The ZooKeeper node ID is stored in a file named myid in the ZooKeeper data directory (dataDir) on each node. For example, on node 1, execute:

      echo "1" > $ZOOKEEPER_HOME/data/myid
  4. Provide further ZooKeeper configuration if necessary.
  5. Change to ZOOKEEPER_HOME/bin/ and start ZooKeeper:

    ./zkServer.sh start

    The ZooKeeper log is written to the ZOOKEEPER_HOME/bin/zookeeper.out file. Check this log to ensure that the ensemble (cluster) forms successfully: in a three-node ensemble, one node is elected as the leader and the other two nodes follow it.
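
    You can also query the role of each node directly: the zkServer.sh status command reports whether the instance runs as a leader, a follower, or standalone.

    ./zkServer.sh status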

  6. Once the ZooKeeper ensemble is started, configure and start Helix. Helix needs to be configured from a single node only. The configuration is then stored by the ZooKeeper ensemble and shared as appropriate.

    Configure the cluster with the ZooKeeper server as the master of the configuration:

    1. Create the cluster. Provide the ZooKeeper host and port; if you use a ZooKeeper ensemble, list the host:port pairs as a comma-separated value:

      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addCluster <clustername>
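
      For example, to create a cluster named bpms-cluster (the name used in the examples below) against a three-node ZooKeeper ensemble:

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addCluster bpms-cluster
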
    2. Add your nodes to the cluster:

      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addNode <clustername> <nodename>:<uniqueID>

      Example 6.7. Adding Three Cluster Nodes

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeOne:12345
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeTwo:12346
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeThree:12347
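
      Optionally, verify that the nodes were registered; the --listClusterInfo option of helix-admin.sh lists the instances and resources known to Helix for the given cluster:

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --listClusterInfo bpms-cluster
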
  7. Add resources to the cluster.

    helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addResource <clustername> <resourceName> <numPartitions> <stateModelName>

    Learn more about state machine configuration at Helix Tutorial: State Machine Configuration.

    Example 6.8. Adding vfs-repo as Resource

    ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addResource bpms-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
  8. Rebalance the cluster across the three nodes.

    helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --rebalance <clustername> <resourcename> <replicas>

    Learn more about rebalancing at Helix Tutorial: Rebalancing Algorithms.

    Example 6.9. Rebalancing bpms-cluster

    ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --rebalance bpms-cluster vfs-repo 3

    In the above command, 3 is the number of replicas, one for each of the three nodes added to the cluster in Example 6.7.

  9. Start the Helix controller on each node in the cluster.

    Example 6.10. Starting Helix Controller

    ./run-helix-controller.sh --zkSvr server1:2181,server2:2182,server3:2183 --cluster bpms-cluster > ./controller.log 2>&1 &
Note

If you decide to cluster ZooKeeper, use an odd number of instances so that the ensemble can recover from failures: after a failure, the remaining nodes must still be able to form a majority. For example, a cluster of five ZooKeeper nodes can withstand the loss of two nodes and still recover fully. Running a single ZooKeeper instance is also possible and replication still works, but if that instance fails there is no way to recover.

6.6.2. Starting and Stopping a Cluster

To start your cluster, see Section 6.5.2, “Starting a Cluster”. To stop your cluster, see Section 6.5.3, “Stopping a Cluster”.

6.6.3. Setting Quartz

Note

If you do not use timers (Quartz) in your business processes, or if you do not use the Intelligent Process Server, skip this section. If you want to replicate timers in your business processes, use the Quartz component.

Before you configure Quartz on your application server, you must prepare the database: create the Quartz tables, which hold the timer data, and create the Quartz definition file.

To configure Quartz:

  1. Configure the database. Make sure to use one of the supported databases with a non-JTA data source; because Quartz requires a non-JTA data source, you cannot use the Business Central data source. In the example code, PostgreSQL with the user bpms and password bpms is used. The database must be connected to your application server. A sketch of such a data source definition follows.
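
    The following is a minimal sketch of a non-JTA PostgreSQL data source definition for the datasources subsystem of the server profile. The JNDI name matches the quartzNotManagedDS entry referenced later in quartz-definition.properties; the connection URL, database name, and driver name are assumptions and must be adapted to your environment:

    <datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS" pool-name="quartzNotManagedDS" enabled="true">
      <!-- Assumed local PostgreSQL instance and database name; adjust to your environment. -->
      <connection-url>jdbc:postgresql://localhost:5432/bpms</connection-url>
      <!-- The driver must already be registered in the datasources subsystem. -->
      <driver>postgresql</driver>
      <security>
        <user-name>bpms</user-name>
        <password>bpms</password>
      </security>
    </datasource>
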
  2. Create the Quartz tables in your database to enable timer event synchronization. To do so, use the DDL script for your database, which is available in the extracted supplementary ZIP archive in QUARTZ_HOME/docs/dbTables.
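
    For the PostgreSQL database used in this section, you can run the DDL script with psql, for example (the script name tables_postgres.sql and the database name bpms are assumptions; check the dbTables directory of your Quartz distribution for the exact file name):

    psql -U bpms -d bpms -f QUARTZ_HOME/docs/dbTables/tables_postgres.sql
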
  3. Create the Quartz configuration file quartz-definition.properties in the JBOSS_HOME/MODE/configuration/ directory and define the Quartz properties.

    Example 6.11. Quartz Configuration File for PostgreSQL Database

    #============================================================================
    # Configure Main Scheduler Properties
    #============================================================================
    
    org.quartz.scheduler.instanceName = jBPMClusteredScheduler
    org.quartz.scheduler.instanceId = AUTO
    
    #============================================================================
    # Configure ThreadPool
    #============================================================================
    
    org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
    org.quartz.threadPool.threadCount = 5
    org.quartz.threadPool.threadPriority = 5
    
    #============================================================================
    # Configure JobStore
    #============================================================================
    
    org.quartz.jobStore.misfireThreshold = 60000
    
    org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
    org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
    org.quartz.jobStore.useProperties=false
    org.quartz.jobStore.dataSource=managedDS
    org.quartz.jobStore.nonManagedTXDataSource=notManagedDS
    org.quartz.jobStore.tablePrefix=QRTZ_
    org.quartz.jobStore.isClustered=true
    org.quartz.jobStore.clusterCheckinInterval = 20000
    
    #============================================================================
    # Configure Datasources
    #============================================================================
    org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psbpmsDS
    org.quartz.dataSource.notManagedDS.jndiURL=jboss/datasources/quartzNotManagedDS

    Note the two data sources configured at the very end of the file: managedDS points to the container-managed (JTA) data source and notManagedDS points to the non-JTA data source required by Quartz.

    Cluster Node Check Interval

    The recommended interval for cluster discovery is 20 seconds (20000 milliseconds) and is set in the org.quartz.jobStore.clusterCheckinInterval property of the quartz-definition.properties file. Depending on your setup, consider the performance impact and modify the setting as necessary.

    The org.quartz.jobStore.driverDelegateClass property defines the database dialect. If you use Oracle, set it to org.quartz.impl.jdbcjobstore.oracle.OracleDelegate.

  4. Provide the absolute path to your quartz-definition.properties file in the org.quartz.properties system property. For further details, see _cluster_properties_BRMS.
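
    For example, one way to set this system property on the application server is to append it to JAVA_OPTS in JBOSS_HOME/bin/standalone.conf (the path below assumes the file location from the previous step; replace JBOSS_HOME and MODE with your actual values):

    JAVA_OPTS="$JAVA_OPTS -Dorg.quartz.properties=JBOSS_HOME/MODE/configuration/quartz-definition.properties"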