6.6. Generic Bundle Clustering

6.6.1. Setting a Cluster

Note

If you do not use Business Central, skip this section.

To cluster your Git (VFS) repository in Business Central:

  1. Download the jboss-bpmsuite-brms-VERSION-supplementary-tools.zip, which contains Apache ZooKeeper, Apache Helix, and Quartz DDL scripts.
  2. Unzip the archive: the ZooKeeper directory (ZOOKEEPER_HOME) and the Helix directory (HELIX_HOME) are created.
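    For example, assuming the archive is in the current directory:

    unzip jboss-bpmsuite-brms-VERSION-supplementary-tools.zip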
  3. Configure Apache ZooKeeper:

    1. In the ZooKeeper directory, change to conf and execute:

      cp zoo_sample.cfg zoo.cfg
    2. Edit zoo.cfg:

      # The directory where the snapshot is stored.
      dataDir=$ZOOKEEPER_HOME/data/

      # The port at which clients connect.
      clientPort=2181

      # Define the ZooKeeper ensemble.
      # server.{ZooKeeperNodeID}={host}:{peerPort}:{leaderElectionPort}
      server.1=localhost:2888:3888
      server.2=localhost:2889:3889
      server.3=localhost:2890:3890
      Note

      Multiple ZooKeeper nodes are not required for clustering.

      Make sure the dataDir location exists and is accessible. Use a literal, absolute path: ZooKeeper does not expand shell variables such as $ZOOKEEPER_HOME in zoo.cfg.

    3. Assign a node ID to each member that will run ZooKeeper. For example, use 1, 2, and 3 for node 1, node 2, and node 3, respectively.

      The ZooKeeper node ID is specified in a file named myid under the data directory of ZooKeeper on each node. For example, on node 1, execute:

      echo "1" > $ZOOKEEPER_HOME/data/myid
  4. Provide further ZooKeeper configuration if necessary.
  5. On each node, change to ZOOKEEPER_HOME/bin/ and start ZooKeeper:

    ./zkServer.sh start

    You can check the ZooKeeper log in the ZOOKEEPER_HOME/bin/zookeeper.out file to ensure that the ensemble (cluster) formed successfully: one of the nodes should be elected leader, with the other nodes following it.
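
    To confirm the role of each member, you can also run the bundled status command on each node:

    # Reports whether this node is currently the ensemble leader or a follower.
    ./zkServer.sh status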

  6. Once the ZooKeeper ensemble is started, configure and start Helix. Helix needs to be configured from a single node only. The configuration is then stored by the ZooKeeper ensemble and shared as appropriate.

    Configure the cluster, with the ZooKeeper ensemble acting as the master copy of the configuration:

    1. Create the cluster, providing the ZooKeeper hosts and ports as a comma-separated list:

      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addCluster <clustername>
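
      For example, to create the brms-cluster used in the examples below:

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addCluster brms-cluster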
    2. Add your nodes to the cluster:

      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addNode <clustername> <name>:<uniqueID>

      Example 6.4. Adding Three Cluster Nodes

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode brms-cluster nodeOne:12345
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode brms-cluster nodeTwo:12346
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode brms-cluster nodeThree:12347
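
      To verify that the nodes were registered, you can list the cluster's contents (assuming your Helix version supports the listing options):

      # Lists the instances and resources registered under brms-cluster.
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --listClusterInfo brms-cluster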
  7. Add resources to the cluster.

    helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addResource <clustername> <resourceName> <numPartitions> <stateModelName>

    Learn more about state machine configuration at Helix Tutorial: State Machine Configuration.

    Example 6.5. Adding vfs-repo as Resource

    ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addResource brms-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
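
    In the example above, LeaderStandby is one of Helix's built-in state models: one replica of the resource acts as leader while the remaining replicas stand by, ready to take over. The trailing AUTO_REBALANCE argument sets the rebalance mode, so that Helix assigns replicas to live nodes automatically.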
  8. Rebalance the cluster across the three nodes.

    helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --rebalance <clustername> <resourcename> <replicas>

    Learn more about rebalancing at Helix Tutorial: Rebalancing Algorithms.

    Example 6.6. Rebalancing brms-cluster

    ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --rebalance brms-cluster vfs-repo 3

    In the above command, 3 is the number of replicas; it corresponds to the three nodes added to the cluster in Example 6.4, not to the number of ZooKeeper nodes.
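
    To inspect the resulting assignment, you can query the resource (assuming your Helix version supports this listing option):

    # Shows the ideal state and external view of vfs-repo after rebalancing.
    ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --listResourceInfo brms-cluster vfs-repo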

  9. Start the Helix controller on each node in the cluster.

    Example 6.7. Starting Helix Controller

    ./run-helix-controller.sh --zkSvr server1:2181,server2:2182,server3:2183 --cluster brms-cluster > ./controller.log 2>&1 &
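
    Because the controller is started in the background, you can watch its log to confirm that it connected to ZooKeeper and took control of the cluster:

    tail -f ./controller.log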
Note

If you decide to cluster ZooKeeper, add an odd number of instances so that the ensemble can recover from failure: the nodes that remain after a failure must still be able to form a majority. For example, a cluster of five ZooKeeper nodes can withstand the loss of two nodes and still fully recover. Running a single ZooKeeper instance is also possible, and replication will work, but there are no recovery options if that instance fails.
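
An ensemble of n nodes needs a majority of floor(n/2) + 1 nodes to keep operating, so it tolerates floor((n-1)/2) failures: one failure for a three-node ensemble, two for a five-node ensemble. Adding a fourth node to a three-node ensemble therefore adds no fault tolerance, which is why odd sizes are recommended.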

6.6.2. Starting and Stopping a Cluster

To start your cluster, see Section 6.5.2, “Starting a Cluster”. To stop your cluster, see Section 6.5.3, “Stopping a Cluster”.