Chapter 5. Clustering JBI Endpoints

Important
The Java Business Integration components of Red Hat JBoss Fuse are considered deprecated. You should consider migrating any JBI applications to OSGi.

Overview

Red Hat JBoss Fuse provides a clustering engine that uses Apache ActiveMQ, or any other JMS broker, to cluster the endpoints in a JBI application. The clustering engine works in conjunction with the normalized message router (NMR) and builds clusters from specifically configured JBI endpoints.
A cluster is defined as two or more JBI containers networked together. Implementing clustering between JBI containers gives you access to features including load balancing and high availability, rollback and redelivery, and remote container awareness.

Features

Clustering provides the following features that can be implemented in your applications:
  • Connect JBI containers to form a network, and dynamically add and remove the containers from the network.
  • Enable rollback and redelivery when a JBI exchange fails.
  • Implement load balancing among JBI containers capable of handling a given exchange. For example:
    • Install the same component in multiple JBI containers to provide increased capacity and high availability (if one container fails, the same component in another container can service the request).
    • Partition the workload among multiple JBI container instances to enable different containers to handle different tasks, spreading the workload across multiple containers.
  • Make each clustered JBI container aware of the components in its peer containers. Networked containers listen for remote component registration and deregistration events, and can route requests to those components.

Steps to set up clustering

Complete the following steps to set up JBI endpoint clustering:
  1. Install the jbi-cluster feature included in Red Hat JBoss Fuse. See the section called “Installing the clustering feature”.
  2. Optionally, configure the clustering engine with a JMS broker other than the default Red Hat JBoss A-MQ broker. See the section called “Changing the JMS broker”.
  3. Optionally, change the default clustering engine configuration to specify different cluster and destination names. See the section called “Changing the default configuration”.
  4. Add endpoints and register the endpoint definition in the Spring configuration. See the section called “Using clustering in an application”.
The sections that follow describe these steps in detail.

Installing the clustering feature

To install the jbi-cluster feature, use the install command from the command console:
  1. Start Red Hat JBoss Fuse.
  2. At the JBossFuse:karaf@root> prompt, type:
    features:install jbi-cluster
  3. Type features:list to list the available features and their installation state. Verify that the jbi-cluster feature is installed.
The cluster configuration bundle is automatically installed when you install the jbi-cluster feature.

Default clustering engine configuration

Red Hat JBoss Fuse has a pre-installed clustering engine that is configured to use the included Red Hat JBoss A-MQ. The default configuration for the Red Hat JBoss Fuse cluster engine is defined in the jbi-cluster.xml file in the org.apache.servicemix.jbi.cluster.config bundle. This bundle is located under InstallDir/system/org/apache/servicemix/jbi/cluster.
The default cluster engine configuration, shown in Example 5.1, is designed to meet most basic requirements.

Example 5.1. Default cluster engine configuration

<bean id="clusterEngine" class="org.apache.servicemix.jbi.cluster.engine.ClusterEngine">
  <property name="pool">
    <bean class="org.apache.servicemix.jbi.cluster.requestor.ActiveMQJmsRequestorPool">
      <property name="connectionFactory" ref="connectionFactory" />
      <property name="destinationName" value="${destinationName}" />
    </bean>
  </property>
  <property name="name" value="${clusterName}" />
</bean>

<osgi:list id="clusterRegistrations"
           interface="org.apache.servicemix.jbi.cluster.engine.ClusterRegistration"
           cardinality="0..N">
  <osgi:listener ref="clusterEngine" bind-method="register" unbind-method="unregister" />
</osgi:list>

<osgi:reference id="connectionFactory" interface="javax.jms.ConnectionFactory" />

<osgi:service ref="clusterEngine">
  <osgi:interfaces>
    <value>org.apache.servicemix.nmr.api.Endpoint</value>
    <value>org.apache.servicemix.nmr.api.event.Listener</value>
    <value>org.apache.servicemix.nmr.api.event.EndpointListener</value>
    <value>org.apache.servicemix.nmr.api.event.ExchangeListener</value>
  </osgi:interfaces>
  <osgi:service-properties>
    <entry key="NAME" value="${clusterName}" />
  </osgi:service-properties>
</osgi:service>

<osgix:cm-properties id="clusterProps"
                     persistent-id="org.apache.servicemix.jbi.cluster.config">
  <prop key="clusterName">${servicemix.name}</prop>
  <prop key="destinationName">org.apache.servicemix.jbi.cluster</prop>
</osgix:cm-properties>

<ctx:property-placeholder properties-ref="clusterProps" />
Red Hat JBoss Fuse has a preconfigured Red Hat JBoss A-MQ instance that automatically starts when the container is started. This means you do not have to start a broker instance for the clustering engine to work.

Changing the default configuration

You can alter the default configuration by adding a configuration file for the org.apache.servicemix.jbi.cluster.config bundle. This configuration file enables you to change both the clusterName and the destinationName properties.
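Because the default configuration reads its properties through OSGi Configuration Admin under the persistent ID org.apache.servicemix.jbi.cluster.config, the override can be supplied as a properties file with that name. The file location and values below are illustrative, assuming the standard Karaf convention of placing .cfg files in etc/:

```
# InstallDir/etc/org.apache.servicemix.jbi.cluster.config.cfg  (example values)
clusterName = my-cluster
destinationName = org.example.jbi.cluster
```

Depending on how the bundle consumes the properties, you may need to restart it for the new values to take effect.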

Changing the JMS broker

You can configure the cluster engine with another JMS broker by adding a Spring XML file containing the full configuration to the InstallDir/deploy directory.
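As a sketch, such a Spring XML file would reproduce the cluster engine configuration from Example 5.1, but replace the connectionFactory definition with one that points at the other broker. The class and URL below assume an ActiveMQ broker on a remote host and are example values:

```xml
<!-- Example: connect the cluster engine to an external ActiveMQ broker -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://remotehost:61616" />
</bean>
```

Any javax.jms.ConnectionFactory implementation can be substituted here to use a different broker.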

Using clustering in an application

When using an OSGi packaged JBI service assembly, you can include the clustered endpoint definitions directly in the Spring configuration. In addition to the endpoint definition, you must add a bean that registers the endpoint with the clustering engine.
Example 5.2 shows an OSGi packaged HTTP consumer endpoint that is part of a cluster.

Example 5.2. OSGi packaged JBI endpoint

<http:consumer id="myHttpConsumer" service="test:myService" endpoint="myEndpoint" />
<bean class="org.apache.servicemix.jbi.cluster.engine.OsgiSimpleClusterRegistration">
  <property name="endpoint" ref="myHttpConsumer" />
</bean>
When using a JBI packaged service assembly, you must create a Spring application to register the endpoint as a clustered endpoint. This configuration requires that you provide additional information about the endpoint.
Example 5.3 shows a JBI packaged HTTP consumer endpoint that is part of a cluster.

Example 5.3. JBI packaged endpoint

<http:consumer id="myHttpConsumer" service="test:myService" endpoint="myEndpoint" />
<bean class="org.apache.servicemix.jbi.cluster.engine.OsgiSimpleClusterRegistration">
  <property name="serviceName" value="test:myService" />
  <property name="endpointName" value="myEndpoint" />
</bean>
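Note that the test prefix used in the serviceName value must be bound to the service's XML namespace in the enclosing Spring file so that it can be resolved to a qualified name. A minimal sketch, in which the namespace URI is an example value:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:test="http://example.org/test">
  <bean class="org.apache.servicemix.jbi.cluster.engine.OsgiSimpleClusterRegistration">
    <property name="serviceName" value="test:myService" />
    <property name="endpointName" value="myEndpoint" />
  </bean>
</beans>
```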

Establishing network connections between containers

To create a network of JBI containers, you must establish network connections between the containers so that each active container is connected to the others. You can configure these network connections as either static or multicast connections.
  • Static network connections — Configure a networkConnector for each container in the cluster in the broker configuration file InstallDir/conf/activemq.xml.
    Example 5.4 shows an example of a static networkConnector discovery configuration.

    Example 5.4. Static configuration

    <!-- Set the brokerName to be unique for this container -->
    <amq:broker id="broker" brokerName="host1_broker1" depends-on="jmxServer">

      ....

      <networkConnectors>
        <networkConnector name="host1_to_host2" uri="static://(tcp://host2:61616)"/>

        <!-- A three container network would look like this -->
        <!-- (Note it is not necessary to list the local hostname in the uri list) -->
        <!-- networkConnector name="host1_to_host2_host3"
              uri="static://(tcp://host2:61616,tcp://host3:61616)"/ -->

      </networkConnectors>

    </amq:broker>
    
  • Multicast network connections — Enable multicast on your network and configure multicast in the broker configuration file InstallDir/conf/activemq.xml for each container in the network. When the containers start, they detect each other and transparently connect to one another.
    Example 5.5 shows an example of a multicast networkConnector discovery configuration.

    Example 5.5. Multicast configuration

    <networkConnectors>
      <!-- by default just auto discover the other brokers -->
      <networkConnector name="default-nc" uri="multicast://default"/>
    </networkConnectors>
When a network connection is established, each container discovers the other containers' remote components and can route to them.

High availability

You can cluster JBI containers to implement high availability by configuring two distinct Red Hat JBoss Fuse container instances in a master-slave configuration. The master runs in ACTIVE mode while the slave waits in STANDBY mode for a failover event that triggers it to take over.
You can configure the master and the slave in one of the following ways:
  • Shared file system master-slave — In a shared file system master-slave configuration, two containers use the same physical data store for the container state. Ensure that the file system supports file-level locking, because this is the mechanism used to elect the master. If the master process exits, the file lock is released and the slave acquires it. The slave then becomes the master.
  • JDBC master-slave — In a JDBC master-slave configuration, the master locks a table in the backend database. The failover event occurs when the lock on that table is released.
  • Pure master-slave — A pure master-slave configuration can use either a shared database or a shared file system. The master replicates all state changes to the slave so additional overhead is incurred. The failover trigger in a pure master-slave configuration is that the slave loses its network connection to its master. Because of the additional overhead and maintenance involved, this option is less desirable than the other two options.
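For example, a shared file system master-slave pair can be configured by pointing both brokers' persistence adapters at the same directory on a shared mount that supports file-level locking. The broker name and directory path below are example values:

```xml
<!-- Identical fragment on both containers, apart from the unique brokerName -->
<amq:broker id="broker" brokerName="host1_broker1">
  <amq:persistenceAdapter>
    <amq:kahaDB directory="/shared/activemq/kahadb" />
  </amq:persistenceAdapter>
</amq:broker>
```

The first broker to obtain the file lock becomes the master; the other blocks on the lock and takes over if the master exits.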

Cluster configuration conventions

The following conventions apply to configuring clustering:
  • Do not use static and multicast networkConnectors at the same time. If you enable static networkConnectors, disable any multicast networkConnectors, and vice versa.
  • When configuring a network of containers in InstallDir/conf/activemq.xml, ensure that the brokerName attribute is unique for each node in the cluster. This enables the instances in the network to uniquely identify each other.
  • When configuring a network of containers you must ensure that you have unique persistent stores for each ACTIVE instance. If you have a JDBC data source, you must use a separate database for each ACTIVE instance. For example:
    <property name="url"
              value="jdbc:mysql://localhost/broker_activemq_host1?relaxAutoCommit=true"/>
  • You can set up a network of containers on the same host. To do this, you must change the JMS ports and transportConnector ports to avoid port conflicts. Edit the InstallDir/conf/activemq.xml file, changing rmi.port and activemq.port as appropriate. For example:
    rmi.port      = 1098
    rmi.host      = localhost
    jmx.url       = service:jmx:rmi:///jndi/rmi://${rmi.host}:${rmi.port}/jmxrmi

    activemq.port = 61616
    activemq.host = localhost
    activemq.url  = tcp://${activemq.host}:${activemq.port}
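Following the fragment above, a second container on the same host only needs different values for the conflicting ports; the numbers below are example values:

```
# Second container on the same host (example values)
rmi.port      = 1099
activemq.port = 61617
```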