
Chapter 16. Introduction and Quick Start

Clustering allows you to run an application on several parallel servers (a.k.a. cluster nodes) while providing a single view to application clients. Load is distributed across the different servers, and even if one or more of the servers fails, the application is still accessible via the surviving cluster nodes. Clustering is crucial for scalable enterprise applications, as you can improve performance by adding more nodes to the cluster. Clustering is crucial for highly available enterprise applications, as it is the clustering infrastructure that supports the redundancy needed for high availability.
The JBoss Enterprise Application Platform comes with clustering support out of the box, as part of the all configuration. The all configuration includes support for the following:
  • A scalable, fault-tolerant JNDI implementation (HA-JNDI).
  • Web tier clustering, including:
    • High availability for web session state via state replication.
    • Ability to integrate with hardware and software load balancers, including special integration with mod_jk and other JK-based software load balancers.
    • Single Sign-on support across a cluster.
  • EJB session bean clustering, for both stateful and stateless beans, and for both EJB3 and EJB2.
  • A distributed cache for JPA/Hibernate entities.
  • A framework for keeping local EJB2 entity caches consistent across a cluster by invalidating cache entries across the cluster when a bean is changed on any node.
  • Distributed JMS queues and topics via JBoss Messaging.
  • HA Singleton support, which allows a service or application to be deployed on multiple nodes in the cluster while remaining active on only one (but at least one) node at a time.
  • Keeping deployed content in sync on all nodes in the cluster via the Farm service.
In this Clustering Guide we aim to provide you with an in-depth understanding of how to use JBoss Enterprise Application Platform's clustering features. In this first part of the guide, the goal is to provide some basic "Quick Start" steps to encourage you to start experimenting with JBoss Enterprise Application Platform Clustering, and then to provide some background information that will allow you to understand how JBoss Enterprise Application Platform Clustering works. The next part of the guide then explains in detail how to use these features to cluster your JEE services. Finally, we provide some more details about advanced configuration of JGroups and JBoss Cache, the core technologies that underlie JBoss Enterprise Application Platform Clustering.

16.1. Quick Start Guide

The goal of this section is to give you the minimum information needed to let you get started experimenting with JBoss Enterprise Application Platform Clustering. Most of the areas touched on in this section are covered in much greater detail later in this guide.

16.1.1. Initial Preparation

Preparing a set of servers to act as a JBoss Enterprise Application Platform cluster involves a few simple steps:
  • Install JBoss Enterprise Application Platform on all your servers. In its simplest form, this is just a matter of unzipping the JBoss download onto the filesystem on each server.
    If you want to run multiple JBoss Enterprise Application Platform instances on a single server, you can either install the full JBoss distribution onto multiple locations on your filesystem, or you can simply make copies of the all configuration. For example, assuming the root of the JBoss distribution was unzipped to /var/jboss, you would:
    $ cd /var/jboss/server
    $ cp -r all node1
    $ cp -r all node2
  • For each node, determine the address to bind sockets to. When you start JBoss, whether clustered or not, you need to tell JBoss on what address its sockets should listen for traffic. (The default is localhost, which is secure but isn't very useful, particularly in a cluster.) So, you need to decide what those addresses will be.
  • Ensure multicast is working. By default JBoss Enterprise Application Platform uses UDP multicast for most intra-cluster communications. Make sure each server's networking configuration supports multicast and that multicast support is enabled for any switches or routers between your servers. If you are planning to run more than one node on a server, make sure the server's routing table includes a multicast route. See the JGroups documentation at http://www.jgroups.org for more on this general area, including information on how to use JGroups' diagnostic tools to confirm that multicast is working; a brief sketch of such a check appears at the end of this section.

    Note

    JBoss Enterprise Application Platform clustering does not require the use of UDP multicast; the Enterprise Application Platform can also be reconfigured to use TCP unicast for intra-cluster communication.
  • Determine a unique integer "ServerPeerID" for each node. This is needed for JBoss Messaging clustering, and can be skipped if you will not be running JBoss Messaging (i.e. you will remove JBM from your server configuration's deploy directory). JBM requires that each node in a cluster has a unique integer id, known as a "ServerPeerID", that should remain consistent across server restarts. A simple 1, 2, 3, ..., x naming scheme is fine. We'll cover how to use these integer ids in the next section.
Beyond the above required steps, the following two optional steps are recommended to help ensure that your cluster is properly isolated from other JBoss Enterprise Application Platform clusters that may be running on your network:
  • Pick a unique name for your cluster. The default name for a JBoss Enterprise Application Platform cluster is "DefaultPartition". Come up with a different name for each cluster in your environment, e.g. "QAPartition" or "BobsDevPartition". The use of "Partition" is not required; it's just a semi-convention. As a small aid to performance try to keep the name short, as it gets included in every message sent around the cluster. We'll cover how to use the name you pick in the next section.
  • Pick a unique multicast address for your cluster. By default JBoss Enterprise Application Platform uses UDP multicast for most intra-cluster communication. Pick a different multicast address for each cluster you run. Generally a good multicast address is of the form 239.255.x.y. See http://www.29west.com/docs/THPM/multicast-address-assignment.html for a good discussion on multicast address assignment. We'll cover how to use the address you pick in the next section.
See Section 25.6.2, “Isolating JGroups Channels” for more on isolating clusters.
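As a quick sanity check of the multicast requirements above, you can confirm that a multicast route exists and that multicast packets actually flow between your machines before starting any servers. The commands below are only a sketch for a typical Linux host: the first checks whether a multicast route is already present, and the second (run as root) adds one if it is missing. The interface name (eth0), the test port (5555), and the location of the JGroups jar are assumptions that may differ on your systems.
$ netstat -rn | grep 224.0.0.0
$ route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
JGroups also ships simple multicast sender and receiver test programs. Start the receiver on one machine and the sender on another; lines typed at the sender should appear on the receiver's console if multicast is working between them.
$ java -cp /var/jboss/server/all/lib/jgroups.jar \
    org.jgroups.tests.McastReceiverTest -mcast_addr 239.255.100.100 -port 5555
$ java -cp /var/jboss/server/all/lib/jgroups.jar \
    org.jgroups.tests.McastSenderTest -mcast_addr 239.255.100.100 -port 5555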

16.1.2. Launching a JBoss Enterprise Application Platform Cluster

The simplest way to start a JBoss server cluster is to start several JBoss instances on the same local network, using the -c all command line option for each instance. Those server instances will detect each other and automatically form a cluster.
Let's look at a few different scenarios for doing this. In each scenario we'll be creating a two node cluster, where the ServerPeerID for the first node is 1 and for the second node is 2. We've decided to call our cluster "DocsPartition" and to use 239.255.100.100 as our multicast address. These scenarios are meant to be illustrative; a two node cluster is used simply because it is the easiest way to present the examples, not because two nodes is the ideal size for a cluster.
  • Scenario 1: Nodes on Separate Machines
    This is the most common production scenario. Assume the machines are named "node1" and "node2", where node1 has an IP address of 192.168.0.101 and node2 has an address of 192.168.0.102. Assume the "ServerPeerID" for node1 is 1 and for node2 it is 2. Assume JBoss is installed in /var/jboss on each machine.
    On node1, to launch JBoss:
    $ cd /var/jboss/bin
    $ ./run.sh -c all -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=1
    On node2, it's the same except for a different -b value and ServerPeerID:
    $ cd /var/jboss/bin
    $ ./run.sh -c all -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.102 -Djboss.messaging.ServerPeerID=2
    The -c switch says to use the all config, which includes clustering support. The -g switch sets the cluster name. The -u switch sets the multicast address that will be used for intra-cluster communication. The -b switch sets the address on which sockets will be bound. The -D switch sets system property jboss.messaging.ServerPeerID, from which JBoss Messaging gets its unique id.
  • Scenario 2: Two Nodes on a Single, Multihomed, Server
    Running multiple nodes on the same machine is a common scenario in a development environment, and is also used in production in combination with Scenario 1. (Running all the nodes in a production cluster on a single machine is generally not recommended, since the machine itself becomes a single point of failure.) In this version of the scenario, the machine is multihomed, i.e. has more than one IP address. This allows the binding of each JBoss instance to a different address, preventing port conflicts when the nodes open sockets.
    Assume the single machine has the 192.168.0.101 and 192.168.0.102 addresses assigned, and that the two JBoss instances use the same addresses and ServerPeerIDs as in Scenario 1. The difference from Scenario 1 is that we need to be sure each Enterprise Application Platform instance has its own work area. So, instead of using the all config, we are going to use the node1 and node2 configs we copied from all in the previous section.
    To launch the first instance, open a console window and:
    $ cd /var/jboss/bin
    $ ./run.sh -c node1 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=1
    For the second instance, it's the same except for different -b and -c values and a different ServerPeerID:
    $ cd /var/jboss/bin
    $ ./run.sh -c node2 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.102 -Djboss.messaging.ServerPeerID=2
  • Scenario 3: Two Nodes on a Single, Non-Multihomed, Server
    This is similar to Scenario 2, but here the machine only has one IP address available. Two processes can't bind sockets to the same address and port, so we'll have to tell JBoss to use different ports for the two instances. This can be done by configuring the ServiceBindingManager service by setting the jboss.service.binding.set system property.
    To launch the first instance, open a console window and:
    $ cd /var/jboss/bin
    $ ./run.sh -c node1 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=1 \
        -Djboss.service.binding.set=ports-default
    For the second instance:
    $ cd /var/jboss/bin
    $ ./run.sh -c node2 -g DocsPartition -u 239.255.100.100 \
        -b 192.168.0.101 -Djboss.messaging.ServerPeerID=2 \
        -Djboss.service.binding.set=ports-01
    This tells the ServiceBindingManager on the first node to use the standard set of ports (e.g. JNDI on 1099). The second node uses the "ports-01" binding set, which by default offsets each port by 100 from the standard port number (e.g. JNDI on 1199). See the conf/bindingservice.beans/META-INF/bindings-jboss-beans.xml file for the full ServiceBindingManager configuration.
    Note that this setup is not advised for production use, due to the increased management complexity that comes with using different ports. But it is a fairly common scenario in development environments where developers want to use clustering but cannot multihome their workstations.

    Note

    Including -Djboss.service.binding.set=ports-default on the command line for node1 isn't technically necessary, since ports-default is the default value. But using a consistent set of command line arguments across all servers is helpful to people less familiar with all the details.
That's it; that's all it takes to get a cluster of JBoss Enterprise Application Platform servers up and running.
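One simple way to confirm that the two nodes actually found each other is to check the server log on either node once both have started; when a node joins, the partition name and the current membership view (listing the addresses of all members) are written to the log. The exact wording varies between versions, so the command below is only a rough filter; the log path shown assumes Scenario 1 (adjust the configuration name for the other scenarios).
$ grep -i "docspartition" /var/jboss/server/all/log/server.log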

16.1.3. Web Application Clustering Quick Start

JBoss Enterprise Application Platform supports clustered web sessions, where a backup copy of each user's HttpSession state is stored on one or more nodes in the cluster. In case the primary node handling the session fails or is shut down, any other node in the cluster can handle subsequent requests for the session by accessing the backup copy. Web tier clustering is discussed in detail in Chapter 22, HTTP Services.
There are two aspects to setting up web tier clustering:
  • Configuring an External Load Balancer. Web applications require an external load balancer to balance HTTP requests across the cluster of JBoss Enterprise Application Platform instances (see Section 17.2.2, “External Load Balancer Architecture” for more on why that is). JBoss Enterprise Application Platform itself doesn't act as an HTTP load balancer. So, you will need to set up a hardware or software load balancer. There are many possible load balancer choices, so how to configure one is really beyond the scope of a Quick Start. But see Section 22.1, “Configuring load balancing using Apache and mod_jk” for details on how to set up the popular mod_jk software load balancer.
  • Configuring Your Web Application for Clustering. This aspect involves telling JBoss you want clustering behavior for a particular web app, and it couldn't be simpler. Just add an empty distributable element to your application's web.xml file:
    <?xml version="1.0"?> 
    <web-app  xmlns="http://java.sun.com/xml/ns/javaee"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
              xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
                                  http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" 
              version="2.5">
              
        <distributable/>
        
    </web-app>
    Simply doing that is enough to get the default JBoss Enterprise Application Platform web session clustering behavior, which is appropriate for most applications. See Section 22.2, “Configuring HTTP session state replication” for more advanced configuration options.
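    For example, one of the options covered there is controlling when and at what granularity session state is replicated, via a replication-config element in the application's WEB-INF/jboss-web.xml. The snippet below is only a sketch showing the default values; if the defaults suit your application you don't need this element at all:
    <?xml version="1.0"?>
    <jboss-web>
        <replication-config>
            <replication-trigger>SET_AND_NON_PRIMITIVE_GET</replication-trigger>
            <replication-granularity>SESSION</replication-granularity>
        </replication-config>
    </jboss-web>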

16.1.4. EJB Session Bean Clustering Quick Start

JBoss Enterprise Application Platform supports clustered EJB session beans, whereby requests for a bean are balanced across the cluster. For stateful beans a backup copy of bean state is maintained on one or more cluster nodes, providing high availability in case the node handling a particular session fails or is shut down. Clustering of both EJB2 and EJB3 beans is supported.
For EJB3 session beans, simply add the org.jboss.ejb3.annotation.Clustered annotation to the bean class for your stateful or stateless bean:
@javax.ejb.Stateless
@org.jboss.ejb3.annotation.Clustered
public class MyBean implements MySessionInt {
   
   public void test() {
      // Do something cool
   }
}
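The same annotation works for a stateful session bean, in which case the bean's conversational state is what gets backed up to other nodes. A sketch (the bean and interface names here are just placeholders):
@javax.ejb.Stateful
@org.jboss.ejb3.annotation.Clustered
public class MyStatefulBean implements MyStatefulInt {

   private int counter;

   public void increment() {
      // This state change is replicated to the backup node(s)
      counter++;
   }
}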
For EJB2 session beans, or for EJB3 beans where you prefer XML configuration over annotations, simply add a clustered element to the bean's section in the JBoss-specific deployment descriptor, jboss.xml:
<jboss>    
    <enterprise-beans>      
        <session>        
            <ejb-name>example.StatelessSession</ejb-name>        
            <jndi-name>example.StatelessSession</jndi-name>        
            <clustered>true</clustered>
        </session>
    </enterprise-beans>
</jboss>
See Chapter 20, Clustered Session EJBs for more advanced configuration options.

16.1.5. Entity Clustering Quick Start

One of the big improvements in the clustering area in JBoss Enterprise Application Platform 5 is the use of the new Hibernate/JBoss Cache integration for second level entity caching that was introduced in Hibernate 3.3. In the JPA/Hibernate context, a second level cache refers to a cache whose contents are retained beyond the scope of a transaction. A second level cache may improve performance by reducing the number of database reads. You should always load test your application with second level caching enabled and disabled to see whether it has a beneficial impact on your particular application.
If you use more than one JBoss Enterprise Application Platform instance to run your JPA/Hibernate application and you use second level caching, you must use a cluster-aware cache. Otherwise a cache on server A will still hold out-of-date data after activity on server B updates some entities.
JBoss Enterprise Application Platform provides a cluster-aware second level cache based on JBoss Cache. To tell JBoss Enterprise Application Platform's standard Hibernate-based JPA provider to enable second level caching with JBoss Cache, configure your persistence.xml as follows:
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
   http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
   version="1.0"> 
   <persistence-unit name="somename" transaction-type="JTA">
      <jta-data-source>java:/SomeDS</jta-data-source>
      <properties>
         <property name="hibernate.cache.use_second_level_cache" value="true"/>
         <property name="hibernate.cache.region.factory_class" 
                   value="org.hibernate.cache.jbc2.JndiMultiplexedJBossCacheRegionFactory"/>
         <property name="hibernate.cache.region.jbc2.cachefactory" value="java:CacheManager"/>
         <!-- Other configuration options ... -->
      </properties>
   </persistence-unit>
</persistence>
That tells Hibernate to use the JBoss Cache-based second level cache, but it doesn't tell it what entities to cache. That can be done by adding the org.hibernate.annotations.Cache annotation to your entity class:
package org.example.entities;
 
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;
 
@Entity
@Cache(usage=CacheConcurrencyStrategy.TRANSACTIONAL)
public class Account implements Serializable {

   @Id
   private Integer id;

   // Other fields and accessors omitted; the @Cache annotation above is
   // what marks this entity for second level caching
}
See Chapter 21, Clustered Entity EJBs for more advanced configuration options and details on how to configure the same thing for a non-JPA Hibernate application.
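As a preview of the non-JPA case, the same cache properties shown in the persistence.xml above can also be set in a native Hibernate application's hibernate.cfg.xml. The snippet below is only a sketch; the rest of the session-factory configuration (datasource, dialect, mappings and so on) is omitted:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
   "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
   "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
   <session-factory>
      <!-- Datasource, dialect and mapping configuration omitted -->
      <property name="hibernate.cache.use_second_level_cache">true</property>
      <property name="hibernate.cache.region.factory_class">org.hibernate.cache.jbc2.JndiMultiplexedJBossCacheRegionFactory</property>
      <property name="hibernate.cache.region.jbc2.cachefactory">java:CacheManager</property>
   </session-factory>
</hibernate-configuration>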

Note

Clustering can add significant overhead to a JPA/Hibernate second level cache, so do not assume that because second level caching benefits a non-clustered application it will also benefit a clustered one. Even if clustered second level caching is beneficial overall, caching of more frequently modified entity types may pay off in a non-clustered scenario but not in a clustered one. Always load test your application.