38.2. Dedicated Symmetrical Live and Backup Clusters

Note

JBoss Enterprise Application Platform ships with an example configuration for this topology, located in $JBOSS_HOME/extras/hornetq/resources/examples/cluster-with-dedicated-backup.
In a dedicated symmetrical topology, the backup server resides on a separate JBoss Enterprise Application Platform instance, rather than being colocated with another live server.
This means the JBoss Enterprise Application Platform instance is passive and is not used until the backup becomes live. A passive instance is therefore only useful for pure JMS applications.
The following diagram shows a possible configuration for this:

Example 38.3. Single Instance, Pure JMS, Dedicated Symmetrical Configuration

When the HornetQ live server on the EAP1 node stops responding, the HornetQ backup instance on the EAP1(B) node activates and becomes the live server. The remote JMS client routes all messages destined for the HornetQ live node to the HornetQ backup node.
Example 38.3, “Single Instance, Pure JMS, Dedicated Symmetrical Configuration” describes how a dedicated symmetrical topology works with applications that are pure JMS and have no Java EE components (for example, Message Driven Beans).
For dedicated symmetrical clusters that contain Java EE components, there are two approaches you can take:
  1. Dedicated JCA Server, as described in Example 38.4, “Dedicated JCA Server”
  2. Remote JCA Server, as described in Example 38.5, “Remote JCA Server”

38.2.1. Dedicated JCA Live Server

Example 38.4. Dedicated JCA Server

The EAP1(B) and EAP2(B) instances run only backup HornetQ instances, so it does not make sense to host applications on them. Applications are instead hosted on EAP1 and EAP2, close to the live HornetQ instances.
When a live HornetQ server on EAP1 fails, traffic fails over to the backup HornetQ server on EAP1(B). The remote JMS client reroutes messages sent from, or destined for, Java EE components on the live server to the backup server using the HornetQ cluster connections.
The live server configuration is essentially the same as in Section 38.1.1, “Colocated Live Server”. The only difference is that no backup runs in the same JBoss Enterprise Application Platform node as the live HornetQ instance. This topology also requires multiple JBoss Enterprise Application Platform instances.

Procedure 38.7. Create Dedicated Live Server Profile

Important

If nodes cannot discover each other, verify that you have configured the firewall and UDP ports correctly. The network configuration should allow nodes in a cluster to communicate with each other.
You must create a copy of the production profile to customize the live server configuration.

Important

Always copy an included profile rather than editing it directly for your custom profile. If you make a critical mistake during configuration, you can fall back to the base configuration at any time.
  1. Navigate to $JBOSS_HOME/server/
  2. Copy the production profile, and rename the copy to HornetQ_Dedicated

Procedure 38.8. Configure Shared Store and Journaling

Follow this procedure to specify that HornetQ must use a shared store for failover, and to define the location of the journal files that each HornetQ instance in the live backup group uses.
  1. Navigate to $JBOSS_HOME/server/HornetQ_Dedicated/deploy/hornetq/
  2. Open hornetq-configuration.xml
  3. Add the <shared-store> element as a child of the <configuration> element.
    <shared-store>true</shared-store>
  4. Ensure the bindings, journal, and large messages path locations are set to a location the live backup group can access.
    You can set absolute paths, as the example below shows, or use the JBoss parameters that exist in the configuration file.
    If you choose the parameter option and do not use the default paths that these parameters resolve to, you must specify the paths where your bindings, journal, and large messages reside each time you start the server. A parameter-based sketch follows the note at the end of this procedure.
    <large-messages-directory>/media/shared/data/large-messages</large-messages-directory>
    
    <bindings-directory>/media/shared/data/bindings</bindings-directory>
    
    <journal-directory>/media/shared/data/journal</journal-directory>
    
    <paging-directory>/media/shared/data/paging</paging-directory>

    Note

    Ensure you specify paths that are accessible to the live backup groups on your network.
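
    If you choose the parameter option, the same directories can instead reference the jboss.server.data.dir property. The following is a minimal sketch of that form, modeled on the default hornetq-configuration.xml layout; it assumes each server in the live backup group is started with jboss.server.data.dir resolving to the shared location (for example, /media/shared/data), otherwise the paths resolve to the local data directory.
    <large-messages-directory>${jboss.server.data.dir}/hornetq/large-messages</large-messages-directory>

    <bindings-directory>${jboss.server.data.dir}/hornetq/bindings</bindings-directory>

    <journal-directory>${jboss.server.data.dir}/hornetq/journal</journal-directory>

    <paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>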

Procedure 38.9. Configure JMS Client Graceful Shutdown

Follow this procedure to configure whether JMS clients fail over to the backup server when a live server is shut down gracefully.
  1. Navigate to $JBOSS_HOME/server/HornetQ_Dedicated/deploy/hornetq/
  2. Open hornetq-configuration.xml
  3. Specify the <failover-on-shutdown> element near the journal directory configuration added in Procedure 38.8, “Configure Shared Store and Journaling”.
    <failover-on-shutdown>true</failover-on-shutdown>

    Note

    You are not constrained where you place the element in the hornetq-configuration.xml file; however, it is easier to find these high-level settings if they are all located at the top of the file, as the sketch at the end of this procedure shows.
  4. Save and close the file.
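
The following sketch shows how the top of hornetq-configuration.xml might look once this and the previous procedure are applied. Only the elements set in Procedure 38.8 and Procedure 38.9 are shown; the remaining elements of the file (connectors, acceptors, and so on) are unchanged and omitted here.
    <configuration xmlns="urn:hornetq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

       <!-- Use a shared store so the backup can take over the live server's journal -->
       <shared-store>true</shared-store>

       <!-- Fail clients over to the backup on graceful shutdown -->
       <failover-on-shutdown>true</failover-on-shutdown>

       <bindings-directory>/media/shared/data/bindings</bindings-directory>
       <journal-directory>/media/shared/data/journal</journal-directory>
       <large-messages-directory>/media/shared/data/large-messages</large-messages-directory>
       <paging-directory>/media/shared/data/paging</paging-directory>

       <!-- connectors, acceptors, and other elements omitted -->

    </configuration>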

Procedure 38.10. Configure HA Connection Factories

  1. Navigate to $JBOSS_HOME/server/HornetQ_Dedicated/deploy/hornetq/
  2. Open hornetq-jms.xml.
  3. Add the following elements and values, as specified below.
    <ha>true</ha>
    Specifies that the client must support high availability. This must always be true for failover to occur.
    <retry-interval>1000</retry-interval>
    Specifies how long the client waits (in milliseconds) between attempts to reconnect to the server.
    <retry-interval-multiplier>1.0</retry-interval-multiplier>
    Specifies the multiplier applied to <retry-interval> for each subsequent reconnection pause. Setting the value to 1.0 keeps the retry interval the same for each reconnection attempt.
    <reconnect-attempts>-1</reconnect-attempts>
    Specifies how many reconnection attempts a client makes before failing. A value of -1 means unlimited reconnection attempts.
    <?xml version='1.0' encoding='UTF-8'?>
    <configuration xmlns="urn:hornetq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
    
       <connection-factory name="NettyConnectionFactory">
          <xa>true</xa>
          <connectors>
             <connector-ref connector-name="netty"/>
          </connectors>
          <entries>
             <entry name="/ConnectionFactory"/>
             <entry name="/XAConnectionFactory"/>
          </entries>
          <ha>true</ha>
    
          <!-- Pause 1 second between connect attempts -->
    
          <retry-interval>1000</retry-interval>
    
          <!-- Multiply subsequent reconnect pauses by this multiplier. This can be used 
           to implement an exponential back-off. For our purposes we just set to 1.0 so 
           each reconnect pause is the same length -->
    
          <retry-interval-multiplier>1.0</retry-interval-multiplier>
    
          <!-- Try reconnecting an unlimited number of times (-1 means unlimited) -->
    
          <reconnect-attempts>-1</reconnect-attempts>
       </connection-factory>
    
    </configuration>
    
  4. Define new queues on both the live and backup nodes by adding one of the following configuration blocks to the appropriate file.
    For HornetQ_Dedicated/deploy/hornetq/hornetq-jms.xml
    <queue name="testQueue">
       <entry name="/queue/testQueue"/>
       <durable>true</durable>
    </queue>
    For HornetQ_Dedicated/deploy/customName-hornetq-jms.xml

    Note

    Ensure the file is well-formed from an XML validation perspective by ensuring the XML namespace is present and correct, as specified below.
    <configuration xmlns="urn:hornetq"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
    
       <queue name="testQueue">
          <entry name="/queue/testQueue"/>
          <durable>true</durable>
       </queue>
    
    </configuration>