Chapter 38. Colocated and Dedicated Symmetrical Cluster Configuration

Read this chapter to configure HornetQ live backup groups within JBoss Enterprise Application Platform. HornetQ supports only a shared store for live backup nodes, so this chapter covers configuring live backup groups with a shared store.
Live Backup Group

An instance of HornetQ running on JBoss Enterprise Application Platform that is configured to fail over to a specified group of HornetQ instances.

HornetQ live backup groups are configured using one of the following topologies:
Colocated
Topology containing one live server and at least one backup server running concurrently. Each backup node belongs to a live node on another JBoss Enterprise Application Platform instance.
Dedicated
Topology containing one live server and at least one backup server. Only one of these servers operates at any given time.

38.1. Colocated Symmetrical Live and Backup Cluster

Note

JBoss Enterprise Application Platform ships with an example configuration for this topology, located in $JBOSS_HOME/extras/hornetq/resources/examples/symmetric-cluster-with-backups-colocated. The readme in this directory provides basic configuration required to run the example.
The colocated symmetrical topology contains an operational live node, and one or more backup nodes. Each backup node belongs to a live node on another JBoss Enterprise Application Platform instance.
In a simple cluster of two JBoss Enterprise Application Platform instances, each instance would have a live server and one backup server, as described in Example 38.1, “Two Instance Configuration”.

Example 38.1. Two Instance Configuration

The continuous lines in Example 38.1, “Two Instance Configuration” show the state of the cluster before fail over occurs. The dotted lines show the state of the cluster after fail over has occurred.
Before fail over occurs, the two live servers are connected, forming a cluster. Each live server is connected to its local applications through J2EE Connector Architecture (JCA). Remote clients are connected to their respective live servers.
When fail over occurs, the HornetQ backup server connects to the still-available live server (which happens to be in the same virtual machine) and takes over as the live server in the cluster. Any remote clients also fail over.
Depending on which consumers, producers, and Message Driven Beans (MDBs) are available on each node, messages are distributed between the nodes to satisfy Java Message Service (JMS) requirements. For example, if a producer is sending messages to a queue on a backup server that has no consumers, the messages are distributed to a live node that advertises the required consumers.
Example 38.2, “Three Instance Configuration” is slightly more complex. It extends the configuration in Example 38.1, “Two Instance Configuration” by adding a third live and backup HornetQ instance.

Note

The live cluster connections between each server have been removed to make the diagram clearer. All live servers form a cluster in the example.

Example 38.2. Three Instance Configuration

In this example, the cluster contains three separate JBoss Enterprise Application Platform servers, with a live and a backup HornetQ instance in each server.
The three node topology enables you to configure two backup instances for each live server, shared across any of the three servers in the live backup group in case of a fail over event.
While it is possible to configure multiple backups for a live server, one backup per live instance is considered sufficient for most deployments.

38.1.1. Colocated Live Server

Important

If nodes can not discover each other, verify that you have configured the firewall and UDP ports correctly. The network configuration should allow nodes on a cluster to communicate with each other.
Follow these procedures to configure a colocated live server instance.

Procedure 38.1. Create Live Server Profile

You must create a copy of the production profile to customize the live server configuration.

Important

Always copy an included profile rather than editing it directly for your custom profile. If you make a critical mistake during configuration, you can fall back to the base configuration at any time.
  1. Navigate to $JBOSS_HOME/server/
  2. Copy the production profile, and rename it to HornetQ_Colocated
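The two steps above amount to a single recursive copy. The sketch below runs the copy against a mock directory layout so it is safe to try anywhere; against a real installation you would set JBOSS_HOME to your actual install path instead.

```shell
# Sketch of Procedure 38.1 against a mock layout; point JBOSS_HOME at
# your real installation instead of the scratch directory created here.
JBOSS_HOME=$(mktemp -d)                      # stand-in for the real install
mkdir -p "$JBOSS_HOME/server/production"     # mock production profile
cd "$JBOSS_HOME/server"
cp -a production HornetQ_Colocated           # copy and rename in one step
ls -d "$JBOSS_HOME/server/HornetQ_Colocated"
```

The original production profile is left untouched, so you can fall back to it at any time.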

Procedure 38.2. Configure Shared Store and Journaling

Follow this procedure to specify that HornetQ must use a shared store for fail over, and to define the location of the journal files that each HornetQ instance in the live backup group uses.
  1. Navigate to $JBOSS_HOME/server/HornetQ_Colocated/deploy/hornetq/
  2. Open hornetq-configuration.xml
  3. Add the <shared-store> element as a child of the <configuration> element.
    <shared-store>true</shared-store>
  4. Ensure the bindings, journal, and large messages path locations are set to a location the live backup group can access.
    You can set absolute paths as the example describes, or use the JBoss parameters that exist in the configuration file.
    If you choose the parameter option and you do not use the default paths that these parameters resolve to, you must specify the paths where your bindings, journal, and large messages reside each time you start the server.
    <large-messages-directory>/media/shared/data/serverA/large-messages</large-messages-directory>
    
    <bindings-directory>/media/shared/data/serverA/bindings</bindings-directory>
    
    <journal-directory>/media/shared/data/serverA/journal</journal-directory>
    
    <paging-directory>/media/shared/data/serverA/paging</paging-directory>

    Note

    Ensure you specify paths that are accessible to the live backup groups on your network.

    Note

    Change serverA to a name suitable for your server instance.
By default, JMS clients do not fail over if a live server is shut down gracefully. Depending on the connection factory settings, a client either fails or tries to reconnect to the live server.
If you need clients to fail over on a normal server shutdown, you must alter the hornetq-configuration.xml file according to Procedure 38.3, “Configure JMS Client Graceful Shutdown”.

Procedure 38.3. Configure JMS Client Graceful Shutdown

Follow this procedure to configure how JMS clients re-establish a connection if a server is shut down gracefully.
  1. Navigate to $JBOSS_HOME/server/HornetQ_Colocated/deploy/hornetq/
  2. Open hornetq-configuration.xml
  3. Specify the <failover-on-shutdown> element in the area near the journal directory configuration from Procedure 38.2, “Configure Shared Store and Journaling”.
    <failover-on-shutdown>true</failover-on-shutdown>

    Note

    You are not constrained as to where you place the element in the hornetq-configuration.xml file; however, it is easier to find the less detailed settings if they are all located at the top of the file.
  4. Save and close the file.

Note

If you have set <failover-on-shutdown> to false (the default setting) but still want fail over to occur, terminate the server process directly, or call forceFailover on the core server object through the JMX Console or the Admin Console.
The connection factories used by the client must be configured as highly available. This is done by configuring connection factory attributes in $JBOSS_HOME/server/<PROFILE>/deploy/hornetq/hornetq-jms.xml.

Procedure 38.4. Configure HA Connection Factories

  1. Navigate to $JBOSS_HOME/server/HornetQ_Colocated/deploy/hornetq/
  2. Open hornetq-jms.xml.
  3. Add the following attributes and values as specified below.
    <ha>true</ha>
    Specifies that the client must support high availability; this must always be true for fail over to occur.
    <retry-interval>1000</retry-interval>
    Specifies how long the client must wait (in milliseconds) before it attempts to reconnect to the server.
    <retry-interval-multiplier>1.0</retry-interval-multiplier>
    Specifies the multiplier applied to <retry-interval> for each subsequent reconnection pause. Setting the value to 1.0 keeps the retry interval the same for each client reconnection request.
    <reconnect-attempts>-1</reconnect-attempts>
    Specifies how many reconnection attempts a client makes before failing. A value of -1 means unlimited reconnection attempts.
    <?xml version='1.0' encoding='UTF-8'?>
    <configuration xmlns="urn:hornetq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
    
       <connection-factory name="NettyConnectionFactory">
          <xa>true</xa>
          <connectors>
             <connector-ref connector-name="netty"/>
          </connectors>
          <entries>
             <entry name="/ConnectionFactory"/>
             <entry name="/XAConnectionFactory"/>
          </entries>
          <ha>true</ha>
    
          <!-- Pause 1 second between connect attempts -->
    
          <retry-interval>1000</retry-interval>
    
          <!-- Multiply subsequent reconnect pauses by this multiplier. This can be used 
           to implement an exponential back-off. For our purposes we just set to 1.0 so 
           each reconnect pause is the same length -->
    
          <retry-interval-multiplier>1.0</retry-interval-multiplier>
    
          <!-- Try reconnecting an unlimited number of times (-1 means unlimited) -->
    
          <reconnect-attempts>-1</reconnect-attempts>
       </connection-factory>
    
    </configuration>
    
  4. Define new queues in both master and backup nodes by adding one of the following configuration blocks to the specified file.
    For production/deploy/hornetq/hornetq-jms.xml
    <queue name="testQueue">
       <entry name="/queue/testQueue"/>
       <durable>true</durable>
    </queue>
    For production/deploy/customName-hornetq-jms.xml

    Note

    Ensure the file is well-formed XML by verifying that the XML namespace is present and correct, as specified below.
    <configuration xmlns="urn:hornetq"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
    
       <queue name="testQueue">
          <entry name="/queue/testQueue"/>
          <durable>true</durable>
       </queue>
    
    </configuration>
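To see how the reconnection attributes interact, the pause before reconnection attempt n works out to retry-interval multiplied by the multiplier raised to the power of (n - 1). The sketch below is illustrative only; the 2.0 multiplier shows a hypothetical exponential back-off, not the 1.0 value configured above.

```shell
# Illustration of reconnect pauses: pause(n) = retry-interval * multiplier^(n-1).
retry_pause() {   # args: retry-interval-ms  multiplier  attempt-number
  awk -v i="$1" -v m="$2" -v n="$3" 'BEGIN { printf "%d\n", i * m ^ (n - 1) }'
}

retry_pause 1000 1.0 3    # multiplier 1.0 (as configured above): every pause is 1000 ms
retry_pause 1000 2.0 3    # hypothetical multiplier 2.0: the third pause grows to 4000 ms
```

With <reconnect-attempts> set to -1, these pauses repeat indefinitely until the client reconnects.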

38.1.2. Colocated Backup Server

Read this section and the contained procedures to configure a colocated backup HornetQ server.
The backup server runs on the same JBoss Enterprise Application Platform instance as the live server configured in Section 38.1.1, “Colocated Live Server”. The important point is that the backup server is the fail over point for a live server running on a different JBoss Enterprise Application Platform instance.
The backup server instance does not service any Java EE components on the JBoss Enterprise Application Platform instance where it is colocated. When fail over occurs, any existing messages are redistributed within the live backup group to service any remote clients that were connected to the live server when it became unavailable.
A backup HornetQ instance only needs a hornetq-jboss-beans.xml and a hornetq-configuration.xml configuration file. Any JMS components are created from the shared journal when the backup server becomes live (configured in Procedure 38.2, “Configure Shared Store and Journaling”).

Procedure 38.5. Create Backup Server

Important

If nodes can not discover each other, verify that you have configured the firewall and UDP ports correctly. The network configuration should allow nodes on a cluster to communicate with each other.
You must set up the live server first as specified in Section 38.1.1, “Colocated Live Server”. After you have configured the live server, continue with this procedure.
  1. Navigate to $JBOSS_HOME/server/HornetQ_Colocated/deploy/
  2. Create a new directory called hornetq-backup1. Move into that directory.
  3. Open a text editor and create a new file called hornetq-jboss-beans.xml in the hornetq-backup1 directory.
  4. Copy the following configuration into hornetq-jboss-beans.xml.
    <?xml version="1.0" encoding="UTF-8"?>
       <deployment xmlns="urn:jboss:bean-deployer:2.0">
          <!-- The core configuration -->
          <bean name="BackupConfiguration" class="org.hornetq.core.config.impl.FileConfiguration">
             <property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup1/hornetq-configuration.xml</property>
          </bean>
          <!-- The core server -->
          <bean name="BackupHornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
             <constructor>
                <parameter>
                   <inject bean="BackupConfiguration"/>
                </parameter>
                <parameter>
                   <inject bean="MBeanServer"/>
                </parameter>
                <parameter>
                   <inject bean="HornetQSecurityManager"/>
                </parameter>
             </constructor>
             <start ignored="true"/>
             <stop ignored="true"/>
          </bean>
    
          <!-- The JMS server -->
          <bean name="BackupJMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
             <constructor>
                <parameter>
                   <inject bean="BackupHornetQServer"/>
                </parameter>
             </constructor>
          </bean>
    
    </deployment>
    
  5. Save and close the file.
The hornetq-jboss-beans.xml file in Procedure 38.5, “Create Backup Server” contains configuration worth exploring in more detail. The BackupConfiguration bean is configured to pick up the configuration in hornetq-configuration.xml. This file is created in the next procedure: Procedure 38.6, “Create Backup Server Configuration File”.
The HornetQ Server and JMS server beans are added after the BackupConfiguration.

Note

The names of the backup instance beans have been changed from those in the live server configuration. This is to prevent problems with beans sharing the same name. If you add more backup servers, you must rename these instances uniquely as well (for example: backup1, backup2).
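For example, a second backup deployed under a hypothetical hornetq-backup2 directory might rename its beans as follows. This is a sketch only; the hornetq-backup2 directory and the Backup2 name prefixes are illustrative, not shipped configuration.

```xml
<bean name="Backup2Configuration" class="org.hornetq.core.config.impl.FileConfiguration">
   <property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup2/hornetq-configuration.xml</property>
</bean>

<bean name="Backup2HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
   <!-- constructor as in hornetq-jboss-beans.xml, injecting Backup2Configuration -->
</bean>
```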
The next task involves creating the backup server configuration according to Procedure 38.6, “Create Backup Server Configuration File”.

Procedure 38.6. Create Backup Server Configuration File

  1. Navigate to $JBOSS_HOME/server/HornetQ_Colocated/deploy/hornetq-backup1
  2. Open a text editor and create a new file called hornetq-configuration.xml in the hornetq-backup1 directory.
  3. Copy the following configuration into hornetq-configuration.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration xmlns="urn:hornetq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
    
       <jmx-domain>org.hornetq.backup1</jmx-domain>
    
       <clustered>true</clustered>
    
       <backup>true</backup>
    
       <shared-store>true</shared-store>
    
       <allow-failback>true</allow-failback>
    
       <file-deployment-enabled>true</file-deployment-enabled>
    
       <log-delegate-factory-class-name>
        org.hornetq.integration.logging.Log4jLogDelegateFactory
       </log-delegate-factory-class-name>
    
       <bindings-directory>
        /media/shared/data/hornetq-backup/bindings
       </bindings-directory>
    
       <journal-directory>/media/shared/data/hornetq-backup/journal</journal-directory>
    
       <journal-min-files>10</journal-min-files>
    
       <large-messages-directory>
        /media/shared/data/hornetq-backup/largemessages
       </large-messages-directory>
    
       <paging-directory>/media/shared/data/hornetq-backup/paging</paging-directory>
    
       <connectors>
          <connector name="netty-connector">
             <factory-class>
              org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
             </factory-class>
             <param key="host" value="${jboss.bind.address:localhost}"/>
             <param key="port" value="${hornetq.remoting.netty.backup.port:5446}"/>
          </connector>
    
          <connector name="in-vm">
             <factory-class>
              org.hornetq.core.remoting.impl.invm.InVMConnectorFactory
             </factory-class>
             <param key="server-id" value="${hornetq.server-id:0}"/>
          </connector>
       </connectors>
    
       <acceptors>
          <acceptor name="netty">
             <factory-class>
              org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory
             </factory-class>
             <param key="host" value="${jboss.bind.address:localhost}"/>
             <param key="port" value="${hornetq.remoting.netty.backup.port:5446}"/>
          </acceptor>
       </acceptors>
    
       <broadcast-groups>
          <broadcast-group name="bg-group1">
             <group-address>231.7.7.7</group-address>
             <group-port>9876</group-port>
             <broadcast-period>1000</broadcast-period>
             <connector-ref>netty-connector</connector-ref>
          </broadcast-group>
       </broadcast-groups>
    
       <discovery-groups>
          <discovery-group name="dg-group1">
             <group-address>231.7.7.7</group-address>
             <group-port>9876</group-port>
             <refresh-timeout>60000</refresh-timeout>
          </discovery-group>
       </discovery-groups>
    
       <cluster-connections>
          <cluster-connection name="my-cluster">
             <address>jms</address>
             <connector-ref>netty-connector</connector-ref>
             <discovery-group-ref discovery-group-name="dg-group1"/>
          </cluster-connection>
       </cluster-connections>
    
       <security-settings>
          <security-setting match="#">
             <permission type="createNonDurableQueue" roles="guest"/>
             <permission type="deleteNonDurableQueue" roles="guest"/>
             <permission type="consume" roles="guest"/>
             <permission type="send" roles="guest"/>
          </security-setting>
       </security-settings>
    
       <address-settings>
       <!--default for catch all-->
          <address-setting match="#">
             <dead-letter-address>jms.queue.DLQ</dead-letter-address>
             <expiry-address>jms.queue.ExpiryQueue</expiry-address>
             <redelivery-delay>0</redelivery-delay>
             <max-size-bytes>10485760</max-size-bytes>
             <message-counter-history-day-limit>10</message-counter-history-day-limit>
             <address-full-policy>BLOCK</address-full-policy>
          </address-setting>
       </address-settings>
    
    </configuration>
    
  4. Save and close the file.
The hornetq-configuration.xml file in Procedure 38.6, “Create Backup Server Configuration File” contains specific configuration which is discussed in hornetq-configuration.xml Configuration Points.

hornetq-configuration.xml Configuration Points

<jmx-domain>org.hornetq.backup1</jmx-domain>
Specifies the JMX domain used to register this server's objects (in this case the backup server's) in the Java Management Extensions (JMX) service. The default value, org.hornetq, is already in use by other parts of HornetQ, so you must change the name to a unique, system-wide name to avoid naming conflicts with the live server.
<clustered>true</clustered>
Specifies whether the server should join a cluster. This configuration is the same as the live server.
<backup>true</backup>
Specifies whether the server starts as a backup server rather than a live server. Specify true to start the server as a backup.
<shared-store>true</shared-store>
Specifies whether the server should reference a shared store for journaling. This configuration is the same as the live server.
<allow-failback>true</allow-failback>
Specifies whether the backup server automatically stops and returns to standby mode when the live server becomes available again. If set to false, the server must be stopped manually to trigger a return to standby mode.
<bindings-directory>, <journal-directory>, <large-messages-directory>, <paging-directory>
The paths in these elements must all resolve to the same paths the live server references. This ensures the backup server uses the same journaling files as the live server.
<connectors>
Two connectors are defined, allowing clients to connect to the backup server once it becomes live: one connector using the Netty connector factory allows client and server connections across different virtual machines, and one in-vm connector allows the server to accept connections within the same virtual machine.
<acceptors>
An acceptor based on the NettyAcceptorFactory is defined so the server can accept connections from clients in other virtual machines.
<broadcast-groups>, <discovery-groups>, <cluster-connections>, <security-settings>, <address-settings>
The settings in these configuration blocks are standard settings.

Task: Create Configuration for Second Server Instance

Complete this task to configure the second server instance to cluster with the first server.
  1. Navigate to $JBOSS_HOME/server/.
  2. Copy the HornetQ_Colocated directory, and rename it to HornetQ_Colocated_Second.
  3. Rename $JBOSS_HOME/server/HornetQ_Colocated_Second/deploy/hornetq-backup1/ to $JBOSS_HOME/server/HornetQ_Colocated_Second/deploy/hornetq-backup-serverA/
  4. Open $JBOSS_HOME/server/HornetQ_Colocated_Second/deploy/hornetq/hornetq-configuration.xml
  5. For all parameters with data directories specified in hornetq-configuration.xml, change the data paths to /media/shared/data/hornetq-backup.
    For example change:
    <bindings-directory>/media/shared/data/serverA/bindings</bindings-directory>
    to <bindings-directory>/media/shared/data/hornetq-backup/bindings</bindings-directory>
  6. Open $JBOSS_HOME/server/HornetQ_Colocated_Second/deploy/hornetq-backup-serverA/hornetq-configuration.xml
  7. For all parameters with data directories specified in hornetq-configuration.xml, change the data paths to /media/shared/data/serverA.
    For example change:
    <bindings-directory>/media/shared/data/hornetq-backup/bindings</bindings-directory>
    to <bindings-directory>/media/shared/data/serverA/bindings</bindings-directory>
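The path swaps in steps 5 and 7 can be scripted. The sandboxed sketch below performs the step 5 substitution on a temporary sample file; against the real hornetq-configuration.xml files you would run the same sed command after backing them up.

```shell
# Sandbox sketch of the step 5 path swap; the temporary file stands in for
# HornetQ_Colocated_Second/deploy/hornetq/hornetq-configuration.xml.
conf=$(mktemp)
echo '<bindings-directory>/media/shared/data/serverA/bindings</bindings-directory>' > "$conf"

# Swap every serverA data path for the hornetq-backup data path.
sed -i 's|/media/shared/data/serverA|/media/shared/data/hornetq-backup|g' "$conf"
cat "$conf"
```

Step 7 is the same substitution in the opposite direction, applied to the backup server's configuration file.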