Shared Nothing Master/Slave

Overview

A shared nothing master/slave group replicates data between a pair of brokers using a dedicated connection. The advantage of this approach is that it does not require a shared database or a shared file system and thus does not have a single point of failure.

The disadvantages of this approach are:

  • Reintroducing a failed master requires manually synchronizing the persistence stores and restarting the entire cluster.

  • Persistent messaging suffers additional latency because producers must wait for messages to be replicated to the slave and stored in the slave's persistent store.

Initial state

Figure 1 shows the initial state of a shared nothing master/slave group.

Figure 1. Shared Nothing Master/Slave Group Initial State

Master/Slave group with clients connected to the master

In this topology, the master broker does not require any special configuration. Unless specifically configured to wait for a slave, the master broker functions like an ordinary broker until a slave broker connects to it. Once a slave connects to the master broker, the master broker forwards all events to the slave. It will not respond to a client request until the associated event has been successfully forwarded.

The slave broker is configured with a master connector, which connects to the master broker in order to duplicate the data stored in the master. While the connection is active, the slave consumes all events from the master, including messages, acknowledgments, and transactional states. The slave does not start any transport connectors or network connectors. Its sole purpose is to duplicate the state of the master.

State after failure of the master

When the master fails, the slave can be configured to behave in one of two ways:

  • take over—the slave starts up all of its transport connectors and network connectors and takes the place of the master broker. Clients that are configured to fail over experience no down time.

  • close down—the slave remains unreachable and client connections are shut down until the master is reintroduced. The slave's data store is used as a back-up for the master in the case of a catastrophic hardware failure.

Figure 2 shows the state of the master/slave group after the master broker has failed and the slave broker has taken over from the master.

Figure 2. Shared Nothing Master/Slave Group after Master Failure

Master/slave group where the slave broker has replaced the failed master broker

Configuring the master

In a shared nothing master/slave group, the master broker does not require any special configuration. When a slave broker opens a master connector to a broker, that broker is automatically turned into a master.

There are optional attributes you can set on the master's broker element that control how the master behaves in relation to a slave broker. Table 4 describes these attributes.

Table 4. Configuration Options for a Master in a Shared Nothing Master/Slave Group

Attribute               Default  Description
waitForSlave            false    Specifies if the master will wait for a slave to connect before it will accept client connections.
shutdownOnSlaveFailure  false    Specifies if the master will stop processing client requests if it loses the connection to the slave broker.

Example 16 shows a sample configuration for a master broker in a shared nothing master/slave group.

Example 16. Master Configuration for Shared Nothing Master/Slave Group

<broker brokerName="master"
        waitForSlave="true"
        shutdownOnSlaveFailure="false"
        xmlns="http://activemq.apache.org/schema/core">
  ...
  <transportConnectors>
  	  <transportConnector uri="tcp://masterhost:61616"/>
  </transportConnectors>
  ...
</broker>

Important

You should not configure a network connector between the master and its slave. If you configure a network connector, you may encounter race conditions when the master broker is under heavy load.

Configuring the slave

When using shared nothing master/slave, there are two approaches to configuring the slave:

  • Configure the master connector as a broker service.

    In this approach you configure the master connector by adding a masterConnector child to the broker's services element.

    The advantage of this approach is that it allows you to provide user credentials to a secure master broker. The disadvantage is that you cannot configure the slave to shut down on master failure. It will always take over the master role.

    The masterConnector element has three attributes, described in Table 5, that are used to configure the connector.

    Table 5. Attributes for Configuring the Master Connector Service

    Attribute  Description
    remoteURI  Specifies the master's transport connector that will be used by the master connector.
    userName   Specifies the user name used to connect to the master.
    password   Specifies the password used to connect to the master.

    Example 17 shows how to configure the slave using the masterConnector element.

    Example 17. Configuring the Master Connector as a Service

    <broker brokerName="slave"
            xmlns="http://activemq.apache.org/schema/core">
      ...
      <services>
        <masterConnector
          remoteURI="tcp://localhost:62001"
          userName="James"
          password="Cheese" />
      </services>
    
      <transportConnectors>
        <transportConnector uri="tcp://slavehost:61616"/>
      </transportConnectors>
      ...
    </broker>

  • Configure the master connector directly on the broker.

    In this approach you configure the master connector by setting the attributes described in Table 6 directly on the broker element.

    Table 6. Attributes for Configuring a Master Connector on the Broker

    Attribute                Description
    masterConnectorURI       Specifies the master's transport connector that will be used by the master connector.
    shutdownOnMasterFailure  Specifies if the slave shuts down when it loses the connection to the master.

    The advantage of this approach is that you can configure the slave to simply serve as a back-up for the master broker and shut down when it loses its connection to the master. The disadvantage is that you cannot connect to masters that require authentication.

    Example 18 shows how to configure the master connector by setting attributes on the broker element.

    Example 18. Configuring the Master Connector Directly

    <broker brokerName="slave"
            masterConnectorURI="tcp://masterhost:62001"
            shutdownOnMasterFailure="false"
            xmlns="http://activemq.apache.org/schema/core">
      ...
      <transportConnectors>
      	  <transportConnector uri="tcp://slavehost:61616"/>
      </transportConnectors>
      ...
    </broker>
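
    Example 18 configures the slave to take over when the master fails. If you instead want the slave to serve purely as a back-up and shut down when it loses its connection to the master (the close down behavior described above), set shutdownOnMasterFailure to true. The sketch below is a minimal variation of Example 18; the broker name, host names, and ports are illustrative.

    <broker brokerName="slave"
            masterConnectorURI="tcp://masterhost:62001"
            shutdownOnMasterFailure="true"
            xmlns="http://activemq.apache.org/schema/core">
      ...
      <transportConnectors>
        <transportConnector uri="tcp://slavehost:61616"/>
      </transportConnectors>
      ...
    </broker>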

Configuring the clients

Assuming that you choose the mode of operation where the slave takes over from the master, your clients will need to include logic for failing over to the new master. Adding the failover logic requires that the clients use the masterslave protocol. This protocol is an instance of the failover protocol described in ???? that is specifically tailored for shared nothing master/slave pairs.

If you had a two-broker cluster where the master is configured to accept client connections on tcp://masterhost:61616 and the slave is configured to accept client connections on tcp://slavehost:61616, you would use the masterslave URI shown in Example 19 for your clients.

Example 19. URI for Connecting to a Master/Slave Cluster

masterslave://(tcp://masterhost:61616,tcp://slavehost:61616)
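
If your clients obtain their connections from a connection factory defined in Spring XML, one way to use this URI is to set it as the brokerURL of the ActiveMQ connection factory. The following is a minimal sketch, assuming the standard org.apache.activemq.ActiveMQConnectionFactory class; the bean id and the Spring wrapper are illustrative, and only the masterslave URI itself comes from Example 19.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

  <!-- Connection factory that connects to the master and fails over to the slave -->
  <bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL"
              value="masterslave://(tcp://masterhost:61616,tcp://slavehost:61616)"/>
  </bean>

</beans>

Clients that use this connection factory reconnect to the slave automatically if the master fails, as described in the take over scenario above.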

Reintroducing the master

Reintroducing the master broker after a failure is a manual process. Perform the following steps:

  1. Shut down the slave.

  2. Copy the slave's data directory over to the master's data directory.

  3. Start the master and the slave.
