Chapter 3. Using a Multi KahaDB Persistence Adapter

Abstract

When you have destinations with different performance profiles or different persistence requirements, you can distribute them across multiple KahaDB message stores.

Overview

The stock KahaDB persistence adapter works well when all of the destinations being managed by the broker have similar performance and reliability profiles. When one destination has a radically different performance profile, for example when its consumer is exceptionally slow compared to the consumers on other destinations, the message store's disk usage can grow rapidly. When one or more destinations do not require disk synchronization but the others do, all of the destinations must take the performance hit.
The multi KahaDB persistence adapter allows you to distribute a broker's destinations across multiple KahaDB message stores. Using multiple message stores allows you to tailor each message store more precisely to the needs of the destinations using it. Destinations and stores are matched using filters that use standard wildcard syntax.

Configuration

The multi KahaDB persistence adapter configuration wraps more than one KahaDB message store configuration.
The multi KahaDB persistence adapter configuration is specified using the mKahaDB element. The mKahaDB element has a single attribute, directory, that specifies the location where the adapter writes its data stores. This setting is the default value for the directory attribute of the embedded KahaDB message store instances. The individual message stores can override this default setting.
The mKahaDB element has a single child element, filteredPersistenceAdapters. The filteredPersistenceAdapters element contains one or more filteredKahaDB elements that configure the KahaDB message stores used by the persistence adapter.
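For example, the following minimal sketch shows this nesting with a single, catch-all message store; the alternate directory value is illustrative and shows how an embedded store can override the adapter-level default described above:
<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <!-- catch-all store; overrides the default directory set on mKahaDB -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB directory="${activemq.base}/data/kahadb-alt"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>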
Each filteredKahaDB element configures one KahaDB message store (except in the case where the perDestination attribute is set to true). The destinations matched to the message store are specified using attributes on the filteredKahaDB element:
  • queue—specifies the name of the queues to match
  • topic—specifies the name of the topics to match
The destinations can be specified either using explicit destination names or using wildcards. For information on using wildcards, see the section called “Wildcard syntax”. If no destinations are specified, the message store matches any destinations that are not matched by other filters.
The KahaDB message store configured inside a filteredKahaDB element is configured using the standard KahaDB persistence adapter configuration. It consists of a kahaDB element wrapped in a persistenceAdapter element. For details on configuring a KahaDB message store see Section 2.2, “Configuring the KahaDB Message Store”.
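As an illustration (the destination names are made up), the following fragment pairs an explicitly named queue and an explicitly named topic with dedicated message stores, and adds an attribute-less filteredKahaDB to catch everything else:
<filteredPersistenceAdapters>
  <!-- store dedicated to a single named queue -->
  <filteredKahaDB queue="ORDERS.INCOMING">
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
  <!-- store dedicated to a single named topic -->
  <filteredKahaDB topic="EVENTS.AUDIT">
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
  <!-- catch-all for destinations not matched above -->
  <filteredKahaDB>
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
</filteredPersistenceAdapters>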

Wildcard syntax

You can use wildcards to specify a group of destination names. This is useful for situations where your destinations are set up in federated hierarchies.
For example, imagine you are sending price messages from a stock exchange feed. You might name your destinations as follows:
  • PRICE.STOCK.NASDAQ.ORCL to publish Oracle Corporation's price on NASDAQ
  • PRICE.STOCK.NYSE.IBM to publish IBM's price on the New York Stock Exchange
You could use exact destination names to specify which message store will be used to persist message data, or you could use wildcards to define hierarchical pattern matches that pair the destinations with a message store.
Red Hat JBoss A-MQ uses the following wildcards:
  • . separates names in a path
  • * matches any name in a path
  • > recursively matches any destination starting from this name
For example, using the names above, these filters are possible (a configuration sketch using this syntax follows the list):
  • PRICE.>—any price for any product on any exchange
  • PRICE.STOCK.>—any price for a stock on any exchange
  • PRICE.STOCK.NASDAQ.*—any stock price on NASDAQ
  • PRICE.STOCK.*.IBM—any IBM stock price on any exchange
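A sketch of such filters in use is shown below, assuming the price feeds are published to topics; the NYSE filter is not in the list above but follows the same syntax:
<filteredPersistenceAdapters>
  <!-- all NASDAQ price topics share one store -->
  <filteredKahaDB topic="PRICE.STOCK.NASDAQ.*">
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
  <!-- all NYSE price topics share another store -->
  <filteredKahaDB topic="PRICE.STOCK.NYSE.*">
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
  <!-- catch-all for any other destination -->
  <filteredKahaDB>
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
</filteredPersistenceAdapters>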

Example

Example 3.1, “Multi KahaDB Persistence Adapter Configuration” shows a multi KahaDB persistence adapter that distributes destinations across two KahaDB message stores. The first message store is used for all queues managed by the broker. The second message store is used for all other destinations (in this case, for all topics).

Example 3.1. Multi KahaDB Persistence Adapter Configuration

<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <!-- match all queues -->
      <filteredKahaDB queue=">">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      
      <!-- match all destinations -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB enableJournalDiskSyncs="false"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
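In this configuration, the queue store keeps the default journal disk syncs, while the catch-all store sets enableJournalDiskSyncs="false", trading a durability guarantee for better throughput on the destinations (here, the topics) that do not require disk synchronization, as described in the Overview.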

Automatic per-destination persistence adapter

When the perDestination attribute is set to true on the catch-all filteredKahaDB element (that is, the instance of filteredKahaDB that specifies neither a queue nor a topic attribute), every matching destination gets its own kahaDB instance. For example, the following configuration shows how to set up a per-destination persistence adapter:
<broker brokerName="broker" ... >
  <persistenceAdapter>
    <mKahaDB directory="${activemq.base}/data/kahadb">
      <filteredPersistenceAdapters>
        <!-- one kahaDB instance per destination -->
        <filteredKahaDB perDestination="true">
          <persistenceAdapter>
            <kahaDB journalMaxFileLength="32mb"/>
          </persistenceAdapter>
        </filteredKahaDB>
      </filteredPersistenceAdapters>
    </mKahaDB>
  </persistenceAdapter>
  ...
</broker>
Note
Combining the perDestination attribute with either the queue or topic attributes has not been verified to work and could cause runtime errors.

Transactions

Transactions can span multiple journals if the destinations they touch are distributed across message stores. When that happens, a two-phase completion is required, which incurs the performance penalty of an additional disk sync to record the commit outcome.
If only one journal is involved in the transaction, the additional disk sync is not used and no performance penalty is incurred.
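For example, with the following filters (the queue names are illustrative), a transacted session that sends to both ORDERS.NEW and AUDIT.LOG writes to two journals and therefore requires the two-phase completion and its additional disk sync, whereas a transaction that only touches queues under ORDERS stays within a single journal:
<filteredPersistenceAdapters>
  <!-- journal 1: order queues -->
  <filteredKahaDB queue="ORDERS.>">
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
  <!-- journal 2: audit queues -->
  <filteredKahaDB queue="AUDIT.>">
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
  <!-- catch-all for any other destination -->
  <filteredKahaDB>
    <persistenceAdapter>
      <kahaDB/>
    </persistenceAdapter>
  </filteredKahaDB>
</filteredPersistenceAdapters>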