Chapter 5. Design and Development

5.1. Overview

The source code for the Red Hat JBoss AMQ 7 example project is made available in a public GitHub repository. This chapter briefly covers each topology, its accompanying tests, and the functionality demonstrated. Note that the example client tests use JMS exclusively for connectivity; however, AMQ 7 also offers clients for C++, JavaScript, Python, and .NET. More information on these clients can be found in their respective guides located on the AMQ 7 Documentation page.

5.2. Prerequisites

The following topology and routing explanations assume that the provided OpenShift Container Platform example environment has been utilized. If Red Hat Enterprise Linux instances have been used instead, the tests will require modification of server addresses and ports prior to usage.

Warning

The AMQ/OpenShift configuration described and/or provided herein uses community software and thus is not an officially supported Red Hat configuration at the time of writing.

The included tests which showcase various features of each topology assume that Maven 3.X and Java JDK 1.8+ are installed and configured. An example Maven settings.xml configuration file, allowing resolution of Red Hat’s GA and Early Access repositories, is provided as an appendix.

The tests mentioned are part of the code available in the public GitHub repository, under the Test-Suite directory. All mvn commands which follow are assumed to be run from that directory.

5.3. Common Broker Configurations

Several sections of the broker.xml configuration file are common across the different OpenShift Container Platform example scenarios. These sections are as follows:

5.3.1. Directory and Persistence Configuration

<persistence-enabled>true</persistence-enabled> 1

<paging-directory>./data/paging</paging-directory> 2
<bindings-directory>./data/bindings</bindings-directory>
<large-messages-directory>./data/large-messages</large-messages-directory>

<journal-type>ASYNCIO</journal-type> 3
<journal-directory>./data/journal</journal-directory>
<journal-min-files>2</journal-min-files>
<journal-pool-files>-1</journal-pool-files>
<journal-buffer-timeout>59999</journal-buffer-timeout>
1
Enables persistence, using the default high-performance option of writing messages to journals on the file system.
2
Configures the separate directories used for paging, bindings, and large-message storage.
3
Journaling configuration specifying the ASYNCIO type and various parameters.

5.3.2. Connectors and Acceptors

<connectors>
    <connector name="artemis">tcp://0.0.0.0:${ARTEMIS_PORT}</connector> 1
</connectors>
<acceptors>
    <acceptor name="artemis">tcp://0.0.0.0:${ARTEMIS_PORT}</acceptor> 2
</acceptors>
1
Defines the connector used for server-to-server connectivity; this connector is referenced later by the broadcast group and cluster connection for group discovery.
2
Configures a default endpoint for the broker listening for ActiveMQ Artemis, OpenWire, STOMP, AMQP, MQTT, and HornetQ protocol connections.
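
To illustrate the multiprotocol nature of this acceptor, the sketch below connects two different JMS clients, one using the Core protocol and one using AMQP, to the same port. This is only a minimal illustration; the host name, port, and credentials are placeholders for your environment, and the class name is hypothetical.

import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.apache.qpid.jms.JmsConnectionFactory;

public class MultiProtocolAcceptorSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder host; both factories target the same acceptor port.
        String host = "broker.example.com";

        // ActiveMQ Artemis Core protocol client.
        Connection coreConnection =
                new ActiveMQConnectionFactory("tcp://" + host + ":61616")
                        .createConnection("admin", "password");

        // AMQP 1.0 client (Apache Qpid JMS) connecting to the very same acceptor.
        Connection amqpConnection =
                new JmsConnectionFactory("amqp://" + host + ":61616")
                        .createConnection("admin", "password");

        coreConnection.start();
        amqpConnection.start();

        // ... create sessions, producers, and consumers as usual ...

        coreConnection.close();
        amqpConnection.close();
    }
}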

5.3.3. Security Configuration

<management-address>activemq.management</management-address> 1
<management-notification-address> 2
  jms.topic.notificationsTopic
</management-notification-address>

<security-settings>
    <security-setting match="#"> 3
        <permission type="createNonDurableQueue" roles="admin"/>
        <permission type="deleteNonDurableQueue" roles="admin"/>
        <permission type="createDurableQueue" roles="admin"/>
        <permission type="deleteDurableQueue" roles="admin"/>
        <permission type="createAddress" roles="admin"/>
        <permission type="deleteAddress" roles="admin"/>
        <permission type="consume" roles="admin"/>
        <permission type="browse" roles="admin"/>
        <permission type="send" roles="admin"/>
        <permission type="manage" roles="admin"/>
    </security-setting>
    <security-setting match="jms.queue.activemq.management"> 4
        <permission type="consume" roles="admin"/>
        <permission type="send" roles="admin"/>
        <permission type="manage" roles="admin"/>
    </security-setting>
</security-settings>
1
Enables a special internal address through which clients can send management commands to the broker.
2
Specifies that authenticated clients may receive broker system notifications by subscribing to the specified topic.
3
Security settings for the wildcard match, granting the admin role all permission types on all addresses.
4
Security settings restricting use of the management address and notification mechanism to clients holding the admin role.

5.3.4. Queue and Topic Configuration

<addresses>
    <address name="test_topic"> 1
        <multicast/>
    </address>
    <address name="test_queue"> 2
        <anycast>
            <queue name="jms.queue.test_queue"/>
        </anycast>
    </address>
</addresses>
1
Configures an example multicast (publish-subscribe) topic on the broker.
2
Configures an example queue (point-to-point) on the broker.

5.3.5. Dead Letter and Expiry Queue Configuration

<address-settings>
    <address-setting match="activemq.management#"> 1
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
    </address-setting>
    <address-setting match="#"> 2
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
    </address-setting>
</address-settings>
1
Configures dead letter and expiry queues for the management addresses.
2
Configures dead letter and expiry queues with a wildcard matcher to serve all other queues and topics.

The full broker.xml configuration file used for the single broker example in the included OpenShift Container Platform environment can be found in the appendices for further review of overall file structure.

5.4. Single-Broker Instance

The single-broker instance is configured for client connectivity via its artemis connector. The OpenShift service is configured with an ingress port of 30201, which forwards all requests to the container port 61616.

5.4.1. Broker Configuration

Given that no further highly-available or fault-tolerant options are required for the single broker, the common configuration elements detailed above make up the entirety of its configuration.

5.4.2. Exercising the Instance

The following tests demonstrate two basic messaging patterns encountered when working with AMQP via JMS: queues and topics. The first test demonstrates point-to-point messaging by establishing one client as a queue producer and another client as a queue consumer, then asserting that all messages sent to the queue only reach the intended recipient. The second test demonstrates the publish-subscribe pattern by similarly establishing a single topic publisher, but with multiple recipient subscribers who each receive a copy of all dispatched messages.
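
As a rough illustration of the point-to-point flow exercised by the first test, the sketch below sends to and consumes from the test_queue address defined in the common broker configuration (Section 5.3.4). It assumes the Apache Qpid JMS (AMQP) client; the host name, port, credentials, and class name are placeholders and do not reflect the actual test classes in the repository.

import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class SingleBrokerQueueSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder ingress address; substitute the host/port of your environment.
        JmsConnectionFactory factory =
                new JmsConnectionFactory("amqp://cluster-ingress.example.com:30201");

        Connection connection = factory.createConnection("admin", "password");
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The anycast (point-to-point) address from the common configuration.
            Destination queue = session.createQueue("test_queue");

            // One producer and one consumer; every message reaches exactly one receiver.
            MessageProducer producer = session.createProducer(queue);
            MessageConsumer consumer = session.createConsumer(queue);

            for (int i = 0; i < 25; i++) {
                producer.send(session.createTextMessage("message " + i));
            }

            int received = 0;
            while (consumer.receive(2000) != null) {
                received++;
            }
            System.out.println("Consumed " + received + " of 25 messages");
        } finally {
            connection.close();
        }
    }
}

The publish-subscribe variant differs only in that the destination is the multicast test_topic address and multiple subscribers are created, each of which receives its own copy of every message.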

To run the tests, use the following command:

$ mvn -Dtest=SingleBrokerTest test
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.redhat.refarch.amq7.single.SingleBrokerTest
...SingleBrokerTest:24 - instantiating clients...
...SingleBrokerTest:33 - sending 25 messages to queue...
...SingleBrokerTest:36 - verifying single queue consumer received all msgs...
...SingleBrokerTest:40 - terminating clients...
...SingleBrokerTest:49 - instantiating clients...
...SingleBrokerTest:60 - sending 25 messages to topic...
...SingleBrokerTest:63 - verifying both topic consumers received all msgs...
...SingleBrokerTest:68 - terminating clients...
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.669 sec

5.5. Symmetric Cluster Topology

The highly-available symmetric cluster offers round-robin message distribution to all brokers sharing a common queue. Each broker is configured for client connectivity via its artemis connector. Each OpenShift service representing a broker is configured with an ingress port of 3040X, which forwards all requests to the corresponding container port 6161X.

5.5.1. Broker Configuration

<broadcast-groups>
    <broadcast-group name="test-broadcast-group"> 1
        <group-address>${udp-address:231.7.7.7}</group-address> 2
        <group-port>9876</group-port>
        <broadcast-period>100</broadcast-period>
        <connector-ref>artemis</connector-ref> 3
    </broadcast-group>
</broadcast-groups>

<discovery-groups>
    <discovery-group name="test-discovery-group"> 4
        <group-address>${udp-address:231.7.7.7}</group-address>
        <group-port>9876</group-port>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>

<cluster-connections>
    <cluster-connection name="test-cluster"> 5
        <connector-ref>artemis</connector-ref>
        <retry-interval>500</retry-interval>
        <use-duplicate-detection>true</use-duplicate-detection>
        <message-load-balancing>ON_DEMAND</message-load-balancing> 6
        <max-hops>1</max-hops>
        <discovery-group-ref discovery-group-name="test-discovery-group"/>
    </cluster-connection>
</cluster-connections>
1
Broadcast Group configuration: a broker uses a broadcast group to push information about its cluster-related connection to other potential cluster members on the network.
2
A broadcast-group can use either UDP or JGroups, but the choice must match that of its discovery-group counterpart.
3
Connection information: matches a connector element as defined in the shared configuration section above.
4
Discovery Group configuration: while the broadcast group defines how cluster-related information is transmitted, a discovery group defines how connector information is received.
5
Uses a discovery-group to make the initial connection to each broker in the cluster.
6
Cluster connections allow brokers to load balance their messages. If message load balancing is OFF or ON_DEMAND, messages are not moved to queues that do not have consumers to consume them. However, if a matching consumer on a queue closes after the messages have been sent to the queue, the messages will stay in the queue without being consumed.

Further information about configuring AMQ 7 clusters can be found in the Using Red Hat JBoss AMQ 7 Broker Guide.

5.5.2. Exercising the Cluster

The first included test demonstrates both point-to-point and publish-subscribe patterns utilizing JMS clients connecting to different cluster members. The second test demonstrates and asserts the round-robin distribution pattern of messages across all cluster members, from a client connected to a single broker.
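
The sketch below is a rough illustration of the round-robin scenario: consumers are attached to two cluster members before a producer on a third member sends to the shared queue, so that ON_DEMAND load balancing distributes the messages between them. The host names, ports, credentials, and class name are placeholders and do not mirror the actual test classes in the repository.

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class SymmetricClusterSketch {

    public static void main(String[] args) throws Exception {
        // Placeholder ingress addresses for three cluster members (3040X pattern).
        Connection broker1 = connect("amqp://cluster-ingress.example.com:30401");
        Connection broker2 = connect("amqp://cluster-ingress.example.com:30402");
        Connection broker3 = connect("amqp://cluster-ingress.example.com:30403");
        try {
            Session session1 = broker1.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Session session2 = broker2.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Session session3 = broker3.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Consumers on two cluster members, attached before anything is sent.
            MessageConsumer consumer2 = session2.createConsumer(session2.createQueue("test_queue"));
            MessageConsumer consumer3 = session3.createConsumer(session3.createQueue("test_queue"));

            // Producer on a third member; ON_DEMAND load balancing spreads the messages.
            MessageProducer producer = session1.createProducer(session1.createQueue("test_queue"));
            for (int i = 0; i < 20; i++) {
                producer.send(session1.createTextMessage("message " + i));
            }

            System.out.println("broker2 consumer: " + drain(consumer2)
                    + ", broker3 consumer: " + drain(consumer3));
        } finally {
            broker1.close();
            broker2.close();
            broker3.close();
        }
    }

    private static Connection connect(String url) throws Exception {
        // Placeholder credentials.
        Connection connection = new JmsConnectionFactory(url).createConnection("admin", "password");
        connection.start();
        return connection;
    }

    private static int drain(MessageConsumer consumer) throws Exception {
        int count = 0;
        while (consumer.receive(2000) != null) {
            count++;
        }
        return count;
    }
}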

$ mvn -Dtest=SymmetricClusterTest test
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.redhat.refarch.amq7.cluster.SymmetricClusterTest
- instantiate clients, s2 & s3 sub to queue, s1 writes to queue...
- sending 20 messages via s1 producer to queue...
- verifying both queue subscribers received half of the messages...
- verifying there are no more than half of the messages on one of the receivers...
- terminating clients...
- instantiate clients, all sub to topic, s1 subs to queue, s2 writes to both...
- sending 25 messages via s2 producer to queue & topic...
- verifying all 3 topic subscribers & single queue consumer received all msgs...
- terminating clients...
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.386 sec

5.6. Replication Cluster Topology

The highly-available and fault-tolerant Replication topology offers the same features as the Symmetric cluster, but adds master/slave node pairings to handle cases where a master broker node is shut down, stopped, or fails completely. This failover mechanism allows slave brokers to automatically pair with master nodes within their broadcast group and receive a replica of the master's journals as operations are performed. Should the cluster detect the loss of communication with a master node, a quorum vote is executed, potentially leading to the promotion of the paired slave node to the master broker position.

5.6.1. Broker Configuration

The only section differing from that already seen in the symmetric cluster example is a <ha-policy> configuration specifying master/slave positions and other possible failover parameters.

5.6.1.1. Master Nodes

<ha-policy>
    <replication>
        <master/>
    </replication>
</ha-policy>

5.6.1.2. Slave Nodes

<ha-policy>
    <replication>
        <slave/>
    </replication>
</ha-policy>

Further configuration options, such as failback scenarios in which a master node rejoins its cluster group, as well as shared master/slave persistence, are detailed within the Using Red Hat JBoss AMQ 7 Broker Guide.

5.6.2. Exercising the Cluster

The included test for the Replication topology demonstrates the behavior visible to a JMS client when a master broker fails after receiving messages over a clustered connection. The cluster consists of 3 master/slave pairs, with the master nodes connected via a symmetric group. The test sends twenty messages to each master node, then acknowledges half of the messages received on each. The remaining ten messages from each node are received but not acknowledged, an important distinction when working in a clustered, fault-tolerant environment.

At this point, the client uses the AMQP management queue to send a forceFailover command to one master node in the cluster. The test then asserts that the ten remaining messages received via the now-failed master node cannot be acknowledged, since that node is offline, while the remaining ten messages on the two surviving master nodes are acknowledged without issue. Lastly, the client that was connected to the failed master node demonstrates that seamless failover has occurred by re-consuming the ten unacknowledged messages from the newly promoted replacement node, without needing to re-establish its connection.
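
One way to issue such a management command is sketched below using the ActiveMQ Artemis JMS client and its JMSManagementHelper, which wraps an operation invocation in a message sent to the configured management address. This is only an illustration under stated assumptions: the broker URL and credentials are placeholders, the "broker" resource name reflects the Artemis 2.x management API, and the reference test itself (which uses the AMQP management queue) may be implemented differently.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
import org.apache.activemq.artemis.api.jms.management.JMSManagementHelper;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ForceFailoverSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials for the master node being failed over.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://cluster-ingress.example.com:30402");
        Connection connection = factory.createConnection("admin", "password");
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The management address configured in broker.xml (Section 5.3.3).
            Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management");
            MessageProducer producer = session.createProducer(managementQueue);

            // Build a management message invoking forceFailover() on the broker resource.
            Message request = session.createMessage();
            JMSManagementHelper.putOperationInvocation(request, "broker", "forceFailover");
            producer.send(request);

            // No reply is expected here: the broker shuts down as a result of the call,
            // so the client sees a connection failure similar to the warning in the log below.
        } finally {
            connection.close();
        }
    }
}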

$ mvn -Dtest=ReplicatedFailoverTest test
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.redhat.refarch.amq7.cluster.ha.ReplicatedFailoverTest
- creating clients...
- send 20 messages via producer to all 3 master brokers...
- receiving all, but only acknowledging half of messages on all 3 master brokers...
- shutting down master broker m2 via Mgmt API forceFailover()...
17-10-09 23:22:56:270  WARN Thread-0 (ActiveMQ-client-global-threads-110771485)
    core.client:198 - AMQ212037: Connection failure has been detected:
    AMQ119015: The connection was disconnected because of server shutdown
- verifying rec'd-only messages fail to ack if on a failover broker...
- attempting to ack 10 messages from broker m1 post-shutdown...
- attempting to ack 10 messages from broker m2 post-shutdown...
- messages from m2 post-failover correctly failed to ack
- attempting to ack 10 messages from broker m3 post-shutdown...
- re-consuming the non-ack'd m2 messages with slave broker now promoted...
- terminating clients...
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.887 sec
Note

Given the nature of the broker containers on OpenShift Container Platform, once the replication test has been run, the affected pod will indicate a crashed state; the amq-replicated deployment must therefore be redeployed to reset the replication cluster back to its original 3-pair master/slave state before further test runs.

5.7. Interconnect Network Topology

As a refresher, refer back to the following diagram showing the example Interconnect network and the role that each router plays.

Figure 5.1. Interconnect Router Network

Interconnect Router Network

As seen, three routers represent connection points to three major regions: US, EU, and AU. These three routers are connected directly to one another. The US and EU routers each have two dependent connections representing an area within the larger defined region, while AU serves as the single entry point into the network for its whole region.

5.7.1. Router Configuration

Since each major region router, and any dependent routers, can accept client requests, a listener on the amqp port is defined at the top of each configuration. Since the network must also route messages to portions of the network that are not directly connected, each major region router additionally defines an inter-router listener for router-to-router connectivity, along with any connector entries needed to establish the bridges between regions. Note that inter-router connections do not need to be defined in both directions.

Routers that depend on a major region router to reach the rest of the network connect only to that region's router; they require no inter-router listener, only a single inter-router connector to establish the necessary network connection.

Every router on the network is configured to handle three different types of routing patterns:

  • closest: An anycast method in which, even if there are multiple receivers for the same address, every message is sent along the shortest path to its destination, meaning that only one receiver gets each message. Messages are delivered to the closest receiver in terms of topology cost; if multiple receivers share the same lowest cost, deliveries are spread evenly among them.
  • multicast: When multiple consumers are attached to the same address at the same time, messages are routed such that each consumer receives its own copy of every message.
  • balanced: An anycast method which allows multiple receivers to use the same address. In this case, messages (or links) are routed to exactly one of the receivers and the network attempts to balance the traffic load across the set of receivers using the same address. This routing delivers messages to receivers based on how quickly they settle the deliveries. Faster receivers get more messages.

Each of the separate router configurations can be viewed in full at the publicly-available repository for further examination.
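
The sketch below is a rough illustration of how a client interacts with the network: a Qpid JMS receiver attaches to one router and a sender to another, and the routers forward deliveries across the network according to the distribution pattern (closest, multicast, or balanced) configured for the matching address. The host names, port, address, and class name are placeholders, and authentication is omitted.

import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class InterconnectSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder router endpoints; substitute the CA and DE router hosts/ports.
        Connection caConnection =
                new JmsConnectionFactory("amqp://ca-router.example.com:5672").createConnection();
        Connection deConnection =
                new JmsConnectionFactory("amqp://de-router.example.com:5672").createConnection();
        try {
            caConnection.start();
            deConnection.start();
            Session caSession = caConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Session deSession = deConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Both clients use the same address; the routers' address configuration
            // decides how deliveries are distributed across attached receivers.
            Destination address = deSession.createQueue("example.address");
            MessageConsumer consumer = deSession.createConsumer(address);

            MessageProducer producer = caSession.createProducer(caSession.createQueue("example.address"));
            producer.send(caSession.createTextMessage("routed across CA -> US -> EU -> DE"));

            Message received = consumer.receive(5000);
            System.out.println("DE client received: " + received);
        } finally {
            caConnection.close();
            deConnection.close();
        }
    }
}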

5.7.2. Exercising the Network

The included tests cover several scenarios that exercise the different routing patterns and network paths:

  • testSendDirect: A client connects to the CA router and publishes ten messages intended for a receiving client connected to the DE router.
  • testSendMulticast: A client connects to the AU router and publishes ten messages intended for multiple subscribed clients connected to the CA, ME, UK, and DE routers respectively.
  • testSendBalancedSameHops: A client connects to the AU router and publishes twenty messages to be balanced equally between the CA and ME recipients, given their equal-length routes.
  • testSendBalancedDifferentHops: A client connects to the AU router and publishes twenty messages to be balanced between the CA, ME, UK, and DE recipients, resulting in the UK and DE consumers each receiving seven messages, while the CA and ME consumers each receive only three, due to the extra hops in their routes.
  • testSendClosest: Publisher clients are connected to the CA and DE routers, while consumer clients are simultaneously connected to their sister routers, ME and UK. A message is dispatched from each publisher with the closest prefix, resulting in ME always consuming the messages published via CA, and UK always consuming those published via DE.
$ mvn -Dtest=InterconnectTest test
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.redhat.refarch.amq7.interconnect.InterconnectTest
- routing path: CA -> US -> EU -> DE
- creating CA and DE router connections...
- sending 10 messages via CA...
- receiving 10 messages from DE router...
- terminating clients...
- routing path: AU -> EU -> UK,DE,US -> CA,ME
- creating router connections...
- sending 10 messages via CA...
- receiving 10 messages from all 4 multicast recipient clients...
- terminating clients...
- creating router connections...
- sending 20 messages via AU...
- receiving equal number of balanced messages from CA and ME clients...
- verifying there are no more than half of msgs on one of the receivers...
- terminating clients...
- creating router connections...
- sending 20 messages via AU...
- due to routing weight, rcv fewer msgs from CA/ME than from UK/DE clients...
- asserting yield is 3 messages to CA/ME each, and 7 messages to UK/DE each...
- terminating clients...
- creating router connections...
- sending closest message from CA and asserting ME receives...
- sending closest message from DE and asserting UK receives...
- terminating clients...
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.623 sec

5.8. AMQ Console

Each AMQ 7 broker instance can be configured to expose a web console, based on hawtio, that enables management and monitoring of AMQ Broker and Interconnect instances via a web browser. Adding the configuration below to the BROKER_INSTANCE_DIR/etc/bootstrap.xml configuration file causes the necessary port binding and WAR deployments to be performed on broker initialization.

Example Web Binding Configuration:

<web bind="http://0.0.0.0:${JOLOKIA_PORT}" path="web">
    <app url="redhat-branding" war="redhat-branding.war"/>
    <app url="jolokia" war="jolokia.war"/>
    <app url="hawtio" war="hawtio-no-slf4j.war"/>
    <app url="artemis-plugin" war="artemis-plugin.war"/>
    <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/>
</web>

5.8.1. Broker Monitoring

In the OpenShift Container Platform environment that has been established, the single-broker node exposes the AMQ Console via the URL cluster-ingress.example.com:30351/hawtio/. The credentials admin/admin can be used to log in. The web console offers various JMX operations, thread monitoring, and cluster, address, and acceptor statistics and property listings for the hosting broker.

Figure 5.2. AMQ Console

AMQ Console

Other brokers hosting the AMQ Console can also be reached from within the web console via the Connect link, shown below:

Figure 5.3. Broker Connect

Broker Connect

5.8.2. Interconnect Network Monitoring

Given that an Interconnect installation does not include the components needed to host the AMQ Console, a broker on the same server, or on a different server altogether, can be used to connect to an HTTP-enabled port listener of any single Interconnect router in order to view and manage the entire network topology.

Figure 5.4. Dispatch Router Connect

Dispatch Router Connect

Once connected to the Interconnect network, various statistics on addresses, connections, and other components throughout the network may be viewed from the single connected AMQ Console. Additionally, a live network topology diagram may be viewed, displaying all network members and clients in real time.

Figure 5.5. Interconnect Live Topology

Interconnect Live Topology