Chapter 4. The Major Widgets Use Case

Abstract

This chapter introduces the Major Widgets use case to demonstrate how Red Hat JBoss A-MQ can be used to solve a simple integration problem.

Overview

When Major Widgets, a small auto parts supply store, decided to buy three more auto parts supply stores, they knew they would have to change their business model and integrate the systems located in all four stores. They needed a low-cost, flexible, and easy-to-maintain solution that could reliably handle a high volume of message traffic. They also lacked the IT knowledge to dive headfirst into an open-source solution without some support.

Major Widgets business model

Major Widgets and each of the three stores it bought routinely supply a number of auto repair shops located nearby. Each store delivers parts to customers free of charge, as long as the customer is located within twenty-five miles of the store. Each store has its own databases for storing auto repair customer accounts, store inventory, and parts suppliers.
Business has traditionally been done over the phone, but Major Widgets wants to implement a Web-based order service so that their regular customers can order parts more quickly. The Web-based service will take orders, schedule deliveries, bill customers, and allow customers to check the status of their orders.
All four stores also sell parts to walk-in customers, so the in-store ordering system will be tied into the central ordering system as well.
In the long run, Major Widgets would also like to centralize inventory tracking and ordering for all of the stores. This will make it easier to keep inventory at each store at an optimal level and to analyze trends across the store network.

Major Widgets integration solution

Figure 4.1, “Major Widgets integration solution” shows how the Major Widgets integration might be implemented using Red Hat JBoss A-MQ. Specifically, it shows that:
  • Web service clients are provided so that customers can place orders
  • a content-based router receives orders from the Web service ordering clients and sends each order to the appropriate store's message queue
  • two JBoss A-MQ instances are deployed in a master/slave cluster to ensure that orders are never lost due to a broker failure
  • each store uses a JBoss A-MQ client to do its in-store order processing

Figure 4.1. Major Widgets integration solution

The first piece of the order processing system consists of a pair of brokers deployed in a shared database master/slave cluster. Each broker in the cluster is configured to host the content-based route used to receive and distribute the orders. The route uses persistent messages when distributing orders. Each broker also maintains five message queues: one for each store in the network and one for bad orders. The combination of a master/slave cluster and persistent messages provides a high degree of reliability to the system, as explained in the section called “Fault tolerance”.
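For illustration only, the content-based route could be written with the Camel Java DSL along the following lines. The queue names, the storeId header, and the activemq component configuration are assumptions for this sketch, not part of the actual Major Widgets design.

    import org.apache.camel.builder.RouteBuilder;

    // A minimal sketch of the content-based route described above. It assumes
    // the "activemq" component is already configured to point at the broker
    // cluster, and that each order message carries a "storeId" header.
    public class OrderRoutingSketch extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("activemq:queue:orders.incoming")        // orders arriving from the Web service clients
                .choice()
                    .when(header("storeId").isEqualTo("A")).to("activemq:queue:orders.storeA")
                    .when(header("storeId").isEqualTo("B")).to("activemq:queue:orders.storeB")
                    .when(header("storeId").isEqualTo("C")).to("activemq:queue:orders.storeC")
                    .when(header("storeId").isEqualTo("D")).to("activemq:queue:orders.storeD")
                    .otherwise().to("activemq:queue:orders.bad");   // unroutable orders go to the bad-orders queue
        }
    }
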
The second piece of the order processing system is distributed across the four stores in the Major Widgets chain. Each of the stores (A-D) runs a back-end order processing application that listens for messages on the store's order queue using a JBoss A-MQ client. This application consumes order messages from the store's order queue, checks the order against the store's inventory, and determines how to process the order. The back-end processing logic can be implemented using any one of the JBoss A-MQ client APIs or a dynamic router with a JBoss A-MQ entry point.
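A minimal sketch of such a consumer, using the JMS client API, might look like the following. The broker URL, queue name, and inventory-checking logic are placeholders for store A's real values.

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Sketch of a store's back-end order consumer. It blocks waiting for order
    // messages and acknowledges each one only after it has been handled.
    public class StoreOrderConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://brokerA:61616");
            Connection connection = factory.createConnection();
            connection.start();

            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("orders.storeA"));

            while (true) {
                Message message = consumer.receive();          // block until an order arrives
                if (message instanceof TextMessage) {
                    String order = ((TextMessage) message).getText();
                    // check the order against the store's inventory and decide how to fill it
                    System.out.println("Processing order: " + order);
                }
                message.acknowledge();                         // acknowledge only after the order is handled
            }
        }
    }
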

Fault tolerance

Figure 4.1, “Major Widgets integration solution” shows that the Major Widgets integration plan uses a master/slave cluster as a fault-tolerance mechanism to protect against the loss of orders due to broker failure.
For this to provide maximum resiliency, each broker would be running on its own server. The shared database would be hosted on a third server or on a dedicated SAN. Separating the brokers and the shared database means that a single hardware failure cannot bring down the order processing system. At least two pieces of hardware would need to fail before the system stopped functioning. Major Widgets could add more brokers to the cluster to provide even more resiliency.
When the brokers initially start up, they determine which one is the master by attempting to grab a lock on the shared database. The first one to get the lock becomes the master and begins listening for messages. The other broker(s) in the cluster become slaves and wait until the lock becomes available. If the master fails, a slave will be able to grab the lock on the shared database and will then start listening for messages. For details on how master/slave clusters work, see Master/Slave in Fault Tolerant Messaging on the Red Hat Customer Portal.
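A rough sketch of how each broker could be pointed at the shared database programmatically is shown below. The broker name, connector URI, and data source are placeholders, and in a real deployment the same settings would normally live in the broker's XML configuration file.

    import javax.sql.DataSource;
    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;

    // Sketch of one broker in the shared database master/slave pair. Both
    // brokers are started with the same data source; the first to acquire the
    // database lock becomes the master, the other waits as a slave.
    public class SharedDatabaseBroker {
        public static BrokerService start(String brokerName, DataSource sharedDataSource) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName(brokerName);

            JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
            jdbc.setDataSource(sharedDataSource);     // the database shared by both brokers
            broker.setPersistenceAdapter(jdbc);

            broker.addConnector("tcp://0.0.0.0:61616");   // endpoint the clients connect to
            broker.start();                               // a slave blocks here until the lock becomes free
            return broker;
        }
    }
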
For this to work smoothly, the back-end order processing clients must be configured to switch to the new master so that they can continue to receive messages. JBoss A-MQ makes this easy by providing a failover transport. The failover transport allows you to provide a JBoss A-MQ client with a list of broker URIs. The client will attempt to connect to the first URI in the list. If it cannot connect, or if the connection subsequently fails, the client automatically moves to the next URI. For details on using the failover transport, see Failover Protocol in Fault Tolerant Messaging on the Red Hat Customer Portal.
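A back-end client might open its connection through the failover transport along the following lines; the two broker host names are placeholders for the master and slave brokers.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Sketch of a client connecting through the failover transport. The
    // transport connects to one of the listed brokers; if it cannot connect,
    // or if the connection later drops, it automatically retries the other.
    public class FailoverClient {
        public static Connection connect() throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory(
                    "failover:(tcp://brokerA:61616,tcp://brokerB:61616)");
            Connection connection = factory.createConnection();
            connection.start();
            return connection;
        }
    }
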
In addition to the master/slave cluster, the plan uses persistent messaging to ensure that messages are not lost before the back-end processing system can consume them. Every message is stored in the cluster's shared persistence store until it is consumed by a back-end ordering system. If there is a broker failure, or even a cluster failure, all messages that have not been processed will be redelivered when the system recovers.
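On the producer side, marking an order as persistent is what tells the broker to keep it in the shared persistence store until it is acknowledged. The following sketch uses placeholder broker and queue names.

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Sketch of sending an order as a persistent message so that it survives
    // broker restarts and failover until a back-end consumer processes it.
    public class PersistentOrderSender {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory(
                    "failover:(tcp://brokerA:61616,tcp://brokerB:61616)");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("orders.incoming"));
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);   // store the message until it is consumed
                producer.send(session.createTextMessage("order #1234 for store A"));
            } finally {
                connection.close();
            }
        }
    }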