Chapter 4. Design and Development
4.1. Overview
For a discussion of microservices as a software architectural style, as well as the design and composition of the sample application, refer to the previously published reference architecture on Building JBoss EAP 7 Microservices on OpenShift Container Platform. For further information regarding Fuse Integration Services 2.0 and its role in building Enterprise patterns and solutions, refer to the Red Hat JBoss Fuse Integration Services 2.0 for OpenShift documentation.
OpenShift provides an ideal platform for deploying, hosting, and managing microservices. By deploying each service as an individual Docker container, OpenShift isolates each service and decouples its lifecycle and deployment from those of other services. OpenShift can configure the desired number of replicas for each service and provide intelligent scaling in response to varying load. Introducing Fuse Integration Services makes it possible to present a single API Gateway pattern implementation to upstream clients and to work with both event-driven and API-driven components. Monitoring workloads and container operations also becomes more manageable, thanks to FIS's inclusion of HawtIO and OpenShift's templates for the EFK logging stack.
This sample application uses the Source-to-Image (S2I) mechanism to build and assemble reproducible container images from the application source and on top of supported OpenShift images, as well as OpenShift-provided templates for inclusion of both messaging and logging infrastructures.
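For example, a build and deployment for a single service could be created with a command along the following lines. This is only a sketch: it assumes the standard jboss-eap70-openshift image stream is available, and the repository URL, context directory, and service name are placeholders rather than the project's actual values:

oc new-app jboss-eap70-openshift~https://github.com/<account>/<repository>.git \
  --context-dir=sales-service --name=sales-service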
4.2. Application Structure
The source code for the sample application is checked in to a public GitHub repository. The code is organized into six separate directories, each structured as a Maven project with a pom file at its root. An aggregation pom file is provided at the root of the repository to build all six projects if needed, although this build file is neither required nor used by OpenShift.
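Given the aggregation pom file, all six projects can be built locally with a standard Maven command run from the root of the repository:

$ mvn clean install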
4.3. Presentation Module
The Presentation module of the project represents an upstream consumer that communicates with all of the microservices via the API Gateway pattern implementation (gateway-service). It is a graphical interface representing an eCommerce storefront, where users can register or log in, build and complete transactions, and track order histories through to completion. All communication with the gateway is done through RESTful calls via Apache's HttpClient.
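While the user interface code itself is beyond the scope of this chapter, a minimal sketch of such a call with HttpClient follows. The class name is hypothetical; the /products route and port 9091 match the gateway configuration shown in Section 4.5:

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;

public class GatewayClientSketch {
    public static void main(String[] args) throws Exception {
        // The gateway accepts RESTful requests on port 9091 and proxies
        // /products through to the product service.
        HttpClient client = HttpClientBuilder.create().build();
        HttpGet get = new HttpGet("http://gateway-service:9091/products");
        HttpResponse response = client.execute(get);
        System.out.println(EntityUtils.toString(response.getEntity()));
    }
}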
Figure 4.1. Presentation Module Landing Page

4.4. Customizing Red Hat JBoss EAP for Persistence
The Product and Sales services are both API-driven EAP applications; each has a database dependency and uses its respective MySQL database to store and retrieve data. For the Sales service, the supported Red Hat xPaaS EAP image bundles the MySQL JDBC driver, but the driver must be declared in the server configuration file, and a datasource must be described, in order for the application to access the database through a connection pool.
To make the necessary customizations, provide an updated server configuration file in the source code of the project built onto the image via S2I. The replacement configuration file should be named standalone-openshift.xml and placed in a directory called configuration at the root of the project.
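The resulting project layout resembles the following, with the module name shown here purely as an illustration:

sales-service/
├── configuration/
│   └── standalone-openshift.xml
├── src/
└── pom.xml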
Some configuration can be performed by simply providing descriptive environment variables. For example, supplying the DB_SERVICE_PREFIX_MAPPING variable and value instructs the script to add MySQL and/or PostgreSQL datasources to the EAP instance. Refer to the documentation for details.
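As a sketch, and assuming a deployment configuration named sales-service, such a variable could be supplied as shown below. The value takes the form <pool-name>-<database-type>=<PREFIX>, with the image startup script reading credentials and connection details from companion environment variables that share the given prefix; the names here are illustrative:

$ oc set env dc/sales-service DB_SERVICE_PREFIX_MAPPING=sales-mysql=SALES_MYSQL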
In order to make the required changes to the correct baseline, obtain the latest server configuration file. The supported image is available via the Red Hat Registry. To view the original copy of the file, you can run this container directly:
# docker run -it registry.access.redhat.com/jboss-eap-7/eap70-openshift cat /opt/eap/standalone/configuration/standalone-openshift.xml
<?xml version="1.0" ?>
<server xmlns="urn:jboss:domain:4.0">
<extensions>
...

Declare the datasource with parameterized variables for the database credentials. For example, to configure the product datasource for the product service:
<subsystem xmlns="urn:jboss:domain:datasources:1.2">
  <datasources>
    <datasource jndi-name="java:jboss/datasources/ProductDS" enabled="true"
                use-java-context="true" pool-name="ProductDS">
      <connection-url>
        jdbc:mysql://${env.DATABASE_SERVICE_HOST:product-db}:${env.DATABASE_SERVICE_PORT:3306}/${env.MYSQL_DATABASE:product}
      </connection-url>
      <driver>mysql</driver>
      <security>
        <user-name>${env.MYSQL_USER:product}</user-name>
        <password>${env.MYSQL_PASSWORD:password}</password>
      </security>
    </datasource>
The datasource simply refers to the database driver as mysql. Declare the driver class in the same section after the datasource:
    ...
    </datasource>
    <drivers>
      <driver name="mysql" module="com.mysql">
        <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
      </driver>
    </drivers>
  </datasources>
</subsystem>
With the above configuration, environment variables are substituted to specify connection details and the database host name is resolved to the name of the OpenShift service hosting the database.
The default EAP welcome application is disabled in the Red Hat xPaaS EAP image. To deploy the Presentation application to the root context, set the warName to ROOT in the Maven pom file:
<build>
  <finalName>${project.artifactId}</finalName>
  <plugins>
    <plugin>
      <artifactId>maven-war-plugin</artifactId>
      <version>${version.war.plugin}</version>
      <configuration>
        <warName>ROOT</warName>
        <failOnMissingWebXml>false</failOnMissingWebXml>
      </configuration>
    </plugin>
  </plugins>
</build>
4.5. Gateway Service
The Gateway Service, along with the other message-oriented services, utilizes the Apache Camel Java DSL to define the endpoints and routes the service requires. This particular service constructs a series of routes using the Camel Rest DSL to accept inbound RESTful requests on port 9091. The spark-rest component is used with the Rest DSL in order to take advantage of URI wildcard capabilities when proxying multiple methods to the same context.
String customersUri = "http4://sales-service:8080/customers/?bridgeEndpoint=true";
String productsUri = "http4://product-service:8080/products/?bridgeEndpoint=true";
restConfiguration().component("spark-rest").host("0.0.0.0").port(9091);
rest("/billing/process")
.post()
.route()
.to("amq:billing.orders.new?transferException=true&jmsMessageType=Text")
.wireTap("direct:warehouse");
rest("/billing/refund/") .post() .to("amq:billing.orders.refund?transferException=true&jmsMessageType=Text"); rest("/customers") .get().toD(customersUri) .post().to(customersUri); rest("/customers/")
.get().toD(customersUri)
.post().toD(customersUri)
.patch().toD(customersUri)
.delete().toD(customersUri);
rest ("/products")
.get().toD(productsUri)
.post().toD(productsUri);
rest ("/products/*")
.get().toD(productsUri)
.post().toD(productsUri);
from("direct:warehouse")
.routeId("warehouseMsgGateway")
.filter(simple("${bodyAs(String)} contains 'SUCCESS'"))
.inOnly("amq:topic:warehouse.orders?jmsMessageType=Text");Each rest() declaration configures a new Rest endpoint consumer to feed the corresponding Camel route. Billing has two possible method calls, process and refund. While process is a POST with no parameters, refund uses a wildcard to account for the orderId which will be included in the URI of the request. Similarly, both customers and products have a variety of "verb" handlers (get, post, etc.) featuring a mixture of URI parameter patterns, all of which are mirrored through to the proxied service intact via wildcard. Finally, a direct route is used to filter successful transaction messages that have been wire tapped from the billing process route to pass on for fulfillment via messaging queue.
4.6. Billing & Warehouse Services
The billing and warehouse services follow an approach that is similar to, although simpler than, the gateway service. As mentioned, both utilize the Camel Java DSL to configure routes that consume from queues hosted on Red Hat JBoss A-MQ via the messaging component of the project. The billing service's route takes messages from both the billing.orders.new and billing.orders.refund queues, demonstrating the unmarshaling capabilities of Camel by converting the JSON message format into POJO objects. It then passes them to a bean for further handling and logic. Once the bean completes, the result is marshaled back into JSON format and sent back to the waiting gateway service. The process order route and accompanying bean are shown below. Note that conversion between object types is often done implicitly by Camel via Type Converters, but this may not always be optimal in an intermediate gateway; in such cases, explicit conversion can be accomplished as demonstrated below.
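The dataFormatFactory used in the route below is a helper within the project's source. As a minimal stand-in, assuming the camel-jackson component is on the classpath and reusing the project's Transaction, Result, and billingService names, the JSON data formats could be constructed directly:

import org.apache.camel.component.jackson.JacksonDataFormat;

// Each JacksonDataFormat instance is bound to one POJO type and converts
// between JSON text and that type in both directions.
JacksonDataFormat transactionFormat = new JacksonDataFormat(Transaction.class);
JacksonDataFormat resultFormat = new JacksonDataFormat(Result.class);

from("amq:billing.orders.new")
    .unmarshal(transactionFormat)   // JSON text -> Transaction POJO
    .bean(billingService, "process")
    .marshal(resultFormat);         // Result POJO -> JSON text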
Process requests and bean handler:
from("amq:billing.orders.new")
.routeId("processNewOrders")
.unmarshal(dataFormatFactory.formatter(Transaction.class))
.bean(billingService, "process")
.marshal(dataFormatFactory.formatter(Result.class));
---
public Result process(Transaction transaction) {
    Result result = new Result();
    logInfo("Asked to process credit card transaction: " + transaction);
    result.setName(transaction.getCustomerName());
    result.setOrderNumber(transaction.getOrderNumber());
    result.setCustomerId(transaction.getCustomerId());
    Calendar now = Calendar.getInstance();
    Calendar calendar = Calendar.getInstance();
    calendar.clear();
    // Calendar months are zero-based, so passing the one-based expiration
    // month yields the first day of the following month; the card is thus
    // considered valid through the end of its expiration month.
    calendar.set(transaction.getExpYear(), transaction.getExpMonth(), 1);
    if (calendar.after(now)) {
        result.setTransactionNumber(random.nextInt(9000000) + 1000000);
        result.setTransactionDate(now.getTime());
        result.setStatus(Status.SUCCESS);
    } else {
        result.setStatus(Status.FAILURE);
    }
    return result;
}

After tagging the message with an ownership header, a simplistic means of claiming the message for processing in this example application, the warehouse route passes the transaction result message directly to a processing bean, where the order is "fulfilled". A new call to the gateway is then performed so that the sales service can mark the order as shipped.
from("amq:topic:warehouse.orders?clientId=" + warehouseId)
.routeId("fulfillOrder")
.unmarshal(dataFormatFactory.formatter(Result.class))
.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
/*
In production cases, multiple warehouse instances would be subscribed to
the warehouse.orders topic, so this processor could be used to
referenced a shared data grid clustered over all warehouse instances.
With proper geographical and inventory level information, a decision
could be made as to whether this specific instance is the optimal
warehouse to fulfill the request or not. Note that doing so would
require a lock mechanism in the shared cache if the choice algorithm
could potentially allow duplicate optimal choices.
*/
// in this demo, only a single warehouse instance will be used, so just
// claim all messages and return them
exchange.getIn().setHeader("ownership", "true");
}
})
.filter(simple("${headers.ownership} == 'true'"))
.bean(warehouseService, "fulfillOrder");
---
public void fulfillOrder(Result result) throws Exception {
    // Notify the sales service, through the gateway, that the order has
    // shipped by issuing a PATCH against the customer's order resource.
    HttpClient client = new DefaultHttpClient();
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("status", "Shipped");
    URIBuilder uriBuilder = new URIBuilder("http://gateway-service:9091/customers/"
            + result.getCustomerId()
            + "/orders/" + result.getOrderNumber());
    HttpPatch patch = new HttpPatch(uriBuilder.build());
    patch.setEntity(new StringEntity(jsonObject.toString(), ContentType.APPLICATION_JSON));
    logInfo("Executing " + patch);
    HttpResponse response = client.execute(patch);
    String responseString = EntityUtils.toString(response.getEntity());
    logInfo("Got response " + responseString);
}

4.7. Application Monitoring
When the services were originally configured in Chapter 3, each Fuse Integration Services component was modified to include jolokia as the name attribute for the container's port 8778. Doing so enables and exposes the integrated HawtIO console for monitoring and analytics at the container level. Once enabled, the tool can be reached in the OpenShift web console by opening the Logs tab of the target pod and following the View Archive link shown directly above the console log output:
Figure 4.2. Pod Log & View Archive

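For reference, the port naming described in Chapter 3 amounts to a container port entry in each deployment configuration similar to the following fragment; exact field placement may vary:

ports:
- containerPort: 8778
  name: jolokia
  protocol: TCP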
The individual container logging and the aggregated container logging, built on top of the EFK stack within the logging project, offer very similar interfaces. However, because Fluentd is distributed across all nodes to collect and analyze all container output and feed it to Elasticsearch, the logging project's version of the web console is all-inclusive, allowing deeper analysis and side-by-side comparison and visualization of actions across all of the nodes' containers.
Figure 4.3. Aggregated Console View

As shown below, with all logs centralized, metrics are collected and presented based on specific keywords defined by Fluentd. In this case, the chart shows a side-by-side count of events from each container, each categorized under a labeled name.
Figure 4.4. Aggregated Visual View

