Chapter 2. Microservice Architecture

2.1. Definition

Microservice Architecture (MSA) is a software architectural style that combines well-established and modern patterns and technologies to achieve a number of desirable goals.

Some aspects, for example a divide and conquer strategy to decrease system complexity by increasing modularity, are universally accepted and have long been cornerstones of other competing paradigms.

Other choices carry trade-offs that have to be justified based on the system requirements as well as the overall system design.

General characteristics of microservices include:

  • Applications are developed as a suite of small services, each running as an independent process in its own logical machine (or Linux container)
  • Services are built around capabilities: single responsibility principle
  • Each microservice has a separate codebase and is owned by a separate team
  • One can independently replace / upgrade / scale / deploy services
  • Standard lightweight communication is used, often REST calls over HTTP
  • Potentially heterogeneous environments are supported

2.2. Tradeoffs

The defining characteristic of a Microservice Architecture environment is that modular services are deployed individually and each can be replaced independently of other services or other instances of the same service. Modularity and other best practices yield a number of advantages, but the most distinctive tradeoffs of MSA result from this characteristic.

2.2.1. Advantages

  • Faster and simpler deployment and rollback with smaller services, taking advantage of the divide and conquer paradigm in software delivery and maintenance.
  • Ability to horizontally scale out individual services. Not sharing the same deployment platform with other services allows each service to be scaled out as needed.
  • Selecting the right tool, language and technology per service, without having to conform to a homogeneous environment being dictated by shared infrastructure.
  • Potential for fault isolation at the microservice level, shielding services from infrastructure failures caused by the fault of a single service. Where a system is designed to withstand the failure of some microservices, the result is higher availability for the system.
  • Goes hand in hand with Continuous Integration and Continuous Delivery.
  • Promotes DevOps culture with higher service self-containment and less common infrastructure maintenance.
  • More autonomous teams lead to faster/better development.
  • Facilitates A/B testing and canary deployment of services.
  • Traditional divide and conquer benefits.

2.2.2. Disadvantages

The downsides of MSA are direct results of higher service distribution. There is also a higher cost to having less common infrastructure. Disadvantages may be enumerated as follows:

  • Network reliability is always a concern.
  • Less tooling / IDE support given the distributed nature.
  • Tracing, monitoring and addressing cascading failures are complex.
  • QA, particularly integration testing can be difficult.
  • Debugging is always more difficult for distributed systems.
  • Higher complexity – higher fixed cost and overhead.
  • Heterogeneous environments are difficult and costly to maintain.

2.3. Distributed Modularity Model

2.3.1. Overview

While modular design is a common best practice that is appropriate in just about all circumstances and environments, the logical and physical distribution of the modular units varies greatly, depending on the system architecture.

Some factors to consider:

  • The number of developers: The ideal size of a development team is between 5 and 10 people and each team can focus on one or more microservices. In an organization with only 1 or 2 development teams, the case for decoupling the work is less compelling and the resulting overhead from the architectural choices may be too costly.
  • Are you comfortable on the cutting edge of technology? In its specific shape and form, Microservice Architecture is a new paradigm with only a handful of success stories behind it. The tools and infrastructure to support MSA are neither abundant nor mature, and the cost of adoption is still high.
  • Can you adapt your staffing to DevOps? One of the benefits of MSA is its amenability to a DevOps method and the resulting higher agility. This requires lines to be blurred between development and operations. Not every organization is prepared for the required cultural change.
  • Do you have a production-grade cloud infrastructure: self-service, on-demand, elastic, API-based consumption model with multi-tenant billing capabilities? How easily can independent teams deploy services to production?
  • How skilled are you at troubleshooting system errors? Like any distributed system, an MSA environment can be very difficult to analyze and troubleshoot.
  • Can you afford higher up-front costs? Just about every software methodology and paradigm seeks to maximize the return on investment and minimize the costs. However, costs are not always evenly distributed across the various stages of the software lifecycle. Individual service deployment and a distributed architecture increase complexity and the fixed cost associated with the environment.
  • Do you have a network that can support the architecture? The distributed nature of an MSA environment puts more stress on the network and conversely, a more reliable network is required to support such an architecture.

2.3.2. Monolithic Applications

While many Microservices advocates may use the term monolithic disparagingly, this paper reserves judgement on this design and views it as the result of a series of legitimate trade-offs. This style of architecture may be preferable for certain situations and not for others.

Monolithic applications may be just as modular as microservices, but those modules are typically bundled as a single EAR or WAR file and deployed on a single application server and therefore the same logical machine. In this model, all the modules take advantage of the same infrastructure and maximize efficiency by minimizing network traffic and latency. In some situations, it may even be possible to pass arguments by reference and avoid serialization and data transfer costs.

This diagram shows a traditional Java EE application deployed on a logical machine. Note that this single application consists of two web applications as well as three business services, each of which is modular and contains six embedded modules:

Figure 2.1. Java Enterprise Application

This deployment model minimizes overhead by sharing the application server and environment resources between various components.

Horizontal scaling of such an architecture is simple and often benefits from the clustering capabilities of the underlying application server. Most often, the entire environment is duplicated and the application server replicates any stateful application data that is held in memory:

Figure 2.2. Clustered Java EE Application

The uniformity and consistency of the replicated environment can be as much a handicap as an asset. Deployment, testing and maintenance are simplified, and the consistency avoids various technical and logistical issues.

Things begin to change when one service is much less stable and requires more resources than others. Imagine that the first of the three business services is ten times more likely to hang or otherwise display unpredictable behavior. If that service can crash and bring down the server, it would also take down the other two services. Scaling out this service would likewise require scaling out the entire environment including services that may not have as much load or resource requirements. These issues are some of the biggest drivers of the microservice architecture.

2.3.3. Tactical Microservices

One possible strategy is to address the weaknesses of a traditional monolithic application architecture while continuing to take advantage of its benefits. Instead of proactively decomposing the application into microservices to allow separate lifecycles and deployment, fault isolation, or independent scaling, some organizations prefer to take advantage of the common infrastructure and environment uniformity where possible, while explicitly identifying and extracting components that warrant separation. For example, if one of the business services in the application depicted in Figure 2.1, “Java Enterprise Application” is unstable, requires more resources, or is best maintained and upgraded as a small and separate unit managed by a dedicated team, it may be deployed separately. Similarly, a component within another business service may be extracted and separated:

Figure 2.3. Tactical Microservices

Notice that in this architecture, each new deployment is self-encapsulated and includes its own persistence. The business service continues to be called from the web application, although new restrictions are imposed and this call is now necessarily a remote call. It is preferable to follow RESTful practices and communicate using XML or JSON over HTTP or a similar transport.
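
As a rough illustration of what the newly remote call might look like, the following sketch builds a RESTful JSON request with the standard `java.net.http` API (Java 11+). The service host name, port and resource path are hypothetical and would depend on the actual deployment:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class BusinessServiceClient {
    // Hypothetical base URL; the real endpoint depends on the deployment.
    static final String BASE = "http://business-service.internal:8080";

    // Builds a RESTful GET request for a product resource, expecting JSON.
    public static HttpRequest productRequest(long id) {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/products/" + id))
                .header("Accept", "application/json")
                .timeout(Duration.ofSeconds(2)) // remote calls need explicit timeouts
                .GET()
                .build();
    }
}
```

Unlike the previous in-process call, this request can fail for network reasons, which is why an explicit timeout is part of the sketch.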

This architecture allows the business service and the newly independent microservice to be scaled out separately:

Figure 2.4. Tactical Microservices, HA

In the simple (and unrealistic) scenario where the remainder of the application requires a single instance while the business service needs two and the new microservice has three instances, the application directs its calls to a load balancer, which in turn distributes the load between available services and provides the necessary failover.

The new services are isolated and the rest of the application is at least partially protected from their failure. These services may be scaled out dynamically as required without the overhead of replicating the entire environment.

2.3.4. Strategic Microservices

The architecture previously described in Tactical Microservices is either reactively separating out microservices that require complete isolation or have separate scaling needs, or anticipating such scenarios and proactively deploying them as individual microservices.

The Microservice Architecture paradigm can be fully embraced by decomposing entire applications into microservices and implementing entire systems as separately deployed microservices regardless of actual or anticipated isolation needs of individual services:

Figure 2.5. Strategic Microservices

In this architecture, each microservice includes its own persistence, which is at least logically encapsulated within the service. Each such service can be independently deployed, scaled, upgraded and replaced. The environment is fundamentally heterogeneous, so while frameworks and infrastructure services may be available to provide features and functions, each microservice is free to use its preferred technology. Some of these microservices may run on a Java EE server, but the overhead costs can be exorbitant and should be taken into consideration.

In this architecture, each microservice is easy to deploy, roll back and upgrade. Separate teams can work on separate microservices and the divide and conquer philosophy is used in full force. While the diagram depicts a single web application, there may in fact be zero, one, or many web applications invoking these microservices. Microservices may also depend on a data service layer for persistence, or may in fact not have any persistence requirements.

It is assumed that every microservice has multiple instances deployed, but the number of instances depends on the load and mission criticality of the service in question. It is no longer necessary to deploy 10 copies of one service simply because a different service needs 10 active copies to serve its purpose:

Figure 2.6. Strategic Microservices, HA

Notice that in this architecture diagram, some microservices are depicted as having fewer instances than others. Another obvious benefit of this approach is that the failure of a host due to a misbehaving service can be tolerated with minimal impact on other services.

As shown in the diagram, each microservice has its own persistence store. This is of course a logical data store to avoid creating dependency or coupling between the microservices. This same objective can be achieved by abstracting away the data store with a data service layer. This reference application makes a compromise in using a single database server, accessed directly by the microservices, while using separate schemas to segregate the services.

However, with a large number of microservices, each service may depend on a number of other services, each of which is deployed and scaled out in unknown locations. This leads to the requirement for a comprehensive service discovery solution, where services are registered as soon as they come up on a node and deregistered when they are taken offline. To avoid a single point of failure, such a service discovery solution would have to be replicated and highly available. Despite its HA nature, most services would also need to cache the results and be prepared to work when unable to access this solution.

Load balancing can quickly get more complex when a large number of microservices are scaled out in different numbers and the dependency graph gets more depth and breadth. Services might require their own distinct load balancing strategy and an extra hop in such an environment may prove costlier than usual.

The performance cost of making repeated remote calls typically leads to extensive caching requirements. The most common requirement is a service cache, so that repeated and often expensive calls to the same microservice may be avoided.

High granularity along with a distributed deployment can also lead to orchestration challenges. Because of network latency, parallel invocation of services often becomes desirable, leading to the need for a queuing, asynchronous invocation and orchestration of requests and responses.

In general, as the environment becomes more heterogeneous, using uniform tooling and infrastructure becomes a less viable option.

2.3.5. Business-Driven Microservices

It must be emphasized that a microservice architectural style carries a lot of real benefits along with very real costs. The complexity of the system can grow exponentially with a large number of distributed components, each separately scaled out and perhaps dynamically auto-scaled.

Like most decisions, this does not have to be a binary choice. The modularity of the services can determine the complexity of the environment as well as the benefits and costs that are realized.

A distributed business-driven microservice architecture can achieve many of the benefits, while avoiding some of the costs:

Figure 2.7. Business-Driven Microservices

An important and distinguishing characteristic of this architecture is that microservices do not communicate with one another. Instead, an aggregation layer is provided in the form of a web application that provides the required coordination.

The three services in this architecture diagram exist within a trust perimeter and the web application is the only client permitted to directly access these services. To use a different presentation technology, the web layer may be replaced with an aggregation layer that exposes a REST or other API. For example, JavaScript and similar technology may replace the Servlet layer and instead make direct calls to the server. In such a setup, the aggregation layer would be the only service exposed to outside clients and it would carefully design and expose an API where each operation would orchestrate and coordinate the underlying services as needed.

The architecture presented in this diagram avoids certain costs and continues to benefit from supported and familiar products and frameworks by constraining modularity and avoiding complex dependency graphs.

In its simplest form, microservices in this architecture remain self-contained within the system by avoiding any dependencies on other microservices. This restriction does not apply to external dependencies; rather, it is an attempt to simplify the environment by avoiding a large and deep dependency graph within the system.

When a certain component requires special consideration, either in its scaling requirements or in terms of fault isolation, it can be broken out and deployed independently. This can lead to a hybrid solution incorporating some of the tactical considerations of the previously described architecture depicted in Figure 2.3, “Tactical Microservices”.

System requirements, willingness to be an early-adopter, in-house skill set, required agility and other factors can determine the best fit for an environment. There are systems and environments, for which a monolithic application architecture is the best fit. There are also very agile software groups creating fast-evolving systems that receive a large return on investment in strategic microservices. For a large group in-between, this approach can be a safe and rewarding compromise to gain many of the benefits without paying all of the costs.

Application server clustering can be used in this model to provide high availability. When employing horizontal scaling to provide redundancy and load balancing, service modularity determines the pieces that can be scaled out separately.

In Figure 2.7, “Business-Driven Microservices”, Service 1 may be part of a 3-node cluster while Service 2 is clustered as 10 nodes and Service 3 only has a single active backup. Likewise, a catastrophic failure caused by Service 1 would have no impact on Service 2 and Service 3, as they are separately packaged and deployed.

2.4. Cross-cutting concerns

2.4.1. Overview

Any system has a series of cross-cutting concerns, preferably addressed through consistent and common solutions that result in easier maintenance and lower cost. There are mature tools, frameworks and services that address such concerns and continue to be useful in various architectural styles.

The distributed and modular nature of microservice architecture creates new priorities and raises specific concerns that are not always adequately satisfied by traditional and available solutions.

The nature and specifics of these cross-cutting concerns will depend on the modularity of a microservice architecture. At one end of the spectrum, monolithic applications represent traditional enterprise applications that have been successfully operated in production environments for years and already benefit from a large array of established tools. At the other end of the spectrum, a highly modular and distributed MSA environment, described as strategic microservices in this document, introduces requirements that have not always existed in other architectures and do not have established and mature solutions.

While the concept of a microservices platform is not a well-defined industry term at the time of writing, it will inevitably emerge as the paradigm becomes more prevalent. Such a platform would provide value by filling in the missing pieces and easing the burden that is placed on early adopters today.

2.4.2. Containerization

An important feature and arguably the cornerstone of the microservice architecture is the isolated and individual deployment of each service. It is important that every instance of each microservice have complete autonomy over its environment. Given the high granularity and the relatively small size of each service, dedicating a physical machine to each instance is not under consideration.

Virtualization reduces the overhead cost of each logical machine by sharing host resources and is often an acceptable environment for microservices. However, Linux containers, and Docker technology in particular, have improved on the benefits of virtualization by avoiding the full cost of an operating system and sharing some of the host services that would otherwise be unnecessarily duplicated in each virtual machine.

Docker containers are emerging as the preferred units of deployment for microservices.

2.4.3. Service Discovery

Highly granular MSA environments typically involve dozens of services, each deployed as multiple instances. The dependency graph for some service invocations may involve as many as 10 to 20 calls and be up to 4 or 5 levels deep. This type of distribution makes a comprehensive service discovery solution critical. To take advantage of service redundancy, the caller needs to locate available and deployed instances of any given service at the required time.

The service discovery solution would have to include a distributed and highly available service registry where each service instance can register itself upon deployment and de-register on shutdown. There often needs to be a health check mechanism to remove instances that have suddenly dropped off, or be notified of failures to reach a service.

Communication with the service registry is best achieved through REST calls over HTTP to ensure that the solution remains language and platform agnostic.
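
To make the registry's responsibilities concrete, the following is a minimal in-memory sketch of the register, deregister and lookup operations. It is a simplification for illustration only: a production registry would be replicated, exposed over a REST API and backed by health checks, none of which are shown here:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class ServiceRegistry {
    // service name -> set of "host:port" instance addresses
    private final Map<String, Set<String>> instances = new ConcurrentHashMap<>();

    // Called by a service instance when it comes up on a node.
    public void register(String service, String address) {
        instances.computeIfAbsent(service, s -> new CopyOnWriteArraySet<>()).add(address);
    }

    // Called on graceful shutdown, or by a health checker that detects a dead instance.
    public void deregister(String service, String address) {
        Set<String> set = instances.get(service);
        if (set != null) set.remove(address);
    }

    // Returns an immutable snapshot of the currently registered instances.
    public List<String> lookup(String service) {
        return List.copyOf(instances.getOrDefault(service, Set.of()));
    }
}
```

Callers would typically cache the result of `lookup` locally, which is exactly the fallback behavior described above for when the registry is unreachable.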

Red Hat JBoss Data Grid provides a large number of features including RESTful interfaces, queries, customization and of course replication, that make it an attractive foundation for a service registry.

2.4.4. Load Balancer

One of the obvious costs of microservice architecture is the network latency that is introduced by the number of hops as a service dependency graph is traversed. Using a traditional load balancer typically doubles this latency by introducing an extra hop on each microservice invocation. For strategic microservices and in what is often an already chatty network environment, these extra hops are typically not acceptable. This architecture benefits from a load balancing solution that can be embedded in the client to eliminate the extra remote call. Such a framework would benefit from an IoC approach, allowing each caller to determine the load balancing strategy according to the circumstances.
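
A minimal sketch of such a client-embedded balancer follows, with the strategy injected as a function so each caller can choose its own policy; the round-robin strategy shown is just one hypothetical default:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class ClientSideBalancer {
    // Pluggable strategy (IoC): picks one instance from the discovered list.
    private final Function<List<String>, String> strategy;

    public ClientSideBalancer(Function<List<String>, String> strategy) {
        this.strategy = strategy;
    }

    // Chooses a target directly in the caller: no extra network hop.
    public String choose(List<String> instances) {
        if (instances.isEmpty()) throw new IllegalStateException("no instances available");
        return strategy.apply(instances);
    }

    // Simple round-robin; weighted or latency-aware strategies could be plugged in instead.
    public static Function<List<String>, String> roundRobin() {
        AtomicInteger next = new AtomicInteger();
        return list -> list.get(Math.floorMod(next.getAndIncrement(), list.size()));
    }
}
```

The instance list would typically come from the service discovery solution described in the previous section.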

2.4.5. Cache

In addition to common caching requirements in enterprise applications, typically used in front of databases or other remote and expensive calls, the distribution of functionality in an application often leads to repeated remote calls to a service, requesting the same information and unnecessarily increasing its load.

In microservice architecture environments with a large number of fine-grained microservices, it is prudent to identify those services that are often repeatedly called with the same request and take advantage of a service cache to increase performance and reduce resource cost.
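
The idea of a service cache can be sketched as a small wrapper that consults a bounded, least-recently-used map before falling back to the expensive remote call. This is an illustrative, single-JVM simplification; a product such as JBoss Data Grid would add distribution, replication and expiry:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class ServiceCache<K, V> {
    private final Map<K, V> cache;
    private final Function<K, V> remoteCall;

    public ServiceCache(int maxEntries, Function<K, V> remoteCall) {
        this.remoteCall = remoteCall;
        // Access-ordered LinkedHashMap that evicts the least recently used entry.
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    // Returns the cached value, invoking the expensive remote call only on a miss.
    public synchronized V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = remoteCall.apply(key);
            cache.put(key, value);
        }
        return value;
    }
}
```

In practice, the bound on entries and an expiry policy protect the cache itself from becoming a resource problem.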

Red Hat JBoss Data Grid provides a powerful caching solution with support for geographically distributed data, data sharding, consistent hashing algorithm and many other useful and relevant features that make it a great fit for an MSA environment.

2.4.6. Throttling, Circuit Breaker, Composable Asynchronous Execution

Complex dependency graphs along with network latency often make parallel invocation of services a necessity. To successfully orchestrate calls to dependencies while taking advantage of parallel execution, a sync to async pattern is often required. Once such an approach has been implemented, it becomes fairly easy to throttle calls to a service, or outbound calls from a service. The JAX-RS 2.0 specification provides support for asynchronous REST processing as well as asynchronous REST clients.
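
The sync to async pattern can be sketched with the standard `CompletableFuture` API: two hypothetical downstream calls are started in parallel and their results composed, so the caller blocks only for the slower of the two rather than for their sum:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BiFunction;
import java.util.function.Supplier;

public class ParallelInvoker {
    // Invokes two (hypothetical) downstream services in parallel and
    // combines their results once both have completed.
    public static <A, B, R> R invokeBoth(Supplier<A> first, Supplier<B> second,
                                         BiFunction<A, B, R> combine) {
        CompletableFuture<A> fa = CompletableFuture.supplyAsync(first);
        CompletableFuture<B> fb = CompletableFuture.supplyAsync(second);
        return fa.thenCombine(fb, combine).join(); // blocks only for the slower call
    }
}
```

Bounding the executor passed to `supplyAsync` is one natural place to implement throttling of outbound calls.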

Another critical design pattern for an MSA environment is the circuit breaker, which can limit the number of threads stuck while attempting to call a single service and protect the rest of the environment from faulty services.
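
The essence of the pattern is a small state machine, sketched below under simplified assumptions (a fixed failure threshold and open interval, no half-open probe logic beyond retrying after the interval). Production implementations track considerably more state:

```java
import java.util.function.Supplier;

public class CircuitBreaker {
    private final int failureThreshold;
    private final long openMillis;
    private int failures = 0;
    private long openedAt = -1; // -1 means the circuit is closed

    public CircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    public synchronized <T> T call(Supplier<T> remote, Supplier<T> fallback) {
        if (openedAt >= 0 && System.currentTimeMillis() - openedAt < openMillis) {
            return fallback.get(); // open: fail fast, no thread waits on the remote call
        }
        try {
            T result = remote.get();
            failures = 0;
            openedAt = -1; // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            if (++failures >= failureThreshold) openedAt = System.currentTimeMillis();
            return fallback.get();
        }
    }
}
```

Once the circuit is open, callers receive the fallback immediately instead of accumulating blocked threads on a service that is known to be failing.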

2.4.7. Security

Authentication and Authorization requirements are ubiquitous in practically all software environments.

One of the primary considerations in a microservice architecture environment is how the user identity will be propagated through the distributed service call. While many such environments may designate a security perimeter and not have each service be concerned with authenticating the end user, this approach is neither advisable nor acceptable in all situations.

Industry standards such as OAuth2, SAML and similar token-based security solutions provide a natural fit for RESTful services in a distributed environment. JBoss software provides support for these standards and satisfies associated security requirements through the PicketLink and Keycloak projects.

2.4.8. Monitoring and Management

The monitoring and management aspects of microservices are highly dependent on the deployment environment.

Most microservice deployments occur in an on-premise or public cloud environment. These cloud environments typically include native monitoring and management tools that can easily be used for the deployed services.

2.4.9. Resilience Testing

Microservices are designed and built to have the overall system withstand the failure of individual services. Like any feature or objective, this attribute needs to be tested and verified.

Test suites often need to be developed to verify the resilience of the system when unexpected load is placed on one service or a defect causes some service instances to break down.

Available testing and environment frameworks are often adapted to create the necessary QA tools for MSA environments.

2.5. Anatomy of a Microservice

The microservice architectural style lays out a set of principles on how application functionality can be decomposed into modular services, how these services should be deployed and the best practices around their inter-communication and other aspects of the architecture.

It is no coincidence that the design and development of the microservice itself is not part of this conversation. One of the stated goals of the microservice architectural style is to allow choice for the developers of each microservice to use the best tools and technologies, without the need to conform to an enterprise-wide or even a system-wide standard.

Despite this choice and the variety in both the requirements and their implementation from one service to another, these services largely resemble other enterprise software components. The term microservice may mislead some to view it as a trivial component but any system justifying the adoption of microservice architecture is complicated enough that each microservice will have its own significant dependencies and technical requirements.

Most microservices require persistence and need database connection pooling and connection management. Some have external dependencies and need to integrate with legacy systems. Oftentimes, a microservice needs to enforce authentication and authorization; it would therefore benefit from declarative security. When a service performs several tasks as part of the same responsibility, even transactional behavior within the service may be required or beneficial.

These requirements are fundamentally no different than common enterprise software requirements that have led to the prevalence of application servers. The biggest impediment to using a Java EE application server to host an individual microservice is the resource usage and high fixed cost. Application servers are designed to act as shared infrastructure for a large number of software components, and with enough load, the overhead cost is diminished in comparison. In a microservice architecture where each service is deployed separately, this overhead can become prohibitively large.

JBoss EAP 7 benefits from an exceptional level of modularity afforded to the platform by the use of JBoss Modules. As a result, the platform can be configured to exclude modules that are not used by a given microservice and minimize the overhead.

While the ultimate choice of structure and deployment for each microservice is made separately, the benefits of creating a microservice as a JBoss EAP 7 application are well worth considering.