Chapter 5. Design and Development
5.1. Overview
The source code for the Lambda Air application is made available in a public GitHub repository. This chapter briefly covers each microservice and its functionality, and reviews the pieces of the software stack used in the reference architecture.
5.2. Maven Project Model
Each microservice project includes a Maven POM file, which in addition to declaring the module properties and dependencies, also includes a profile definition to use fabric8-maven-plugin to create and deploy an OpenShift image.
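The profile is not reproduced in full here; a minimal sketch of its general shape follows (the exact goals and configuration in the repository may differ):

```xml
<profiles>
  <profile>
    <id>openshift</id>
    <build>
      <plugins>
        <plugin>
          <groupId>io.fabric8</groupId>
          <artifactId>fabric8-maven-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>resource</goal>
                <goal>build</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

Activating such a profile lets the same POM produce a plain Java artifact by default, while the OpenShift image is generated and deployed only when explicitly requested.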
5.2.1. Supported Software Components
Software components used in this reference architecture application fall into three separate categories:
- Red Hat Supported Software
- Tested and Verified Software Components
- Community Open-Source Software
The use of Maven BOM files to declare library dependencies helps distinguish between these categories.
5.2.1.1. Red Hat Supported Software
The POM file uses a property to declare the base image containing the operating system and Java Development Kit (JDK). All the services in this application build on top of a Red Hat Enterprise Linux (RHEL) base image, containing a supported version of OpenJDK:
<properties>
...
<fabric8.generator.from>registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift
</fabric8.generator.from>
</properties>
Further down in the POM file, the dependency section references a BOM file in the Red Hat repository that maintains a list of supported versions and libraries for WildFly Swarm:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>bom</artifactId>
<version>${version.wildfly.swarm}</version>
<scope>import</scope>
<type>pom</type>
</dependency>
...
This BOM file allows the project Maven files to reference WildFly fractions without providing a version, and import the supported library versions:
<!-- WildFly Swarm Fractions -->
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>monitor</artifactId>
</dependency>
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>jaxrs</artifactId>
</dependency>
...
<!-- CDI needed to inject system properties with @RequestScoped -->
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>cdi</artifactId>
</dependency>
5.2.1.2. Tested and Verified Software Components
To use tested and verified components, a project Maven file would also reference the bom-certified file in its dependency management section:
<dependencyManagement>
...
<dependencies>
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>bom-certified</artifactId>
<version>${version.wildfly.swarm}</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>
This BOM file maintains a list of library versions that have been tested and verified, allowing their use in the project POM:
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>hystrix</artifactId>
</dependency>
5.2.1.3. Community Open-Source Software
The reference architecture application also makes occasional use of open-source libraries that are neither supported nor tested and verified by Red Hat. In such cases, the POM files do not make use of dependency management, and directly import the required version of each library:
<!-- Community fraction - Jaeger OpenTracing -->
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>jaeger</artifactId>
<version>${version.wildfly.swarm.community}</version>
</dependency>
<dependency>
<groupId>io.opentracing.contrib</groupId>
<artifactId>opentracing-jaxrs2</artifactId>
<version>0.0.9</version>
</dependency>
<!-- Hystrix plugin included in this library and works regardless of Feign -->
<dependency>
<groupId>io.github.openfeign.opentracing</groupId>
<artifactId>feign-hystrix-opentracing</artifactId>
<version>0.0.5</version>
</dependency>
5.2.2. OpenShift Health Probes
Every service in this application also declares a dependency on the WildFly Swarm Monitor fraction, which provides access to the application runtime status on each node.
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>monitor</artifactId>
</dependency>
When a dependency on the Monitor is declared, fabric8 generates default OpenShift health probes that communicate with Monitor services to determine whether a service is running and ready to service requests:
livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 180
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
5.3. Resource Limits
OpenShift allows administrators to set constraints to limit the number of objects or amount of compute resources that are used in each project. While these constraints apply to projects in the aggregate, each pod may also request minimum resources and/or be constrained with limits on its memory and CPU use.
The OpenShift template provided in the project repository for the Jaeger agent uses this capability to request that at least 20% of a CPU core and 200 megabytes of memory be made available to its container. If necessary and available, up to twice the processing power (400 millicores) and up to four times the memory (800 megabytes) may be provided to the container, but no more than that will be assigned.
resources:
limits:
cpu: "400m"
memory: "800Mi"
requests:
cpu: "200m"
memory: "200Mi"
When the fabric8 Maven plugin is used to create the image and direct edits to the deployment configuration are not convenient, resource fragments can be used to provide the desired snippets. This application provides deployment.yml files to leverage this capability and set resource requests and limits on the WildFly Swarm projects:
spec:
replicas: 1
template:
spec:
containers:
- resources:
requests:
cpu: '200m'
memory: '400Mi'
limits:
cpu: '400m'
memory: '800Mi'
Control over the memory and processing use of individual services is often critical. When configured as shown above, these values are applied seamlessly as part of the deployment and administration process. It can also be helpful to set up resource quotas in projects to enforce the inclusion of resource requests and limits in pod deployment configurations.
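For illustration, a ResourceQuota of roughly the following shape (the name and values are hypothetical, not part of the project repository) constrains the aggregate compute resources of a project; once such a quota is in place, pods that omit resource requests and limits are rejected, which enforces their inclusion:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
```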
5.4. WildFly Swarm REST Service
5.4.1. Overview
The Airports service is the simplest microservice of the application, which makes it a good point of reference for building a basic WildFly Swarm REST service.
5.4.2. WildFly Swarm REST Service
To easily include the dependencies for a simple WildFly Swarm application that provides a REST service, declare the following artifact:
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>jaxrs</artifactId>
</dependency>
To receive and process REST requests, include a Java class annotated with Path:
...
import javax.ws.rs.Path;
...
@Path("/")
public class Controller
This is enough to create a JAX-RS service that listens on the default port of 8080 on the root context.
Each REST operation is implemented by a Java method. Business operations typically require specifying the HTTP verb, request and response media types, and request arguments:
@GET
@Path("/airports")
@Produces(MediaType.APPLICATION_JSON)
public Collection<Airport> airports(@QueryParam( "filter" ) String filter)
{
...
5.4.3. Startup Initialization
The Airports service uses eager initialization to load airport data into memory at the time of startup. This is implemented through a ServletContextListener that is called as the Servlet context is initialized and destroyed:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
@WebListener
public class ApplicationInitialization implements ServletContextListener
{
@Override
public void contextInitialized(ServletContextEvent sce)
...
5.5. JAX-RS Client and Load Balancing
5.5.1. Overview
The Flights service has a similar structure to that of the Airports service, but relies on, and calls the Airports service. As such, it makes use of the JAX-RS Client and the generated OpenShift service for high availability.
5.5.2. JAX-RS Client
The JAX-RS Client is made available alongside the JAX-RS service library and is therefore already imported for these projects.
To obtain a new instance of the JAX-RS Client, use the factory method provided by the ClientBuilder class. Convenience methods for working with the JAX-RS client are included in the RestClient class within each project:
private static Client getClient()
{
Client client = ClientBuilder.newClient();
...
To make calls to a service using the JAX-RS Client, a WebTarget object must be obtained using the destination address. The convenience method provided for this purpose assumes that service addresses are externalized as system properties and retrievable through the service name:
public static WebTarget getWebTarget(String service, Object... path)
{
Client client = getClient();
WebTarget target = client.target( System.getProperty( "service." + service + ".baseUrl" ) );
for( Object part : path )
{
target = target.path( String.valueOf( part ) );
}
return target;
}
Given a WebTarget, a convenience method helps make the request, parse the response and return the right object type:
public static <T> T invokeGet(WebTarget webTarget, Class<T> responseType) throws HttpErrorException, ProcessingException
{
Response response = webTarget.request( MediaType.APPLICATION_JSON ).get();
return respond( response, responseType );
}
Parsing the response is just one line when the request is successful, but it is important to also check the response code for errors, and react appropriately:
private static <T> T respond(Response response, Class<T> responseType) throws HttpErrorException
{
if( response.getStatus() >= 400 )
{
HttpErrorException exception = new HttpErrorException( response );
logger.info( "Received an error response for the HTTP request: " + exception.getMessage() );
throw exception;
}
else if( responseType.isArray() )
{
return response.readEntity( new GenericType<>( responseType ) );
}
else
{
return response.readEntity( responseType );
}
}
Using these convenience methods, services can be called in one or two easy lines, for example:
WebTarget webTarget = RestClient.getWebTarget( "airports", "airports" );
Airport[] airportArray = RestClient.invokeGet( webTarget, Airport[].class );
The service address provided to the convenience method is resolved based on values provided in the configuration properties:
service:
airports:
baseUrl: http://edge:8080/airports
The service address is resolved, based on the service name, to a URL with the hostname edge and port 8080. The Edge service uses the second part of the address, the root web context, to redirect the request through static or dynamic routing, as explained later in this document.
The provided hostname of edge is the OpenShift service name, which is resolved to the cluster IP address of the service and routed through an internal OpenShift load balancer. The OpenShift service name is determined when the service is created with the oc tool; when an image is deployed using the fabric8 Maven plugin, the name is declared in the service YAML file.
It should be emphasized that all calls are routed to an OpenShift internal load balancer, which is aware of the replication and failure of service instances and can redirect each request appropriately.
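For reference, an OpenShift service definition of roughly the following shape backs this resolution (the labels and values shown are illustrative, not copied from the repository):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edge
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: edge
```

The selector binds the service to the pods carrying the matching label, so the cluster IP remains stable while pods are replicated, rescheduled, or replaced.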
5.6. WildFly Swarm Web Application
5.6.1. Overview
The Presentation service uses the WildFly Swarm WebApp capability to expose HTML and JavaScript and run a client-side application in the browser.
5.6.2. Context Disambiguation
To avoid a clash between the JAX-RS and Web Application listeners, the Presentation service declares a JAX-RS Application with a root web context of /gateway. This allows the index.html to capture requests sent to the root context:
import javax.ws.rs.ApplicationPath;
@ApplicationPath( "/gateway" )
public class Application extends javax.ws.rs.core.Application
{
}
5.6.3. Bower Package Manager
The Presentation service uses the Bower package manager to declare, download and update JavaScript libraries. The libraries, versions and components to download (or rather, those to ignore) are specified in a bower JSON file. Running bower install downloads the declared libraries to the bower_components directory, which can in turn be imported in the HTML application.
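A bower.json file of the following general shape declares the dependencies; the library versions and ignore patterns here are illustrative, not the exact content of the repository:

```json
{
  "name": "lambda-air-presentation",
  "dependencies": {
    "patternfly": "~3.24.0",
    "jquery-ui": "~1.12.1"
  },
  "ignore": [
    "**/.*",
    "node_modules",
    "test"
  ]
}
```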
5.6.4. PatternFly
The HTML application developed for this reference architecture uses PatternFly to provide consistent visual design and improved user experience.
PatternFly stylesheets are imported in the main html:
<!-- PatternFly Styles -->
<link href="bower_components/patternfly/dist/css/patternfly.min.css" rel="stylesheet"
media="screen, print"/>
<link href="bower_components/patternfly/dist/css/patternfly-additions.min.css" rel="stylesheet"
media="screen, print"/>
The associated JavaScript is also included in the header:
<!-- PatternFly -->
<script src="bower_components/patternfly/dist/js/patternfly.min.js"></script>
5.6.5. JavaScript
The presentation tier of this application is built in HTML5 and relies heavily on JavaScript. This includes making Ajax calls to the API gateway, as well as minor changes to the HTML elements that are visible and displayed to the user.
5.6.5.1. jQuery UI
Some features of the jQuery UI library, including autocomplete for airport fields, are utilized in the presentation layer.
5.6.5.2. jQuery Bootstrap Table
To display flight search results in a dynamic table with pagination, and the ability to expand each row to reveal more data, a jQuery Bootstrap Table library is included and utilized.
5.7. Hystrix
5.7.1. Overview
The Presentation service also contains a REST service that acts as an API gateway. The API gateway makes simple REST calls to the Airports service, similar to the previously discussed Flights service, but it also calls the Sales service to get pricing information and uses a different pattern for that call. Hystrix is used to avoid a large number of hung threads and lengthy timeouts when the Sales service is down; instead, flight information can be returned without a ticket price. The reactive interface of Hystrix is also leveraged to implement parallel processing.
5.7.2. Circuit Breaker
Hystrix provides multiple patterns for the use of its API. The Presentation service wraps its outgoing calls to Sales in a Hystrix command:
private class PricingCall extends HystrixCommand<Itinerary>
{
private Flight flight;
PricingCall(Flight flight)
{
super( HystrixCommandGroupKey.Factory.asKey( "Sales" ),
HystrixThreadPoolKey.Factory.asKey( "SalesThreads" ) );
this.flight = flight;
}
@Override
protected Itinerary run() throws HttpErrorException, ProcessingException
{
WebTarget webTarget = getWebTarget( "sales", "price" );
return invokePost( webTarget, flight, Itinerary.class );
}
@Override
protected Itinerary getFallback()
{
logger.warning( "Failed to obtain price, " + getFailedExecutionException().getMessage() + " for " + flight );
return new Itinerary( flight );
}
}
After being instantiated and provided a flight for pricing, the command takes one of two routes. When it can reach the service being called, the run method is executed, which uses the now-familiar pattern of calling the service through the OpenShift service abstraction. However, if an error prevents the call from reaching the Sales service, getFallback() provides a chance to recover from the error, which in this case means returning the itinerary without a price.
The fallback scenario can happen simply because the call has failed, but also in cases when the circuit is open (tripped). Configure Hystrix as part of the service properties to specify when a thread should time out and fail, as well as the queue used for concurrent processing of outgoing calls.
To configure the command timeout for a specific command (and not globally), the HystrixCommandKey is required. This defaults to the command class name, which is PricingCall in this implementation.
Configure thread pool properties for this specific thread pool by using the specified thread pool key of SalesThreads.
hystrix.command.PricingCall.execution.isolation.thread.timeoutInMilliseconds: 2000
hystrix:
threadpool:
SalesThreads:
coreSize: 20
maxQueueSize: 200
queueSizeRejectionThreshold: 200
5.7.3. Concurrent Reactive Execution
We assume technical considerations have led to the Sales service accepting a single flight object in its API. To reduce lag time and take advantage of horizontal scaling, the service uses Reactive Commands for batch processing of pricing calls.
The configured thread pool size is injected into the API gateway service as a field:
@Inject
@ConfigurationValue( "hystrix.threadpool.SalesThreads.coreSize" )
private int threadSize;
To enable injection, the API_GatewayController class must be annotated as a bean:
@RequestScoped
public class API_GatewayController
The thread size is later used as the batch size for the concurrent calls to calculate the price of a flight:
int batchLimit = Math.min( index + threadSize, itineraries.length );
for( int batchIndex = index; batchIndex < batchLimit; batchIndex++ )
{
observables.add( new PricingCall( itineraries[batchIndex] ).toObservable() );
}
The Reactive zip operator is used to process the calls for each batch concurrently and store results in a collection. The number of batches depends on the ratio of total flights found to the batch size, which is set to 20 in this service configuration.
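The reactive zip pipeline itself relies on RxJava and the Hystrix command shown earlier, so it is not reproduced here. As a self-contained sketch of the same batching idea using only JDK classes, hypothetical pricing work can be submitted in batches of threadSize and joined before the next batch begins; the BatchPricing class and its price method are stand-ins invented for this illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchPricing
{
    // Hypothetical stand-in for a remote pricing call
    static Integer price(int flight)
    {
        return flight * 100;
    }

    public static List<Integer> priceAll(int[] flights, int threadSize) throws Exception
    {
        ExecutorService pool = Executors.newFixedThreadPool( threadSize );
        List<Integer> results = new ArrayList<>();
        try
        {
            for( int index = 0; index < flights.length; index += threadSize )
            {
                // Build one batch of at most threadSize concurrent calls
                List<Callable<Integer>> batch = new ArrayList<>();
                int batchLimit = Math.min( index + threadSize, flights.length );
                for( int batchIndex = index; batchIndex < batchLimit; batchIndex++ )
                {
                    final int flight = flights[batchIndex];
                    batch.add( () -> price( flight ) );
                }
                // invokeAll blocks until the whole batch completes,
                // analogous to zipping the batch of observables
                for( Future<Integer> future : pool.invokeAll( batch ) )
                {
                    results.add( future.get() );
                }
            }
        }
        finally
        {
            pool.shutdown();
        }
        return results;
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println( priceAll( new int[]{1, 2, 3, 4, 5}, 2 ) );
    }
}
```

The key property shared with the reactive implementation is that no more than threadSize calls are in flight at once, and results are collected in the original order.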
5.8. OpenShift ConfigMap
5.8.1. Overview
While considering the concurrent execution of pricing calls, it should be noted that the API gateway is itself multi-threaded, so the batch size is not the final determinant of the thread count. In this example of a batch size of 20, with a maximum queue size of 200 and the same threshold leading to rejection, receiving more than 10 concurrent query calls can lead to errors. These values should be fine-tuned based on realistic expectations of load as well as the horizontal scaling of the environment.
This configuration can be externalized by creating a ConfigMap for each OpenShift environment, with overriding values supplied in a properties file that is then mounted into all future pods.
5.8.2. Property File Mount
Refer to the steps in creating the environment for detailed instructions on how to create an external application properties file and mount it in the pod. The property file is placed on the application class path and the provided values supersede those of the application.
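The exact commands depend on the environment; the general pattern with the oc client looks roughly like the following, where the ConfigMap name, file name, deployment name, and mount path are all illustrative:

```shell
# Create a ConfigMap from a local properties file (illustrative names)
oc create configmap app-config --from-file=project-defaults.yml

# Mount the ConfigMap into the deployment so future pods see the file
oc set volume dc/flights --add --name=app-config \
    --type=configmap --configmap-name=app-config \
    --mount-path=/deployments/config
```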
5.9. Jaeger
5.9.1. Overview
This reference architecture uses Jaeger and the OpenTracing API to collect and broadcast tracing data to the Jaeger back end, which is deployed as an OpenShift service and backed by persistent Cassandra database images. The tracing data can be queried from the Jaeger console, which is exposed through an OpenShift route.
5.9.2. Cassandra Database
5.9.2.1. Persistence
The Cassandra database configured by the default jaeger-production-template is an ephemeral datastore with 3 replicas. This reference architecture adds persistence to Cassandra by configuring persistent volumes.
5.9.2.2. Persistent Volume Claims
This reference architecture uses volumeClaimTemplates to dynamically create the required number of persistent volume claims:
volumeClaimTemplates:
- metadata:
name: cassandra-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
- metadata:
name: cassandra-logs
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
These two volume claim templates generate a total of 6 persistent volume claims for the three Cassandra replicas.
5.9.2.3. Persistent Volume
In most cloud environments, corresponding persistent volumes would be available or dynamically provisioned. The reference architecture lab creates and mounts a logical volume that is exposed through NFS. In total, 6 OpenShift persistent volumes serve to expose the storage to the image. Once the storage is set up and shared by the NFS server:
$ oc create -f Jaeger/jaeger-pv.yml
persistentvolume "cassandra-pv-1" created
persistentvolume "cassandra-pv-2" created
persistentvolume "cassandra-pv-3" created
persistentvolume "cassandra-pv-4" created
persistentvolume "cassandra-pv-5" created
persistentvolume "cassandra-pv-6" created5.9.3. Jaeger Image
Other than configuring persistence, this reference architecture uses the Jaeger production template of the jaeger-openshift project as provided in the latest release at the time of writing. This results in the use of version 0.6 images from jaegertracing:
- description: The Jaeger image version to use
displayName: Image version
name: IMAGE_VERSION
required: false
value: "0.6"
5.9.4. Jaeger Tracing Client
While the Jaeger service allows distributed tracing data to be aggregated, persisted and used for reporting, this application also relies on the client-side Java implementation of the OpenTracing API by Jaeger to correlate calls and send data to the server.
Integration with JAX-RS and other framework libraries makes it very easy to use Jaeger in the application. Include the libraries by declaring a dependency in the project Maven file:
<!-- Jaeger OpenTracing -->
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>jaeger</artifactId>
</dependency>
To collect distributed tracing data for calls made from the JAX-RS Client, include the opentracing-jaxrs2 library:
<dependency>
<groupId>io.opentracing.contrib</groupId>
<artifactId>opentracing-jaxrs2</artifactId>
<version>0.0.9</version>
</dependency>
Use the JAX-RS Feature provided in this library to intercept outgoing calls from the JAX-RS Client. This application registers the Feature in the convenience method that abstracts away the JAX-RS Client configuration:
client.register( ClientTracingFeature.class );
The application properties also specify the percentage of requests that should be traced, as well as the connection information for the Jaeger server. Once again, the application relies on the OpenShift service abstraction to reach the Jaeger service; jaeger-agent is the OpenShift service name:
JAEGER_AGENT_HOST: jaeger-agent
JAEGER_AGENT_PORT: 6831
JAEGER_SERVICE_NAME: presentation
JAEGER_REPORTER_LOG_SPANS: true
JAEGER_SAMPLER_TYPE: const
JAEGER_SAMPLER_PARAM: 1
The sampler type is set to const, indicating that the Constant Sampler should be used. The sampling rate of 1, meaning 100%, is therefore already implied, but it is a required configuration that should be left in. The Jaeger service name affects how tracing data from this service is reported by the Jaeger console, and the agent host and port are used to reach the Jaeger agent through UDP.
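Should only a fraction of requests need tracing, the sampler configuration could instead look like the following (illustrative values, not used in this application), where the probabilistic sampler traces the given fraction of requests:

```yaml
JAEGER_SAMPLER_TYPE: probabilistic
JAEGER_SAMPLER_PARAM: 0.1
```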
These steps are enough to collect tracing data, but a Tracer object can also be retrieved by the code for extended functionality, by calling GlobalTracer.get(). While every remote call can produce and store a trace by default, adding a tag can help to better understand the tracing reports. The service also creates and demarcates tracing spans of interest, by treating the span as a Java resource, to collect more meaningful tracing data.
5.9.4.1. Baggage Data
While the Jaeger client library is primarily intended as a distributed tracing tool, its ability to correlate distributed calls can have other practical uses as well. Every created span allows the attachment of arbitrary data, called a baggage item, that will be automatically inserted into the HTTP header and seamlessly carried along with the business request from service to service, for the duration of the span. This application is interested in making the original caller’s IP address available to every microservice. In an OpenShift environment, the calling IP address is stored in the HTTP header under a standard key. To retrieve and set this value on the span:
querySpan.setBaggageItem( "forwarded-for", request.getHeader( "x-forwarded-for" ) );
This value will later be accessible from any service within the same call span under the header key of uberctx-forwarded-for. It is used by the Edge service in JavaScript to perform dynamic routing.
5.10. Edge Service
5.10.1. Overview
This reference architecture uses a reverse proxy for all calls between microservices. The reverse proxy is a custom implementation of an edge service that supports both declarative static routing and dynamic routing, the latter through simple JavaScript or other pluggable implementations.
5.10.2. Usage
By default, the Edge service uses static declarative routing as defined in its application properties:
edge:
proxy:
presentation:
address: http://presentation:8080
airports:
address: http://airports:8080
flights:
address: http://flights:8080
sales:
address: http://sales:8080
It should be noted that declarative mapping is considered static only because routing is based on the service address, without regard for the content of the request or other contextual information. It is still possible to override the mapping properties through an OpenShift ConfigMap, as outlined in the section on Hystrix properties, to change the mapping rules for a given web context.
Dynamic routing through JavaScript is a simple matter of reassigning the implicit hostAddress variable:
if( mapper.getServiceName( request ) == "sales" )
{
var ipAddress = mapper.getBaggageItem( request, "forwarded-for" );
mapper.fine( 'Got IP Address as ' + ipAddress );
if( ipAddress )
{
var lastDigit = ipAddress.substring( ipAddress.length - 1 );
mapper.fine( 'Got last digit as ' + lastDigit );
if( lastDigit % 2 == 0 )
{
mapper.info( 'Rerouting to B instance for IP Address ' + ipAddress );
//Even IP address, reroute for A/B testing:
hostAddress = mapper.getRoutedAddress( request, "http://sales2:8080" );
}
}
}
Mapping through the application properties happens first, and if a script does not modify the value of hostAddress, the original mapping remains effective.
5.10.3. Implementation Details
This edge service extends Smiley’s HTTP Proxy Servlet, an open-source reverse proxy implementation using Java Servlet and Apache HttpClient. This library provides pluggable dynamic routing and has been tested and used in the community.
To use this component, the Edge service extends the provided proxy Servlet:
@WebServlet( name = "Edge", urlPatterns = "/*" )
public class EdgeService extends ProxyServlet
The implementation does not require any initialization, so an empty method is provided:
@Override
protected void initTarget() throws ServletException
{
//No target URI used
}
The main logic for the Servlet class is provided in the service method. The implementation for this proxy uses its own routing rules to set the destination host and address as the ATTR_TARGET_HOST and ATTR_TARGET_URI request attributes, respectively. The mapping object is used to obtain the destination based on the request:
@Override
protected void service(HttpServletRequest servletRequest, HttpServletResponse servletResponse) throws ServletException, IOException
{
try
{
String fullAddress = mapping.getHostAddress( servletRequest );
URI uri = new URI( fullAddress );
logger.fine( "Will forward request to " + fullAddress );
servletRequest.setAttribute( ATTR_TARGET_HOST, URIUtils.extractHost( uri ) );
servletRequest.setAttribute( ATTR_FULL_URI, uri.toString() );
URI noQueryURI = new URI( uri.getScheme(), uri.getUserInfo(),
uri.getHost(), uri.getPort(), uri.getPath(), null, null );
servletRequest.setAttribute( ATTR_TARGET_URI, noQueryURI.toString() );
super.service( servletRequest, servletResponse );
}
catch( URISyntaxException e )
{
throw new ServletException( e );
}
}
The ProxyServlet also relies on another method to find the routing address based on the request. The Edge implementation maps the route in a single step, so the result above is stored as a request attribute and can be returned from the second method:
@Override
protected String rewriteUrlFromRequest(HttpServletRequest servletRequest)
{
return (String)servletRequest.getAttribute( ATTR_FULL_URI );
}
Mapping is provided by a separate framework as part of the same service. The integration with this Servlet is simple and performed through the injection of the MappingConfiguration object:
@Inject private MappingConfiguration mapping;
MappingConfiguration retains a chain of mappers it uses for routing:
@ApplicationScoped
public class MappingConfiguration
{
private static Logger logger = Logger.getLogger( MappingConfiguration.class.getName() );
private List<Mapper> mapperChain = new ArrayList<>();
public MappingConfiguration()
{
Mapper[] candidates = new Mapper[]{PropertyMapper.getInstance(),
JavaScriptMapper.getInstance()};
for( Mapper candidate : candidates )
{
if( candidate.initialize() )
{
mapperChain.add( candidate );
}
}
}
public String getHostAddress(HttpServletRequest request)
{
if( mapperChain.isEmpty() )
{
logger.severe( "No mapper configured, will return null" );
return null;
}
else
{
Iterator<Mapper> mapperIterator = mapperChain.iterator();
String hostAddress = mapperIterator.next().getHostAddress( request, null );
logger.fine( "Default mapper returned " + hostAddress );
while( mapperIterator.hasNext() )
{
Mapper mapper = mapperIterator.next();
hostAddress = mapper.getHostAddress( request, hostAddress );
logger.fine( "Mapper " + mapper + " returned " + hostAddress );
}
return hostAddress;
}
}
}
The default implementation uses a PropertyMapper and a JavaScriptMapper. The property mapper looks up the root web context in system properties, typically defined in a project-defaults.yml file, and uses the value as the destination address.
The JavaScript mapper looks for /edge/routing.js on the file system and, if found, executes the script. The HTTP request is injected into the JavaScript context as the request variable, and the destination address returned by the previous mapper in the chain is injected as hostAddress. The mapper object itself is also injected as mapper, providing convenience methods to look up Jaeger baggage items and construct a URL with a new host address. After execution, the modified value of hostAddress is read from the context and used.
5.10.4. A/B Testing
To implement A/B testing, the Sales2 service introduces a minor change in the algorithm for calculating fares. Dynamic routing is provided by Edge through JavaScript.
Only calls to the Sales service are potentially filtered:
if( mapper.getServiceName( request ) == "sales" )
{
...
}
From those calls, requests that originate from an IP address ending in an even digit are filtered, by modifying the value of hostAddress:
var ipAddress = mapper.getBaggageItem( request, "forwarded-for" );
mapper.fine( 'Got IP Address as ' + ipAddress );
if( ipAddress )
{
var lastDigit = ipAddress.substring( ipAddress.length - 1 );
mapper.fine( 'Got last digit as ' + lastDigit );
if( lastDigit % 2 == 0 )
{
mapper.info( 'Rerouting to B instance for IP Address ' + ipAddress );
//Even IP address, reroute for A/B testing:
hostAddress = mapper.getRoutedAddress( request, "http://sales2:8080" );
}
}
To enable dynamic routing without changing application code, shared storage is made available to the OpenShift nodes and a persistent volume is created and claimed. With the volume set up and the JavaScript in place, the OpenShift deployment config can be adjusted administratively to mount a directory as a volume:
$ oc volume dc/edge --add --name=edge --type=persistentVolumeClaim --claim-name=edge --mount-path=/edge
This results in a lookup for a routing.js file under the edge directory. If found, its content is executed with the default JavaScript engine of the JDK, and any change to the host address value is returned:
FileReader fileReader = new FileReader( JS_FILE_NAME );
Bindings bindings = engine.createBindings();
bindings.put( "request", request );
bindings.put( "hostAddress", hostAddress );
engine.eval( fileReader, bindings );
return (String)bindings.get( "hostAddress" );