
Implementing Enterprise Integration Patterns

Version 7.1

December 2012
Trademark Disclaimer
Third Party Acknowledgements

Updated: 08 Jan 2014

Table of Contents

1. Building Blocks for Route Definitions
Implementing a RouteBuilder Class
Basic Java DSL Syntax
Router Schema in a Spring XML File
Endpoints
Processors
2. Basic Principles of Route Building
Pipeline Processing
Multiple Inputs
Exception Handling
onException Clause
Error Handler
doTry, doCatch, and doFinally
Propagating SOAP Exceptions
Bean Integration
Aspect Oriented Programming
Transforming Message Content
Property Placeholders
Threading Model
Controlling Start-Up and Shutdown of Routes
Scheduled Route Policy
Overview of Scheduled Route Policies
Simple Scheduled Route Policy
Cron Scheduled Route Policy
JMX Naming
3. Introducing Enterprise Integration Patterns
Overview of the Patterns
4. Messaging Systems
Message
Message Channel
Message Endpoint
Pipes and Filters
Message Router
Message Translator
5. Messaging Channels
Point-to-Point Channel
Publish-Subscribe Channel
Dead Letter Channel
Guaranteed Delivery
Message Bus
6. Message Construction
Correlation Identifier
Event Message
Return Address
7. Message Routing
Content-Based Router
Message Filter
Recipient List
Splitter
Aggregator
Resequencer
Routing Slip
Throttler
Delayer
Load Balancer
Multicast
Composed Message Processor
Scatter-Gather
Loop
Sampling
Dynamic Router
8. Message Transformation
Content Enricher
Content Filter
Normalizer
Claim Check
Sort
Validate
9. Messaging Endpoints
Messaging Mapper
Event Driven Consumer
Polling Consumer
Competing Consumers
Message Dispatcher
Selective Consumer
Durable Subscriber
Idempotent Consumer
Transactional Client
Messaging Gateway
Service Activator
10. System Management
Detour
LogEIP
Wire Tap
A. Migrating from ServiceMix EIP
Migrating Endpoints
Common Elements
ServiceMix EIP Patterns
Content-based Router
Content Enricher
Message Filter
Pipeline
Resequencer
Static Recipient List
Static Routing Slip
Wire Tap
XPath Splitter
Index

List of Figures

1.1. Local Routing Rules
2.1. Processor Modifying an In Message
2.2. Processor Creating an Out Message
2.3. Sample Pipeline for InOnly Exchanges
2.4. Sample Pipeline for InOut Exchanges
2.5. Processing Multiple Inputs with Segmented Routes
4.1. Message Pattern
4.2. Message Channel Pattern
4.3. Message Endpoint Pattern
4.4. Pipes and Filters Pattern
4.5. Pipeline for InOut Exchanges
4.6. Pipeline for InOnly Exchanges
4.7. Message Router Pattern
4.8. Message Translator Pattern
5.1. Point to Point Channel Pattern
5.2. Publish Subscribe Channel Pattern
5.3. Dead Letter Channel Pattern
5.4. Guaranteed Delivery Pattern
5.5. Message Bus Pattern
6.1. Correlation Identifier Pattern
7.1. Content-Based Router Pattern
7.2. Message Filter Pattern
7.3. Recipient List Pattern
7.4. Splitter Pattern
7.5. Aggregator Pattern
7.6. Aggregator Implementation
7.7. Recoverable Aggregation Repository
7.8. Resequencer Pattern
7.9. Routing Slip Pattern
7.10. Multicast Pattern
7.11. Composed Message Processor Pattern
7.12. Scatter-Gather Pattern
7.13. Dynamic Router Pattern
8.1. Content Enricher Pattern
8.2. Content Filter Pattern
8.3. Normalizer Pattern
8.4. Claim Check Pattern
9.1. Event Driven Consumer Pattern
9.2. Polling Consumer Pattern
9.3. Competing Consumers Pattern
9.4. Message Dispatcher Pattern
9.5. Selective Consumer Pattern
9.6. Durable Subscriber Pattern
9.7. Transactional Client Pattern
9.8. Messaging Gateway Pattern
9.9. Service Activator Pattern
10.1. Wire Tap Pattern
A.1. Content-based Router Pattern
A.2. Content Enricher Pattern
A.3. Message Filter Pattern
A.4. Pipes and Filters Pattern
A.5. Resequencer Pattern
A.6. Static Recipient List Pattern
A.7. Wire Tap Pattern
A.8. XPath Splitter Pattern

List of Tables

1.1. Apache Camel Processors
2.1. Error Handler Types
2.2. Registry Plug-Ins
2.3. Basic Bean Annotations
2.4. Expression Language Annotations
2.5. Transformation Methods from the ProcessorDefinition Class
2.6. Methods from the Builder Class
2.7. Modifier Methods from the ValueBuilder Class
2.8. Processor Threading Options
2.9. Default Thread Pool Profile Settings
2.10. Thread Pool Builder Options
2.11. JMX Name Pattern Tokens
3.1. Messaging Systems
3.2. Messaging Channels
3.3. Message Construction
3.4. Message Routing
3.5. Message Transformation
3.6. Messaging Endpoints
3.7. System Management
5.1. Redelivery Policy Settings
5.2. Dead Letter Redelivery Headers
7.1. Aggregated Exchange Properties
7.2. Redelivered Exchange Properties
7.3. Aggregator Options
7.4. Batch Resequencer Options
7.5. Weighted Options
A.1. Mapping the Exchange Target Element
A.2. ServiceMix EIP Patterns

List of Examples

1.1. Implementation of a RouteBuilder Class
1.2. Specifying the Router Schema Location
1.3. Implementing a Custom Processor Class
2.1. Simple Transformation of Incoming Messages
2.2. Sample Property File
2.3. Startup Order in Java DSL
2.4. Startup Order in XML DSL
2.5. Java DSL Example of Simple Scheduled Route
2.6. XML DSL Example of Simple Scheduled Route
2.7. Java DSL Example of a Cron Scheduled Route
2.8. XML DSL Example of a Cron Scheduled Route
7.1. Messaging Client Sample
9.1. Filtering Duplicate Messages with an In-memory Cache
A.1. ServiceMix EIP Content-based Route
A.2. Apache Camel Content-based Router Using XML Configuration
A.3. Apache Camel Content-based Router Using Java DSL
A.4. ServiceMix EIP Content Enricher
A.5. Apache Camel Content Enricher using XML Configuration
A.6. Apache Camel Content Enricher using Java DSL
A.7. ServiceMix EIP Message Filter
A.8. Apache Camel Message Filter Using XML
A.9. Apache Camel Message Filter Using Java DSL
A.10. ServiceMix EIP Pipeline
A.11. Apache Camel Pipeline Using XML
A.12. Apache Camel Pipeline Using Java DSL
A.13. ServiceMix EIP Resequencer
A.14. Apache Camel Resequencer Using XML
A.15. Apache Camel Resequencer Using Java DSL
A.16. ServiceMix EIP Static Recipient List
A.17. Apache Camel Static Recipient List Using XML
A.18. Apache Camel Static Recipient List Using Java DSL
A.19. ServiceMix EIP Static Routing Slip
A.20. Apache Camel Static Routing Slip Using XML
A.21. Apache Camel Static Routing Slip Using Java DSL
A.22. ServiceMix EIP Wire Tap
A.23. Apache Camel Wire Tap Using XML
A.24. Apache Camel Wire Tap Using Java DSL
A.25. ServiceMix EIP XPath Splitter
A.26. Apache Camel XPath Splitter Using XML
A.27. Apache Camel XPath Splitter Using Java DSL

An exchange object consists of a message, augmented by metadata. Exchanges are of central importance in Apache Camel, because the exchange is the standard form in which messages are propagated through routing rules. The main constituents of an exchange are the In message, an optional Out message, the exchange properties, and the message exchange pattern (MEP).
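For example, the following minimal sketch (not part of the original text) shows a custom Processor that reads these constituents from an exchange; the header and property names are placeholders:

// Java
import org.apache.camel.Exchange;
import org.apache.camel.Message;
import org.apache.camel.Processor;

public class InspectExchangeProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        Message in = exchange.getIn();                           // the In message
        Object body = in.getBody();                              // In message body
        Object header = in.getHeader("SomeHeader");              // an In message header (placeholder name)
        Object property = exchange.getProperty("SomeProperty");  // an exchange property (placeholder name)
        exchange.getOut().setBody(body);                         // populate the Out message, if a reply is expected
    }
}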

Each URI scheme maps to an Apache Camel component, where an Apache Camel component is essentially an endpoint factory. In other words, to use a particular type of endpoint, you must deploy the corresponding Apache Camel component in your runtime container. For example, to use JMS endpoints, you would deploy the JMS component in your container.

Apache Camel provides a large variety of different components that enable you to integrate your application with various transport protocols and third-party products. For example, some of the more commonly used components are: File, JMS, CXF (Web services), HTTP, Jetty, Direct, and Mock. For the full list of supported components, see the Apache Camel component documentation.

Most of the Apache Camel components are packaged separately from the Camel core. If you use Maven to build your application, you can add a component (and its third-party dependencies) to your application simply by adding a dependency on the relevant component artifact. For example, to include the HTTP component, you would add the following Maven dependency to your project POM file:

<!-- Maven POM File -->
  <properties>
    <camel-version>2.10.0-fuse-00-05</camel-version>
    ...
  </properties>

  <dependencies>
    ...
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-http</artifactId>
      <version>${camel-version}</version>
    </dependency>
    ...
  </dependencies>

The following components are built-in to the Camel core (in the camel-core artifact), so they are always available:

  • Bean

  • Browse

  • Dataset

  • Direct

  • File

  • Log

  • Mock

  • Properties

  • Ref

  • SEDA

  • Timer

  • VM

A consumer endpoint is an endpoint that appears at the start of a route (that is, in a from() DSL command). In other words, the consumer endpoint is responsible for initiating processing in a route: it creates a new exchange instance (typically, based on some message that it has received or obtained), and provides a thread to process the exchange in the rest of the route.

For example, the following JMS consumer endpoint pulls messages off the payments queue and processes them in the route:

from("jms:queue:payments")
  .process(SomeProcessor)
  .to("TargetURI");

Or equivalently, in Spring XML:

<camelContext id="CamelContextID"
              xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="jms:queue:payments"/>
    <process ref="someProcessorId"/>
    <to uri="TargetURI"/>
  </route>
</camelContext>

Some components are consumer only—that is, they can only be used to define consumer endpoints. For example, the Quartz component is used exclusively to define consumer endpoints. The following Quartz endpoint generates an event every second (1000 milliseconds):

from("quartz://secondTimer?trigger.repeatInterval=1000")
  .process(SomeProcessor)
  .to("TargetURI");

If you like, you can specify the endpoint URI as a formatted string, using the fromF() Java DSL command. For example, to substitute the username and password into the URI for an FTP endpoint, you could write the route in Java, as follows:

fromF("ftp:%s@fusesource.com?password=%s", username, password)
  .process(SomeProcessor)
  .to("TargetURI");

Where the first occurrence of %s is replaced by the value of the username string and the second occurrence of %s is replaced by the password string. This string formatting mechanism is implemented by String.format() and is similar to the formatting provided by the C printf() function. For details, see java.util.Formatter.
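The same formatting mechanism is available on the producer side through the toF() Java DSL command (see Table 1.1). For example, a minimal sketch in which a directory name is substituted into a file endpoint URI (the outputDir value is purely illustrative):

String outputDir = "reports";

from("direct:start")
    .toF("file:%s?fileName=report.txt", outputDir);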

To enable the router to do something more interesting than simply connecting a consumer endpoint to a producer endpoint, you can add processors to your route. A processor is a command you can insert into a routing rule to perform arbitrary processing of messages that flow through the rule. Apache Camel provides a wide variety of different processors, as shown in Table 1.1.

Table 1.1. Apache Camel Processors

Each entry shows the Java DSL command, the corresponding XML DSL element (in parentheses), and a description.

aggregate() (XML: aggregate)
    Aggregator EIP: Creates an aggregator, which combines multiple incoming exchanges into a single exchange.
aop() (XML: aop)
    Use Aspect Oriented Programming (AOP) to do work before and after a specified sub-route. See Aspect Oriented Programming.
bean(), beanRef() (XML: bean)
    Process the current exchange by invoking a method on a Java object (or bean). See Bean Integration.
choice() (XML: choice)
    Content Based Router EIP: Selects a particular sub-route based on the exchange content, using when and otherwise clauses.
convertBodyTo() (XML: convertBodyTo)
    Converts the In message body to the specified type.
delay() (XML: delay)
    Delayer EIP: Delays the propagation of the exchange to the latter part of the route.
doTry() (XML: doTry)
    Creates a try/catch block for handling exceptions, using doCatch, doFinally, and end clauses.
end() (XML: N/A)
    Ends the current command block.
enrich(), enrichRef() (XML: enrich)
    Content Enricher EIP: Combines the current exchange with data requested from a specified producer endpoint URI.
filter() (XML: filter)
    Message Filter EIP: Uses a predicate expression to filter incoming exchanges.
idempotentConsumer() (XML: idempotentConsumer)
    Idempotent Consumer EIP: Implements a strategy to suppress duplicate messages.
inheritErrorHandler() (XML: @inheritErrorHandler attribute)
    Boolean option that can be used to disable the inherited error handler on a particular route node (defined as a sub-clause in the Java DSL and as an attribute in the XML DSL).
inOnly() (XML: inOnly)
    Either sets the current exchange's MEP to InOnly (if no arguments) or sends the exchange as an InOnly to the specified endpoint(s).
inOut() (XML: inOut)
    Either sets the current exchange's MEP to InOut (if no arguments) or sends the exchange as an InOut to the specified endpoint(s).
loadBalance() (XML: loadBalance)
    Load Balancer EIP: Implements load balancing over a collection of endpoints.
log() (XML: log)
    Logs a message to the console.
loop() (XML: loop)
    Loop EIP: Repeatedly resends each exchange to the latter part of the route.
markRollbackOnly() (XML: @markRollbackOnly attribute)
    (Transactions) Marks the current transaction for rollback only (no exception is raised). In the XML DSL, this option is set as a boolean attribute on the rollback element. See EIP Transaction Guide.
markRollbackOnlyLast() (XML: @markRollbackOnlyLast attribute)
    (Transactions) If one or more transactions have previously been associated with this thread and then suspended, this command marks the latest transaction for rollback only (no exception is raised). In the XML DSL, this option is set as a boolean attribute on the rollback element. See EIP Transaction Guide.
marshal() (XML: marshal)
    Transforms into a low-level or binary format using the specified data format, in preparation for sending over a particular transport protocol. See Marshalling and unmarshalling.
multicast() (XML: multicast)
    Multicast EIP: Multicasts the current exchange to multiple destinations, where each destination gets its own copy of the exchange.
onCompletion() (XML: onCompletion)
    Defines a sub-route (terminated by end() in the Java DSL) that gets executed after the main route has completed. For conditional execution, use the onWhen sub-clause. Can also be defined on its own line (not in a route).
onException() (XML: onException)
    Defines a sub-route (terminated by end() in the Java DSL) that gets executed whenever the specified exception occurs. Usually defined on its own line (not in a route).
pipeline() (XML: pipeline)
    Pipes and Filters EIP: Sends the exchange to a series of endpoints, where the output of one endpoint becomes the input of the next endpoint. See also Pipeline Processing.
policy() (XML: policy)
    Apply a policy to the current route (currently only used for transactional policies; see EIP Transaction Guide).
pollEnrich(), pollEnrichRef() (XML: pollEnrich)
    Content Enricher EIP: Combines the current exchange with data polled from a specified consumer endpoint URI.
process(), processRef() (XML: process)
    Execute a custom processor on the current exchange. See Custom processor and Programming EIP Components.
recipientList() (XML: recipientList)
    Recipient List EIP: Sends the exchange to a list of recipients that is calculated at runtime (for example, based on the contents of a header).
removeHeader() (XML: removeHeader)
    Removes the specified header from the exchange's In message.
removeHeaders() (XML: removeHeaders)
    Removes the headers matching the specified pattern from the exchange's In message. The pattern can have the form prefix*, in which case it matches every name starting with prefix; otherwise, it is interpreted as a regular expression.
removeProperty() (XML: removeProperty)
    Removes the specified exchange property from the exchange.
resequence() (XML: resequence)
    Resequencer EIP: Re-orders incoming exchanges on the basis of a specified comparator operation. Supports a batch mode and a stream mode.
rollback() (XML: rollback)
    (Transactions) Marks the current transaction for rollback only (also raising an exception, by default). See EIP Transaction Guide.
routingSlip() (XML: routingSlip)
    Routing Slip EIP: Routes the exchange through a pipeline that is constructed dynamically, based on the list of endpoint URIs extracted from a slip header.
sample() (XML: sample)
    Creates a sampling throttler, allowing you to extract a sample of exchanges from the traffic on a route.
setBody() (XML: setBody)
    Sets the message body of the exchange's In message.
setExchangePattern() (XML: setExchangePattern)
    Sets the current exchange's MEP to the specified value. See Message exchange patterns.
setHeader() (XML: setHeader)
    Sets the specified header in the exchange's In message.
setOutHeader() (XML: setOutHeader)
    Sets the specified header in the exchange's Out message.
setProperty() (XML: setProperty)
    Sets the specified exchange property.
sort() (XML: sort)
    Sorts the contents of the In message body (where a custom comparator can optionally be specified).
split() (XML: split)
    Splitter EIP: Splits the current exchange into a sequence of exchanges, where each split exchange contains a fragment of the original message body.
stop() (XML: stop)
    Stops routing the current exchange and marks it as completed.
threads() (XML: threads)
    Creates a thread pool for concurrent processing of the latter part of the route.
throttle() (XML: throttle)
    Throttler EIP: Limits the flow rate to the specified level (exchanges per second).
throwException() (XML: throwException)
    Throws the specified Java exception.
to() (XML: to)
    Sends the exchange to one or more endpoints. See Pipeline Processing.
toF() (XML: N/A)
    Sends the exchange to an endpoint, using string formatting. That is, the endpoint URI string can embed substitutions in the style of the C printf() function.
transacted() (XML: transacted)
    Creates a Spring transaction scope that encloses the latter part of the route. See EIP Transaction Guide.
transform() (XML: transform)
    Message Translator EIP: Copies the In message headers to the Out message headers and sets the Out message body to the specified value.
unmarshal() (XML: unmarshal)
    Transforms the In message body from a low-level or binary format to a high-level format, using the specified data format. See Marshalling and unmarshalling.
validate() (XML: validate)
    Takes a predicate expression to test whether the current message is valid. If the predicate returns false, throws a PredicateValidationException exception.
wireTap() (XML: wireTap)
    Wire Tap EIP: Sends a copy of the current exchange to the specified wire tap URI, using the ExchangePattern.InOnly MEP.


The choice() processor is a conditional statement that is used to route incoming messages to alternative producer endpoints. Each alternative producer endpoint is preceded by a when() method, which takes a predicate argument. If the predicate is true, the following target is selected, otherwise processing proceeds to the next when() method in the rule. For example, the following choice() processor directs incoming messages to either Target1, Target2, or Target3, depending on the values of Predicate1 and Predicate2:

from("SourceURL")
    .choice()
        .when(Predicate1).to("Target1")
        .when(Predicate2).to("Target2")
        .otherwise().to("Target3");

Or equivalently in Spring XML:

<camelContext id="buildSimpleRouteWithChoice" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="SourceURL"/>
    <choice>
      <when>
        <!-- First predicate -->
        <simple>${header.foo} == 'bar'</simple>
        <to uri="Target1"/>
      </when>
      <when>
        <!-- Second predicate -->
        <simple>${header.foo} == 'manchu'</simple>
        <to uri="Target2"/>
      </when>
      <otherwise>
        <to uri="Target3"/>
      </otherwise>
    </choice>
  </route>
</camelContext>

In the Java DSL, there is a special case where you might need to use the endChoice() command. Some of the standard Apache Camel processors enable you to specify extra parameters using special sub-clauses, effectively opening an extra level of nesting which is usually terminated by the end() command. For example, you could specify a load balancer clause as loadBalance().roundRobin().to("mock:foo").to("mock:bar").end(), which load balances messages between the mock:foo and mock:bar endpoints. If the load balancer clause is embedded in a choice condition, however, it is necessary to terminate the clause using the endChoice() command, as follows:

from("direct:start")
    .choice()
        .when(body().contains("Camel"))
            .loadBalance().roundRobin().to("mock:foo").to("mock:bar").endChoice()
        .otherwise()
            .to("mock:result");

Every node in a route, except for the initial endpoint, is a processor, in the sense that it inherits from the org.apache.camel.Processor interface. In other words, processors make up the basic building blocks of a DSL route. For example, DSL commands such as filter(), delay(), setBody(), setHeader(), and to() all represent processors. When considering how processors connect together to build up a route, it is important to distinguish between two different processing approaches.

The first approach is where the processor simply modifies the exchange's In message, as shown in Figure 2.1. The exchange's Out message remains null in this case.


The following route shows a setHeader() command that modifies the current In message by adding (or modifying) the BillingSystem header:

from("activemq:orderQueue")
    .setHeader("BillingSystem", xpath("/order/billingSystem"))
    .to("activemq:billingQueue");

The second approach is where the processor creates an Out message to represent the result of the processing, as shown in Figure 2.2.


The following route shows a transform() command that creates an Out message with a message body containing the string, DummyBody:

from("activemq:orderQueue")
    .transform(constant("DummyBody"))
    .to("activemq:billingQueue");

where constant("DummyBody") represents a constant expression. You cannot pass the string, DummyBody, directly, because the argument to transform() must be an expression type.

Figure 2.3 shows an example of a processor pipeline for InOnly exchanges. Processor A acts by modifying the In message, while processors B and C create an Out message. The route builder links the processors together as shown. In particular, processors B and C are linked together in the form of a pipeline: that is, processor B's Out message is moved to the In message before feeding the exchange into processor C, and processor C's Out message is moved to the In message before feeding the exchange into the producer endpoint. Thus the processors' outputs and inputs are joined into a continuous pipeline, as shown in Figure 2.3.


Apache Camel employs the pipeline pattern by default, so you do not need to use any special syntax to create a pipeline in your routes. For example, the following route pulls messages from a userdataQueue queue, pipes the message through a Velocity template (to produce a customer address in text format), and then sends the resulting text address to the envelopeAddresses queue:

from("activemq:userdataQueue")
    .to(ExchangePattern.InOut, "velocity:file:AddressTemplate.vm")
    .to("activemq:envelopeAddresses");

Where the Velocity endpoint, velocity:file:AddressTemplate.vm, specifies the location of a Velocity template file, file:AddressTemplate.vm, in the file system. The to() command changes the exchange pattern to InOut before sending the exchange to the Velocity endpoint and then changes it back to InOnly afterwards. For more details of the Velocity endpoint, see Velocity in EIP Component Reference.

Figure 2.4 shows an example of a processor pipeline for InOut exchanges, which you typically use to support remote procedure call (RPC) semantics. Processors A, B, and C are linked together in the form of a pipeline, with the output of each processor being fed into the input of the next. The final Out message produced by the producer endpoint is sent all the way back to the consumer endpoint, where it provides the reply to the original request.


Note that in order to support the InOut exchange pattern, it is essential that the last node in the route (whether it is a producer endpoint or some other kind of processor) creates an Out message. Otherwise, any client that connects to the consumer endpoint would hang and wait indefinitely for a reply message. You should be aware that not all producer endpoints create Out messages.

Consider the following route, which processes payment requests received as incoming HTTP requests:

from("jetty:http://localhost:8080/foo")
    .to("cxf:bean:addAccountDetails")
    .to("cxf:bean:getCreditRating")
    .to("cxf:bean:processTransaction");

Where the incoming payment request is processed by passing it through a pipeline of Web services, cxf:bean:addAccountDetails, cxf:bean:getCreditRating, and cxf:bean:processTransaction. The final Web service, processTransaction, generates a response (Out message) that is sent back through the Jetty endpoint.

When the pipeline consists of just a sequence of endpoints, it is also possible to use the following alternative syntax:

from("jetty:http://localhost:8080/foo")
    .pipeline("cxf:bean:addAccountDetails", "cxf:bean:getCreditRating", "cxf:bean:processTransaction");

The pipeline for InOptionalOut exchanges is essentially the same as the pipeline in Figure 2.4. The difference between InOut and InOptionalOut is that an exchange with the InOptionalOut exchange pattern is allowed to have a null Out message as a reply. That is, in the case of an InOptionalOut exchange, a null Out message is copied to the In message of the next node in the pipeline. By contrast, in the case of an InOut exchange, a null Out message is discarded and the original In message from the current node would be copied to the In message of the next node instead.
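For example, the following minimal sketch (with placeholder URIs) explicitly switches an exchange to the InOptionalOut pattern before sending it to a producer endpoint, so that a null reply is tolerated:

from("SourceURI")
    .setExchangePattern(ExchangePattern.InOptionalOut)
    .to("TargetURI");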

The SEDA component provides an alternative mechanism for linking together routes. You can use it in a similar way to the direct component, but it has a different underlying event and threading model: exchanges sent to a SEDA endpoint are not processed synchronously by the calling thread; instead, they are placed on a queue (staging area) and processed by a pool of consumer threads attached to the endpoint.

One of the main advantages of using a SEDA endpoint is that the routes can be more responsive, owing to the built-in consumer thread pool. The stock transactions example can be re-written to use SEDA endpoints instead of direct endpoints, as follows:

from("activemq:Nyse").to("seda:mergeTxns");
from("activemq:Nasdaq").to("seda:mergeTxns");

from("seda:mergeTxns").to("activemq:USTxn");

The main difference between this example and the direct example is that when using SEDA, the second route segment (from seda:mergeTxns to activemq:USTxn) is processed by a pool of five threads.
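If the default pool size does not suit your application, you can size the consumer thread pool explicitly by setting the SEDA endpoint's concurrentConsumers option. For example, the following sketch (based on the route above) consumes from the SEDA queue with ten concurrent threads:

from("activemq:Nyse").to("seda:mergeTxns");
from("activemq:Nasdaq").to("seda:mergeTxns");

// Consume from the SEDA queue using ten concurrent threads
from("seda:mergeTxns?concurrentConsumers=10").to("activemq:USTxn");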

Note

There is more to SEDA than simply pasting together route segments. The staged event-driven architecture (SEDA) encompasses a design philosophy for building more manageable multi-threaded applications. The purpose of the SEDA component in Apache Camel is simply to enable you to apply this design philosophy to your applications. For more details about SEDA, see http://www.eecs.harvard.edu/~mdw/proj/seda/.

The content enricher pattern defines a fundamentally different way of dealing with multiple inputs to a route. When an exchange enters the enricher processor, the enricher contacts an external resource to retrieve information, which is then added to the original message. In this pattern, the external resource effectively provides a second input to the route.

For example, suppose you are writing an application that processes credit requests. Before processing a credit request, you need to augment it with the data that assigns a credit rating to the customer, where the ratings data is stored in a file in the directory, src/data/ratings. You can combine the incoming credit request with data from the ratings file using the pollEnrich() pattern and a GroupedExchangeAggregationStrategy aggregation strategy, as follows:

from("jms:queue:creditRequests")
    .pollEnrich("file:src/data/ratings?noop=true", new GroupedExchangeAggregationStrategy())
    .bean(new MergeCreditRequestAndRatings(), "merge")
    .to("jms:queue:reformattedRequests");

Where the GroupedExchangeAggregationStrategy class is a standard aggregation strategy from the org.apache.camel.processor.aggregate package that adds each new exchange to a java.util.List instance and stores the resulting list in the Exchange.GROUPED_EXCHANGE exchange property. In this case, the list contains two elements: the original exchange (from the creditRequests JMS queue); and the enricher exchange (from the file endpoint).

To access the grouped exchange, you can use code like the following:

public class MergeCreditRequestAndRatings {
    public void merge(Exchange ex) {
        // Obtain the grouped exchange
        List<Exchange> list = ex.getProperty(Exchange.GROUPED_EXCHANGE, List.class);

        // Get the exchanges from the grouped exchange
        Exchange originalEx = list.get(0);
        Exchange ratingsEx  = list.get(1);

        // Merge the exchanges
        ...
    }
}

An alternative approach to this application would be to put the merge code directly into the implementation of the custom aggregation strategy class.
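For example, the following is a minimal sketch of such a custom aggregation strategy, where the merge logic (appending the ratings text to the request body) is an assumption made purely for illustration:

// Java
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class MergeCreditRequestAndRatingsStrategy implements AggregationStrategy {
    public Exchange aggregate(Exchange original, Exchange resource) {
        if (resource == null) {
            // Nothing was polled from the ratings file; leave the request unchanged
            return original;
        }
        // Read the ratings data retrieved by pollEnrich() (assumed to be plain text)
        String ratings = resource.getIn().getBody(String.class);

        // Merge the ratings into the original credit request body
        String request = original.getIn().getBody(String.class);
        original.getIn().setBody(request + "\n" + ratings);

        return original;
    }
}

You could then pass an instance of this class to pollEnrich() in place of the GroupedExchangeAggregationStrategy instance and drop the separate merge bean from the route.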

For more details about the content enricher pattern, see Content Enricher.

You can define multiple onException clauses to trap exceptions in a RouteBuilder scope. This enables you to take different actions in response to different exceptions. For example, the following series of onException clauses defined in the Java DSL define different deadletter destinations for ValidationException, IOException, and Exception:

onException(ValidationException.class).to("activemq:validationFailed");
onException(java.io.IOException.class).to("activemq:ioExceptions");
onException(Exception.class).to("activemq:exceptions");

You can define the same series of onException clauses in the XML DSL as follows:

<onException>
    <exception>com.mycompany.ValidationException</exception>
    <to uri="activemq:validationFailed"/>
</onException>
<onException>
    <exception>java.io.IOException</exception>
    <to uri="activemq:ioExceptions"/>
</onException>
<onException>
    <exception>java.lang.Exception</exception>
    <to uri="activemq:exceptions"/>
</onException>

You can also group multiple exceptions together to be trapped by the same onException clause. In the Java DSL, you can group multiple exceptions as follows:

onException(ValidationException.class, BusinessException.class)
  .to("activemq:validationFailed");

In the XML DSL, you can group multiple exceptions together by defining more than one exception element inside the onException element, as follows:

<onException>
    <exception>com.mycompany.ValidationException</exception>
    <exception>com.mycompany.BusinessException</exception>
    <to uri="activemq:validationFailed"/>
</onException>

When trapping multiple exceptions, the order of the onException clauses is significant. Apache Camel initially attempts to match the thrown exception against the first clause. If the first clause fails to match, the next onException clause is tried, and so on until a match is found. Each matching attempt is governed by the following algorithm:

  1. If the thrown exception is a chained exception (that is, where an exception has been caught and rethrown as a different exception), the most nested exception type serves initially as the basis for matching. This exception is tested as follows:

    1. If the exception-to-test has exactly the type specified in the onException clause (tested using instanceof), a match is triggered.

    2. If the exception-to-test is a sub-type of the type specified in the onException clause, a match is triggered.

  2. If the most nested exception fails to yield a match, the next exception in the chain (the wrapping exception) is tested instead. The testing continues up the chain until either a match is triggered or the chain is exhausted.
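For example, suppose a route catches a java.net.SocketException and rethrows it wrapped in another exception. The matching algorithm tests the nested SocketException first, so in the following sketch (the queue names are placeholders) the exchange is routed to the more specific clause; the generic clause acts only as a fallback. Because the clauses are tried in order, the more specific exception should be listed first:

// More specific exception first: matches the nested SocketException
onException(java.net.SocketException.class).to("activemq:socketErrors");

// Generic fallback: matches only if no more specific clause does
onException(Exception.class).to("activemq:exceptions");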

Instead of interrupting the processing of a message and giving up as soon as an exception is raised, Apache Camel gives you the option of attempting to redeliver the message at the point where the exception occurred. In networked systems, where timeouts can occur and temporary faults arise, it is often possible for failed messages to be processed successfully, if they are redelivered shortly after the original exception was raised.

Apache Camel supports various strategies for redelivering messages after an exception occurs. The most important options for configuring redelivery are illustrated in the examples that follow.

In the Java DSL, redelivery policy options are specified using DSL commands in the onException clause. For example, you can specify a maximum of six redeliveries, after which the exchange is sent to the validationFailed deadletter queue, as follows:

onException(ValidationException.class)
  .maximumRedeliveries(6)
  .retryAttemptedLogLevel(org.apache.camel.LoggingLevel.WARN)
  .to("activemq:validationFailed");

In the XML DSL, redelivery policy options are specified by setting attributes on the redeliveryPolicy element. For example, the preceding route can be expressed in XML DSL as follows:

<onException useOriginalMessage="true">
    <exception>com.mycompany.ValidationException</exception>
    <redeliveryPolicy maximumRedeliveries="6" retryAttemptedLogLevel="WARN"/>
    <to uri="activemq:validationFailed"/>
</onException>

The latter part of the route—after the redelivery options are set—is not processed until after the last redelivery attempt has failed. For detailed descriptions of all the redelivery options, see Dead Letter Channel.

Alternatively, you can specify redelivery policy options in a redeliveryPolicyProfile instance. You can then reference the redeliveryPolicyProfile instance using the onException element's redeliveryPolicyRef attribute. For example, the preceding route can be expressed as follows:

<redeliveryPolicyProfile id="redelivPolicy" maximumRedeliveries="6" retryAttemptedLogLevel="WARN"/>

<onException useOriginalMessage="true" redeliveryPolicyRef="redelivPolicy">
    <exception>com.mycompany.ValidationException</exception>
    <to uri="activemq:validationFailed"/>
</onException>
Note

The approach using redeliveryPolicyProfile is useful if you want to reuse the same redelivery policy in multiple onException clauses.

Exception trapping with onException can be made conditional by specifying the onWhen option. If you specify the onWhen option in an onException clause, a match is triggered only when the thrown exception matches the clause and the onWhen predicate evaluates to true on the current exchange.

For example, in the following Java DSL fragment, the first onException clause triggers only if the thrown exception matches MyUserException and the user header is non-null in the current exchange:

// Java

// Here we define onException() to catch MyUserException when
// there is a header[user] on the exchange that is not null
onException(MyUserException.class)
    .onWhen(header("user").isNotNull())
    .maximumRedeliveries(2)
    .to(ERROR_USER_QUEUE);

// Here we define onException to catch MyUserException as a kind
// of fallback when the preceding clause does not match.
// Notice: the order in which the onException clauses are defined is
// important, because Camel resolves them in the same order as they
// have been defined
onException(MyUserException.class)
    .maximumRedeliveries(2)
    .to(ERROR_QUEUE);

The preceding onException clauses can be expressed in the XML DSL as follows:

<redeliveryPolicyProfile id="twoRedeliveries" maximumRedeliveries="2"/>

<onException redeliveryPolicyRef="twoRedeliveries">
    <exception>com.mycompany.MyUserException</exception>
    <onWhen>
        <simple>${header.user} != null</simple>
    </onWhen>
    <to uri="activemq:error_user_queue"/>
</onException>

<onException redeliveryPolicyRef="twoRedeliveries">
    <exception>com.mycompany.MyUserException</exception>
    <to uri="activemq:error_queue"/>
</onException>

By default, when an exception is raised in the middle of a route, processing of the current exchange is interrupted and the thrown exception is propagated back to the consumer endpoint at the start of the route. When an onException clause is triggered, the behavior is essentially the same, except that the onException clause performs some processing before the thrown exception is propagated back.

But this default behavior is not the only way to handle an exception. The onException provides various options to modify the exception handling behavior, as follows:

  • Suppressing exception rethrow—you have the option of suppressing the rethrown exception after the onException clause has completed. In other words, in this case the exception does not propagate back to the consumer endpoint at the start of the route.

  • Continuing processing—you have the option of resuming normal processing of the exchange from the point where the exception originally occurred. Implicitly, this approach also suppresses the rethrown exception (see the sketch following this list).

  • Sending a response—in the special case where the consumer endpoint at the start of the route expects a reply (that is, having an InOut MEP), you might prefer to construct a custom fault reply message, rather than propagating the exception back to the consumer endpoint.
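As an illustration of the continuing processing option, the following minimal sketch (the exception class name is a placeholder) uses the continued option to resume normal processing of the exchange after logging the error:

onException(MyNonCriticalException.class)
    .continued(true)
    .log("Ignoring non-critical error: ${exception.message}");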

When the consumer endpoint that starts a route expects a reply, you might prefer to construct a custom fault reply message, instead of simply letting the thrown exception propagate back to the consumer. There are two essential steps you need to follow in this case: suppress the rethrown exception using the handled option; and populate the exchange's Out message slot with a custom fault message.

For example, the following Java DSL fragment shows how to send a reply message containing the text string, Sorry, whenever the MyFunctionalException exception occurs:

// we catch MyFunctionalException and want to mark it as handled (= no failure returned to client)
// but we want to return a fixed text response, so we transform OUT body as Sorry.
onException(MyFunctionalException.class)
    .handled(true)
    .transform().constant("Sorry");

If you are sending a fault response to the client, you will often want to incorporate the text of the exception message in the response. You can access the text of the current exception message using the exceptionMessage() builder method. For example, you can send a reply containing just the text of the exception message whenever the MyFunctionalException exception occurs, as follows:

// we catch MyFunctionalException and want to mark it as handled (= no failure returned to client)
// and we want to return the exception message text, so we transform the OUT body accordingly
onException(MyFunctionalException.class)
    .handled(true)
    .transform(exceptionMessage());

The exception message text is also accessible from the Simple language, through the exception.message variable. For example, you could embed the current exception text in a reply message, as follows:

// we catch MyFunctionalException and want to mark it as handled (= no failure returned to client)
// but we want to return a custom text response, so we transform the OUT body using the
// Simple language, inserting the exception message into the response
onException(MyFunctionalException.class)
    .handled(true)
    .transform().simple("Error reported: ${exception.message} - cannot process this message.");

The preceding onException clause can be expressed in XML DSL as follows:

<onException>
    <exception>com.mycompany.MyFunctionalException</exception>
    <handled>
        <constant>true</constant>
    </handled>
    <transform>
        <simple>Error reported: ${exception.message} - cannot process this message.</simple>
    </transform>
</onException>

It is possible to configure a CXF endpoint so that, when a Java exception is thrown on the server side, the stack trace for the exception is marshalled into a fault message and returned to the client. To enable this feature, set the dataFormat to PAYLOAD and set the faultStackTraceEnabled property to true in the cxfEndpoint element, as follows:

<cxf:cxfEndpoint id="router" address="http://localhost:9002/TestMessage"
    wsdlURL="ship.wsdl"
    endpointName="s:TestSoapEndpoint"
    serviceName="s:TestService"
    xmlns:s="http://test">
  <cxf:properties>
    <!-- enable sending the stack trace back to client; the default value is false-->
    <entry key="faultStackTraceEnabled" value="true" />
    <entry key="dataFormat" value="PAYLOAD" />
  </cxf:properties>
</cxf:cxfEndpoint>

For security reasons, the stack trace does not include the causing exception (that is, the part of a stack trace that follows Caused by). If you want to include the causing exception in the stack trace, set the exceptionMessageCauseEnabled property to true in the cxfEndpoint element, as follows:

<cxf:cxfEndpoint id="router" address="http://localhost:9002/TestMessage"
    wsdlURL="ship.wsdl"
    endpointName="s:TestSoapEndpoint"
    serviceName="s:TestService"
    xmlns:s="http://test">
  <cxf:properties>
    <!-- enable showing the cause exception message; the default value is false -->
    <entry key="exceptionMessageCauseEnabled" value="true" />
    <!-- enable sending the stack trace back to the client; the default value is false -->
    <entry key="faultStackTraceEnabled" value="true" />
    <entry key="dataFormat" value="PAYLOAD" />
  </cxf:properties>
</cxf:cxfEndpoint>
Warning

You should only enable the exceptionMessageCauseEnabled flag for testing and diagnostic purposes. It is normal practice for servers to conceal the original cause of an exception to make it harder for hostile users to probe the server.

You can specify parameter values explicitly when you call the bean method. The following simple type values can be passed: boolean values (true or false), numeric constants, string literals (enclosed in single or double quotes), and null.

The following example shows how you can mix explicit parameter values with type specifiers in the same method invocation:

from("file:data/inbound")
  .bean(MyBeanProcessor.class, "processBody(String, 'Sample string value', true, 7)")
  .to("file:data/outbound");

In the preceding example, the value of the first parameter would presumably be determined by a parameter binding annotation (see Basic annotations).

In addition to the simple type values, you can also specify parameter values using the Simple language (The Simple Language in Routing Expression and Predicate Languages). This means that the full power of the Simple language is available when specifying parameter values. For example, to pass the message body and the value of the title header to a bean method:

from("file:data/inbound")
  .bean(MyBeanProcessor.class, "processBodyAndHeader(${body},${header.title})")
  .to("file:data/outbound");

You can also pass the entire header hash map as a parameter. For example, in the following route, the second method parameter must be declared to be of type java.util.Map:

from("file:data/inbound")
  .bean(MyBeanProcessor.class, "processBodyAndAllHeaders(${body},${header})")
  .to("file:data/outbound");

Instead of creating a bean instance in Java, you can create an instance using Spring XML. In fact, this is the only feasible approach if you are defining your routes in XML. To define a bean in XML, use the standard Spring bean element. The following example shows how to create an instance of MyBeanProcessor:

<beans ...>
    ...
    <bean id="myBeanId" class="com.acme.MyBeanProcessor"/>
</beans>

It is also possible to pass data to the bean's constructor arguments using Spring syntax. For full details of how to use the Spring bean element, see The IoC Container from the Spring reference guide.

When you create an object instance using the bean element, you can reference it later using the bean's ID (the value of the bean element's id attribute). For example, given the bean element with ID equal to myBeanId, you can reference the bean in a Java DSL route using the beanRef() processor, as follows:

from("file:data/inbound").beanRef("myBeanId", "processBody").to("file:data/outbound");

Where the beanRef() processor invokes the MyBeanProcessor.processBody() method on the specified bean instance. You can also invoke the bean from within a Spring XML route, using the Camel schema's bean element. For example:

<camelContext id="CamelContextID" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="file:data/inbound"/>
    <bean ref="myBeanId" method="processBody"/>
    <to uri="file:data/outbound"/>
  </route>
</camelContext>

The basic parameter bindings described in Basic method signatures might not always be convenient to use. For example, if you have a legacy Java class that performs some data manipulation, you might want to extract data from an inbound exchange and map it to the arguments of an existing method signature. For this kind of parameter binding, Apache Camel provides two kinds of Java annotation: basic annotations and expression language annotations.

The expression language annotations provide a powerful mechanism for injecting message data into a bean method's arguments. Using these annotations, you can invoke an arbitrary script, written in the scripting language of your choice, to extract data from an inbound exchange and inject the data into a method argument. Table 2.4 shows the annotations from the org.apache.camel.language package (and sub-packages, for the non-core annotations) that you can use to inject message data into the arguments of a bean method.


For example, the following class shows you how to use the @XPath annotation to extract a username and a password from the body of an incoming message in XML format:

// Java
import org.apache.camel.language.*;

public class MyBeanProcessor {
    public void checkCredentials(
        @XPath("/credentials/username/text()") String user,
        @XPath("/credentials/password/text()") String pass
    ) {
        // Check the user/pass credentials...
        ...
    }
}

The @Bean annotation is a special case, because it enables you to inject the result of invoking a registered bean. For example, to inject a correlation ID into a method argument, you can use the @Bean annotation to invoke an ID generator class, as follows:

// Java
import org.apache.camel.language.*;

public class MyBeanProcessor {
    public void processCorrelatedMsg(
        @Bean("myCorrIdGenerator") String corrId,
        @Body String body
    ) {
        // Process the message using the injected correlation ID...
        ...
    }
}

Where the string, myCorrIdGenerator, is the bean ID of the ID generator instance. The ID generator class can be instantiated using the spring bean element, as follows:

<beans ...>
    ...
    <bean id="myCorrIdGenerator" class="com.acme.MyIdGenerator"/>
</beans>

Where the MyIdGenerator class could be defined as follows:

// Java
package com.acme;

import org.apache.camel.Body;
import org.apache.camel.Header;

public class MyIdGenerator {

    private UserManager userManager;

    public String generate(
        @Header("user") String user,
        @Body String payload
    ) throws Exception {
        // Look up the user record for the value of the "user" header
        User userRecord = userManager.lookupUser(user);
        String userId = userRecord.getPrimaryId();
        String id = userId + generateHashCodeForPayload(payload);
        return id;
    }
}

Notice that you can also use annotations in the referenced bean class, MyIdGenerator. The only restriction on the generate() method signature is that it must return the correct type to inject into the argument annotated by @Bean. Because the @Bean annotation does not let you specify a method name, the injection mechanism simply invokes the first method in the referenced bean that has the matching return type.

Note

Some of the language annotations are available in the core component (@Bean, @Constant, @Simple, and @XPath). For non-core components, however, you will have to make sure that you load the relevant component. For example, to use the OGNL script, you must load the camel-ognl component.

The org.apache.camel.model.ProcessorDefinition class defines the DSL commands you can insert directly into a router rule—for example, the setBody() command in Example 2.1. Table 2.5 shows the ProcessorDefinition methods that are relevant to transforming message content:

Table 2.5. Transformation Methods from the ProcessorDefinition Class

Method / Description
Type convertBodyTo(Class type) Converts the IN message body to the specified type.
Type removeFaultHeader(String name) Adds a processor which removes the header on the FAULT message.
Type removeHeader(String name) Adds a processor which removes the header on the IN message.
Type removeProperty(String name) Adds a processor which removes the exchange property.
ExpressionClause<ProcessorDefinition<Type>> setBody() Adds a processor which sets the body on the IN message.
Type setFaultBody(Expression expression) Adds a processor which sets the body on the FAULT message.
Type setFaultHeader(String name, Expression expression) Adds a processor which sets the header on the FAULT message.
ExpressionClause<ProcessorDefinition<Type>> setHeader(String name) Adds a processor which sets the header on the IN message.
Type setHeader(String name, Expression expression) Adds a processor which sets the header on the IN message.
ExpressionClause<ProcessorDefinition<Type>> setOutHeader(String name) Adds a processor which sets the header on the OUT message.
Type setOutHeader(String name, Expression expression) Adds a processor which sets the header on the OUT message.
ExpressionClause<ProcessorDefinition<Type>> setProperty(String name) Adds a processor which sets the exchange property.
Type setProperty(String name, Expression expression) Adds a processor which sets the exchange property.
ExpressionClause<ProcessorDefinition<Type>> transform() Adds a processor which sets the body on the OUT message.
Type transform(Expression expression) Adds a processor which sets the body on the OUT message.
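For example, the following minimal sketch combines several of these transformation methods in a single route; the endpoint URIs, header name, property name, and XPath expression are placeholders:

from("direct:start")
    .setHeader("customerId", xpath("/order/@custId"))                 // set a header on the In message
    .setProperty("receivedAt", constant(System.currentTimeMillis()))  // set an exchange property
    .transform(body().prepend("Processed: "))                         // set the body on the Out message
    .to("mock:result");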

The org.apache.camel.builder.Builder class provides access to message content in contexts where expressions or predicates are expected. In other words, Builder methods are typically invoked in the arguments of DSL commands—for example, the body() command in Example 2.1. Table 2.6 summarizes the static methods available in the Builder class.

Table 2.6. Methods from the Builder Class

Method / Description
static <E extends Exchange> ValueBuilder<E> body() Returns a predicate and value builder for the inbound body on an exchange.
static <E extends Exchange,T> ValueBuilder<E> bodyAs(Class<T> type) Returns a predicate and value builder for the inbound message body as a specific type.
static <E extends Exchange> ValueBuilder<E> constant(Object value) Returns a constant expression.
static <E extends Exchange> ValueBuilder<E> faultBody() Returns a predicate and value builder for the fault body on an exchange.
static <E extends Exchange,T> ValueBuilder<E> faultBodyAs(Class<T> type) Returns a predicate and value builder for the fault message body as a specific type.
static <E extends Exchange> ValueBuilder<E> header(String name) Returns a predicate and value builder for headers on an exchange.
static <E extends Exchange> ValueBuilder<E> outBody() Returns a predicate and value builder for the outbound body on an exchange.
static <E extends Exchange> ValueBuilder<E> outBodyAs(Class<T> type) Returns a predicate and value builder for the outbound message body as a specific type.
static ValueBuilder property(String name) Returns a predicate and value builder for properties on an exchange.
static ValueBuilder regexReplaceAll(Expression content, String regex, Expression replacement) Returns an expression that replaces all occurrences of the regular expression with the given replacement.
static ValueBuilder regexReplaceAll(Expression content, String regex, String replacement) Returns an expression that replaces all occurrences of the regular expression with the given replacement.
static ValueBuilder sendTo(String uri) Returns an expression processing the exchange to the given endpoint uri.
static <E extends Exchange> ValueBuilder<E> systemProperty(String name) Returns an expression for the given system property.
static <E extends Exchange> ValueBuilder<E> systemProperty(String name, String defaultValue) Returns an expression for the given system property.

The org.apache.camel.builder.ValueBuilder class enables you to modify values returned by the Builder methods. In other words, the methods in ValueBuilder provide a simple way of modifying message content. Table 2.7 summarizes the methods available in the ValueBuilder class. That is, the table shows only the methods that are used to modify the value they are invoked on (for full details, see the API Reference documentation).

Table 2.7. Modifier Methods from the ValueBuilder Class

Method / Description
ValueBuilder<E> append(Object value) Appends the string evaluation of this expression with the given value.
Predicate contains(Object value) Create a predicate that the left hand expression contains the value of the right hand expression.
ValueBuilder<E> convertTo(Class type) Converts the current value to the given type using the registered type converters.
ValueBuilder<E> convertToString() Converts the current value to a String using the registered type converters.
Predicate endsWith(Object value)  
<T> T evaluate(Exchange exchange, Class<T> type)  
Predicate in(Object... values)  
Predicate in(Predicate... predicates)  
Predicate isEqualTo(Object value) Returns true, if the current value is equal to the given value argument.
Predicate isGreaterThan(Object value) Returns true, if the current value is greater than the given value argument.
Predicate isGreaterThanOrEqualTo(Object value) Returns true, if the current value is greater than or equal to the given value argument.
Predicate isInstanceOf(Class type) Returns true, if the current value is an instance of the given type.
Predicate isLessThan(Object value) Returns true, if the current value is less than the given value argument.
Predicate isLessThanOrEqualTo(Object value) Returns true, if the current value is less than or equal to the given value argument.
Predicate isNotEqualTo(Object value) Returns true, if the current value is not equal to the given value argument.
Predicate isNotNull() Returns true, if the current value is not null.
Predicate isNull() Returns true, if the current value is null.
Predicate matches(Expression expression)  
Predicate not(Predicate predicate) Negates the predicate argument.
ValueBuilder prepend(Object value) Prepends the string evaluation of this expression to the given value.
Predicate regex(String regex)  
ValueBuilder<E> regexReplaceAll(String regex, Expression<E> replacement) Replaces all occurrences of the regular expression with the given replacement.
ValueBuilder<E> regexReplaceAll(String regex, String replacement) Replaces all occurrences of the regular expression with the given replacement.
ValueBuilder<E> regexTokenize(String regex) Tokenizes the string conversion of this expression using the given regular expression.
ValueBuilder sort(Comparator comparator) Sorts the current value using the given comparator.
Predicate startsWith(Object value) Returns true, if the current value matches the string value of the value argument.
ValueBuilder<E> tokenize() Tokenizes the string conversion of this expression using the comma token separator.
ValueBuilder<E> tokenize(String token) Tokenizes the string conversion of this expression using the given token separator.
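For example, the following minimal sketch uses Builder and ValueBuilder methods to construct a predicate and to modify the message body; the endpoint URIs and header name are placeholders:

from("direct:orders")
    .filter(header("priority").isGreaterThan(5))                // predicate built from a header value
    .setBody(body().convertToString().append(" [expedited]"))   // modify the current message body
    .to("mock:expedited");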

You can convert between low-level and high-level message formats using the marshal() and unmarshal() commands.

Apache Camel supports marshalling and unmarshalling for a wide range of data formats; see the Apache Camel data format documentation for the complete list of supported formats.
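For example, the following minimal sketch unmarshals incoming XML into Java objects using the JAXB data format and marshals the result back to XML before writing it out; the JAXB context path and endpoint URIs are placeholders, and the camel-jaxb component must be available:

import org.apache.camel.model.dataformat.JaxbDataFormat;

JaxbDataFormat jaxb = new JaxbDataFormat();
jaxb.setContextPath("com.acme.order");   // package containing the (hypothetical) JAXB-generated classes

from("file:data/inbound")
    .unmarshal(jaxb)                      // XML -> Java object
    .log("Received order: ${body}")
    .marshal(jaxb)                        // Java object -> XML
    .to("file:data/outbound");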

Wherever an endpoint URI string appears in a route, the first step in parsing the endpoint URI is to apply the property placeholder parser. The placeholder parser automatically substitutes any property names appearing between double braces, {{Key}}. For example, given the property settings shown in Example 2.2, you could define a route as follows:

from("{{cool.start}}")
    .to("log:{{cool.start}}?showBodyType=false&showExchangeId={{cool.showid}}")
    .to("mock:{{cool.result}}");

By default, the placeholder parser looks up the properties bean ID in the registry to find the property component. If you prefer, you can explicitly specify the scheme in the endpoint URIs. For example, by prefixing properties: to each of the endpoint URIs, you can define the following equivalent route:

from("properties:{{cool.start}}")
    .to("properties:log:{{cool.start}}?showBodyType=false&showExchangeId={{cool.showid}}")
    .to("properties:mock:{{cool.result}}");

When specifying the scheme explicitly, you also have the option of specifying options to the properties component. For example, to override the property file location, you could set the location option as follows:

from("direct:start").to("properties:{{bar.end}}?location=com/mycompany/bar.properties");

If you define a camelContext element inside an OSGi blueprint file, the Apache Camel property placeholder mechanism automatically integrates with the blueprint property placeholder mechanism. That is, placeholders obeying the Apache Camel syntax (for example, {{cool.end}}) that appear within the scope of camelContext are implicitly resolved by looking up the blueprint property placeholder mechanism.

For example, consider the following route defined in an OSGi blueprint file, where the last endpoint in the route is defined by the property placeholder, {{result}}:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
           xsi:schemaLocation="
           http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

    <!-- OSGI blueprint property placeholder -->
    <cm:property-placeholder id="myblueprint.placeholder" persistent-id="camel.blueprint">
        <!-- list some properties for this test -->
        <cm:default-properties>
            <cm:property name="result" value="mock:result"/>
        </cm:default-properties>
    </cm:property-placeholder>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <!-- in the route we can use {{ }} placeholders which will look up in blueprint,
             as Camel will auto detect the OSGi blueprint property placeholder and use it -->
        <route>
            <from uri="direct:start"/>
            <to uri="mock:foo"/>
            <to uri="{{result}}"/>
        </route>
    </camelContext>

</blueprint>

The blueprint property placeholder mechanism is initialized by creating a cm:property-placeholder bean. In the preceding example, the cm:property-placeholder bean is associated with the camel.blueprint persistent ID, where a persistent ID is the standard way of referencing a group of related properties from the OSGi Configuration Admin service. In other words, the cm:property-placeholder bean provides access to all of the properties defined under the camel.blueprint persistent ID. It is also possible to specify default values for some of the properties (using the nested cm:property elements).

In the context of blueprint, the Apache Camel placeholder mechanism searches for an instance of cm:property-placeholder in the bean registry. If it finds such an instance, it automatically integrates the Apache Camel placeholder mechanism, so that placeholders like, {{result}}, are resolved by looking up the key in the blueprint property placeholder mechanism (in this example, through the myblueprint.placeholder bean).

[Note]Note

The default blueprint placeholder syntax (accessing the blueprint properties directly) is ${Key}. Hence, outside the scope of a camelContext element, you must use the ${Key} placeholder syntax, whereas inside the scope of a camelContext element, you must use the {{Key}} placeholder syntax.

If you want to have more control over where the Apache Camel property placeholder mechanism finds its properties, you can define a propertyPlaceholder element and specify the resolver locations explicitly.

For example, consider the following blueprint configuration, which differs from the previous example in that it creates an explicit propertyPlaceholder instance:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
           xsi:schemaLocation="
           http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

    <!-- OSGI blueprint property placeholder -->
    <cm:property-placeholder id="myblueprint.placeholder" persistent-id="camel.blueprint">
        <!-- list some properties for this test -->
        <cm:default-properties>
            <cm:property name="result" value="mock:result"/>
        </cm:default-properties>
    </cm:property-placeholder>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">

        <!-- using Camel properties component and refer to the blueprint property placeholder by its id -->
        <propertyPlaceholder id="properties" location="blueprint:myblueprint.placeholder"/>

        <!-- in the route we can use {{ }} placeholders which will lookup in blueprint -->
        <route>
            <from uri="direct:start"/>
            <to uri="mock:foo"/>
            <to uri="{{result}}"/>
        </route>

    </camelContext>

</blueprint>

In the preceding example, the propertyPlaceholder element specifies explicitly which cm:property-placeholder bean to use by setting the location to blueprint:myblueprint.placeholder. That is, the blueprint: resolver explicitly references the ID, myblueprint.placeholder, of the cm:property-placeholder bean.

This style of configuration is useful if there is more than one cm:property-placeholder bean defined in the blueprint file and you need to specify which one to use. It also makes it possible to source properties from multiple locations, by specifying a comma-separated list of locations. For example, if you wanted to look up properties both from the cm:property-placeholder bean and from the properties file, myproperties.properties, on the classpath, you could define the propertyPlaceholder element as follows:

<propertyPlaceholder id="properties"
  location="blueprint:myblueprint.placeholder,classpath:myproperties.properties"/>

If you define your Apache Camel application using XML DSL in a Spring XML file, you can integrate the Apache Camel property placeholder mechanism with the Spring property placeholder mechanism by declaring a Spring bean of type, org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer.

Define a BridgePropertyPlaceholderConfigurer, which replaces both Apache Camel's propertyPlaceholder element and Spring's ctx:property-placeholder element in the Spring XML file. You can then refer to the configured properties using either the Spring ${PropName} syntax or the Apache Camel {{PropName}} syntax.

For example, defining a bridge property placeholder that reads its property settings from the cheese.properties file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:osgix="http://www.springframework.org/schema/osgi-compendium"
    xmlns:ctx="http://www.springframework.org/schema/context"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/osgi http://www.springframework.org/schema/osgi/spring-osgi.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/osgi-compendium http://www.springframework.org/schema/osgi-compendium/spring-osgi-compendium.xsd
">

  <!-- Bridge Spring property placeholder with Camel -->
  <!-- Do not use <ctx:property-placeholder ... > at the same time -->
  <bean id="bridgePropertyPlaceholder"
        class="org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer">
    <property name="location"
              value="classpath:org/apache/camel/component/properties/cheese.properties"/>
  </bean>

  <!-- A bean that uses Spring property placeholder -->
  <!-- The ${hi} is a spring property placeholder -->
  <bean id="hello" class="org.apache.camel.component.properties.HelloBean">
    <property name="greeting" value="${hi}"/>
  </bean>

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <!-- Use Camel's property placeholder {{ }} style -->
    <route>
      <from uri="direct:{{cool.bar}}"/>
      <bean ref="hello"/>
      <to uri="{{cool.end}}"/>
    </route>
  </camelContext>

</beans>

The Apache Camel threading model is based on the powerful Java concurrency API, java.util.concurrent, that first became available in Sun's JDK 1.5. The key interface in this API is the ExecutorService interface, which represents a thread pool. Using the concurrency API, you can create many different kinds of thread pool, covering a wide range of scenarios.

A custom thread pool can be any thread pool of java.util.concurrent.ExecutorService type. The following approaches to creating a thread pool instance are recommended in Apache Camel:

  • Use the org.apache.camel.builder.ThreadPoolBuilder utility to build the thread pool class.

  • Use the org.apache.camel.spi.ExecutorServiceManager instance from the current CamelContext to create the thread pool class.

Ultimately, there is not much difference between the two approaches, because the ThreadPoolBuilder is actually defined using the ExecutorServiceManager instance. Normally, the ThreadPoolBuilder is preferred, because it offers a simpler approach. But there is at least one kind of thread pool (the ScheduledExecutorService) that can only be created by accessing the ExecutorServiceManager instance directly.
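
For example, the following sketch creates a scheduled thread pool through the ExecutorServiceManager (the pool name and size are arbitrary examples, and the method name is an assumption based on the ExecutorServiceManager API as described here):

// Java
import java.util.concurrent.ScheduledExecutorService;
import org.apache.camel.spi.ExecutorServiceManager;
...
ExecutorServiceManager manager = context.getExecutorServiceManager();

// A ScheduledExecutorService cannot be built with the ThreadPoolBuilder,
// so it is created directly through the manager
ScheduledExecutorService scheduledPool =
    manager.newScheduledThreadPool(this, "myScheduledPool", 5);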

Table 2.10 shows the options supported by the ThreadPoolBuilder class, which you can set when defining a new custom thread pool.

Table 2.10. Thread Pool Builder Options

Builder OptionDescription
maxQueueSize() Sets the maximum number of pending tasks that this thread pool can store in its incoming task queue. A value of -1 specifies an unbounded queue. Default value is taken from default thread pool profile.
poolSize() Sets the minimum number of threads in the pool (this is also the initial pool size). Default value is taken from default thread pool profile.
maxPoolSize() Sets the maximum number of threads that can be in the pool. Default value is taken from default thread pool profile.
keepAliveTime() If any threads are idle for longer than this period of time (specified in seconds), they are terminated. This allows the thread pool to shrink when the load is light. Default value is taken from default thread pool profile.
rejectedPolicy()

Specifies what course of action to take, if the incoming task queue is full. You can specify four possible values:

CallerRuns

(Default value) Gets the caller thread to run the latest incoming task. As a side effect, this option prevents the caller thread from receiving any more tasks until it has finished processing the latest incoming task.

Abort

Aborts the latest incoming task by throwing an exception.

Discard

Quietly discards the latest incoming task.

DiscardOldest

Discards the oldest unhandled task and then attempts to enqueue the latest incoming task in the task queue.

build() Finishes building the custom thread pool and registers the new thread pool under the ID specified as the argument to build().

In Java DSL, you can define a custom thread pool using the ThreadPoolBuilder, as follows:

// Java
import org.apache.camel.builder.ThreadPoolBuilder;
import java.util.concurrent.ExecutorService;
...
ThreadPoolBuilder poolBuilder = new ThreadPoolBuilder(context);
ExecutorService customPool = poolBuilder.poolSize(5).maxPoolSize(5).maxQueueSize(100).build("customPool");
...

from("direct:start")
  .multicast().executorService(customPool)
    .to("mock:first")
    .to("mock:second")
    .to("mock:third");

Instead of passing the object reference, customPool, directly to the executorService() option, you can look up the thread pool in the registry, by passing its bean ID to the executorServiceRef() option, as follows:

// Java
from("direct:start")
  .multicast().executorServiceRef("customPool")
    .to("mock:first")
    .to("mock:second")
    .to("mock:third");

In XML DSL, you access the ThreadPoolBuilder using the threadPool element. You can then reference the custom thread pool using the executorServiceRef attribute to look up the thread pool by ID in the Spring registry, as follows:

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
    <threadPool id="customPool"
                poolSize="5"
                maxPoolSize="5"
                maxQueueSize="100" />

    <route>
        <from uri="direct:start"/>
        <multicast executorServiceRef="customPool">
            <to uri="mock:first"/>
            <to uri="mock:second"/>
            <to uri="mock:third"/>
        </multicast>
    </route>
</camelContext>

If you have many custom thread pool instances to create, you might find it more convenient to define a custom thread pool profile, which acts as a factory for thread pools. Whenever you reference a thread pool profile from a threading-aware processor, the processor automatically uses the profile to create a new thread pool instance. You can define a custom thread pool profile either in Java DSL or in XML DSL.

For example, in Java DSL you can create a custom thread pool profile with the bean ID, customProfile, and reference it from within a route, as follows:

// Java
import org.apache.camel.spi.ThreadPoolProfile;
import org.apache.camel.impl.ThreadPoolProfileSupport;
...
// Create the custom thread pool profile
ThreadPoolProfile customProfile = new ThreadPoolProfileSupport("customProfile");
customProfile.setPoolSize(5);
customProfile.setMaxPoolSize(5);
customProfile.setMaxQueueSize(100);
context.getExecutorServiceManager().registerThreadPoolProfile(customProfile);
...
// Reference the custom thread pool profile in a route
from("direct:start")
  .multicast().executorServiceRef("customProfile")
    .to("mock:first")
    .to("mock:second")
    .to("mock:third");

In XML DSL, use the threadPoolProfile element to create a custom pool profile (where you let the defaultProfile option default to false, because this is not a default thread pool profile). You can create a custom thread pool profile with the bean ID, customProfile, and reference it from within a route, as follows:

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
    <threadPoolProfile
                id="customProfile"
                poolSize="5"
                maxPoolSize="5"
                maxQueueSize="100" />

    <route>
        <from uri="direct:start"/>
        <multicast executorServiceRef="customProfile">
            <to uri="mock:first"/>
            <to uri="mock:second"/>
            <to uri="mock:third"/>
        </multicast>
    </route>
</camelContext>

By default, Apache Camel starts up routes in a non-deterministic order. In some applications, however, it can be important to control the startup order. To control the startup order in the Java DSL, use the startupOrder() command, which takes a positive integer value as its argument. The route with the lowest integer value starts first, followed by the routes with successively higher startup order values.

For example, the first two routes in the following example are linked together through the seda:buffer endpoint. You can ensure that the first route segment starts after the second route segment by assigning startup orders (2 and 1 respectively), as follows:
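
A minimal sketch of such a pair of routes is shown below (the consumer and final endpoints are illustrative placeholders; only the startupOrder() values and the shared seda:buffer endpoint matter here):

// Java
from("jetty:http://fooserver:8080")
    .routeId("first").startupOrder(2)
    .to("seda:buffer");

from("seda:buffer")
    .routeId("second").startupOrder(1)
    .to("activemq:queue:foo");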


Or in Spring XML, you can achieve the same effect by setting the route element's startupOrder attribute, as follows:
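
A sketch of the equivalent configuration in Spring XML (using the same placeholder endpoints as the Java sketch above):

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
    <route id="first" startupOrder="2">
        <from uri="jetty:http://fooserver:8080"/>
        <to uri="seda:buffer"/>
    </route>

    <route id="second" startupOrder="1">
        <from uri="seda:buffer"/>
        <to uri="activemq:queue:foo"/>
    </route>
</camelContext>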


Each route must be assigned a unique startup order value. You can choose any positive integer value that is less than 1000. Values of 1000 and over are reserved for Apache Camel, which automatically assigns these values to routes without an explicit startup value. For example, the last route in the preceding example would automatically be assigned the startup value, 1000 (so it starts up after the first two routes).

Routes are shut down in the reverse of the start-up order. That is, when a start-up order is defined using the startupOrder() command (in Java DSL) or startupOrder attribute (in XML DSL), the first route to shut down is the route with the highest integer value assigned by the start-up order and the last route to shut down is the route with the lowest integer value assigned by the start-up order.

For example, in Example 2.3, the first route segment to be shut down is the route with the ID, first, and the second route segment to be shut down is the route with the ID, second. This example illustrates a general rule, which you should observe when shutting down routes: the routes that expose externally-accessible consumer endpoints should be shut down first, because this helps to throttle the flow of messages through the rest of the route graph.

[Note]Note

Apache Camel also provides the option shutdownRoute(Defer), which enables you to specify that a route must be amongst the last routes to shut down (overriding the start-up order value). But you should rarely ever need this option. This option was mainly needed as a workaround for earlier versions of Apache Camel (prior to 2.3), for which routes would shut down in the same order as the start-up order.

If a route is still processing messages when the shutdown starts, the shutdown strategy normally waits until the currently active exchange has finished processing before shutting down the route. This behavior can be configured on each route using the shutdownRunningTask option, which can take either of the following values: CompleteCurrentTaskOnly (finish processing only the task currently in progress before shutting down) or CompleteAllTasks (finish processing all pending tasks before shutting down).

For example, to shut down a File consumer endpoint gracefully, you should specify the CompleteAllTasks option, as shown in the following Java DSL fragment:

// Java
public void configure() throws Exception {
    from("file:target/pending")
        .routeId("first").startupOrder(2)
        .shutdownRunningTask(ShutdownRunningTask.CompleteAllTasks)
        .delay(1000).to("seda:foo");

    from("seda:foo")
        .routeId("second").startupOrder(1)
        .to("mock:bar");
}

The same route can be defined in the XML DSL as follows:

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
    <!-- let this route complete all its pending messages when asked to shut down -->
    <route id="first"
           startupOrder="2"
           shutdownRunningTask="CompleteAllTasks">
        <from uri="file:target/pending"/>
        <delay><constant>1000</constant></delay>
        <to uri="seda:foo"/>
    </route>

    <route id="second" startupOrder="1">
        <from uri="seda:foo"/>
        <to uri="mock:bar"/>
    </route>
</camelContext>

Example 2.5 shows how to schedule a route to start up using the Java DSL. The initial start time, startTime, is defined to be 3 seconds after the current time. The policy is also configured to start the route a second time, 3 seconds after the initial start time, which is configured by setting routeStartRepeatCount to 1 and routeStartRepeatInterval to 3000 milliseconds.

In Java DSL, you attach the route policy to the route by calling the routePolicy() DSL command in the route.
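
A minimal sketch along those lines, assuming the SimpleScheduledRoutePolicy class from the camel-quartz component (the endpoint URIs and route ID are placeholders):

// Java
import java.util.Date;
import org.apache.camel.routepolicy.quartz.SimpleScheduledRoutePolicy;
...
SimpleScheduledRoutePolicy policy = new SimpleScheduledRoutePolicy();
long startTime = System.currentTimeMillis() + 3000L;
policy.setRouteStartDate(new Date(startTime));   // start 3 seconds from now
policy.setRouteStartRepeatCount(1);              // start the route one more time
policy.setRouteStartRepeatInterval(3000);        // 3 seconds after the initial start

from("direct:start")
    .routeId("scheduled")
    .routePolicy(policy)
    .to("mock:success");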


[Note]Note

You can specify multiple policies on the route by calling routePolicy() with multiple arguments.

The initial times of the triggers used in the simple scheduled route policy are specified using the java.util.Date type. The most flexible way to define a Date instance is through the java.util.GregorianCalendar class. Use the convenient constructors and methods of the GregorianCalendar class to define a date and then obtain a Date instance by calling GregorianCalendar.getTime().

For example, to define the time and date for January 1, 2011 at noon, call a GregorianCalendar constructor as follows:

// Java
import java.util.GregorianCalendar;
import java.util.Calendar;
...
GregorianCalendar gc = new GregorianCalendar(
    2011,
    Calendar.JANUARY,
    1,
    12,  // hourOfDay
    0,   // minutes
    0    // seconds
);

java.util.Date triggerDate = gc.getTime();

The GregorianCalendar class also supports the definition of times in different time zones. By default, it uses the local time zone on your computer.
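
For example, a sketch of defining the same date and time in an explicit time zone (the time zone ID is an arbitrary example):

// Java
import java.util.TimeZone;
...
GregorianCalendar gcNewYork =
    new GregorianCalendar(TimeZone.getTimeZone("America/New_York"));
gcNewYork.set(2011, Calendar.JANUARY, 1, 12, 0, 0);

java.util.Date triggerDateNewYork = gcNewYork.getTime();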

When you configure a simple scheduled route policy to stop a route, the route stopping algorithm is automatically integrated with the graceful shutdown procedure (see Controlling Start-Up and Shutdown of Routes). This means that the task waits until the current exchange has finished processing before shutting down the route. You can set a timeout, however, that forces the route to stop after the specified time, irrespective of whether or not the route has finished processing the exchange.

The following table lists the parameters for scheduling one or more route starts.

ParameterTypeDefaultDescription
routeStartDate java.util.Date None Specifies the date and time when the route is started for the first time.
routeStartRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be started.
routeStartRepeatInterval long 0 Specifies the time interval between starts, in units of milliseconds.

The following table lists the parameters for scheduling one or more route stops.

ParameterTypeDefaultDescription
routeStopDate java.util.Date None Specifies the date and time when the route is stopped for the first time.
routeStopRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be stopped.
routeStopRepeatInterval long 0 Specifies the time interval between stops, in units of milliseconds.
routeStopGracePeriod int 10000 Specifies how long to wait for the current exchange to finish processing (grace period) before forcibly stopping the route. Set to 0 for an infinite grace period.
routeStopTimeUnit long TimeUnit.MILLISECONDS Specifies the time unit of the grace period.

The following table lists the parameters for scheduling the suspension of a route one or more times.

ParameterTypeDefaultDescription
routeSuspendDate java.util.Date None Specifies the date and time when the route is suspended for the first time.
routeSuspendRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be suspended.
routeSuspendRepeatInterval long 0 Specifies the time interval between suspends, in units of milliseconds.

The following table lists the parameters for scheduling the resumption of a route one or more times.

ParameterTypeDefaultDescription
routeResumeDate java.util.Date None Specifies the date and time when the route is resumed for the first time.
routeResumeRepeatCount int 0 When set to a non-zero value, specifies how many times the route should be resumed.
routeResumeRepeatInterval long 0 Specifies the time interval between resumes, in units of milliseconds.

Example 2.7 shows how to schedule a route to start up using the Java DSL. The policy is configured with the cron expression, */3 * * * * ?, which triggers a start event every 3 seconds.

In Java DSL, you attach the route policy to the route by calling the routePolicy() DSL command in the route.
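
A minimal sketch along those lines, assuming the CronScheduledRoutePolicy class from the camel-quartz component (the endpoint URIs and route ID are placeholders):

// Java
import org.apache.camel.routepolicy.quartz.CronScheduledRoutePolicy;
...
CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy();
policy.setRouteStartTime("*/3 * * * * ?");   // trigger a start event every 3 seconds

from("direct:start")
    .routeId("cronScheduled")
    .routePolicy(policy)
    .to("mock:success");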


[Note]Note

You can specify multiple policies on the route by calling routePolicy() with multiple arguments.

The cron expression syntax has its origins in the UNIX cron utility, which schedules jobs to run in the background on a UNIX system. A cron expression is effectively a syntax for wildcarding dates and times that enables you to specify either a single event or multiple events that recur periodically.

A cron expression consists of 6 or 7 fields in the following order:

Seconds Minutes Hours DayOfMonth Month DayOfWeek [Year]

The Year field is optional and usually omitted, unless you want to define an event that occurs once and once only. Each field consists of a mixture of literals and special characters. For example, the following cron expression specifies an event that fires once every day at midnight:

0 0 0 * * ?

The * character is a wildcard that matches every value of a field. Hence, the preceding expression matches every day of every month. The ? character is a dummy placeholder that means ignore this field. It always appears either in the DayOfMonth field or in the DayOfWeek field, because it is not logically consistent to specify both of these fields at the same time. For example, if you want to schedule an event that fires once a day, but only from Monday to Friday, use the following cron expression:

0 0 0 ? * MON-FRI

Where the hyphen character specifies a range, MON-FRI. You can also use the forward slash character, /, to specify increments. For example, to specify that an event fires every 5 minutes, use the following cron expression:

0 0/5 * * * ?

For a full explanation of the cron expression syntax, see the Wikipedia article on CRON expressions.

The following table lists the parameters for scheduling one or more route starts.

ParameterTypeDefaultDescription
routeStartTime String None Specifies a cron expression that triggers one or more route start events.

The following table lists the parameters for scheduling one or more route stops.

ParameterTypeDefaultDescription
routeStopTime String None Specifies a cron expression that triggers one or more route stop events.
routeStopGracePeriod int 10000 Specifies how long to wait for the current exchange to finish processing (grace period) before forcibly stopping the route. Set to 0 for an infinite grace period.
routeStopTimeUnit long TimeUnit.MILLISECONDS Specifies the time unit of the grace period.

The following table lists the parameters for scheduling the suspension of a route one or more times.

ParameterTypeDefaultDescription
routeSuspendTime String None Specifies a cron expression that triggers one or more route suspend events.

The following table lists the parameters for scheduling the resumption of a route one or more times.

ParameterTypeDefaultDescription
routeResumeTime String None Specifies a cron expression that triggers one or more route resume events.

The message routing patterns, shown in Table 3.4, describe various ways of linking message channels together, including various algorithms that can be applied to the message stream (without modifying the body of the message).

Table 3.4. Message Routing

IconNameUse Case
Content based router icon Content Based Router How do we handle a situation where the implementation of a single logical function (e.g., inventory check) is spread across multiple physical systems?
Message filter icon Message Filter How does a component avoid receiving uninteresting messages?
Recipient List icon Recipient List How do we route a message to a list of dynamically specified recipients?
Splitter icon Splitter How can we process a message if it contains multiple elements, each of which might have to be processed in a different way?
Aggregator icon Aggregator How do we combine the results of individual, but related messages so that they can be processed as a whole?
Resequencer icon Resequencer How can we get a stream of related, but out-of-sequence, messages back into the correct order?
Composed Message Processor How can you maintain the overall message flow when processing a message consisting of multiple elements, each of which may require different processing?
Scatter-Gather How do you maintain the overall message flow when a message needs to be sent to multiple recipients, each of which may send a reply?
Routing slip icon Routing Slip How do we route a message consecutively through a series of processing steps when the sequence of steps is not known at design-time, and might vary for each message?
  Throttler How can I throttle messages to ensure that a specific endpoint does not get overloaded, or that we don't exceed an agreed SLA with some external service?
  Delayer How can I delay the sending of a message?
  Load Balancer How can I balance load across a number of endpoints?
  Multicast How can I route a message to a number of endpoints at the same time?
Loop How can I repeat processing a message in a loop?
  Sampling How can I sample one message out of many in a given period, so that a downstream route does not get overloaded?

By default, Apache Camel applies the following structure to all message types: a set of headers, a body, and optional attachments.

It is important to remember that this division into headers, body, and attachments is an abstract model of the message. Apache Camel supports many different components that generate a wide variety of message formats. Ultimately, it is the underlying component implementation that decides what gets placed into the headers and body of a message.

A message endpoint is the interface between an application and a messaging system. As shown in Figure 4.3, you can have a sender endpoint, sometimes called a proxy or a service consumer, which is responsible for sending In messages, and a receiver endpoint, sometimes called an endpoint or a service, which is responsible for receiving In messages.


In Apache Camel, an endpoint is represented by an endpoint URI, which typically encapsulates the following kinds of data:

An endpoint URI in Apache Camel has the following general form:

ComponentPrefix:ComponentSpecificURI

Where ComponentPrefix is a URI prefix that identifies a particular Apache Camel component (see the EIP Component Reference for details of all the supported components). The remaining part of the URI, ComponentSpecificURI, has a syntax defined by the particular component. For example, to connect to the JMS queue, Foo.Bar, you can define an endpoint URI like the following:

jms:Foo.Bar

To define a route that connects the consumer endpoint, file://local/router/messages/foo, directly to the producer endpoint, jms:Foo.Bar, you can use the following Java DSL fragment:

from("file://local/router/messages/foo").to("jms:Foo.Bar");

Alternatively, you can define the same route in XML, as follows:

<camelContext id="CamelContextID" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="file://local/router/messages/foo"/>
    <to uri="jms:Foo.Bar"/>
  </route>
</camelContext>

The pipes and filters pattern, shown in Figure 4.4, describes a way of constructing a route by creating a chain of filters, where the output of one filter is fed into the input of the next filter in the pipeline (analogous to the UNIX pipe command). The advantage of the pipeline approach is that it enables you to compose services (some of which can be external to the Apache Camel application) to create more complex forms of message processing.


The message translator pattern, shown in Figure 4.8, describes a component that modifies the contents of a message, translating it to a different format. You can use Apache Camel's bean integration feature to perform the message translation.


You can transform a message using bean integration, which enables you to call a method on any registered bean. For example, to call the method, myMethodName(), on the bean with ID, myTransformerBean:

from("activemq:SomeQueue")
  .beanRef("myTransformerBean", "myMethodName")
  .to("mqseries:AnotherQueue");

Where the myTransformerBean bean is defined in either a Spring XML file or in JNDI. If you omit the method name parameter from beanRef(), the bean integration will try to deduce the method name to invoke by examining the message exchange.

You can also add your own explicit Processor instance to perform the transformation, as follows:

from("direct:start").process(new Processor() {
    public void process(Exchange exchange) {
        Message in = exchange.getIn();
        in.setBody(in.getBody(String.class) + " World!");
    }
}).to("mock:result");

Or, you can use the DSL to explicitly configure the transformation, as follows:

from("direct:start").setBody(body().append(" World!")).to("mock:result");

You can also use templating to consume a message from one destination, transform it with something like Velocity in EIP Component Reference or XQuery, and then send it on to another destination. For example, using the InOnly exchange pattern (one-way messaging):

from("activemq:My.Queue").
  to("velocity:com/acme/MyResponse.vm").
  to("activemq:Another.Queue");

If you want to use InOut (request-reply) semantics to process requests on the My.Queue queue on ActiveMQ in EIP Component Reference with a template generated response, then you could use a route like the following to send responses back to the JMSReplyTo destination:

from("activemq:My.Queue").
  to("velocity:com/acme/MyResponse.vm");

A point-to-point channel, shown in Figure 5.1, is a message channel that guarantees that only one receiver consumes any given message. This is in contrast with a publish-subscribe channel, which allows multiple receivers to consume the same message. In particular, with a point-to-point channel, it is possible for multiple receivers to subscribe to the same channel. If more than one receiver competes to consume a message, it is up to the message channel to ensure that only one receiver actually consumes the message.


A publish-subscribe channel, shown in Figure 5.2, is a message channel that enables multiple subscribers to consume any given message. This is in contrast with a point-to-point channel. Publish-subscribe channels are frequently used as a means of broadcasting events or notifications to multiple subscribers.


The following Apache Camel components support the publish-subscribe channel pattern:

  • JMS

  • ActiveMQ

  • XMPP

  • SEDA in EIP Component Reference, for working with SEDA in the same CamelContext; it can work in pub-sub mode, allowing multiple consumers.

  • VM in EIP Component Reference, which is like SEDA, but for use within the same JVM.

The dead letter channel pattern, shown in Figure 5.3, describes the actions to take when the messaging system fails to deliver a message to the intended recipient. This includes such features as retrying delivery and, if delivery ultimately fails, sending the message to a dead letter channel, which archives the undelivered messages.


Normally, you do not send a message straight to the dead letter channel, if a delivery attempt fails. Instead, you re-attempt delivery up to some maximum limit, and after all redelivery attempts fail you would send the message to the dead letter channel. To customize message redelivery, you can configure the dead letter channel to have a redelivery policy. For example, to specify a maximum of two redelivery attempts, and to apply an exponential backoff algorithm to the time delay between delivery attempts, you can configure the dead letter channel as follows:

errorHandler(deadLetterChannel("seda:errors").maximumRedeliveries(2).useExponentialBackOff());
from("seda:a").to("seda:b");

Where you set the redelivery options on the dead letter channel by invoking the relevant methods in a chain (each method in the chain returns a reference to the current RedeliveryPolicy object). Table 5.1 summarizes the methods that you can use to set redelivery policies.

Table 5.1. Redelivery Policy Settings

Method SignatureDefaultDescription
backOffMultiplier(double multiplier)2

If exponential backoff is enabled, let m be the backoff multiplier and let d be the initial delay. The sequence of redelivery attempts are then timed as follows:

d, m*d, m*m*d, m*m*m*d, ...
collisionAvoidancePercent(double collisionAvoidancePercent)15If collision avoidance is enabled, let p be the collision avoidance percent. The collision avoidance policy then tweaks the next delay by a random amount, up to plus/minus p% of its current value.
delayPattern(String delayPattern)NoneApache Camel 2.0:
disableRedelivery()trueApache Camel 2.0: Disables the redelivery feature. To enable redelivery, set maximumRedeliveries() to a positive integer value.
handled(boolean handled)trueApache Camel 2.0: If true, the current exception is cleared when the message is moved to the dead letter channel; if false, the exception is propagated back to the client.
initialRedeliveryDelay(long initialRedeliveryDelay)1000Specifies the delay (in milliseconds) before attempting the first redelivery.
logStackTrace(boolean logStackTrace)falseApache Camel 2.0: If true, the JVM stack trace is included in the error logs.
maximumRedeliveries(int maximumRedeliveries)0Apache Camel 2.0: Maximum number of delivery attempts.
maximumRedeliveryDelay(long maxDelay)60000Apache Camel 2.0: When using an exponential backoff strategy (see useExponentialBackOff()), it is theoretically possible for the redelivery delay to increase without limit. This property imposes an upper limit on the redelivery delay (in milliseconds)
onRedelivery(Processor processor)NoneApache Camel 2.0: Configures a processor that gets called before every redelivery attempt.
redeliveryDelay(long redeliveryDelay)0Apache Camel 2.0: Specifies the delay (in milliseconds) between redelivery attempts.
retriesExhaustedLogLevel(LoggingLevel logLevel)LoggingLevel.ERRORApache Camel 2.0: Specifies the logging level at which to log delivery failure (specified as an org.apache.camel.LoggingLevel constant).
retryAttemptedLogLevel(LoggingLevel logLevel)LoggingLevel.DEBUGApache Camel 2.0: Specifies the logging level at which to log redelivery attempts (specified as an org.apache.camel.LoggingLevel constant).
useCollisionAvoidance()falseEnables collision avoidance, which adds some randomization to the backoff timings to reduce contention probability.
useOriginalMessage()falseApache Camel 2.0: If this feature is enabled, the message sent to the dead letter channel is a copy of the original message exchange, as it existed at the beginning of the route (in the from() node).
useExponentialBackOff()falseEnables exponential backoff.

When Apache Camel routes messages, it updates an Exchange property that contains the last endpoint the Exchange was sent to. Hence, you can obtain the URI for the current exchange's most recent destination using the following code:

// Java
String lastEndpointUri = exchange.getProperty(Exchange.TO_ENDPOINT, String.class);

Where Exchange.TO_ENDPOINT is a string constant equal to CamelToEndpoint. This property is updated whenever Camel sends a message to any endpoint.

If an error occurs during routing and the exchange is moved into the dead letter queue, Apache Camel will additionally set a property named CamelFailureEndpoint, which identifies the last destination the exchange was sent to before the error occurred. Hence, you can access the failure endpoint from within a dead letter queue using the following code:

// Java
String failedEndpointUri = exchange.getProperty(Exchange.FAILURE_ENDPOINT, String.class);

Where Exchange.FAILURE_ENDPOINT is a string constant equal to CamelFailureEndpoint.

[Note]Note

These properties remain set in the current exchange, even if the failure occurs after the given destination endpoint has finished processing. For example, consider the following route:

        from("activemq:queue:foo")
        .to("http://someserver/somepath")
        .beanRef("foo");

Now suppose that a failure happens in the foo bean. In this case the Exchange.TO_ENDPOINT property and the Exchange.FAILURE_ENDPOINT property still contain the value, http://someserver/somepath.

When a dead letter channel is performing redeliveries, it is possible to configure a Processor that is executed just before every redelivery attempt. This can be used for situations where you need to alter the message before it is redelivered.

For example, the following dead letter channel is configured to call the MyRedeliverProcessor before redelivering exchanges:

// we configure our Dead Letter Channel to invoke
// MyRedeliverProcessor before a redelivery is
// attempted. This allows us to alter the message before it is redelivered.
errorHandler(deadLetterChannel("mock:error").maximumRedeliveries(5)
        .onRedelivery(new MyRedeliverProcessor())
        // setting delay to zero is just to make unit testing faster
        .redeliveryDelay(0L));

Where the MyRedeliverProcessor processor is implemented as follows:

// This is our processor that is executed before every redelivery attempt
// here we can do what we want in the java code, such as altering the message
public class MyRedeliverProcessor implements Processor {

    public void process(Exchange exchange) throws Exception {
        // the message is being redelivered so we can alter it

        // we just append the redelivery counter to the body
        // you can of course do all kind of stuff instead
        String body = exchange.getIn().getBody(String.class);
        int count = exchange.getIn().getHeader(Exchange.REDELIVERY_COUNTER, Integer.class);

        exchange.getIn().setBody(body + count);

        // the maximum redelivery was set to 5
        int max = exchange.getIn().getHeader(Exchange.REDELIVERY_MAX_COUNTER, Integer.class);
        assertEquals(5, max);
    }
}

Instead of using the errorHandler() interceptor in your route builder, you can define a series of onException() clauses that define different redelivery policies and different dead letter channels for various exception types. For example, to define distinct behavior for each of the NullPointerException, IOException, and Exception types, you can define the following rules in your route builder using Java DSL:

onException(NullPointerException.class)
    .maximumRedeliveries(1)
    .setHeader("messageInfo", "Oh dear! An NPE.")
    .to("mock:npe_error");

onException(IOException.class)
    .initialRedeliveryDelay(5000L)
    .maximumRedeliveries(3)
    .backOffMultiplier(1.0)
    .useExponentialBackOff()
    .setHeader("messageInfo", "Oh dear! Some kind of I/O exception.")
    .to("mock:io_error");

onException(Exception.class)
    .initialRedeliveryDelay(1000L)
    .maximumRedeliveries(2)
    .setHeader("messageInfo", "Oh dear! An exception.")
    .to("mock:error");

from("seda:a").to("seda:b");

Where the redelivery options are specified by chaining the redelivery policy methods (as listed in Table 5.1), and you specify the dead letter channel's endpoint using the to() DSL command. You can also call other Java DSL commands in the onException() clauses. For example, the preceding example calls setHeader() to record some error details in a message header named, messageInfo.

In this example, the NullPointerException and the IOException exception types are configured specially. All other exception types are handled by the generic Exception exception interceptor. By default, Apache Camel applies the exception interceptor that most closely matches the thrown exception. If it fails to find an exact match, it tries to match the closest base type, and so on. Finally, if no other interceptor matches, the interceptor for the Exception type matches all remaining exceptions.

In ActiveMQ, message persistence is enabled by default. From version 5 onwards, ActiveMQ uses the AMQ message store as the default persistence mechanism. There are several different approaches you can use to enable message persistence in ActiveMQ.

The simplest option (different from Figure 5.4) is to enable persistence in a central broker and then connect to that broker using a reliable protocol. After a message has been sent to the central broker, delivery to consumers is guaranteed. For example, in the Apache Camel configuration file, META-INF/spring/camel-context.xml, you can configure the ActiveMQ component to connect to the central broker using the OpenWire/TCP protocol as follows:

<beans ... >
  ...
  <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="tcp://somehost:61616"/>
  </bean>
  ...
</beans>

If you prefer to implement an architecture where messages are stored locally before being sent to a remote endpoint (similar to Figure 5.4), you do this by instantiating an embedded broker in your Apache Camel application. A simple way to achieve this is to use the ActiveMQ Peer-to-Peer protocol, which implicitly creates an embedded broker to communicate with other peer endpoints. For example, in the camel-context.xml configuration file, you can configure the ActiveMQ component to connect to all of the peers in group, GroupA, as follows:

<beans ... >
  ...
  <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="peer://GroupA/broker1"/>
  </bean>
  ...
</beans>

Where broker1 is the broker name of the embedded broker (other peers in the group should use different broker names). One limiting feature of the Peer-to-Peer protocol is that it relies on IP multicast to locate the other peers in its group. This makes it unsuitable for use in wide area networks (and in some local area networks that do not have IP multicast enabled).

A more flexible way to create an embedded broker in the ActiveMQ component is to exploit ActiveMQ's VM protocol, which connects to an embedded broker instance. If a broker of the required name does not already exist, the VM protocol automatically creates one. You can use this mechanism to create an embedded broker with custom configuration. For example:

<beans ... >
  ...
  <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="vm://broker1?brokerConfig=xbean:activemq.xml"/>
  </bean>
  ...
</beans>

Where activemq.xml is an ActiveMQ file which configures the embedded broker instance. Within the ActiveMQ configuration file, you can choose to enable one of the following persistence mechanisms:

See ActiveMQ in EIP Component Reference for more details.

The correlation identifier pattern, shown in Figure 6.1, describes how to match reply messages with request messages, given that an asynchronous messaging system is used to implement a request-reply protocol. The essence of this idea is that request messages should be generated with a unique token, the request ID, that identifies the request message and reply messages should include a token, the correlation ID, that contains the matching request ID.

Apache Camel supports the Correlation Identifier from the EIP patterns by getting or setting a header on a Message.

When working with the ActiveMQ in EIP Component Reference or JMS in EIP Component Reference components, the correlation identifier header is called JMSCorrelationID. You can add your own correlation identifier to any message exchange to help correlate messages together in a single conversation (or business process). A correlation identifier is usually stored in an Apache Camel message header.
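
For example, a minimal sketch of setting a correlation identifier header on outgoing requests (the endpoints, and the use of the exchange ID as the identifier, are illustrative assumptions):

// Java
from("direct:start")
    .setHeader("JMSCorrelationID", simple("${exchangeId}"))
    .to("jms:queue:requests");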

Some EIP patterns spin off a sub message and, in those cases, Apache Camel adds a correlation ID to the Exchange as a property with the key, Exchange.CORRELATION_ID, which links back to the source Exchange. For example, the Splitter, Multicast, Recipient List, and Wire Tap EIPs do this.


Apache Camel supports the Event Message pattern from Introducing Enterprise Integration Patterns by supporting the Exchange Pattern on a Message, which can be set to InOnly to indicate a one-way event message. Camel components then implement this pattern using the underlying transport or protocols.

The default behaviour of many components is InOnly, for example JMS in EIP Component Reference, File in EIP Component Reference, and SEDA in EIP Component Reference.

If you are using a component which defaults to InOut you can override the Exchange Pattern for an endpoint using the pattern property.

foo:bar?exchangePattern=InOnly

From Camel 2.0 onwards, you can specify the Exchange Pattern using the DSL.

Using the Fluent Builders

from("mq:someQueue").
  inOnly().
  bean(Foo.class);

or you can invoke an endpoint with an explicit pattern

from("mq:someQueue").
  inOnly("mq:anotherQueue");

Using the Spring XML Extensions

<route>
    <from uri="mq:someQueue"/>
    <inOnly uri="bean:foo"/>
</route>
<route>
    <from uri="mq:someQueue"/>
    <inOnly uri="mq:anotherQueue"/>
</route>

Apache Camel supports the Return Address from the Introducing Enterprise Integration Patterns using the JMSReplyTo header.

For example, when using JMS in EIP Component Reference with InOut, the component will, by default, return the reply to the address given in the JMSReplyTo header.

Requestor Code

 getMockEndpoint("mock:bar").expectedBodiesReceived("Bye World");
 template.sendBodyAndHeader("direct:start", "World", "JMSReplyTo", "queue:bar");

Route Using the Fluent Builders

 from("direct:start").to("activemq:queue:foo?preserveMessageQos=true");
 from("activemq:queue:foo").transform(body().prepend("Bye "));
 from("activemq:queue:bar?disableReplyTo=true").to("mock:bar");

Route Using the Spring XML Extensions

 <route>
   <from uri="direct:start"/>
   <to uri="activemq:queue:foo?preserveMessageQos=true"/>
 </route>
 
 <route>
   <from uri="activemq:queue:foo"/>
   <transform>
       <simple>Bye ${in.body}</simple>
   </transform>
 </route>
 
 <route>
   <from uri="activemq:queue:bar?disableReplyTo=true"/>
   <to uri="mock:bar"/>
 </route>

For a complete example of this pattern, see this junit test case

A message filter is a processor that eliminates undesired messages based on specific criteria. In Apache Camel, the message filter pattern, shown in Figure 7.2, is implemented by the filter() Java DSL command. The filter() command takes a single predicate argument, which controls the filter. When the predicate is true, the incoming message is allowed to proceed, and when the predicate is false, the incoming message is blocked.
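
For example, a minimal sketch of a filter in the Java DSL, using the same endpoints and XPath predicate as the XML example that follows:

// Java
from("seda:a")
    .filter().xpath("$foo = 'bar'")
        .to("seda:b");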


The following example shows how to configure the route with an XPath predicate in XML (see Expression and Predicate Languages):

<camelContext id="simpleFilterRoute" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="seda:a"/>
    <filter>
      <xpath>$foo = 'bar'</xpath>
      <to uri="seda:b"/>
    </filter>
  </route>
  </camelContext>
[Important]Filtered endpoint required inside </filter> tag

Make sure you put the endpoint you want to filter (for example, <to uri="seda:b"/>) before the closing </filter> tag or the filter will not be applied (in 2.8+, omitting this will result in an error).

Available as of Camel 2.0

Stop is a special type of filter that filters out all messages. Stop is convenient to use in a Content-Based Router when you need to stop further processing in one of the predicates.

In the following example, we do not want messages with the word Bye in the message body to propagate any further in the route. We prevent this in the when() predicate using .stop().

from("direct:start")
    .choice()
        .when(body().contains("Hello")).to("mock:hello")
        .when(body().contains("Bye")).to("mock:bye").stop()
        .otherwise().to("mock:other")
    .end()
    .to("mock:result");

Knowing if Exchange was filtered or not

Available as of Camel 2.5

The Message Filter EIP will add a property on the Exchange which states if it was filtered or not.

The property has the key Exchange.FILTER_MATCHED, which has the String value of CamelFilterMatched. Its value is a boolean indicating true or false. If the value is true, the Exchange was routed in the filter block.

A recipient list, shown in Figure 7.3, is a type of router that sends each incoming message to multiple different destinations. In addition, a recipient list typically requires that the list of recipients be calculated at run time.


The simplest kind of recipient list is where the list of destinations is fixed and known in advance, and the exchange pattern is InOnly. In this case, you can hardwire the list of destinations into the to() Java DSL command.
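
For example, a minimal sketch of a fixed recipient list, where the endpoint URIs are illustrative placeholders:

// Java
from("seda:a")
    .to("seda:b", "seda:c", "seda:d");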

[Note]Note

The examples given here, for the recipient list with fixed destinations, work only with the InOnly exchange pattern (similar to a pipeline). If you want to create a recipient list for exchange patterns with Out messages, use the multicast pattern instead.

Available as of Camel 2.2

The Recipient List supports parallelProcessing, which is similar to the corresponding feature in Splitter. Use the parallel processing feature to send the exchange to multiple recipients concurrently—for example:

from("direct:a").recipientList(header("myHeader")).parallelProcessing();

In Spring XML, the parallel processing feature is implemented as an attribute on the recipientList tag—for example:

<route>
  <from uri="direct:a"/>
  <recipientList parallelProcessing="true">
    <header>myHeader</header>
  </recipientList>
</route>

Available as of Camel 2.2

The Recipient List supports the stopOnException feature, which you can use to stop sending to any further recipients, if any recipient fails.

from("direct:a").recipientList(header("myHeader")).stopOnException();


In Spring XML, the stop on exception feature is implemented as an attribute on the recipientList tag—for example:

<route>
  <from uri="direct:a"/>
  <recipientList stopOnException="true">
    <header>myHeader</header>
  </recipientList>
</route>
[Note]Note

You can combine parallelProcessing and stopOnException in the same route.

Available as of Camel 2.3

The Recipient List supports the ignoreInvalidEndpoints option, which enables the recipient list to skip invalid endpoints (Routing Slip also supports this option). For example:

from("direct:a").recipientList(header("myHeader")).ignoreInvalidEndpoints();

And in Spring XML, you can enable this option by setting the ignoreInvalidEndpoints attribute on the recipientList tag, as follows:

<route>
  <from uri="direct:a"/>
  <recipientList ignoreInvalidEndpoints="true">
    <header>myHeader</header>
  </recipientList>
</route>      

Consider the case where myHeader contains the two endpoints, direct:foo,xxx:bar. The first endpoint is valid and works. The second is invalid and, therefore, ignored. Apache Camel logs at INFO level whenever an invalid endpoint is encountered.

Available as of Camel 2.2

You can use a custom AggregationStrategy with the Recipient List, which is useful for aggregating replies from the recipients in the list. By default, Apache Camel uses the UseLatestAggregationStrategy aggregation strategy, which keeps just the last received reply. For a more sophisticated aggregation strategy, you can define your own implementation of the AggregationStrategy interface—see Aggregator EIP for details. For example, to apply the custom aggregation strategy, MyOwnAggregationStrategy, to the reply messages, you can define a Java DSL route as follows:

from("direct:a")
    .recipientList(header("myHeader")).aggregationStrategy(new MyOwnAggregationStrategy())
    .to("direct:b");

In Spring XML, you can specify the custom aggregation strategy as an attribute on the recipientList tag, as follows:

<route>
  <from uri="direct:a"/>
  <recipientList strategyRef="myStrategy">
    <header>myHeader</header>
  </recipientList>
  <to uri="direct:b"/>
</route>
        
<bean id="myStrategy" class="com.mycompany.MyOwnAggregationStrategy"/>

You can use a Bean in EIP Component Reference to provide the recipients, for example:

from("activemq:queue:test").recipientList().method(MessageRouter.class, "routeTo");      

Where the MessageRouter bean is defined as follows:

public class MessageRouter {

    public String routeTo() {
        String queueName = "activemq:queue:test2";
        return queueName;
    }
}      

Available as of Camel 2.5

If you use parallelProcessing, you can configure a total timeout value in milliseconds. Camel will then process the messages in parallel until the timeout is hit. This allows you to continue processing if one message is slow.

In the example below, the recipients header has the value, direct:a,direct:b,direct:c, so that the message is sent to three recipients. We have a timeout of 250 milliseconds, which means only the last two messages can be completed within the timeframe. The aggregation therefore yields the string result, BC.

from("direct:start")
    .recipientList(header("recipients"), ",")
    .aggregationStrategy(new AggregationStrategy() {
            public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
                if (oldExchange == null) {
                    return newExchange;
                }

                String body = oldExchange.getIn().getBody(String.class);
                oldExchange.getIn().setBody(body + newExchange.getIn().getBody(String.class));
                return oldExchange;
            }
        })
        .parallelProcessing().timeout(250)
    // use end to indicate end of recipientList clause
    .end()
    .to("mock:result");

from("direct:a").delay(500).to("mock:A").setBody(constant("A"));

from("direct:b").to("mock:B").setBody(constant("B"));

from("direct:c").to("mock:C").setBody(constant("C"));
[Note]Note

This timeout feature is also supported by splitter and both multicast and recipientList.

By default, if a timeout occurs, the AggregationStrategy is not invoked. However, you can implement a specialized version of the interface, as follows:

// Java
public interface TimeoutAwareAggregationStrategy extends AggregationStrategy {

    /**
     * A timeout occurred
     *
     * @param oldExchange  the oldest exchange (is null on first aggregation as we only have the new exchange)
     * @param index        the index
     * @param total        the total
     * @param timeout      the timeout value in millis
     */
    void timeout(Exchange oldExchange, int index, int total, long timeout);
}

This allows you to deal with the timeout in the AggregationStrategy if you really need to.

[Important]Timeout is total

The timeout is total, which means that after the timeout period, Camel aggregates the messages that have completed within the timeframe. The remaining messages are cancelled. Camel also invokes the timeout method in the TimeoutAwareAggregationStrategy only once, for the first index which caused the timeout.

Before recipientList sends a message to one of the recipient endpoints, it creates a message replica, which is a shallow copy of the original message. If you want to perform some custom processing on each message replica before the replica is sent to its endpoint, you can invoke the onPrepare DSL command in the recipientList clause. The onPrepare command inserts a custom processor just after the message has been shallow-copied and just before the message is dispatched to its endpoint. For example, in the following route, the CustomProc processor is invoked on the message replica for each recipient endpoint:

from("direct:start")
  .recipientList().onPrepare(new CustomProc());

A common use case for the onPrepare DSL command is to perform a deep copy of some or all elements of a message. This allows each message replica to be modified independently of the others. For example, the following CustomProc processor class performs a deep copy of the message body, where the message body is presumed to be of type, BodyType, and the deep copy is performed by the method, BodyType.deepCopy().

// Java
import org.apache.camel.*;
...
public class CustomProc implements Processor {

    public void process(Exchange exchange) throws Exception {
        BodyType body = exchange.getIn().getBody(BodyType.class);

        // Make a _deep_ copy of the body object
        BodyType clone = body.deepCopy();
        exchange.getIn().setBody(clone);

        // Headers and attachments have already been
        // shallow-copied. If you need deep copies,
        // add some more code here.
    }
}

The recipientList DSL command supports the following options:

Name Default Value Description
delimiter , Delimiter used if the Expression returned multiple endpoints.
strategyRef Refers to an AggregationStrategy to be used to assemble the replies from the recipients, into a single outgoing message from the Recipient List. By default Camel will use the last reply as the outgoing message.
parallelProcessing false Camel 2.2: If enabled, then sending messages to the recipients occurs concurrently. Note that the caller thread still waits until all messages have been fully processed before it continues. It is only the sending and processing of the replies from the recipients that happens concurrently.
executorServiceRef Camel 2.2: Refers to a custom Thread Pool to be used for parallel processing. Notice that if you set this option, parallel processing is automatically implied, and you do not have to enable that option as well.
stopOnException false Camel 2.2: Whether or not to stop processing immediately when an exception occurs. If disabled, Camel sends the message to all recipients regardless of whether one of them failed. You can deal with exceptions in the AggregationStrategy class, where you have full control over how to handle them.
ignoreInvalidEndpoints false Camel 2.3: Whether to ignore an endpoint URI that could not be resolved. If disabled, Camel throws an exception stating that the endpoint URI is not valid.
streaming false Camel 2.5: If enabled, Camel processes replies out-of-order, that is, in the order they come back. If disabled, Camel processes replies in the same order as specified by the Expression.
timeout Camel 2.5: Sets a total timeout specified in millis. If the Recipient List has not been able to send and process all replies within the given timeframe, the timeout triggers and the Recipient List breaks out and continues. Notice that if you provide a TimeoutAwareAggregationStrategy, the timeout method is invoked before breaking out.
onPrepareRef Camel 2.8: Refers to a custom Processor to prepare the copy of the Exchange each recipient will receive. This allows you to do any custom logic, such as deep-cloning the message payload if that's needed etc.
shareUnitOfWork false Camel 2.8: Whether the unit of work should be shared. See the same option on Splitter for more details.
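For example, a minimal sketch that combines several of these options in the Java DSL (the recipients header is assumed to contain the endpoint list, as in the earlier examples):

// Java
from("direct:start")
    .recipientList(header("recipients"), ",")
        .parallelProcessing()
        .stopOnException()
        .ignoreInvalidEndpoints()
        .timeout(5000)
    .end()
    .to("mock:result");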

If you want to execute the resulting pieces of the message in parallel, you can enable the parallel processing option, which instantiates a thread pool to process the message pieces. For example:

XPathBuilder xPathBuilder = new XPathBuilder("//foo/bar"); 
from("activemq:my.queue").split(xPathBuilder).parallelProcessing().to("activemq:my.parts");

You can customize the underlying ThreadPoolExecutor used in the parallel splitter. For example, you can specify a custom executor in the Java DSL as follows:

XPathBuilder xPathBuilder = new XPathBuilder("//foo/bar"); 
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(8, 16, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue());
from("activemq:my.queue")
  .split(xPathBuilder)
  .parallelProcessing()
  .executorService(threadPoolExecutor)
  .to("activemq:my.parts");

You can specify a custom executor in the XML DSL as follows:

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:parallel-custom-pool"/>
    <split executorServiceRef="threadPoolExecutor">
      <xpath>/invoice/lineItems</xpath>
      <to uri="mock:result"/>
    </split>
  </route>
</camelContext>

<bean id="threadPoolExecutor" class="java.util.concurrent.ThreadPoolExecutor">
  <constructor-arg index="0" value="8"/>
  <constructor-arg index="1" value="16"/>
  <constructor-arg index="2" value="0"/>
  <constructor-arg index="3" value="MILLISECONDS"/>
  <constructor-arg index="4"><bean class="java.util.concurrent.LinkedBlockingQueue"/></constructor-arg>
</bean>

As the splitter can use any expression to do the splitting, we can use a bean to perform splitting, by invoking the method() expression. The bean should return an iterable value such as: java.util.Collection, java.util.Iterator, or an array.

The following route defines a method() expression that calls a method on the mySplitterBean bean instance:

from("direct:body")
        // here we use a POJO bean mySplitterBean to do the split of the payload
        .split()
        .method("mySplitterBean", "splitBody")
        .to("mock:result");
from("direct:message")
        // here we use a POJO bean mySplitterBean to do the split of the message 
        // with a certain header value
        .split()
        .method("mySplitterBean", "splitMessage")
        .to("mock:result");

Where mySplitterBean is an instance of the MySplitterBean class, which is defined as follows:

public class MySplitterBean {

    /**
     * The split body method returns something that is iterable, such as a java.util.List.
     *
     * @param body the payload of the incoming message
     * @return a list containing each part split
     */
    public List<String> splitBody(String body) {
        // Since this is based on a unit test, you could of course use
        // different logic for splitting. Apache Camel has out-of-the-box
        // support for splitting a String based on a comma, but because this
        // is Java code you have full control over how to split your messages.
        List<String> answer = new ArrayList<String>();
        String[] parts = body.split(",");
        for (String part : parts) {
            answer.add(part);
        }
        return answer;
    }
    
    /**
     * The split message method returns something that is iterable, such as a java.util.List.
     *
     * @param header the header of the incoming message with the name user
     * @param body the payload of the incoming message
     * @return a list containing each part split
     */
    public List<Message> splitMessage(@Header(value = "user") String header, @Body String body) {
        // We can leverage the Parameter Binding Annotations
        // http://camel.apache.org/parameter-binding-annotations.html
        // to access the message header and body at the same time,
        // then create the messages that we want; the splitter will
        // take care of the rest.
        // *NOTE* this feature requires Apache Camel version >= 1.6.1
        List<Message> answer = new ArrayList<Message>();
        String[] parts = header.split(",");
        for (String part : parts) {
            DefaultMessage message = new DefaultMessage();
            message.setHeader("user", part);
            message.setBody(body);
            answer.add(message);
        }
        return answer;
    }
}

The following properties are set on each split exchange:

Header Type Description
CamelSplitIndex int Apache Camel 2.0: A split counter that increases for each Exchange being split. The counter starts from 0.
CamelSplitSize int Apache Camel 2.0: The total number of Exchanges that were split. This header is not applied for stream based splitting.
CamelSplitComplete boolean Apache Camel 2.4: Whether or not this Exchange is the last.
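For example, a minimal sketch that reads these headers inside the split (the endpoint names are illustrative):

// Java
from("direct:start")
    .split(body().tokenize(","))
        // the split headers can be accessed like any other headers
        .log("Processing part ${header.CamelSplitIndex} of ${header.CamelSplitSize}, last part: ${header.CamelSplitComplete}")
        .to("mock:split")
    .end()
    .to("mock:result");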

If an incoming message is a very large XML file, you can process the message most efficiently using the tokenizeXML sub-command in streaming mode.

For example, given a large XML file that contains a sequence of order elements, you can split the file into order elements using a route like the following:

from("file:inbox")
  .split().tokenizeXML("order").streaming()
  .to("activemq:queue:order"); 

You can do the same thing in XML, by defining a route like the following:

<route>
  <from uri="file:inbox"/>
  <split streaming="true">
    <tokenize token="order" xml="true"/>
    <to uri="activemq:queue:order"/>
  </split>
</route>

It is often the case that you need access to namespaces that are defined in one of the enclosing (ancestor) elements of the token elements. You can copy namespace definitions from one of the ancestor elements into the token element, by specifying which element you want to inherit namespace definitions from.

In the Java DSL, you specify the ancestor element as the second argument of tokenizeXML. For example, to inherit namespace definitions from the enclosing orders element:

from("file:inbox")
  .split().tokenizeXML("order", "orders").streaming()
  .to("activemq:queue:order"); 

In the XML DSL, you specify the ancestor element using the inheritNamespaceTagName attribute. For example:

<route>
  <from uri="file:inbox"/>
  <split streaming="true">
    <tokenize token="order"
              xml="true"
              inheritNamespaceTagName="orders"/>
    <to uri="activemq:queue:order"/>
  </split>
</route>

The split DSL command supports the following options:

Name Default Value Description
strategyRef Refers to an AggregationStrategy to be used to assemble the replies from the sub-messages into a single outgoing message from the Splitter. See the section titled What does the splitter return below for what is used by default.
parallelProcessing false If enabled, then processing the sub-messages occurs concurrently. Note that the caller thread still waits until all sub-messages have been fully processed before it continues.
executorServiceRef Refers to a custom Thread Pool to be used for parallel processing. Notice that if you set this option, parallel processing is automatically implied, and you do not have to enable that option as well.
stopOnException false Camel 2.2: Whether or not to stop processing immediately when an exception occurs. If disabled, Camel continues splitting and processes the sub-messages regardless of whether one of them failed. You can deal with exceptions in the AggregationStrategy class, where you have full control over how to handle them.
streaming false If enabled, Camel splits in a streaming fashion, which means it splits the input message into chunks. This reduces the memory overhead. For example, if you split big messages it is recommended to enable streaming. If streaming is enabled, the sub-message replies are aggregated out-of-order, that is, in the order they come back. If disabled, Camel processes sub-message replies in the same order as they were split.
timeout Camel 2.5: Sets a total timeout specified in millis. If the Splitter has not been able to split and process all replies within the given timeframe, the timeout triggers and the Splitter breaks out and continues. Notice that if you provide a TimeoutAwareAggregationStrategy, the timeout method is invoked before breaking out.
onPrepareRef Camel 2.8: Refers to a custom Processor to prepare the sub-message of the Exchange before it is processed. This allows you to do any custom logic, such as deep-cloning the message payload if that is needed.
shareUnitOfWork false Camel 2.8: Whether the unit of work should be shared. See further below for more details.
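For example, a minimal sketch that combines several of these options when splitting a file line by line (the endpoint names are illustrative):

// Java
from("file:inbox")
    .split(body().tokenize("\n")).streaming()
        .stopOnException()
        .shareUnitOfWork()
        .to("direct:processLine")
    .end()
    .to("mock:done");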

Figure 7.6 shows an overview of how the aggregator works, assuming it is fed with a stream of exchanges that have correlation keys such as A, B, C, or D.


The incoming stream of exchanges shown in Figure 7.6 is processed as follows:

  1. The correlator is responsible for sorting exchanges based on the correlation key. For each incoming exchange, the correlation expression is evaluated, yielding the correlation key. For example, for the exchange shown in Figure 7.6, the correlation key evaluates to A.

  2. The aggregation strategy is responsible for merging exchanges with the same correlation key. When a new exchange, A, comes in, the aggregator looks up the corresponding aggregate exchange, A', in the aggregation repository and combines it with the new exchange.

    Until a particular aggregation cycle is completed, incoming exchanges are continuously aggregated with the corresponding aggregate exchange. An aggregation cycle lasts until terminated by one of the completion mechanisms.

  3. If a completion predicate is specified on the aggregator, the aggregate exchange is tested to determine whether it is ready to be sent to the next processor in the route. Processing continues as follows:

    • If complete, the aggregate exchange is processed by the latter part of the route. There are two alternative models for this: synchronous (the default), which causes the calling thread to block, or asynchronous (if parallel processing is enabled), where the aggregate exchange is submitted to an executor thread pool (as shown in Figure 7.6).

    • If not complete, the aggregate exchange is saved back to the aggregation repository.

  4. In parallel with the synchronous completion tests, it is possible to enable an asynchronous completion test by enabling either the completionTimeout option or the completionInterval option. These completion tests run in a separate thread and, whenever the completion test is satisfied, the corresponding exchange is marked as complete and starts to be processed by the latter part of the route (either synchronously or asynchronously, depending on whether parallel processing is enabled or not).

  5. If parallel processing is enabled, a thread pool is responsible for processing exchanges in the latter part of the route. By default, this thread pool contains ten threads, but you have the option of customizing the pool (Threading options).
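To make this processing cycle concrete, the following is a minimal sketch of an aggregator route that uses both a synchronous completion test (completionSize) and an asynchronous one (completionTimeout), with parallel processing of completed aggregates; the orderId correlation header is illustrative:

// Java
from("direct:start")
    .aggregate(header("orderId"), new UseLatestAggregationStrategy())
        // synchronous completion test, checked as each exchange is aggregated
        .completionSize(5)
        // asynchronous completion test, checked by a background thread
        .completionTimeout(2000)
        // completed aggregates are handed to a thread pool
        .parallelProcessing()
    .to("mock:aggregated");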

If you want to apply a different aggregation strategy, you can implement a custom strategy based on the AggregationStrategy interface (or, if you need timeout support, the TimeoutAwareAggregationStrategy interface described earlier).

For example, the following code shows two different custom aggregation strategies, StringAggregationStrategy and ArrayListAggregationStrategy:

// simply combines Exchange String body values using '+' as a delimiter
class StringAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange;
        }

        String oldBody = oldExchange.getIn().getBody(String.class);
        String newBody = newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(oldBody + "+" + newBody);
        return oldExchange;
    }
}

// simply combines Exchange body values into an ArrayList<Object>
class ArrayListAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Object newBody = newExchange.getIn().getBody();
        ArrayList<Object> list = null;
        if (oldExchange == null) {
            list = new ArrayList<Object>();
            list.add(newBody);
            newExchange.getIn().setBody(list);
            return newExchange;
        } else {
            list = oldExchange.getIn().getBody(ArrayList.class);
            list.add(newBody);
            return oldExchange;
        }
    }
}
[Note]Note

Since Apache Camel 2.0, the AggregationStrategy.aggregate() callback method is also invoked for the very first exchange. On the first invocation of the aggregate method, the oldExchange parameter is null and the newExchange parameter contains the first incoming exchange.

To aggregate messages using the custom strategy class, ArrayListAggregationStrategy, define a route like the following:

from("direct:start")
    .aggregate(header("StockSymbol"), new ArrayListAggregationStrategy())
    .completionTimeout(3000)
    .to("mock:result");

You can also configure a route with a custom aggregation strategy in XML, as follows:

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:start"/>
    <aggregate strategyRef="aggregatorStrategy"
               completionTimeout="3000">
      <correlationExpression>
        <simple>header.StockSymbol</simple>
      </correlationExpression>
      <to uri="mock:aggregated"/>
    </aggregate>
  </route>
</camelContext>

<bean id="aggregatorStrategy" class="com.my_package_name.ArrayListAggregationStrategy"/>

It is mandatory to specify at least one completion condition, which determines when an aggregate exchange leaves the aggregator and proceeds to the next node on the route. The following completion conditions can be specified:

completionPredicate

Evaluates a predicate after each exchange is aggregated in order to determine completeness. A value of true indicates that the aggregate exchange is complete.

completionSize

Completes the aggregate exchange after the specified number of incoming exchanges are aggregated.

completionTimeout

(Incompatible with completionInterval) Completes the aggregate exchange, if no incoming exchanges are aggregated within the specified timeout.

In other words, the timeout mechanism keeps track of a timeout for each correlation key value. The clock starts ticking after the latest exchange with a particular key value is received. If another exchange with the same key value is not received within the specified timeout, the corresponding aggregate exchange is marked complete and sent to the next node on the route.

completionInterval

(Incompatible with completionTimeout) Completes all outstanding aggregate exchanges, after each time interval (of specified length) has elapsed.

The time interval is not tailored to each aggregate exchange. This mechanism forces simultaneous completion of all outstanding aggregate exchanges. Hence, in some cases, this mechanism could complete an aggregate exchange immediately after it started aggregating.

completionFromBatchConsumer

When used in combination with a consumer endpoint that supports the batch consumer mechanism, this completion option automatically figures out when the current batch of exchanges is complete, based on information it receives from the consumer endpoint. See Batch consumer.

forceCompletionOnStop

When this option is enabled, it forces completion of all outstanding aggregate exchanges when the current route context is stopped.

The preceding completion conditions can be combined arbitrarily, except for the completionTimeout and completionInterval conditions, which cannot be simultaneously enabled. When conditions are used in combination, the general rule is that the first completion condition to trigger is the effective completion condition.
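For example, a minimal sketch that combines several completion conditions (the lastQuote header is illustrative); whichever condition triggers first completes the aggregate:

// Java
from("direct:start")
    .aggregate(header("StockSymbol"), new ArrayListAggregationStrategy())
        // complete when 10 quotes have arrived for a symbol ...
        .completionSize(10)
        // ... or when no new quote arrives for 5 seconds ...
        .completionTimeout(5000)
        // ... or when the latest quote carries the illustrative lastQuote header
        .completionPredicate(header("lastQuote").isEqualTo(true))
        .eagerCheckCompletion()
    .to("mock:aggregated");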

In some aggregation scenarios, you might want to enforce the condition that the correlation key is unique for each batch of exchanges. In other words, when the aggregate exchange for a particular correlation key completes, you want to make sure that no further aggregate exchanges with that correlation key are allowed to proceed. For example, you might want to enforce this condition, if the latter part of the route expects to process exchanges with unique correlation key values.

Depending on how the completion conditions are configured, there might be a risk of more than one aggregate exchange being generated with a particular correlation key. For example, although you might define a completion predicate that is designed to wait until all the exchanges with a particular correlation key are received, you might also define a completion timeout, which could fire before all of the exchanges with that key have arrived. In this case, the late-arriving exchanges could give rise to a second aggregate exchange with the same correlation key value.

For such scenarios, you can configure the aggregator to suppress aggregate exchanges that duplicate previous correlation key values, by setting the closeCorrelationKeyOnCompletion option. In order to suppress duplicate correlation key values, it is necessary for the aggregator to record previous correlation key values in a cache. The size of this cache (the number of cached correlation keys) is specified as an argument to the closeCorrelationKeyOnCompletion() DSL command. To specify a cache of unlimited size, you can pass a value of zero or a negative integer. For example, to specify a cache size of 10000 key values:

from("direct:start")
    .aggregate(header("UniqueBatchID"), new MyConcatenateStrategy())
        .completionSize(header("mySize"))
        .closeCorrelationKeyOnCompletion(10000)
    .to("mock:aggregated");

If an aggregate exchange completes with a duplicate correlation key value, the aggregator throws a ClosedCorrelationKeyException exception.

If you want pending aggregated exchanges to be stored persistently, you can use either the HawtDB in EIP Component Reference component or the SQL Component in EIP Component Reference for persistence support as a persistent aggregation repository. For example, if using HawtDB, you need to include a dependency on the camel-hawtdb component in your Maven POM. You can then configure a route to use the HawtDB aggregation repository as follows:

public void configure() throws Exception {
    HawtDBAggregationRepository repo = new HawtDBAggregationRepository("repo1", "target/data/hawtdb.dat");

    from("direct:start")
        .aggregate(header("id"), new UseLatestAggregationStrategy())
            .completionTimeout(3000)
            .aggregationRepository(repo)
        .to("mock:aggregated");
}

The HawtDB aggregation repository has a feature that enables it to recover and retry any failed exchanges (that is, any exchange that raised an exception while it was being processed by the latter part of the route). Figure 7.7 shows an overview of the recovery mechanism.


The recovery mechanism works as follows:

  1. The aggregator creates a dedicated recovery thread, which runs in the background, scanning the aggregation repository to find any failed exchanges.

  2. Each failed exchange is checked to see whether its current redelivery count exceeds the maximum redelivery limit. If it is under the limit, the recovery task resubmits the exchange for processing in the latter part of the route.

  3. If the current redelivery count is over the limit, the failed exchange is passed to the dead letter queue.
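For example, a minimal sketch of configuring this recovery behavior on the repository (the interval, redelivery limit, and dead letter queue shown are illustrative values):

// Java
HawtDBAggregationRepository repo = new HawtDBAggregationRepository("repo1", "target/data/hawtdb.dat");
// how often the background recovery thread scans for failed exchanges
repo.setRecoveryInterval(10000);
// give up after three redelivery attempts ...
repo.setMaximumRedeliveries(3);
// ... and then pass the failed exchange to this dead letter queue
repo.setDeadLetterUri("activemq:queue:dead");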

For more details about the HawtDB component, see HawtDB in EIP Component Reference.

As shown in Figure 7.6, the aggregator is decoupled from the latter part of the route, where the exchanges sent to the latter part of the route are processed by a dedicated thread pool. By default, this pool contains just a single thread. If you want to specify a pool with multiple threads, enable the parallelProcessing option, as follows:

from("direct:start")
    .aggregate(header("id"), new UseLatestAggregationStrategy())
        .completionTimeout(3000)
        .parallelProcessing()
    .to("mock:aggregated");

By default, this creates a pool with 10 worker threads.

If you want to exercise more control over the created thread pool, specify a custom java.util.concurrent.ExecutorService instance using the executorService option (in which case it is unnecessary to enable the parallelProcessing option).
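For example, a minimal sketch that supplies a custom thread pool (the pool size is illustrative):

// Java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService myPool = Executors.newFixedThreadPool(20);

from("direct:start")
    .aggregate(header("id"), new UseLatestAggregationStrategy())
        .completionTimeout(3000)
        // use the custom pool to send out completed aggregates
        .executorService(myPool)
    .to("mock:aggregated");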

The aggregator supports the following options:

Table 7.3. Aggregator Options

Option Default Description
correlationExpression   Mandatory Expression which evaluates the correlation key to use for aggregation. Exchanges that have the same correlation key are aggregated together. If the correlation key could not be evaluated, an Exception is thrown. You can disable this by using the ignoreInvalidCorrelationKeys option.
aggregationStrategy   Mandatory AggregationStrategy which is used to merge the incoming Exchange with the existing, already merged exchanges. At the first call the oldExchange parameter is null. On subsequent invocations the oldExchange contains the merged exchanges and newExchange is the new incoming Exchange. From Camel 2.9.2 onwards, the strategy can optionally be a TimeoutAwareAggregationStrategy implementation, which supports a timeout callback.
strategyRef   A reference to look up the AggregationStrategy in the Registry.
completionSize   Number of messages aggregated before the aggregation is complete. This option can be set as either a fixed value or using an Expression which allows you to evaluate a size dynamically - the Integer result is used. If both are set, Camel falls back to the fixed value if the Expression result was null or 0.
completionTimeout   Time in millis that an aggregated exchange should be inactive before it is complete. This option can be set as either a fixed value or using an Expression which allows you to evaluate a timeout dynamically - the Long result is used. If both are set, Camel falls back to the fixed value if the Expression result was null or 0. You cannot use this option together with completionInterval; only one of the two can be used.
completionInterval   A repeating period in millis by which the aggregator completes all current aggregated exchanges. Camel has a background task which is triggered every period. You cannot use this option together with completionTimeout; only one of them can be used.
completionPredicate   A Predicate to indicate when an aggregated exchange is complete.
completionFromBatchConsumer false Use this option if the exchanges are coming from a Batch Consumer. When enabled, the Aggregator uses the batch size determined by the Batch Consumer in the message header CamelBatchSize. See more details at Batch Consumer. This can be used to aggregate all files consumed from a File in EIP Component Reference endpoint in a given poll.
eagerCheckCompletion false Whether or not to eagerly check for completion when a new incoming Exchange has been received. This option influences the behavior of the completionPredicate option, as the Exchange being passed in changes accordingly. When false, the Exchange passed in the Predicate is the aggregated Exchange, which means any information you may store on the aggregated Exchange from the AggregationStrategy is available for the Predicate. When true, the Exchange passed in the Predicate is the incoming Exchange, which means you can access data from the incoming Exchange.
forceCompletionOnStop false If true, complete all aggregated exchanges when the current route context is stopped.
groupExchanges false If enabled, Camel groups all aggregated Exchanges into a single combined org.apache.camel.impl.GroupedExchange holder class that holds all the aggregated Exchanges. As a result, only one Exchange is sent out from the aggregator. This can be used to combine many incoming Exchanges into a single output Exchange without coding a custom AggregationStrategy yourself.
ignoreInvalidCorrelationKeys false Whether or not to ignore correlation keys which could not be evaluated to a value. By default Camel throws an Exception, but you can enable this option to ignore the situation instead.
closeCorrelationKeyOnCompletion   Whether or not late Exchanges should be accepted. You can enable this to indicate that if a correlation key has already been completed, any new exchanges with the same correlation key are denied. Camel then throws a ClosedCorrelationKeyException exception. When using this option you pass in an integer, which is the size of an LRUCache that keeps the last X number of closed correlation keys. You can pass in 0 or a negative value to indicate an unbounded cache. By passing in a number you ensure that the cache does not grow too big if you use a lot of different correlation keys.
discardOnCompletionTimeout false Camel 2.5: Whether or not exchanges which complete due to a timeout should be discarded. If enabled, when a timeout occurs the aggregated message is not sent out but dropped (discarded).
aggregationRepository   Allows you to plug in your own implementation of org.apache.camel.spi.AggregationRepository which keeps track of the current inflight aggregated exchanges. By default, Camel uses a memory-based implementation.
aggregationRepositoryRef   Reference to look up an aggregationRepository in the Registry.
parallelProcessing false When aggregated exchanges are completed, they are sent out of the aggregator. This option indicates whether or not Camel should use a thread pool with multiple threads for concurrency. If no custom thread pool has been specified, Camel creates a default pool with 10 concurrent threads.
executorService   If using parallelProcessing, you can specify a custom thread pool to be used. In fact, even if you are not using parallelProcessing, this custom thread pool is used to send out aggregated exchanges.
executorServiceRef   Reference to look up an executorService in the Registry.
timeoutCheckerExecutorService   If using one of the completionTimeout, completionTimeoutExpression, or completionInterval options, a background thread is created to check for the completion for every aggregator. Set this option to provide a custom thread pool to be used rather than creating a new thread for every aggregator.
timeoutCheckerExecutorServiceRef   Reference to look up a timeoutCheckerExecutorService in the Registry.

The batch resequencing algorithm is enabled by default. For example, to resequence a batch of incoming messages based on the value of a timestamp contained in the TimeStamp header, you can define the following route in Java DSL:

from("direct:start").resequence(header("TimeStamp")).to("mock:result");

By default, the batch is obtained by collecting all of the incoming messages that arrive in a time interval of 1000 milliseconds (default batch timeout), up to a maximum of 100 messages (default batch size). You can customize the values of the batch timeout and the batch size by appending a batch() DSL command, which takes a BatchResequencerConfig instance as its sole argument. For example, to modify the preceding route so that the batch consists of messages collected in a 4000 millisecond time window, up to a maximum of 300 messages, you can define the Java DSL route as follows:

import org.apache.camel.model.config.BatchResequencerConfig;

RouteBuilder builder = new RouteBuilder() {
    public void configure() {
        from("direct:start").resequence(header("TimeStamp")).batch(new BatchResequencerConfig(300,4000L)).to("mock:result");
    }
};

You can also use multiple expressions to sort messages in a batch. For example, if you want to sort incoming messages, first, according to their JMS priority (as recorded in the JMSPriority header) and second, according to the value of their time stamp (as recorded in the TimeStamp header), you can define a route like the following:

from("direct:start").resequence(header("JMSPriority"), header("TimeStamp")).to("mock:result");

In this case, messages with the highest priority (that is, low JMS priority number) are moved to the front of the batch. If more than one message has the highest priority, the highest priority messages are put in order according to the value of the TimeStamp header.

You can also specify a batch resequencer pattern using XML configuration. The following example defines a batch resequencer with a batch size of 300 and a batch timeout of 4000 milliseconds:

<camelContext id="resequencerBatch" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:start" />
    <resequence>
      <!-- 
        batch-config can be omitted for default (batch) resequencer settings
      -->
      <batch-config batchSize="300" batchTimeout="4000" />
      <simple>header.TimeStamp</simple>
      <to uri="mock:result" />
    </resequence>
  </route>
</camelContext>

To enable the stream resequencing algorithm, you must append stream() to the resequence() DSL command. For example, to resequence incoming messages based on the value of a sequence number in the seqnum header, you define a DSL route as follows:

from("direct:start").resequence(header("seqnum")).stream().to("mock:result");

The stream-processing resequencer algorithm is based on the detection of gaps in a message stream, rather than on a fixed batch size. Gap detection, in combination with timeouts, removes the constraint of needing to know the number of messages of a sequence (that is, the batch size) in advance. Messages must contain a unique sequence number for which a predecessor and a successor is known. For example a message with the sequence number 3 has a predecessor message with the sequence number 2 and a successor message with the sequence number 4. The message sequence 2,3,5 has a gap because the successor of 3 is missing. The resequencer therefore must retain message 5 until message 4 arrives (or a timeout occurs).

By default, the stream resequencer is configured with a timeout of 1000 milliseconds, and a maximum message capacity of 100. To customize the stream's timeout and message capacity, you can pass a StreamResequencerConfig object as an argument to stream(). For example, to configure a stream resequencer with a message capacity of 5000 and a timeout of 4000 milliseconds, you define a route as follows:

// Java
import org.apache.camel.model.config.StreamResequencerConfig;

RouteBuilder builder = new RouteBuilder() {
    public void configure() {
        from("direct:start").resequence(header("seqnum")).
            stream(new StreamResequencerConfig(5000, 4000L)).
            to("mock:result");
    }
};

If the maximum time delay between successive messages (that is, messages with adjacent sequence numbers) in a message stream is known, the resequencer's timeout parameter should be set to this value. In this case, you can guarantee that all messages in the stream are delivered in the correct order to the next processor. If the timeout value is set lower than the maximum out-of-sequence time difference, the resequencer becomes more likely to deliver messages out of sequence. Large timeout values should be supported by sufficiently high capacity values, where the capacity parameter is used to prevent the resequencer from running out of memory.

If you want to use sequence numbers of some type other than long, you must define a custom comparator, as follows:

// Java
ExpressionResultComparator<Exchange> comparator = new MyComparator();
StreamResequencerConfig config = new StreamResequencerConfig(5000, 4000L, comparator);
from("direct:start").resequence(header("seqnum")).stream(config).to("mock:result");

You can also specify a stream resequencer pattern using XML configuration. The following example defines a stream resequencer with a message capacity of 5000 and a timeout of 4000 milliseconds:

<camelContext id="resequencerStream" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:start"/>
    <resequence>
      <stream-config capacity="5000" timeout="4000"/>
      <simple>header.seqnum</simple>
      <to uri="mock:result" />
    </resequence>
  </route>
</camelContext>

The routing slip pattern, shown in Figure 7.9, enables you to route a message consecutively through a series of processing steps, where the sequence of steps is not known at design time and can vary for each message. The list of endpoints through which the message should pass is stored in a header field (the slip), which Apache Camel reads at run time to construct a pipeline on the fly.
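For example, a minimal sketch in which an upstream step computes the slip and stores it in the myHeader header (the endpoint names in the slip are illustrative):

// Java
from("direct:start")
    // an upstream step computes the slip and stores it in myHeader
    .setHeader("myHeader", constant("direct:validate,direct:transform,direct:archive"))
    // the routing slip reads the header and routes the message through each endpoint in turn
    .routingSlip("myHeader");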


The Routing Slip now supports ignoreInvalidEndpoints, which the Recipient List pattern also supports. You can use it to skip endpoints that are invalid. For example:

    from("direct:a").routingSlip("myHeader").ignoreInvalidEndpoints();

In Spring XML, this feature is enabled by setting the ignoreInvalidEndpoints attribute on the <routingSlip> tag:

   <route>
       <from uri="direct:a"/>
       <routingSlip ignoreInvalidEndpoints="true">
         <headerName>myHeader</headerName>
       </routingSlip>
   </route>

Consider the case where myHeader contains the two endpoints, direct:foo,xxx:bar. The first endpoint is valid and works. The second is invalid and, therefore, ignored. Apache Camel logs at INFO level whenever an invalid endpoint is encountered.

The routingSlip DSL command supports the following options:

Name Default Value Description
uriDelimiter , Delimiter used if the Expression returned multiple endpoints.
ignoreInvalidEndpoints false Whether to ignore an endpoint URI that could not be resolved. If disabled, Camel throws an exception stating that the endpoint URI is not valid.

Available as of Camel 2.8. Since we use an Expression, you can adjust this value at runtime; for example, you can provide a header with the value. At runtime, Camel evaluates the expression and converts the result to a java.lang.Long type. In the example below, we use a header from the message to determine the maximum requests per period. If the header is absent, the Throttler uses the old value. This allows you to provide a header only when the value needs to change:

<camelContext id="throttleRoute" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:expressionHeader"/>
    <throttle timePeriodMillis="500">
      <!-- use a header to determine how many messages to throttle per 0.5 sec -->
      <header>throttleValue</header>
      <to uri="mock:result"/>
    </throttle>
  </route>
</camelContext>
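A minimal sketch of the equivalent route in the Java DSL (assuming the same throttleValue header):

// Java
from("direct:expressionHeader")
    // use a header to determine how many messages to throttle per 0.5 second
    .throttle(header("throttleValue")).timePeriodMillis(500)
    .to("mock:result");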

The throttle DSL command supports the following options:

Name Default Value Description
maximumRequestsPerPeriod Maximum number of requests per period to throttle. This option must be provided as a positive number. Notice that, in the XML DSL, from Camel 2.8 onwards this option is configured using an Expression instead of an attribute.
timePeriodMillis 1000 The time period in millis, in which the throttler allows at most maximumRequestsPerPeriod number of messages.
asyncDelayed false Camel 2.4: If enabled, any delayed messages are processed asynchronously using a scheduled thread pool.
executorServiceRef Camel 2.4: Refers to a custom Thread Pool to be used if asyncDelayed has been enabled.
callerRunsWhenRejected true Camel 2.4: Used if asyncDelayed was enabled. This controls whether the caller thread should execute the task if the thread pool rejected the task.

The delayer pattern supports the following options:

Name Default Value Description
asyncDelayed false Camel 2.4: If enabled, delayed messages are processed asynchronously using a scheduled thread pool.
executorServiceRef Camel 2.4: Refers to a custom Thread Pool to be used if asyncDelayed has been enabled.
callerRunsWhenRejected true Camel 2.4: Used if asyncDelayed was enabled. This controls whether the caller thread should execute the task if the thread pool rejected the task.
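For example, a minimal sketch of asynchronous delaying in the Java DSL (the queue names are illustrative):

// Java
from("activemq:queue:incoming")
    // delay each message by 3 seconds without blocking the caller thread
    .delay(3000).asyncDelayed()
    .to("activemq:queue:delayed");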

Available as of Apache Camel 2.0 The failover load balancer is capable of trying the next processor in case an Exchange failed with an exception during processing. You can configure the failover with a list of specific exceptions that trigger failover. If you do not specify any exceptions, failover is triggered by any exception. The failover load balancer uses the same strategy for matching exceptions as the onException exception clause.

[Important]Enable stream caching if using streams

If you use streaming, you should enable Stream Caching when using the failover load balancer. This is needed so the stream can be re-read when failing over.

The failover load balancer supports the following options:

Option Type Default Description
inheritErrorHandler boolean true

Camel 2.3: Specifies whether to use the errorHandler configured on the route. If you want to fail over immediately to the next endpoint, you should disable this option (value of false). If you enable this option, Apache Camel will first attempt to process the message using the errorHandler.

For example, the errorHandler might be configured to redeliver messages and use delays between attempts. Apache Camel will initially try to redeliver to the original endpoint, and only fail over to the next endpoint when the errorHandler is exhausted.

maximumFailoverAttempts int -1

Camel 2.3: Specifies the maximum number of attempts to fail over to a new endpoint. The value, 0, implies that no failover attempts are made and the value, -1, implies an infinite number of failover attempts.

roundRobin boolean false

Camel 2.3: Specifies whether the failover load balancer should operate in round robin mode or not. If not, it will always start from the first endpoint when a new message is to be processed. In other words it restarts from the top for every message. If round robin is enabled, it keeps state and continues with the next endpoint in a round robin fashion. When using round robin it will not stick to last known good endpoint, it will always pick the next endpoint to use.

The following example is configured to fail over, only if an IOException exception is thrown:

from("direct:start")
    // here we will load balance if an IOException was thrown;
    // any other kind of exception will result in the Exchange failing.
    // To fail over on any kind of exception, simply omit the exception
    // class in the failover DSL
    .loadBalance().failover(IOException.class)
        .to("direct:x", "direct:y", "direct:z");

You can optionally specify multiple exceptions to fail over, as follows:

// enable redelivery so failover can react
errorHandler(defaultErrorHandler().maximumRedeliveries(5));

from("direct:foo")
    .loadBalance()
    .failover(IOException.class, MyOtherException.class)
    .to("direct:a", "direct:b");

You can configure the same route in XML, as follows:

<route errorHandlerRef="myErrorHandler">
    <from uri="direct:foo"/>
    <loadBalance>
        <failover>
            <exception>java.io.IOException</exception>
            <exception>com.mycompany.MyOtherException</exception>
        </failover>
        <to uri="direct:a"/>
        <to uri="direct:b"/>
    </loadBalance>
</route>

The following example shows how to fail over in round robin mode:

from("direct:start")
    // Use failover load balancer in stateful round robin mode,
    // which means it will fail over immediately in case of an exception
    // as it does NOT inherit error handler. It will also keep retrying, as
    // it is configured to retry indefinitely.
    .loadBalance().failover(-1, false, true)
    .to("direct:bad", "direct:bad2", "direct:good", "direct:good2");

You can configure the same route in XML, as follows:

<route>
    <from uri="direct:start"/>
    <loadBalance>
        <!-- failover using stateful round robin,
        which will keep retrying the 4 endpoints indefinitely.
        You can set the maximumFailoverAttempts option to break out after X attempts -->
        <failover roundRobin="true"/>
        <to uri="direct:bad"/>
        <to uri="direct:bad2"/>
        <to uri="direct:good"/>
        <to uri="direct:good2"/>
    </loadBalance>
</route>

In many enterprise environments, where server nodes of unequal processing power are hosting services, it is usually preferable to distribute the load in accordance with the individual server processing capacities. A weighted round robin algorithm or a weighted random algorithm can be used to address this problem.

The weighted load balancing policy allows you to specify a processing load distribution ratio for each server with respect to the others. You can specify this value as a positive processing weight for each server. A larger number indicates that the server can handle a larger load. The processing weight is used to determine the payload distribution ratio of each processing endpoint with respect to the others.

The parameters that can be used are roundRobin, distributionRatio, and distributionRatioDelimiter, as shown in the following examples.


The following Java DSL examples show how to define a weighted round-robin route and a weighted random route:

// Java
// round-robin
from("direct:start")
  .loadBalance().weighted(true, "4:2:1", ":")
  .to("mock:x", "mock:y", "mock:z");

//random
from("direct:start")
  .loadBalance().weighted(false, "4,2,1")
  .to("mock:x", "mock:y", "mock:z");

You can configure the round-robin route in XML, as follows:

<!-- round-robin -->
<route>
  <from uri="direct:start"/>
  <loadBalance>
    <weighted roundRobin="true" distributionRatio="4:2:1" distributionRatioDelimiter=":" />
    <to uri="mock:x"/>
    <to uri="mock:y"/>
    <to uri="mock:z"/>
  </loadBalance>
</route>

You can also use a custom load balancer (for example, your own implementation).

An example using Java DSL:

from("direct:start")
     // using our custom load balancer
     .loadBalance(new MyLoadBalancer())
     .to("mock:x", "mock:y", "mock:z");

And the same example using XML DSL:

<!-- this is the implementation of our custom load balancer -->
 <bean id="myBalancer" class="org.apache.camel.processor.CustomLoadBalanceTest$MyLoadBalancer"/>
 
 <camelContext xmlns="http://camel.apache.org/schema/spring">
   <route>
     <from uri="direct:start"/>
     <loadBalance>
       <!-- refer to my custom load balancer -->
       <custom ref="myBalancer"/>
       <!-- these are the endpoints to balance -->
       <to uri="mock:x"/>
       <to uri="mock:y"/>
       <to uri="mock:z"/>
     </loadBalance>
   </route>
 </camelContext>

Notice that in the XML DSL above we use <custom>, which is only available from Camel 2.8 onwards. In older releases, you would have to do as follows instead:

       <loadBalance ref="myBalancer">
         <!-- these are the endpoints to balance -->
         <to uri="mock:x"/>
         <to uri="mock:y"/>
         <to uri="mock:z"/>
       </loadBalance>

To implement a custom load balancer you can extend some support classes such as LoadBalancerSupport and SimpleLoadBalancerSupport. The former supports the asynchronous routing engine, and the latter does not. Here is an example:

public static class MyLoadBalancer extends LoadBalancerSupport {
 
     public boolean process(Exchange exchange, AsyncCallback callback) {
         String body = exchange.getIn().getBody(String.class);
         try {
             if ("x".equals(body)) {
                 getProcessors().get(0).process(exchange);
             } else if ("y".equals(body)) {
                 getProcessors().get(1).process(exchange);
             } else {
                 getProcessors().get(2).process(exchange);
             }
         } catch (Throwable e) {
             exchange.setException(e);
         }
         callback.done(true);
         return true;
     }
 }

The multicast pattern, shown in Figure 7.10, is a variation of the recipient list with a fixed destination pattern, which is compatible with the InOut message exchange pattern. This is in contrast to recipient list, which is only compatible with the InOnly exchange pattern.


Whereas the multicast processor receives multiple Out messages in response to the original request (one from each of the recipients), the original caller is only expecting to receive a single reply. Thus, there is an inherent mismatch on the reply leg of the message exchange, and to overcome this mismatch, you must provide a custom aggregation strategy to the multicast processor. The aggregation strategy class is responsible for aggregating all of the Out messages into a single reply message.

Consider the example of an electronic auction service, where a seller offers an item for sale to a list of buyers. The buyers each put in a bid for the item, and the seller automatically selects the bid with the highest price. You can implement the logic for distributing an offer to a fixed list of buyers using the multicast() DSL command, as follows:

from("cxf:bean:offer").multicast(new HighestBidAggregationStrategy()).
    to("cxf:bean:Buyer1", "cxf:bean:Buyer2", "cxf:bean:Buyer3");

Where the seller is represented by the endpoint, cxf:bean:offer, and the buyers are represented by the endpoints, cxf:bean:Buyer1, cxf:bean:Buyer2, cxf:bean:Buyer3. To consolidate the bids received from the various buyers, the multicast processor uses the aggregation strategy, HighestBidAggregationStrategy. You can implement the HighestBidAggregationStrategy in Java, as follows:

// Java
import org.apache.camel.processor.aggregate.AggregationStrategy;
import org.apache.camel.Exchange;

public class HighestBidAggregationStrategy implements AggregationStrategy {
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // on the very first invocation, oldExchange is null, so keep the first bid
        if (oldExchange == null) {
            return newExchange;
        }
        float oldBid = oldExchange.getOut().getHeader("Bid", Float.class);
        float newBid = newExchange.getOut().getHeader("Bid", Float.class);
        return (newBid > oldBid) ? newExchange : oldExchange;
    }
    }
}

Where it is assumed that the buyers insert the bid price into a header named, Bid. For more details about custom aggregation strategies, see Aggregator.

By default, the multicast processor invokes each of the recipient endpoints one after another (in the order listed in the to() command). In some cases, this might cause unacceptably long latency. To avoid these long latency times, you have the option of enabling parallel processing by adding the parallelProcessing() clause. For example, to enable parallel processing in the electronic auction example, define the route as follows:

from("cxf:bean:offer")
    .multicast(new HighestBidAggregationStrategy())
        .parallelProcessing()
        .to("cxf:bean:Buyer1", "cxf:bean:Buyer2", "cxf:bean:Buyer3");

Where the multicast processor now invokes the buyer endpoints, using a thread pool that has one thread for each of the endpoints.

If you want to customize the size of the thread pool that invokes the buyer endpoints, you can invoke the executorService() method to specify your own custom executor service. For example:

from("cxf:bean:offer")
    .multicast(new HighestBidAggregationStrategy())
        .executorService(MyExecutor)
        .to("cxf:bean:Buyer1", "cxf:bean:Buyer2", "cxf:bean:Buyer3");

Where MyExecutor is an instance of java.util.concurrent.ExecutorService type.

When the exchange has an InOut pattern, an aggregation strategy is used to aggregate reply messages. The default aggregation strategy takes the latest reply message and discards earlier replies. For example, in the following route, the custom strategy, MyAggregationStrategy, is used to aggregate the replies from the endpoints, direct:a, direct:b, and direct:c:

from("direct:start")
  .multicast(new MyAggregationStrategy())
      .parallelProcessing()
      .timeout(500)
      .to("direct:a", "direct:b", "direct:c")
  .end()
  .to("mock:result");

Before multicast sends a message to one of the recipient endpoints, it creates a message replica, which is a shallow copy of the original message. If you want to perform some custom processing on each message replica before the replica is sent to its endpoint, you can invoke the onPrepare DSL command in the multicast clause. The onPrepare command inserts a custom processor just after the message has been shallow-copied and just before the message is dispatched to its endpoint. For example, in the following route, the CustomProc processor is invoked on the message sent to direct:a and the CustomProc processor is also invoked on the message sent to direct:b.

from("direct:start")
  .multicast().onPrepare(new CustomProc())
  .to("direct:a").to("direct:b");

A common use case for the onPrepare DSL command is to perform a deep copy of some or all elements of a message. For example, the following CustomProc processor class performs a deep copy of the message body, where the message body is presumed to be of type, BodyType, and the deep copy is performed by the method, BodyType.deepCopy().

// Java
import org.apache.camel.*;
...
public class CustomProc implements Processor {

    public void process(Exchange exchange) throws Exception {
        BodyType body = exchange.getIn().getBody(BodyType.class);

        // Make a _deep_ copy of the body object
        BodyType clone = body.deepCopy();
        exchange.getIn().setBody(clone);

        // Headers and attachments have already been
        // shallow-copied. If you need deep copies,
        // add some more code here.
    }
}
[Note]Note

Although the multicast syntax allows you to invoke the process DSL command in the multicast clause, this does not make sense semantically and it does not have the same effect as onPrepare (in fact, in this context, the process DSL command has no effect).

The Multicast will copy the source Exchange and multicast each copy. However, the copy is a shallow copy, so if you have mutable message bodies, any changes will be visible to the other copied messages. If you want to use a deep clone copy, you need to use a custom onPrepare, which allows you to do this using the Processor interface.

Notice the onPrepare can be used for any kind of custom logic which you would like to execute before the Exchange is being multicasted.

[Note]Note

It is best practice to design for immutable objects.

For example if you have a mutable message body as this Animal class:

public class Animal implements Serializable {
 
     private int id;
     private String name;
 
     public Animal() {
     }
 
     public Animal(int id, String name) {
         this.id = id;
         this.name = name;
     }
 
     public Animal deepClone() {
         Animal clone = new Animal();
         clone.setId(getId());
         clone.setName(getName());
         return clone;
     }
 
     public int getId() {
         return id;
     }
 
     public void setId(int id) {
         this.id = id;
     }
 
     public String getName() {
         return name;
     }
 
     public void setName(String name) {
         this.name = name;
     }
 
     @Override
     public String toString() {
         return id + " " + name;
     }
 }

Then we can create a deep clone processor which clones the message body:

public class AnimalDeepClonePrepare implements Processor {
 
     public void process(Exchange exchange) throws Exception {
         Animal body = exchange.getIn().getBody(Animal.class);
 
         // do a deep clone of the body so that changes do not affect the other multicast copies
         Animal clone = body.deepClone();
         exchange.getIn().setBody(clone);
     }
 }

Then we can use the AnimalDeepClonePrepare class in the Multicast route using the onPrepare option as shown:

from("direct:start")
     .multicast().onPrepare(new AnimalDeepClonePrepare()).to("direct:a").to("direct:b");

And the same example in XML DSL

<camelContext xmlns="http://camel.apache.org/schema/spring">
     <route>
         <from uri="direct:start"/>
         <!-- use on prepare with multicast -->
         <multicast onPrepareRef="animalDeepClonePrepare">
             <to uri="direct:a"/>
             <to uri="direct:b"/>
         </multicast>
     </route>
 
     <route>
         <from uri="direct:a"/>
         <process ref="processorA"/>
         <to uri="mock:a"/>
     </route>
     <route>
         <from uri="direct:b"/>
         <process ref="processorB"/>
         <to uri="mock:b"/>
     </route>
 </camelContext>
 
 <!-- the on prepare Processor which performs the deep cloning -->
 <bean id="animalDeepClonePrepare" class="org.apache.camel.processor.AnimalDeepClonePrepare"/>
 
 <!-- processors used for the last two routes, as part of unit test -->
 <bean id="processorA" class="org.apache.camel.processor.MulticastOnPrepareTest$ProcessorA"/>
 <bean id="processorB" class="org.apache.camel.processor.MulticastOnPrepareTest$ProcessorB"/>

The multicast DSL command supports the following options:

Name Default Value Description
strategyRef Refers to an AggregationStrategy to be used to assemble the replies from the multicasts into a single outgoing message from the Multicast. By default Camel uses the last reply as the outgoing message.
parallelProcessing false If enabled, then sending messages to the multicasts occurs concurrently. Note that the caller thread still waits until all messages have been fully processed before it continues. It is only the sending and processing of the replies from the multicasts that happens concurrently.
executorServiceRef Refers to a custom Thread Pool to be used for parallel processing. Notice that if you set this option, parallel processing is automatically implied, and you do not have to enable that option as well.
stopOnException false Camel 2.2: Whether or not to stop processing immediately when an exception occurs. If disabled, Camel sends the message to all multicasts regardless of whether one of them failed. You can deal with exceptions in the AggregationStrategy class, where you have full control over how to handle them.
streaming false If enabled, Camel processes replies out-of-order, that is, in the order they come back. If disabled, Camel processes replies in the same order as multicasted.
timeout Camel 2.5: Sets a total timeout specified in millis. If the Multicast has not been able to send and process all replies within the given timeframe, the timeout triggers and the Multicast breaks out and continues. Notice that if you provide a TimeoutAwareAggregationStrategy, the timeout method is invoked before breaking out.
onPrepareRef Camel 2.8: Refers to a custom Processor to prepare the copy of the Exchange each multicast will receive. This allows you to do any custom logic, such as deep-cloning the message payload if that is needed.
shareUnitOfWork false Camel 2.8: Whether the unit of work should be shared. See the same option on Splitter for more details.

The composed message processor pattern, as shown in Figure 7.11, allows you to process a composite message by splitting it up, routing the sub-messages to appropriate destinations, and then re-aggregating the responses back into a single message.


Processing starts by splitting the order, using a Splitter. The Splitter then sends individual OrderItems to a Content Based Router, which routes messages based on the item type. Widget items get sent for checking in the widgetInventory bean and gadget items get sent to the gadgetInventory bean. Once these OrderItems have been validated by the appropriate bean, they are sent on to the Aggregator which collects and re-assembles the validated OrderItems into an order again.

Each received order has a header containing an order ID. We make use of the order ID during the aggregation step: the .header("orderId") qualifier on the aggregate() DSL command instructs the aggregator to use the header with the key, orderId, as the correlation expression.
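The example source is not reproduced here, but a rough Java DSL sketch of the route described above might look like the following. The widgetInventory and gadgetInventory beans and the orderId correlation header come from the description above; the order item class, the aggregation strategy, and the intermediate endpoint names are illustrative assumptions.

from("direct:start")
    // split the composite order into individual OrderItem sub-messages
    .split(body())
        // content-based routing on the item type
        .choice()
            .when(body().isInstanceOf(WidgetItem.class)).to("bean:widgetInventory")
            .otherwise().to("bean:gadgetInventory")
        .end()
        // each validated OrderItem is forwarded to the aggregation route
        .to("direct:aggregate");

from("direct:aggregate")
    // re-assemble the validated items, correlating on the orderId header
    .aggregate(header("orderId"), new MyOrderAggregationStrategy())
        .completionTimeout(1000)
    .to("mock:result");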

For full details, check the example source here:

The scatter-gather pattern, as shown in Figure 7.12, enables you to route messages to a number of dynamically specified recipients and re-aggregate the responses back into a single message.


The following example outlines an application that gets the best quote for beer from several different vendors. The example uses a dynamic Recipient List to request a quote from all vendors and an Aggregator to pick the best quote out of all the responses. The routes for this application are defined as follows:

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:start"/>
    <recipientList>
      <header>listOfVendors</header>
    </recipientList>
  </route>
  <route>
    <from uri="seda:quoteAggregator"/>
    <aggregate strategyRef="aggregatorStrategy" completionTimeout="1000">
      <correlationExpression>
        <header>quoteRequestId</header>
      </correlationExpression>
      <to uri="mock:result"/>
    </aggregate>
  </route>
</camelContext>

In the first route, the Recipient List looks at the listOfVendors header to obtain the list of recipients. Hence, the client that sends messages to this application needs to add a listOfVendors header to the message. Example 7.1 shows some sample code from a messaging client that adds the relevant header data to outgoing messages.
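Example 7.1 is not reproduced here, but as a rough sketch (not the original example code), a client might set the required headers using a ProducerTemplate along the following lines. The endpoint, body, and header values are taken from the surrounding description.

// assumes a started CamelContext is available as camelContext
ProducerTemplate template = camelContext.createProducerTemplate();

Map<String, Object> headers = new HashMap<String, Object>();
// the recipients for the dynamic Recipient List
headers.put("listOfVendors", "bean:vendor1, bean:vendor2, bean:vendor3");
// the correlation ID used by the Aggregator (should be unique per request)
headers.put("quoteRequestId", "quoteRequest-1");

template.sendBodyAndHeaders("direct:start", "<quote_request item=\"beer\"/>", headers);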


The message would be distributed to the following endpoints: bean:vendor1, bean:vendor2, and bean:vendor3. These beans are all implemented by the following class:

import org.apache.camel.Exchange;
import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.language.XPath;

public class MyVendor {
    private int beerPrice;
    
    @Produce(uri = "seda:quoteAggregator")
    private ProducerTemplate quoteAggregator;
            
    public MyVendor(int beerPrice) {
        this.beerPrice = beerPrice;
    }
        
    public void getQuote(@XPath("/quote_request/@item") String item, Exchange exchange) throws Exception {
        if ("beer".equals(item)) {
            exchange.getIn().setBody(beerPrice);
            quoteAggregator.send(exchange);
        } else {
            throw new Exception("No quote available for " + item);
        }
    }
}

The bean instances, vendor1, vendor2, and vendor3, are instantiated using Spring XML syntax, as follows:

<bean id="aggregatorStrategy" class="org.apache.camel.spring.processor.scattergather.LowestQuoteAggregationStrategy"/>

<bean id="vendor1" class="org.apache.camel.spring.processor.scattergather.MyVendor">
  <constructor-arg>
    <value>1</value>
  </constructor-arg>
</bean>

<bean id="vendor2" class="org.apache.camel.spring.processor.scattergather.MyVendor">
  <constructor-arg>
    <value>2</value>
  </constructor-arg>
</bean>

<bean id="vendor3" class="org.apache.camel.spring.processor.scattergather.MyVendor">
  <constructor-arg>
    <value>3</value>
  </constructor-arg>
</bean>

Each bean is initialized with a different price for beer (passed to the constructor argument). When a message is sent to each bean endpoint, it arrives at the MyVendor.getQuote method. This method does a simple check to see whether this quote request is for beer and then sets the price of beer on the exchange for retrieval at a later step. The message is forwarded to the next step using POJO Producing (see the @Produce annotation).

At the next step, we want to take the beer quotes from all vendors and find out which one was the best (that is, the lowest). For this, we use an Aggregator with a custom aggregation strategy. The Aggregator needs to identify which messages are relevant to the current quote, which is done by correlating messages based on the value of the quoteRequestId header (passed to the correlationExpression). As shown in Example 7.1, the correlation ID is set to quoteRequest-1 (the correlation ID should be unique). To pick the lowest quote out of the set, you can use a custom aggregation strategy like the following:

public class LowestQuoteAggregationStrategy implements AggregationStrategy {
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // the first time we only have the new exchange
        if (oldExchange == null) {
            return newExchange;
        }

        if (oldExchange.getIn().getBody(int.class) < newExchange.getIn().getBody(int.class)) {
            return oldExchange;
        } else {
            return newExchange;
        }
    }
}

You can specify the recipients explicitly in the scatter-gather application by employing a static Recipient List. The following example shows the routes you would use to implement a static scatter-gather scenario:

from("direct:start").multicast().to("seda:vendor1", "seda:vendor2", "seda:vendor3");

from("seda:vendor1").to("bean:vendor1").to("seda:quoteAggregator");
from("seda:vendor2").to("bean:vendor2").to("seda:quoteAggregator");
from("seda:vendor3").to("bean:vendor3").to("seda:quoteAggregator");

from("seda:quoteAggregator")
    .aggregate(header("quoteRequestId"), new LowestQuoteAggregationStrategy()).to("mock:result");

The loop pattern enables you to process a message multiple times. It is used mainly for testing.

[Important]Default mode

Note that by default the loop uses the same exchange throughout the looping, so the result of the previous iteration is used as input for the next (as in Pipes and Filters). From Camel 2.8 onwards you can enable copy mode instead. See the options table for more details.

On each loop iteration, two exchange properties are set, which can optionally be read by any processors included in the loop.

Property Description
CamelLoopSize Apache Camel 2.0: Total number of loops
CamelLoopIndex Apache Camel 2.0: Index of the current iteration (0 based)

The loop DSL command supports the following options:

Name Default Value Description
copy false Camel 2.8: Whether copy mode is used. If false, the same Exchange is used throughout the looping, so the result of the previous iteration is visible to the next. If you enable copy mode instead, each iteration restarts with a fresh copy of the input Exchange.
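
As a minimal sketch of the loop DSL (the endpoints here are illustrative assumptions), the following route processes the same exchange three times before passing the result on:

from("direct:start")
    // iterate three times; CamelLoopIndex and CamelLoopSize are set on each pass
    .loop(3)
        .to("direct:doSomething")
    .end()
    .to("mock:result");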

The sample DSL command supports the following options:

Name Default Value Description
messageFrequency Samples every N'th message. You can use either frequency or period, but not both.
samplePeriod 1 Samples one message per sample period. You can use either frequency or period, but not both.
units SECOND Time unit as an enum of java.util.concurrent.TimeUnit from the JDK.
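
As a minimal sketch of the sample DSL (the endpoints are illustrative assumptions), the following route lets at most one message per second pass downstream and drops the rest during each sample period:

from("seda:incoming")
    // one message per one-second sample period (uses java.util.concurrent.TimeUnit)
    .sample(1, TimeUnit.SECONDS)
    .to("mock:result");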

The Dynamic Router pattern, as shown in Figure 7.13, enables you to route a message consecutively through a series of processing steps, where the sequence of steps is not known at design time. The list of endpoints through which the message should pass is calculated dynamically at run time. Each time the message returns from an endpoint, the dynamic router calls back on a bean to discover the next endpoint in the route.


In Camel 2.5 we introduced a dynamicRouter in the DSL, which is like a dynamic Routing Slip that evaluates the slip on-the-fly.

[Warning]Beware

You must ensure that the expression used for the dynamicRouter (such as a bean) returns null to indicate the end. Otherwise, the dynamicRouter will continue in an endless loop.

From Camel 2.5, the Dynamic Router updates the exchange property, Exchange.SLIP_ENDPOINT, with the current endpoint as it advances through the slip. This enables you to find out how far the exchange has progressed through the slip. (It's a slip because the Dynamic Router implementation is based on Routing Slip).

In Java DSL you can use the dynamicRouter as follows:

from("direct:start")
    // use a bean as the dynamic router
    .dynamicRouter(bean(DynamicRouterTest.class, "slip"));

This route uses a bean (see Bean in EIP Component Reference) to compute the slip on-the-fly. The bean could be implemented as follows:

// Java
// (the bodies list and the invoked counter are fields of the enclosing test class)
/**
 * Use this method to compute dynamically where we should route next.
 *
 * @param body the message body
 * @return endpoints to go to, or <tt>null</tt> to indicate the end
 */
public String slip(String body) {
    bodies.add(body);
    invoked++;

    if (invoked == 1) {
        return "mock:a";
    } else if (invoked == 2) {
        return "mock:b,mock:c";
    } else if (invoked == 3) {
        return "direct:foo";
    } else if (invoked == 4) {
        return "mock:result";
    }

    // no more so return null
    return null;
}
[Note]Note

The preceding example is not thread safe. You would have to store the state on the Exchange to ensure thread safety.

The dynamicRouter DSL command supports the following options:

Name Default Value Description
uriDelimiter , Delimiter used if the Expression returned multiple endpoints.
ignoreInvalidEndpoints false Whether to ignore an endpoint URI that could not be resolved. If not ignored, Camel will throw an exception stating that the endpoint URI is not valid.

The enrich DSL command supports the following options:

Name Default Value Description
uri The endpoint URI for the external service to enrich from. You must use either uri or ref.
ref Refers to the endpoint for the external service to enrich from. You must use either uri or ref.
strategyRef Refers to an AggregationStrategy to be used to merge the reply from the external service, into a single outgoing message. By default Camel will use the reply from the external service as outgoing message.

The pollEnrich DSL command supports the following options:

Name Default Value Description
uri The endpoint URI for the external service to enrich from. You must use either uri or ref.
ref Refers to the endpoint for the external service to enrich from. You must use either uri or ref.
strategyRef Refers to an AggregationStrategy to be used to merge the reply from the external service, into a single outgoing message. By default Camel will use the reply from the external service as outgoing message.
timeout 0 Timeout in millis to use when polling from the external service. See below for important details about the timeout.
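
As a minimal sketch of the two commands in the Java DSL (the endpoints and the MyMergeAggregationStrategy class are illustrative assumptions), enrich sends the current exchange to the external resource, whereas pollEnrich polls the resource for data:

from("direct:start")
    // request/reply against the external resource, then merge the reply with the original message
    .enrich("direct:resource", new MyMergeAggregationStrategy())
    .to("mock:result");

from("direct:poll")
    // poll the resource (waiting up to 5 seconds) and merge the polled data with the original message
    .pollEnrich("file:inbox?fileName=data.txt", 5000, new MyMergeAggregationStrategy())
    .to("mock:result");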

This example shows a Message Normalizer that converts two types of XML messages into a common format. Messages in this common format are then filtered.

Using the Fluent Builders

// we need to normalize two types of incoming messages
from("direct:start")
    .choice()
        .when().xpath("/employee").to("bean:normalizer?method=employeeToPerson")
        .when().xpath("/customer").to("bean:normalizer?method=customerToPerson")
    .end()
    .to("mock:result");

In this case we're using a Java bean as the normalizer. The class looks like this:

// Java
import org.apache.camel.Exchange;
import org.apache.camel.language.XPath;

public class MyNormalizer {
    public void employeeToPerson(Exchange exchange, @XPath("/employee/name/text()") String name) {
        exchange.getOut().setBody(createPerson(name));
    }

    public void customerToPerson(Exchange exchange, @XPath("/customer/@name") String name) {
        exchange.getOut().setBody(createPerson(name));
    }

    private String createPerson(String name) {
        return "<person name=\"" + name + "\"/>";
    }
}

The claim check pattern, shown in Figure 8.4, allows you to replace message content with a claim check (a unique key), which can be used to retrieve the message content at a later time. The message content is stored temporarily in a persistent store like a database or file system. This pattern is very useful when message content is very large (thus it would be expensive to send around) and not all components require all information.

It can also be useful in situations where you cannot trust the information with an outside party; in this case, you can use the Claim Check to hide the sensitive portions of data.


The example route is just a Pipeline. In a real application, you would substitute some other steps for the mock:testCheckpoint endpoint.
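
The example route itself is not reproduced here. As a rough sketch of the claim check idea (not the referenced example), the following RouteBuilder stores the full message body in a simple in-memory map, replaces the body with a generated key, and restores the original body after the mock:testCheckpoint step. A real application would use a database or file system instead of the map, and all names apart from mock:testCheckpoint are assumptions.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class ClaimCheckRouteBuilder extends RouteBuilder {
    // toy in-memory claim check store; a real application would use a persistent store
    private final Map<String, Object> claimCheckStore = new ConcurrentHashMap<String, Object>();

    @Override
    public void configure() {
        from("direct:start")
            // check in: store the full body and replace it with the claim check key
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    String key = UUID.randomUUID().toString();
                    claimCheckStore.put(key, exchange.getIn().getBody());
                    exchange.getIn().setBody(key);
                }
            })
            // intermediate steps see only the small claim check
            .to("mock:testCheckpoint")
            // check out: retrieve the original body using the claim check key
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    String key = exchange.getIn().getBody(String.class);
                    exchange.getIn().setBody(claimCheckStore.remove(key));
                }
            })
            .to("mock:result");
    }
}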

The sort DSL command supports the following options:

Name Default Value Description
comparatorRef Refers to a custom java.util.Comparator to use for sorting the message body. By default Camel uses a comparator that does an A..Z sorting.
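
As a minimal sketch of the sort DSL (the endpoints are illustrative assumptions), the following route sorts the lines of the message body alphabetically; a custom java.util.Comparator could be passed as a second argument instead of relying on the default A..Z sorting:

from("direct:start")
    // split the body into lines and sort them using the default comparator
    .sort(body().tokenize("\n"))
    .to("mock:result");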

To use validate in the XML DSL, the recommended approach is to use the simple expression language:

<route>
  <from uri="jms:queue:incoming"/>
  <validate>
    <simple>${body} regex ^\\w{10}\\,\\d{2}\\,\\w{24}$</simple>
  </validate>
  <beanRef ref="myServiceBean" method="processLine"/>
</route>

<bean id="myServiceBean" class="com.mycompany.MyServiceBean"/>

You can also validate a message header—for example:

<route>
  <from uri="jms:queue:incoming"/>
  <validate>
    <simple>${in.header.bar} == 100</simple>
  </validate>
  <beanRef ref="myServiceBean" method="processLine"/>
</route>

<bean id="myServiceBean" class="com.mycompany.MyServiceBean"/>
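
For comparison, the equivalent routes in the Java DSL might look like the following sketch, assuming the same regular expression, header, and myServiceBean as in the XML examples above:

// validate the message body against a regular expression
from("jms:queue:incoming")
    .validate(body(String.class).regex("^\\w{10}\\,\\d{2}\\,\\w{24}$"))
    .beanRef("myServiceBean", "processLine");

// alternatively, validate a message header
from("jms:queue:incoming")
    .validate(header("bar").isEqualTo(100))
    .beanRef("myServiceBean", "processLine");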

The messaging mapper pattern describes how to map domain objects to and from a canonical message format, where the message format is chosen to be as platform neutral as possible. The chosen message format should be suitable for transmission through a message bus, where the message bus is the backbone for integrating a variety of different systems, some of which might not be object-oriented.

Many different approaches are possible, but not all of them fulfill the requirements of a messaging mapper. For example, an obvious way to transmit an object is to use object serialization, which enables you to write an object to a data stream using an unambiguous encoding (supported natively in Java). However, this is not a suitable approach for the messaging mapper pattern, because the serialization format is understood only by Java applications. Java object serialization creates an impedance mismatch between the original application and the other applications in the messaging system.

The requirements for a messaging mapper can be summarized as follows:

  • The canonical message format used to transmit domain objects should be suitable for consumption by non-object oriented applications.

  • The mapper code should be implemented separately from both the domain object code and the messaging infrastructure. Apache Camel helps fulfill this requirement by providing hooks that can be used to insert mapper code into a route.

  • The mapper might need to find an effective way of dealing with certain object-oriented concepts such as inheritance, object references, and object trees. The complexity of these issues varies from application to application, but the aim of the mapper implementation must always be to create messages that can be processed effectively by non-object-oriented applications.

You can use one of the following mechanisms to find the objects to map:

The polling consumer pattern, shown in Figure 9.2, is a pattern for implementing the consumer endpoint in an Apache Camel component, so it is only relevant to programmers who need to develop a custom component in Apache Camel. Existing components already have a consumer implementation pattern hard-wired into them.

Consumers that conform to this pattern expose the polling methods receive(), receive(long timeout), and receiveNoWait(), which return a new exchange object if one is available from the monitored resource. A polling consumer implementation must provide its own thread pool to perform the polling.

For more details about this implementation pattern, see Consumer Patterns and Threading in Programming EIP Components, Consumer Interface in Programming EIP Components, and Using the Consumer Template in Programming EIP Components.
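
Application code can poll in the same style through the ConsumerTemplate interface. The following is a minimal sketch, where the endpoint URI is an illustrative assumption and camelContext is assumed to be a started CamelContext:

ConsumerTemplate consumer = camelContext.createConsumerTemplate();

// wait up to 5 seconds for an exchange, returning null if the timeout expires
Exchange exchange = consumer.receive("activemq:queue:orders", 5000);

// or return immediately if no exchange is currently available
Exchange maybe = consumer.receiveNoWait("activemq:queue:orders");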


The competing consumers pattern, shown in Figure 9.3, enables multiple consumers to pull messages from the same queue, with the guarantee that each message is consumed once only. This pattern can be used to replace serial message processing with concurrent message processing (bringing a corresponding reduction in response latency).


The following components demonstrate the competing consumers pattern:

The purpose of the SEDA component is to simplify concurrent processing by breaking the computation into stages. A SEDA endpoint essentially encapsulates an in-memory blocking queue (implemented by java.util.concurrent.BlockingQueue). Therefore, you can use a SEDA endpoint to break a route into stages, where each stage might use multiple threads. For example, you can define a SEDA route consisting of two stages, as follows:

// Stage 1: Read messages from file system.
from("file://var/messages").to("seda:fanout");

// Stage 2: Perform concurrent processing (3 threads).
from("seda:fanout").to("cxf:bean:replica01");
from("seda:fanout").to("cxf:bean:replica02");
from("seda:fanout").to("cxf:bean:replica03");

Where the first stage contains a single thread that consumes messages from a file endpoint, file://var/messages, and routes them to a SEDA endpoint, seda:fanout. The second stage contains three threads: a thread that routes exchanges to cxf:bean:replica01, a thread that routes exchanges to cxf:bean:replica02, and a thread that routes exchanges to cxf:bean:replica03. These three threads compete to take exchange instances from the SEDA endpoint, which is implemented using a blocking queue. Because the blocking queue uses locking to prevent more than one thread from accessing the queue at a time, you are guaranteed that each exchange instance can be consumed only once.

For a discussion of the differences between a SEDA endpoint and a thread pool created by thread(), see SEDA in EIP Component Reference.

The message dispatcher pattern, shown in Figure 9.4, is used to consume messages from a channel and then distribute them locally to performers, which are responsible for processing the messages. In an Apache Camel application, performers are usually represented by in-process endpoints, which are used to transfer messages to another section of the route.


You can implement the message dispatcher pattern in Apache Camel using one of the following approaches:

If your application consumes messages from a JMS queue, you can implement the message dispatcher pattern using JMS selectors. A JMS selector is a predicate expression involving JMS headers and JMS properties. If the selector evaluates to true, the JMS message is allowed to reach the consumer, and if the selector evaluates to false, the JMS message is blocked. In many respects, a JMS selector is like a filter processor, but it has the additional advantage that the filtering is implemented inside the JMS provider. This means that a JMS selector can block messages before they are transmitted to the Apache Camel application. This provides a significant efficiency advantage.

In Apache Camel, you can define a JMS selector on a consumer endpoint by setting the selector query option on a JMS endpoint URI. For example:

from("jms:dispatcher?selector=CountryCode='US'").to("cxf:bean:replica01");
from("jms:dispatcher?selector=CountryCode='IE'").to("cxf:bean:replica02");
from("jms:dispatcher?selector=CountryCode='DE'").to("cxf:bean:replica03");

Where the predicates that appear in a selector string are based on a subset of the SQL92 conditional expression syntax (for full details, see the JMS specification). The identifiers appearing in a selector string can refer either to JMS headers or to JMS properties. For example, in the preceding routes, the sender sets a JMS property called CountryCode.

If you want to add a JMS property to a message from within your Apache Camel application, you can do so by setting a message header (either on In message or on Out messages). When reading or writing to JMS endpoints, Apache Camel maps JMS headers and JMS properties to, and from, its native message headers.
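
For example, a minimal sketch of a route that sets the CountryCode property on outgoing messages (the route endpoints are illustrative assumptions):

from("direct:start")
    // the CountryCode header is mapped to a JMS property when the message is sent
    .setHeader("CountryCode", constant("US"))
    .to("jms:dispatcher");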

Technically, the selector strings must be URL encoded according to the application/x-www-form-urlencoded MIME format (see the HTML specification). In practice, the & (ampersand) character might cause difficulties because it is used to delimit each query option in the URI. For more complex selector strings that might need to embed the & character, you can encode the strings using the java.net.URLEncoder utility class. For example:

from("jms:dispatcher?selector=" + java.net.URLEncoder.encode("CountryCode='US'","UTF-8")).
    to("cxf:bean:replica01");

Where the UTF-8 encoding must be used.

A durable subscriber, as shown in Figure 9.6, is a consumer that wants to receive all of the messages sent over a particular publish-subscribe channel, including messages sent while the consumer is disconnected from the messaging system. This requires the messaging system to store messages for later replay to the disconnected consumer. There also has to be a mechanism for a consumer to indicate that it wants to establish a durable subscription. Generally, a publish-subscribe channel (or topic) can have both durable and non-durable subscribers, which behave as follows:

  • non-durable subscriber—Can have two states: connected and disconnected. While a non-durable subscriber is connected to a topic, it receives all of the topic's messages in real time. However, a non-durable subscriber never receives messages sent to the topic while the subscriber is disconnected.

  • durable subscriber—Can have two states: connected and inactive. The inactive state means that the durable subscriber is disconnected from the topic, but wants to receive the messages that arrive in the interim. When the durable subscriber reconnects to the topic, it receives a replay of all the messages sent while it was inactive.


Another alternative is to combine the Message Dispatcher or Content-Based Router with the File or JPA components (see EIP Component Reference) for durable subscribers, and with something like the SEDA component (see EIP Component Reference) for non-durable subscribers.

Here is a simple example of creating durable subscribers to a JMS topic (see JMS in EIP Component Reference):

Using the Fluent Builders

 from("direct:start").to("activemq:topic:foo");
 
 from("activemq:topic:foo?clientId=1&durableSubscriptionName=bar1").to("mock:result1");
 
 from("activemq:topic:foo?clientId=2&durableSubscriptionName=bar2").to("mock:result2");

Using the Spring XML Extensions

 <route>
     <from uri="direct:start"/>
     <to uri="activemq:topic:foo"/>
 </route>
 
 <route>
     <from uri="activemq:topic:foo?clientId=1&durableSubscriptionName=bar1"/>
     <to uri="mock:result1"/>
 </route>
 
 <route>
     <from uri="activemq:topic:foo?clientId=2&durableSubscriptionName=bar2"/>
     <to uri="mock:result2"/>
 </route>

Here is another example of JMS durable subscribers (see JMS in EIP Component Reference), but this time using virtual topics, which ActiveMQ recommends over durable subscriptions:

Using the Fluent Builders

 from("direct:start").to("activemq:topic:VirtualTopic.foo");
 
 from("activemq:queue:Consumer.1.VirtualTopic.foo").to("mock:result1");
 
 from("activemq:queue:Consumer.2.VirtualTopic.foo").to("mock:result2");

Using the Spring XML Extensions

 <route>
     <from uri="direct:start"/>
     <to uri="activemq:topic:VirtualTopic.foo"/>
 </route>
 
 <route>
     <from uri="activemq:queue:Consumer.1.VirtualTopic.foo"/>
     <to uri="mock:result1"/>
 </route>
 
 <route>
     <from uri="activemq:queue:Consumer.2.VirtualTopic.foo"/>
     <to uri="mock:result2"/>
 </route>

In Apache Camel, the idempotent consumer pattern is implemented by the idempotentConsumer() processor, which takes two arguments:

  • messageIdExpression, an expression that returns a message ID string for the current message.

  • messageIdRepository, a reference to a repository of message IDs, which stores the IDs of all the messages received so far.

As each message comes in, the idempotent consumer processor looks up the current message ID in the repository to see if this message has been seen before. If yes, the message is discarded; if no, the message is allowed to pass and its ID is added to the repository.

The code shown in Example 9.1 uses the TransactionID header to filter out duplicates.


Where the call to memoryMessageIdRepository(200) creates an in-memory cache that can hold up to 200 message IDs.
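
Example 9.1 is not reproduced here, but based on the description above the Java DSL route looks roughly like the following, assuming a static import of memoryMessageIdRepository from org.apache.camel.processor.idempotent.MemoryMessageIdRepository:

from("seda:a")
    // use the TransactionID header as the message ID, backed by an in-memory cache of 200 IDs
    .idempotentConsumer(header("TransactionID"), memoryMessageIdRepository(200))
    .to("seda:b");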

You can also define an idempotent consumer using XML configuration. For example, you can define the preceding route in XML, as follows:

<camelContext id="buildIdempotentConsumer" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="seda:a"/>
    <idempotentConsumer messageIdRepositoryRef="MsgIDRepos">
      <simple>header.TransactionID</simple>
      <to uri="seda:b"/>
    </idempotentConsumer>
  </route>
</camelContext>

<bean id="MsgIDRepos" class="org.apache.camel.processor.idempotent.MemoryMessageIdRepository">
    <!-- Specify the in-memory cache size. -->
    <constructor-arg type="int" value="200"/>
</bean>

A JDBC repository is also supported for storing message IDs in the idempotent consumer pattern. The implementation of the JDBC repository is provided by the SQL component, so if you are using the Maven build system, add a dependency on the camel-sql artifact.

You can use the SingleConnectionDataSource JDBC wrapper class from the Spring persistence API in order to instantiate the connection to a SQL database. For example, to instantiate a JDBC connection to a HyperSQL database instance, you could define the following JDBC data source:

<bean id="dataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource">
    <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
    <property name="url" value="jdbc:hsqldb:mem:camel_jdbc"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
</bean>
[Note]Note

The preceding JDBC data source uses the HyperSQL mem protocol, which creates a memory-only database instance. This is a toy configuration of the HyperSQL database and is not actually persistent.

Using the preceding data source, you can define an idempotent consumer pattern that uses the JDBC message ID repository, as follows:

<bean id="messageIdRepository" class="org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository">
	<constructor-arg ref="dataSource" />
	<constructor-arg value="myProcessorName" />
</bean>

<camel:camelContext>
	<camel:errorHandler id="deadLetterChannel" type="DeadLetterChannel" deadLetterUri="mock:error">
		<camel:redeliveryPolicy maximumRedeliveries="0" maximumRedeliveryDelay="0" logStackTrace="false" />
	</camel:errorHandler>
	
	<camel:route id="JdbcMessageIdRepositoryTest" errorHandlerRef="deadLetterChannel">
		<camel:from uri="direct:start" />
		<camel:idempotentConsumer messageIdRepositoryRef="messageIdRepository">
			<camel:header>messageId</camel:header>
			<camel:to uri="mock:result" />
		</camel:idempotentConsumer>
	</camel:route>
</camel:camelContext>

Available as of Camel 2.8

You can now set the skipDuplicate option to false, which instructs the idempotent consumer to route duplicate messages as well. However, each duplicate message is marked as a duplicate by having an exchange property set to true. You can leverage this by using a Content-Based Router or Message Filter to detect duplicates and handle them separately.

For example, in the following route we use a Message Filter to send duplicate messages to a separate endpoint and then stop routing them any further.

from("direct:start")
     // instruct the idempotent consumer not to skip duplicates, as we will filter them ourselves
     .idempotentConsumer(header("messageId")).messageIdRepository(repo).skipDuplicate(false)
     .filter(property(Exchange.DUPLICATE_MESSAGE).isEqualTo(true))
         // filter out duplicate messages by sending them to someplace else and then stop
         .to("mock:duplicate")
         .stop()
     .end()
     // and here we process only new messages (no duplicates)
     .to("mock:result");
 

The same example in the XML DSL would be:

 <!-- idempotent repository, just use a memory based for testing -->
 <bean id="myRepo" class="org.apache.camel.processor.idempotent.MemoryIdempotentRepository"/>
 
 <camelContext xmlns="http://camel.apache.org/schema/spring">
     <route>
         <from uri="direct:start"/>
         <!-- we do not want to skip any duplicate messages -->
         <idempotentConsumer messageIdRepositoryRef="myRepo" skipDuplicate="false">
             <!-- use the messageId header as key for identifying duplicate messages -->
             <header>messageId</header>
             <!-- we will handle duplicate messages using a filter -->
             <filter>
                 <!-- the filter will only react on duplicate messages, if this property is set on the Exchange -->
                 <property>CamelDuplicateMessage</property>
                 <!-- and send the message to this mock, because it is part of a unit test -->
                 <!-- but you can of course do anything, as it is part of the route -->
                 <to uri="mock:duplicate"/>
                 <!-- and then stop -->
                 <stop/>
             </filter>
             <!-- here we route only new messages -->
             <to uri="mock:result"/>
         </idempotentConsumer>
     </route>
 </camelContext>
 

If you are running Camel in a clustered environment, an in-memory idempotent repository does not work (see above). You can either set up a central database or use the idempotent consumer implementation based on the Hazelcast data grid. Hazelcast discovers the nodes over multicast (which is the default; configure Hazelcast for tcp-ip otherwise) and automatically creates a map-based repository:

HazelcastIdempotentRepository idempotentRepo = new HazelcastIdempotentRepository("myrepo");
 
from("direct:in").idempotentConsumer(header("messageId"), idempotentRepo).to("mock:out");

You have to define how long the repository should hold each message ID (the default is to never delete them). To avoid running out of memory, you should create an eviction strategy based on the Hazelcast configuration. For additional information see camel-hazelcast in EIP Component Reference.

See this little tutorial on how to set up such an idempotent repository on two cluster nodes using Apache Karaf.

The Idempotent Consumer has the following options:

Option Default Description
eager true Camel 2.0: Eager controls whether Camel adds the message ID to the repository before or after the exchange has been processed. If enabled (added before), Camel can detect duplicate messages even when a message is currently in progress. If disabled, Camel only detects duplicates once a message has been successfully processed.
messageIdRepositoryRef null A reference to an IdempotentRepository to look up in the registry. This option is mandatory when using the XML DSL.
skipDuplicate true Camel 2.8: Sets whether to skip duplicate messages. If set to false, duplicate messages are allowed to continue; however, the Exchange is marked as a duplicate by having the Exchange.DUPLICATE_MESSAGE exchange property set to Boolean.TRUE.

The messaging gateway pattern, shown in Figure 9.8, describes an approach to integrating with a messaging system, where the messaging system's API remains hidden from the programmer at the application level. One of the more common examples is when you want to translate synchronous method calls into request/reply message exchanges, without the programmer being aware of this.


The following Apache Camel components provide this kind of integration with the messaging system:

  • CXF in EIP Component Reference

  • Bean in EIP Component Reference
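
For example, with the Bean component you can hide the messaging system behind a plain Java interface by letting Camel proxy it. The following sketch is illustrative only; the interface, endpoint URI, and class names are assumptions:

import org.apache.camel.Produce;

// a plain Java interface that application code programs against
interface OrderGateway {
    String placeOrder(String orderXml);
}

public class OrderClient {
    // Camel injects a proxy: calling placeOrder() sends the argument to the endpoint
    // as a request/reply exchange and returns the reply body as the method's return value
    @Produce(uri = "activemq:queue:orders")
    private OrderGateway gateway;

    public void run() {
        String confirmation = gateway.placeOrder("<order id=\"1\"/>");
        System.out.println(confirmation);
    }
}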

The service activator pattern, shown in Figure 9.9, describes the scenario where a service's operations are invoked in response to an incoming request message. The service activator identifies which operation to call and extracts the data to use as the operation's parameters. Finally, the service activator invokes an operation using the data extracted from the message. The operation invocation can be either one-way (request only) or two-way (request/reply).


In many respects, a service activator resembles a conventional remote procedure call (RPC), where operation invocations are encoded as messages. The main difference is that a service activator needs to be more flexible. An RPC framework standardizes the request and reply message encodings (for example, Web service operations are encoded as SOAP messages), whereas a service activator typically needs to improvise the mapping between the messaging system and the service's operations.

The main mechanism that Apache Camel provides to support the service activator pattern is bean integration. Bean integration provides a general framework for mapping incoming messages to method invocations on Java objects. For example, the Java fluent DSL provides the processors bean() and beanRef() that you can insert into a route to invoke methods on a registered Java bean. The detailed mapping of message data to Java method parameters is determined by the bean binding, which can be implemented by adding annotations to the bean class.

For example, consider the following route which calls the Java method, BankBean.getUserAccBalance(), to service requests incoming on a JMS/ActiveMQ queue:

from("activemq:BalanceQueries")
  .setProperty("userid", xpath("/Account/BalanceQuery/UserID").stringResult())
  .beanRef("bankBean", "getUserAccBalance")
  .to("velocity:file:src/scripts/acc_balance.vm")
  .to("activemq:BalanceResults");

The messages pulled from the ActiveMQ endpoint, activemq:BalanceQueries, have a simple XML format that provides the user ID of a bank account. For example:

<?xml version='1.0' encoding='UTF-8'?>
<Account>
  <BalanceQuery>
    <UserID>James.Strachan</UserID>
  </BalanceQuery>
</Account>

The first processor in the route, setProperty(), extracts the user ID from the In message and stores it in the userid exchange property. This is preferable to storing it in a header, because the In headers are not available after invoking the bean.

The service activation step is performed by the beanRef() processor, which binds the incoming message to the getUserAccBalance() method on the Java object identified by the bankBean bean ID. The following code shows a sample implementation of the BankBean class:

package tutorial;

import org.apache.camel.language.XPath;

public class BankBean {
    public int getUserAccBalance(@XPath("/Account/BalanceQuery/UserID") String user) {
        if (user.equals("James.Strachan")) {
            return 1200;
        } else {
            return 0;
        }
    }
}

Where the binding of message data to the method parameter is enabled by the @XPath annotation, which injects the content of the UserID XML element into the user method parameter. On completion of the call, the return value is inserted into the body of the Out message, which is then copied into the In message for the next step in the route. In order for the bean to be accessible to the beanRef() processor, you must create an instance of it in Spring XML. For example, you can add the following lines to the META-INF/spring/camel-context.xml configuration file to instantiate the bean:

<?xml version="1.0" encoding="UTF-8"?>
<beans ... >
  ...
  <bean id="bankBean" class="tutorial.BankBean"/>
</beans>

Where the bean ID, bankBean, identifies this bean instance in the registry.

The output of the bean invocation is injected into a Velocity template, to produce a properly formatted result message. The Velocity endpoint, velocity:file:src/scripts/acc_balance.vm, specifies the location of a velocity script with the following contents:

<?xml version='1.0' encoding='UTF-8'?>
<Account>
  <BalanceResult>
    <UserID>${exchange.getProperty("userid")}</UserID>
    <Balance>${body}</Balance>
  </BalanceResult>
</Account>

The exchange instance is available as the Velocity variable, exchange, which enables you to retrieve the userid exchange property, using ${exchange.getProperty("userid")}. The body of the current In message, ${body}, contains the result of the getUserAccBalance() method invocation.

The Detour pattern from Introducing Enterprise Integration Patterns allows you to send messages through additional steps if a control condition is met. It can be useful for turning on extra validation, testing, or debugging code when needed.

In this example, we essentially have a route like from("direct:start").to("mock:result") with a conditional detour to the mock:detour endpoint in the middle of the route.

from("direct:start").choice()
    .when().method("controlBean", "isDetour").to("mock:detour").end()
    .to("mock:result");                

Using the Spring XML Extensions

<route>
  <from uri="direct:start"/>
  <choice>
    <when>
      <method bean="controlBean" method="isDetour"/>
      <to uri="mock:detour"/>
    </when>
  </choice>
  <to uri="mock:result"/>
</route>

Whether the detour is turned on or off is decided by the ControlBean. So, when the detour is on, the message is routed to mock:detour and then mock:result. When the detour is off, the message is routed directly to mock:result.

For full details, check the example source here:

camel-core/src/test/java/org/apache/camel/processor/DetourTest.java

Apache Camel provides several ways to perform logging in a route:

  • Using the log DSL command.

  • Using the Log in EIP Component Reference component, which can log the message content.

  • Using the Tracer, which traces message flow.

  • Using a Processor or a Bean in EIP Component Reference endpoint to perform logging in Java.

[Important]Difference between the log DSL command and the log component

The log DSL is much lighter and is meant for logging brief human-readable messages such as Starting to do .... It can only log a message based on the Simple language. In contrast, the Log component (see Log in EIP Component Reference) is a fully featured logging component that is capable of logging the message itself, and it provides many URI options to control the logging.
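
As a quick sketch contrasting the two (the logger category, endpoints, and message text are illustrative assumptions):

from("direct:start")
    // lightweight human-readable message, evaluated with the Simple language
    .log("Starting to do something with order ${header.orderId}")
    // full-featured Log component: can dump the message itself, with many URI options
    .to("log:com.mycompany.orders?level=DEBUG&showHeaders=true")
    .to("mock:result");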

You can define a wiretap with a new exchange instance by setting the copy flag to false (the default is true). In this case, an initially empty exchange is created for the wiretap.

For example, to create a new exchange instance using the processor approach:

from("direct:start")
    .wireTap("direct:foo", false, new Processor() {
        public void process(Exchange exchange) throws Exception {
            exchange.getIn().setBody("Bye World");
            exchange.getIn().setHeader("foo", "bar");
        }
    }).to("mock:result");


from("direct:foo").to("mock:foo");

Where the second wireTap argument sets the copy flag to false, indicating that the original exchange is not copied and an empty exchange is created instead.

To create a new exchange instance using the expression approach:

from("direct:start")
    .wireTap("direct:foo", false, constant("Bye World"))
    .to("mock:result");

from("direct:foo").to("mock:foo");

Using the Spring XML extensions, you can indicate that a new exchange is to be created by setting the wireTap element's copy attribute to false.

To create a new exchange instance using the processor approach, where the processorRef attribute references a spring bean with the myProcessor ID, as follows:

<route>
    <from uri="direct:start2"/>
    <wireTap uri="direct:foo" processorRef="myProcessor" copy="false"/>
    <to uri="mock:result"/>
</route>

And to create a new exchange instance using the expression approach:

<route>
    <from uri="direct:start"/>
    <wireTap uri="direct:foo" copy="false">
        <body><constant>Bye World</constant></body>
    </wireTap>
    <to uri="mock:result"/>
</route>

Sending a new Exchange and setting headers in the DSL

Available as of Camel 2.8

If you send a new message using the Wire Tap, you could previously only set the message body using an Expression from the DSL; if you also needed to set new headers, you had to use a Processor for that. From Camel 2.8 onwards this has been improved, so you can now set headers in the DSL as well.

The following example sends a new message which has

  • "Bye World" as message body

  • a header with key "id" with the value 123

  • a header with key "date" which has current date as value

The wireTap DSL command supports the following options:

Name Default Value Description
uri The endpoint uri where to send the wire tapped message. You should use either uri or ref.
ref Refers to the endpoint where to send the wire tapped message. You should use either uri or ref.
executorServiceRef Refers to a custom Thread Pool to be used when processing the wire tapped messages. If not set then Camel uses a default thread pool.
processorRef Refers to a custom Processor to be used for creating a new message (that is, the send-a-new-message mode). See below.
copy true Camel 2.3: Whether a copy of the Exchange should be used when wire tapping the message.
onPrepareRef Camel 2.8: Refers to a custom Processor to prepare the copy of the Exchange to be wire tapped. This allows you to do any custom logic, such as deep-cloning the message payload if that's needed etc.

The integration between Apache Camel and ServiceMix is provided by the servicemix-camel module. This module is provided with ServiceMix, but actually implements a plug-in for the Apache Camel product: the JBI component (see JBI in EIP Component Reference and JBI Component).

To access the JBI component from Apache Camel, make sure that the servicemix-camel JAR file is included on your Classpath or, if you are using Maven, include a dependency on the servicemix-camel artifact in your project POM. You can then access the JBI component by defining Apache Camel endpoint URIs with the jbi: component prefix.

ServiceMix defines a flexible format for defining URIs, which is described in detail in ServiceMix URIs. To translate a ServiceMix URI into an Apache Camel endpoint URI, perform the following steps:

  1. If the ServiceMix URI contains a namespace prefix, replace the prefix by its corresponding namespace.

    For example, after modifying the ServiceMix URI, service:test:messageFilter, where test corresponds to the namespace, http://progress.com/demos/test, you get service:http://progress.com/demos/test:messageFilter.

  2. Modify the separator character, depending on what kind of namespace appears in the URI:

    • If the namespace starts with http://, use the / character as the separator between namespace, service name, and endpoint name (if present).

      For example, the URI, service:http://progress.com/demos/test:messageFilter, would be modified to service:http://progress.com/demos/test/messageFilter.

    • If the namespace starts with urn:, use the : character as the separator between namespace, service name, and endpoint name (if present).

      For example, service:urn:progress:com:demos:test:messageFilter.

  3. Create a JBI endpoint URI by adding the jbi: prefix.

    For example, jbi:service:http://progress.com/demos/test/messageFilter.

For example, consider the following configuration of the static recipient list pattern in ServiceMix EIP. The eip:exchange-target elements define some targets using the ServiceMix URI format.

<beans xmlns:sm="http://servicemix.apache.org/config/1.0"
       xmlns:eip="http://servicemix.apache.org/eip/1.0"
       xmlns:test="http://progress.com/demos/test" >
    ...
    <eip:static-recipient-list service="test:recipients" endpoint="endpoint">
      <eip:recipients>
        <eip:exchange-target uri="service:test:messageFilter" />
        <eip:exchange-target uri="service:test:trace4" />
      </eip:recipients>
    </eip:static-recipient-list>
    ...
</beans>

When the preceding ServiceMix configuration is mapped to an equivalent Apache Camel configuration, you get the following route:

<route>
  <from uri="jbi:endpoint:http://progress.com/demos/test/recipients/endpoint"/>
  <to uri="jbi:service:http://progress.com/demos/test/messageFilter"/>
  <to uri="jbi:service:http://progress.com/demos/test/trace4"/>
</route>

A content enricher, shown in Figure A.2, is a pattern for augmenting a message with missing information. The ServiceMix EIP content enricher is roughly equivalent to a pipeline that adds missing data as the message passes through an enricher target. Consequently, when migrating to Apache Camel, you can re-implement the ServiceMix content enricher as an Apache Camel pipeline.


Example A.4 shows how to define a content enricher using the ServiceMix EIP component. Incoming messages pass through the enricher target, test:additionalInformationExtracter, which adds missing data to the message. The message is then sent on to its ultimate destination, test:myTarget.


A message filter, shown in Figure A.3, is a processor that eliminates undesired messages based on specific criteria. Filtering is controlled by specifying a predicate in the filter: when the predicate is true, the incoming message is allowed to pass; otherwise, it is blocked. This pattern maps to the corresponding message filter pattern in Apache Camel.


The ServiceMix EIP pipeline pattern, shown in Figure A.4, is used to pass messages through a single transformer endpoint, where the transformer's input is taken from the source endpoint and the transformer's output is routed to the target endpoint. This pattern is thus a special case of the more general Apache Camel pipes and filters pattern, which enables you to pass an In message through multiple transformer endpoints.


Example A.10 shows how to define a pipeline using the ServiceMix EIP component. Incoming messages are passed into the transformer endpoint, test:decrypt, and the output from the transformer endpoint is then passed into the target endpoint, test:plaintextOrder.


The resequencer pattern, shown in Figure A.5, enables you to resequence messages according to the sequence number stored in an NMR property. The ServiceMix EIP resequencer pattern maps to the Apache Camel resequencer configured with the stream resequencing algorithm.


A recipient list, shown in Figure A.6, is a type of router that sends each incoming message to multiple different destinations. The ServiceMix EIP recipient list is restricted to processing InOnly and RobustInOnly exchange patterns. Moreover, the list of recipients must be static. This pattern maps to the recipient list with fixed destination pattern in Apache Camel.


Example A.19 shows how to define a static routing slip using the ServiceMix EIP component. Incoming messages pass through each of the endpoints, test:procA, test:procB, and test:procC, where the output of each endpoint is connected to the input of the next endpoint in the chain. The final endpoint, test:procC, sends its output (Out message) back to the caller.


The wire tap pattern, shown in Figure A.7, allows you to route messages to a separate tap location before they are forwarded to the ultimate destination. The ServiceMix EIP wire tap pattern maps to the wire tap pattern in Apache Camel.


Example A.22 shows how to define a wire tap using the ServiceMix EIP component. The In message from the source endpoint is copied to the In-listener endpoint before being forwarded on to the target endpoint. If you want to monitor any returned Out messages or Fault messages from the target endpoint, you must also define an Out listener (using the eip:outListener element) and a Fault listener (using the eip:faultListener element).


A splitter, shown in Figure A.8, is a type of router that splits an incoming message into a series of outgoing messages, where each of the messages contains a piece of the original message. The ServiceMix EIP XPath splitter pattern is restricted to using the InOnly and RobustInOnly exchange patterns. The expression that defines how to split up the original message is defined in the XPath language. The XPath splitter pattern maps to the splitter pattern in Apache Camel.


Example A.25 shows how to define a splitter using the ServiceMix EIP component. The specified XPath expression, /*/*, causes an incoming message to split at every occurrence of a nested XML element (for example, the /foo/bar and /foo/car elements are split into distinct messages).