Development Guide
For Red Hat JBoss Developers
Abstract
Chapter 1. Introduction
1.1. About Red Hat JBoss BPM Suite
1.2. Supported platforms
- Red Hat JBoss Enterprise Application Platform 6.1.1
- Red Hat JBoss Web Server 2.0 (Tomcat 7)
- IBM WebSphere Application Server 8
1.3. Use Case: Process-based solutions in the loan industry

Figure 1.1. High-level loan application process flow

Figure 1.2. Loan Application Process Automation
1.4. Integrated Maven Dependencies
Integrated Maven dependencies are declared in the project's pom.xml file and should be included like the following example:
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<version>1.1.1-redhat-2</version>
<scope>compile</scope>
</dependency>
Note
Chapter 2. Introduction to JBoss Rules
2.1. The Basics
2.1.1. Business Rules Engine
2.1.2. The JBoss Rules Engine
2.1.3. Expert Systems
2.1.4. Production Rules
when <conditions> then <actions>
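As an illustration of this form, a production rule written in DRL might look like the following sketch (the LoanApplication fact type and its fields are hypothetical examples, not part of the product):

```
rule "Approve Small Loan"
when
    $app : LoanApplication( amount < 1000 )
then
    $app.setApproved( true );
    System.out.println( "Approved: " + $app );
end
```

The conditions match facts in working memory; the actions run when the rule fires.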
2.1.5. The Inference Engine
2.1.7. Working Memory
The working memory is the part of the JBoss Rules engine where facts are asserted. From here, the facts can be modified or retracted.
2.1.8. Conflict Resolution Strategy
2.1.9. Hybrid Rule Systems
2.1.10. Reasoning Capabilities
Chapter 3. Expert Systems
3.1. PHREAK Algorithm
3.1.1. PHREAK Algorithm
- Three layers of contextual memory: Node, Segment and Rule memories.
- Rule, segment, and node based linking.
- Lazy (delayed) rule evaluation.
- Stack based evaluations with pause and resume.
- Isolated rule evaluation.
- Set oriented propagations.
3.1.2. Three Layers of Contextual Memory

Figure 3.1. PHREAK 3 Layered memory system
3.1.3. Rule, Segment, and Node Based Linking
Example 1: Single rule, no sharing

Figure 3.2. Example for a Single rule with no sharing
Example 2: Two rules with sharing

Figure 3.3. Example for two rules with sharing
Example 3: Three rules with sharing

Figure 3.4. Example for Three rules with sharing
Example 4: Single rule, with sub-network and no sharing

Figure 3.5. Example for a Single rule with sub-network and no sharing
Example 5: Two rules: one with a sub-network and sharing

Figure 3.6. Example for Two rules, one with a sub-network and sharing
3.1.4. Delayed and Stack Based Evaluations
3.1.5. Propagations and Isolated Rules
3.1.6. RETE to PHREAK
Note
3.1.7. Switching Between PHREAK and ReteOO
Switching Using System Properties
The drools.ruleEngine system property needs to be set to one of the following values:
drools.ruleEngine=phreak
drools.ruleEngine=reteoo
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-reteoo</artifactId>
<version>${drools.version}</version>
</dependency>
Switching in KieBaseConfiguration
import org.kie.api.KieBase;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
...
KieServices kservices = KieServices.Factory.get();
KieBaseConfiguration kconfig = kservices.newKieBaseConfiguration();
// you can either specify phreak (default)
kconfig.setOption(RuleEngineOption.PHREAK);
// or legacy ReteOO
kconfig.setOption(RuleEngineOption.RETEOO);
// and then create a KieBase for the selected algorithm
// (getKieClasspathContainer() is just an example)
KieContainer container = kservices.getKieClasspathContainer();
KieBase kbase = container.newKieBase(kieBaseName, kconfig);
Note
Switching to ReteOO requires drools-reteoo-(version).jar to exist on the classpath. If it does not, the JBoss Rules engine reverts back to PHREAK and issues a warning. This applies to switching with both KieBaseConfiguration and system properties.
3.2. Rete Algorithm
3.2.1. ReteOO
3.2.2. The Rete Root Node

Figure 3.7. ReteNode
3.2.3. The ObjectTypeNode
The ObjectTypeNode propagates a fact only if the fact matches the node's object type, which the engine determines with an instanceof check.
3.2.4. AlphaNodes
3.2.5. Hashing
3.2.6. BetaNodes
3.2.7. Alpha Memory
3.2.8. Beta Memory
3.2.9. Lookups with BetaNodes
3.2.10. LeftInputNodeAdapters
3.2.11. Terminal Nodes
3.2.12. Node Sharing
rule "Likes Cheddar"
when
Cheese( $cheddar : name == "cheddar" )
$person: Person( favouriteCheese == $cheddar )
then
System.out.println( $person.getName() + " likes cheddar" );
end
rule "Doesn't like Cheddar"
when
Cheese( $cheddar : name == "cheddar" )
$person : Person( favouriteCheese != $cheddar )
then
System.out.println( $person.getName() + " does not like cheddar" );
end
The two rules share the nodes for the identical Cheese pattern, but each rule gets its own TerminalNode.

Figure 3.8. Node Sharing
3.2.13. Join Attempts
3.3. Strong and Loose Coupling
3.3.1. Loose Coupling
3.3.2. Strong Coupling
3.4. Advantages of a Rule Engine
3.4.1. Declarative Programming
3.4.2. Logic and Data Separation
3.4.3. KIE Base
The KieBase is built by the KieBuilder. It is a repository of all the application's knowledge definitions. It may contain rules, processes, functions, and type models. The KieBase itself does not contain instance data (known as facts). Instead, sessions are created from the KieBase into which data can be inserted and where process instances may be started. It is recommended that KieBases be cached where possible to allow for repeated session creation.

Figure 3.9. KieBase
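The caching recommendation can be sketched as follows: build the KieBase once, then create sessions from it repeatedly (getKieClasspathContainer(), the "KBase1" name, and the someFact variable are just examples):

```java
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

KieContainer kContainer = KieServices.Factory.get().getKieClasspathContainer();
KieBase kbase = kContainer.getKieBase( "KBase1" ); // heavyweight: build once and cache

// Session creation is light and can be repeated per request.
KieSession session = kbase.newKieSession();
try {
    session.insert( someFact );
    session.fireAllRules();
} finally {
    session.dispose();
}
```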
Chapter 4. Maven
4.1. Learn about Maven
4.1.1. About Maven
A repository location is prefixed with http:// when located on an HTTP server, or file:// when located on a file server. The default repository is the public remote Maven 2 Central Repository.
Maven is configured using the settings.xml file. You can either configure global Maven settings in the M2_HOME/conf/settings.xml file, or user-level settings in the USER_HOME/.m2/settings.xml file.
4.1.2. About the Maven POM File
The pom.xml file requires some configuration options and will default all others. See Section 4.1.3, “Minimum Requirements of a Maven POM File” for details.
The schema for the pom.xml file can be found at http://maven.apache.org/maven-v4_0_0.xsd.
4.1.3. Minimum Requirements of a Maven POM File
The minimum requirements of a pom.xml file are as follows:
- project root
- modelVersion
- groupId - the id of the project's group
- artifactId - the id of the artifact (project)
- version - the version of the artifact under the specified group
A basic pom.xml file might look like this:
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>com.jboss.app</groupId>
<artifactId>my-app</artifactId>
<version>1</version>
</project>
4.1.4. About the Maven Settings File
The settings.xml file contains user-specific configuration information for Maven. It contains information that should not be distributed with the pom.xml file, such as developer identity, proxy information, local repository location, and other settings specific to a user.
There are two locations where the settings.xml file can be found:
- In the Maven install
- The settings file can be found in the M2_HOME/conf/ directory. These settings are referred to as global settings. The default Maven settings file is a template that can be copied and used as a starting point for the user settings file.
- In the user's install
- The settings file can be found in the USER_HOME/.m2/ directory. If both the Maven and user settings.xml files exist, the contents are merged. Where there are overlaps, the user's settings.xml file takes precedence.
The following is an example settings.xml file:
<settings>
<profiles>
<profile>
<id>my-profile</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<repositories>
<repository>
<id>fusesource</id>
<url>http://repo.fusesource.com/nexus/content/groups/public/</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<enabled>true</enabled>
</releases>
</repository>
...
</repositories>
</profile>
</profiles>
...
</settings>
The schema for the settings.xml file can be found at http://maven.apache.org/xsd/settings-1.0.0.xsd.
4.1.5. KIE Plugin
Example 4.1. Adding the KIE plugin to a Maven pom.xml
<build>
<plugins>
<plugin>
<groupId>org.kie</groupId>
<artifactId>kie-maven-plugin</artifactId>
<version>${project.version}</version>
<extensions>true</extensions>
</plugin>
</plugins>
</build>
Note
Compiling decision tables additionally requires org.drools:drools-decisiontables, and compiling processes requires org.jbpm:jbpm-bpmn2.
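For example, a project that builds decision tables might add a dependency like the following (the drools.version property is an assumption; use the version supplied with your distribution):

```xml
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-decisiontables</artifactId>
<version>${drools.version}</version>
</dependency>
```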
4.1.6. Maven Versions and Dependencies
<metadata>
<groupId>com.foo</groupId>
<artifactId>my-foo</artifactId>
<version>2.0.0</version>
<versioning>
<release>1.1.1</release>
<versions>
<version>1.0</version>
<version>1.0.1</version>
<version>1.1</version>
<version>1.1.1</version>
<version>2.0.0</version>
</versions>
<lastUpdated>20090722140000</lastUpdated>
</versioning>
</metadata>
Declare an exact version (will always resolve to 1.0.1):
<version>[1.0.1]</version>
Declare an explicit version (will always resolve to 1.0.1 unless a collision occurs, when Maven will select a matching version):
<version>1.0.1</version>
Declare a version range for all 1.x (will currently resolve to 1.1.1):
<version>[1.0.0,2.0.0)</version>
Declare an open-ended version range (will resolve to 2.0.0):
<version>[1.0.0,)</version>
Declare the version as LATEST (will resolve to 2.0.0):
<version>LATEST</version>
Declare the version as RELEASE (will resolve to 1.1.1):
<version>RELEASE</version>
Note that by default your own deployments will update the "latest" entry in the Maven metadata. To update the "release" entry, you need to activate the "release-profile" from the Maven super POM. You can do this with either "-Prelease-profile" or "-DperformRelease=true".
4.1.7. Remote Repository Setup
- The Maven install: $M2_HOME/conf/settings.xml
- A user's install: ${user.home}/.m2/settings.xml
- Folder location specified by the system property kie.maven.settings.custom
<profiles>
<profile>
<id>profile-1</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
...
</profile>
</profiles>
Chapter 5. KIE API
5.1. KIE Framework
5.1.1. KIE Systems
- Author
- Author knowledge using UI metaphors such as DRL, BPMN2, decision tables, and class models.
- Build
- Builds the authored knowledge into deployable units.
- For KIE this unit is a JAR.
- Test
- Test KIE knowledge before it is deployed to the application.
- Deploy
- Deploys the unit to a location where applications may use them.
- KIE uses a Maven-style repository.
- Utilize
- The loading of a JAR to provide a KIE session (KieSession), with which the application can interact.
- KIE exposes the JAR at runtime via a KIE container (KieContainer).
- KieSessions, for the runtimes to interact with, are created from the KieContainer.
- Run
- System interaction with the KieSession, via API.
- Work
- User interaction with the KieSession, via command line or UI.
- Manage
- Manage any KieSession or KieContainer.
5.1.2. KieBase
KieBase is a repository of all the application's knowledge definitions. It contains rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted, and, ultimately, process instances may be started. Creating the KieBase can be quite heavy, whereas session creation is very light; therefore, it is recommended that KieBase be cached where possible to allow for repeated session creation. Accordingly, the caching mechanism is automatically provided by the KieContainer.
Table 5.1. kbase Attributes
| Attribute name | Default value | Admitted values | Meaning |
|---|---|---|---|
| name | none | any | The name which retrieves the KieBase from the KieContainer. This is the only mandatory attribute. |
| includes | none | any comma separated list | A comma separated list of other KieBases contained in this kmodule. The artifacts of all these KieBases will also be included in this one. |
| packages | all | any comma separated list | By default all the JBoss Rules artifacts under the resources folder, at any level, are included into the KieBase. This attribute allows limiting the artifacts that will be compiled into this KieBase to only the ones belonging to the list of packages. |
| default | false | true, false | Defines if this KieBase is the default one for this module, so it can be created from the KieContainer without passing any name to it. There can be at most one default KieBase in each module. |
| equalsBehavior | identity | identity, equality | Defines the behavior of JBoss Rules when a new fact is inserted into the Working Memory. With identity it always creates a new FactHandle unless the same object is already present in the Working Memory, while with equality it creates a new FactHandle only if the newly inserted object is not equal (according to its equals method) to an already existing fact. |
| eventProcessingMode | cloud | cloud, stream | When compiled in cloud mode the KieBase treats events as normal facts, while in stream mode it allows temporal reasoning on them. |
| declarativeAgenda | disabled | disabled, enabled | Defines if the Declarative Agenda is enabled or not. |
5.1.3. KieSession
The KieSession stores and executes on runtime data. It is created from the KieBase or, more easily, directly from the KieContainer if it has been defined in the kmodule.xml file.
Table 5.2. ksession Attributes
| Attribute name | Default value | Admitted values | Meaning |
|---|---|---|---|
| name | none | any | Unique name of this KieSession. Used to fetch the KieSession from the KieContainer. This is the only mandatory attribute. |
| type | stateful | stateful, stateless | A stateful session allows iterative work with the Working Memory, while a stateless one is a one-off execution of a Working Memory with a provided data set. |
| default | false | true, false | Defines if this KieSession is the default one for this module, so it can be created from the KieContainer without passing any name to it. In each module there can be at most one default KieSession for each type. |
| clockType | realtime | realtime, pseudo | Defines if event timestamps are determined by the system clock or by a pseudo clock controlled by the application. This clock is especially useful for unit testing temporal rules. |
| beliefSystem | simple | simple, jtms, defeasible | Defines the type of belief system used by the KieSession. |
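To illustrate the pseudo clock option above, a unit test might create a session with a pseudo clock and advance time manually; this is a sketch assuming an existing kbase variable:

```java
import java.util.concurrent.TimeUnit;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.ClockTypeOption;
import org.kie.api.time.SessionPseudoClock;

KieSessionConfiguration conf = KieServices.Factory.get().newKieSessionConfiguration();
conf.setOption( ClockTypeOption.get( "pseudo" ) );
KieSession ksession = kbase.newKieSession( conf, null );

// Advance the clock deterministically to test temporal rules.
SessionPseudoClock clock = ksession.getSessionClock();
clock.advanceTime( 1, TimeUnit.HOURS );
ksession.fireAllRules();
```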
5.1.4. KieFileSystem
It is also possible to define the KieBases and KieSessions belonging to a KieModule programmatically, instead of using the declarative definition in the kmodule.xml file. The same programmatic API also allows explicitly adding the files containing the Kie artifacts instead of automatically reading them from the resources folder of your project. To do that it is necessary to create a KieFileSystem, a sort of virtual file system, and add all the resources contained in your project to it.
Obtain a KieFileSystem from the KieServices. The kmodule.xml configuration file must be added to the file system; this is a mandatory step. Kie also provides a convenient fluent API, implemented by the KieModuleModel, to programmatically create this file.
Create a KieModuleModel from the KieServices, configure it with the desired KieBases and KieSessions, convert it to XML, and add the XML to the KieFileSystem. This process is shown by the following example:
Example 5.1. Creating a kmodule.xml programmatically and adding it to a KieFileSystem
KieServices kieServices = KieServices.Factory.get();
KieModuleModel kieModuleModel = kieServices.newKieModuleModel();
KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel( "KBase1" )
.setDefault( true )
.setEqualsBehavior( EqualityBehaviorOption.EQUALITY )
.setEventProcessingMode( EventProcessingOption.STREAM );
KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel( "KSession1" )
.setDefault( true )
.setType( KieSessionModel.KieSessionType.STATEFUL )
.setClockType( ClockTypeOption.get("realtime") );
KieFileSystem kfs = kieServices.newKieFileSystem();
At this point it is possible to add to the KieFileSystem, through its fluent API, all the other Kie artifacts composing your project. These artifacts have to be added in the same position as in a corresponding usual Maven project.
5.1.5. KieResources
Example 5.2. Adding Kie artifacts to a KieFileSystem
KieFileSystem kfs = ...
kfs.write( "src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL )
.write( "src/main/resources/dtable.xls",
kieServices.getResources().newInputStreamResource( dtableFileStream ) );
Kie artifacts can be added as plain Strings or as Resources. In the latter case the Resources can be created by the KieResources factory, also provided by the KieServices. The KieResources provides many convenient factory methods to convert an InputStream, a URL, a File, or a String representing a path of your file system to a Resource that can be managed by the KieFileSystem.
The type of a Resource can be inferred from the extension of the name used to add it to the KieFileSystem. However, it is also possible to not follow the Kie conventions about file extensions and to explicitly assign a specific ResourceType to a Resource, as shown below:
Example 5.3. Creating and adding a Resource with an explicit type
KieFileSystem kfs = ...
kfs.write( "src/main/resources/myDrl.txt",
kieServices.getResources().newInputStreamResource( drlStream )
.setResourceType(ResourceType.DRL) );
Add all the resources to the KieFileSystem and build it by passing the KieFileSystem to a KieBuilder.
When the contents of a KieFileSystem are successfully built, the resulting KieModule is automatically added to the KieRepository. The KieRepository is a singleton acting as a repository for all the available KieModules.
5.2. Building with Maven
5.2.1. The kmodule
The kmodule.xml file declares the KieBases and, for each KieBase, all the different KieSessions that can be created from it, as shown by the following example:
Example 5.4. A sample kmodule.xml file
<kmodule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://jboss.org/kie/6.0.0/kmodule">
<kbase name="KBase1" default="true" eventProcessingMode="cloud" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg1">
<ksession name="KSession2_1" type="stateful" default="true"/>
<ksession name="KSession2_2" type="stateless" default="false" beliefSystem="jtms"/>
</kbase>
<kbase name="KBase2" default="false" eventProcessingMode="stream" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1">
<ksession name="KSession2_3" type="stateful" default="false" clockType="realtime">
<fileLogger file="drools.log" threaded="true" interval="10"/>
<workItemHandlers>
<workItemHandler name="name" type="org.domain.WorkItemHandler"/>
</workItemHandlers>
<listeners>
<ruleRuntimeEventListener type="org.domain.RuleRuntimeListener"/>
<agendaEventListener type="org.domain.FirstAgendaListener"/>
<agendaEventListener type="org.domain.SecondAgendaListener"/>
<processEventListener type="org.domain.ProcessListener"/>
</listeners>
</ksession>
</kbase>
</kmodule>
Here 2 KieBases have been defined, and it is possible to instantiate 2 different types of KieSessions from the first one, while only one from the second.
5.2.2. Creating a KIE Project
The kmodule.xml file declares the KieBases and the KieSessions that can be created from them. This file has to be placed in the resources/META-INF folder of the Maven project, while all the other Kie artifacts, such as DRL or Excel files, must be stored in the resources folder or in any other subfolder under it.
Example 5.5. An empty kmodule.xml file
<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule"/>
With an empty kmodule.xml, Kie relies on convention and creates a single default KieBase. All Kie assets stored under the resources folder, or any of its subfolders, will be compiled and added to it. To trigger the building of these artifacts it is enough to create a KieContainer for them.
5.2.3. Creating a KIE Container
The following example creates a KieContainer that reads the files built from the classpath:
Example 5.6. Creating a KieContainer from the classpath
KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
Example 5.7. Retrieving KieBases and KieSessions from the KieContainer
KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
KieBase kBase1 = kContainer.getKieBase("KBase1");
KieSession kieSession1 = kContainer.newKieSession("KSession2_1");
StatelessKieSession kieSession2 = kContainer.newStatelessKieSession("KSession2_2");
The KieSessions are created by the KieContainer according to their declared type. If the type of the KieSession requested from the KieContainer does not correspond with the one declared in the kmodule.xml file, the KieContainer will throw a RuntimeException. Also, since a KieBase and a KieSession have been flagged as default, it is possible to get them from the KieContainer without passing any name.
Example 5.8. Retrieving default KieBases and KieSessions from the KieContainer
KieContainer kContainer = ...
KieBase kBase1 = kContainer.getKieBase(); // returns KBase1
KieSession kieSession1 = kContainer.newKieSession(); // returns KSession2_1
A Kie project deployed to a Maven repository has a ReleaseId that uniquely identifies it inside your application. This allows creation of a new KieContainer from the project by simply passing its ReleaseId to the KieServices.
Example 5.9. Creating a KieContainer of an existing project by ReleaseId
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0" );
KieContainer kieContainer = kieServices.newKieContainer( releaseId );
5.2.4. KieServices
KieServices is the interface from which it is possible to access all the Kie building and runtime facilities:
5.3. KIE Deployment
5.3.1. KieRepository
When the contents of a KieFileSystem are successfully built, the resulting KieModule is automatically added to the KieRepository. The KieRepository is a singleton acting as a repository for all the available KieModules.
It is then possible to ask the KieServices for a new KieContainer for that KieModule using its ReleaseId. However, since in this case the KieFileSystem does not contain any pom.xml file (it is possible to add one using the KieFileSystem.writePomXML method), Kie cannot determine the ReleaseId of the KieModule and assigns a default one to it. This default ReleaseId can be obtained from the KieRepository and used to identify the KieModule inside the KieRepository itself. The following example shows this whole process.
Example 5.10. Building the contents of a KieFileSystem and creating a KieContainer
KieServices kieServices = KieServices.Factory.get();
KieFileSystem kfs = ...
kieServices.newKieBuilder( kfs ).buildAll();
KieContainer kieContainer = kieServices.newKieContainer(kieServices.getRepository().getDefaultReleaseId());
It is now possible to get KieBases and create new KieSessions from this KieContainer exactly in the same way as with a KieContainer created directly from the classpath.
The KieBuilder reports compilation results of 3 different severities: ERROR, WARNING and INFO. An ERROR indicates that the compilation of the project failed; in this case no KieModule is produced and nothing is added to the KieRepository. WARNING and INFO results can be ignored, but are available for inspection.
Example 5.11. Checking that a compilation did not produce any errors
KieBuilder kieBuilder = kieServices.newKieBuilder( kfs ).buildAll();
assertEquals( 0, kieBuilder.getResults().getMessages( Message.Level.ERROR ).size() );
5.3.2. Session Modification
The KieBase is a repository of all the application's knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. The KieBase can be obtained from the KieContainer containing the KieModule where the KieBase has been defined.
Sometimes the KieBase needs to resolve types that are not in the default class loader. In this case it is necessary to create a KieBaseConfiguration with an additional class loader and pass it to the KieContainer when creating a new KieBase from it.
Example 5.12. Creating a new KieBase with a custom ClassLoader
KieServices kieServices = KieServices.Factory.get();
KieBaseConfiguration kbaseConf = kieServices.newKieBaseConfiguration( null, MyType.class.getClassLoader() );
KieBase kbase = kieContainer.newKieBase( kbaseConf );
The KieBase creates and returns KieSession objects, and it may optionally keep references to them. When KieBase modifications occur, those modifications are applied to the data in the sessions. The reference is a weak reference, and keeping it is optional, controlled by a boolean flag.
5.3.3. KieScanner
The KieScanner allows continuous monitoring of your Maven repository to check whether a new release of a Kie project has been installed. When a new release is found, it is deployed in the KieContainer wrapping that project. The use of the KieScanner requires kie-ci.jar to be on the classpath.
A KieScanner can be registered on a KieContainer as in the following example.
Example 5.13. Registering and starting a KieScanner on a KieContainer
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0-SNAPSHOT" );
KieContainer kContainer = kieServices.newKieContainer( releaseId );
KieScanner kScanner = kieServices.newKieScanner( kContainer );
// Start the KieScanner polling the Maven repository every 10 seconds
kScanner.start( 10000L );
The KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it. If the KieScanner finds an updated version of the Kie project used by that KieContainer in the Maven repository, it automatically downloads the new version and triggers an incremental build of the new project. From that moment, all the new KieBases and KieSessions created from that KieContainer will use the new project version.
5.4. Running in KIE
5.4.1. KieRuntime
KieRuntime provides methods that are applicable to both rules and processes, such as setting globals and registering channels. ("Exit point" is an obsolete synonym for "channel".)
5.4.2. Globals in KIE
global java.util.List list
Call ksession.setGlobal() with the global's name and an object, on any session, to associate the object with the global. Failure to declare the global type and identifier in DRL code will result in an exception being thrown from this call.
List list = new ArrayList();
ksession.setGlobal("list", list);
Make sure to set any global before it is used in the evaluation of a rule. Failure to do so results in a NullPointerException.
5.4.3. Event Packages
The KieRuntimeEventManager interface is implemented by the KieRuntime, which provides two interfaces, RuleRuntimeEventManager and ProcessEventManager. We will only cover the RuleRuntimeEventManager here.
The RuleRuntimeEventManager allows listeners to be added and removed, so that events for the working memory and the agenda can be listened to.
Example 5.14. Adding an AgendaEventListener
ksession.addEventListener( new DefaultAgendaEventListener() {
public void afterMatchFired(AfterMatchFiredEvent event) {
super.afterMatchFired( event );
System.out.println( event );
}
});
JBoss Rules also provides DebugRuleRuntimeEventListener and DebugAgendaEventListener, which implement each method with a debug print statement. To print all Working Memory events, you add a listener like this:
Example 5.15. Adding a DebugRuleRuntimeEventListener
ksession.addEventListener( new DebugRuleRuntimeEventListener() );
All events implement the KieRuntimeEvent interface, which can be used to retrieve the actual KnowledgeRuntime the event originated from.
- MatchCreatedEvent
- MatchCancelledEvent
- BeforeMatchFiredEvent
- AfterMatchFiredEvent
- AgendaGroupPushedEvent
- AgendaGroupPoppedEvent
- ObjectInsertEvent
- ObjectDeletedEvent
- ObjectUpdatedEvent
- ProcessCompletedEvent
- ProcessNodeLeftEvent
- ProcessNodeTriggeredEvent
- ProcessStartedEvent
5.4.4. KieRuntimeLogger
Example 5.16. FileLogger
KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger( session, "audit" );
...
// Be sure to close the logger otherwise it will not write.
logger.close();
5.4.5. CommandExecutor Interface
KIE provides the CommandExecutor interface, which both the stateful and stateless session interfaces extend. Executing a command returns an ExecutionResults:
The CommandExecutor allows commands to be executed on those sessions, the only difference being that the StatelessKieSession executes fireAllRules() at the end before disposing the session. The commands can be created using the CommandFactory. The Javadocs provide the full list of the commands allowed by the CommandExecutor.
The set global and get global commands accept an optional boolean indicating whether the result should be returned as part of the ExecutionResults. If true, it uses the same name as the global name. A String can be used instead of the boolean if an alternative name is desired.
Example 5.17. Set Global Command
StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults =
ksession.execute( CommandFactory.newSetGlobal( "stilton", new Cheese( "stilton" ), true ) );
Cheese stilton = bresults.getValue( "stilton" );
Example 5.18. Get Global Command
StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults =
ksession.execute( CommandFactory.newGetGlobal( "stilton" ) );
Cheese stilton = bresults.getValue( "stilton" );
BatchExecution represents a composite command, created from a list of commands. It will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(...) call, which is quite powerful.
The StatelessKieSession executes fireAllRules() automatically at the end. However, the keen-eyed reader has probably already noticed the FireAllRules command and wondered how that works with a StatelessKieSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.
The results of the executed commands are collected and returned in an ExecutionResults instance.
Example 5.19. BatchExecution Command
StatelessKieSession ksession = kbase.newStatelessKieSession();
List cmds = new ArrayList();
cmds.add( CommandFactory.newInsertObject( new Cheese( "stilton", 1 ), "stilton" ) );
cmds.add( CommandFactory.newStartProcess( "process cheeses" ) );
cmds.add( CommandFactory.newQuery( "cheeses" ) );
ExecutionResults bresults = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
Cheese stilton = ( Cheese ) bresults.getValue( "stilton" );
QueryResults qresults = ( QueryResults ) bresults.getValue( "cheeses" );
In the example above, two of the commands populate the ExecutionResults. The query command defaults to using the same identifier as the query name, but it can also be mapped to a different identifier.
5.5. KIE Configuration
5.5.1. Build Result Severity
Example 5.20. Setting the severity using properties
// sets the severity of rule updates
drools.kbuilder.severity.duplicateRule = <INFO|WARNING|ERROR>
// sets the severity of function updates
drools.kbuilder.severity.duplicateFunction = <INFO|WARNING|ERROR>
5.5.2. StatelessKieSession
The StatelessKieSession wraps the KieSession, instead of extending it. Its main focus is on decision service type scenarios, and it avoids the need to call dispose(). Stateless sessions do not support iterative insertions or calling fireAllRules() from Java code; instead, execute() is a single-shot method that internally instantiates a KieSession, adds all the user data, executes the user commands, calls fireAllRules(), and then calls dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that's required. The CommandExecutor and BatchExecution are discussed in detail in their own section.
Example 5.21. Simple StatelessKieSession execution with a Collection
StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.execute( collection );
Example 5.22. Simple StatelessKieSession execution with InsertElements Command
ksession.execute( CommandFactory.newInsertElements( collection ) );
If you wanted to insert the collection itself, rather than its individual elements, then CommandFactory.newInsert(collection) would do the job.
The CommandFactory creates the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. The BatchExecutionHelper provides details on the XML format as well as how to use Drools Pipeline to automate the marshalling of BatchExecution and ExecutionResults.
StatelessKieSession supports globals, scoped in a number of ways. We cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways.
- The StatelessKieSession method getGlobals() returns a Globals instance which provides access to the session's globals. These are shared for all execution calls. Exercise caution regarding mutable globals because execution calls can be executing simultaneously in different threads.
Example 5.23. Session scoped global
StatelessKieSession ksession = kbase.newStatelessKieSession();
// Set a global hbnSession, that can be used for DB interactions in the rules.
ksession.setGlobal( "hbnSession", hibernateSession );
// Execute while being able to resolve the "hbnSession" identifier.
ksession.execute( collection );
- Using a delegate is another way of global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection mapping identifiers to values. Identifiers in this internal collection have priority over any supplied delegate. Only if an identifier cannot be found in this internal collection is the delegate global (if any) used.
- The third way of resolving globals is to have execution scoped globals. Here, a Command to set a global is passed to the CommandExecutor.
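The lookup priority described above (session-local values first, then the delegate) can be sketched in plain Java. This is an illustrative sketch only, not the Drools API; the class and method names (GlobalResolutionSketch, SessionGlobals) are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch (NOT the Drools API) of the lookup order described
// above: values set directly on the session win; a delegate Globals
// instance is consulted only when the identifier is not found locally.
public class GlobalResolutionSketch {
    interface Globals {
        Object get(String identifier);
    }

    static class SessionGlobals implements Globals {
        private final Map<String, Object> locals = new HashMap<>();
        private Globals delegate; // optional fallback

        void setGlobal(String identifier, Object value) {
            locals.put(identifier, value); // takes priority over the delegate
        }

        void setDelegate(Globals delegate) {
            this.delegate = delegate;
        }

        @Override
        public Object get(String identifier) {
            Object value = locals.get(identifier);
            if (value == null && delegate != null) {
                value = delegate.get(identifier); // fallback resolution
            }
            return value;
        }
    }

    public static void main(String[] args) {
        SessionGlobals globals = new SessionGlobals();
        globals.setDelegate(id -> "from-delegate:" + id);
        globals.setGlobal("hbnSession", "local-session");

        System.out.println(globals.get("hbnSession"));   // local value wins
        System.out.println(globals.get("emailService")); // falls through to delegate
    }
}
```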
The CommandExecutor interface also offers the ability to export data via "out" parameters. Inserted facts, globals, and query results can all be returned.
Example 5.24. Out identifiers
// Set up a list of commands
List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) );
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );

// Execute the list
ExecutionResults results =
    ksession.execute( CommandFactory.newBatchExecution( cmds ) );

// Retrieve the ArrayList
results.getValue( "list1" );
// Retrieve the inserted Person fact
results.getValue( "person" );
// Retrieve the query as a QueryResults instance.
results.getValue( "Get People" );
5.5.3. Marshalling
KieMarshallers are used to marshal and unmarshal KieSessions.
KieMarshallers can be retrieved from the KieServices. A simple example is shown below:
Example 5.25. Simple Marshaller Example
// ksession is the KieSession
// kbase is the KieBase
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Marshaller marshaller =
    KieServices.Factory.get().getMarshallers().newMarshaller( kbase );
marshaller.marshall( baos, ksession );
baos.close();
The marshalling of user objects is handled by implementations of the ObjectMarshallingStrategy interface. Two implementations are provided, but users can implement their own. The two supplied strategies are IdentityMarshallingStrategy and SerializeMarshallingStrategy. SerializeMarshallingStrategy is the default, as shown in the example above; it simply calls the Serializable or Externalizable methods on a user instance. IdentityMarshallingStrategy creates an integer id for each user object and stores the objects in a Map, while only the id is written to the stream. When unmarshalling, it accesses the IdentityMarshallingStrategy map to retrieve the instance. This means that IdentityMarshallingStrategy is stateful for the life of the Marshaller instance: it creates ids and keeps references to all objects that it attempts to marshal. Below is the code to use an Identity Marshalling Strategy.
Example 5.26. IdentityMarshallingStrategy
ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategy oms = kMarshallers.newIdentityMarshallingStrategy();
Marshaller marshaller =
kMarshallers.newMarshaller( kbase, new ObjectMarshallingStrategy[]{ oms } );
marshaller.marshall( baos, ksession );
baos.close();
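The statefulness described above, where an integer id is written to the stream instead of the serialized object, can be sketched in plain Java. This is an illustrative sketch, not the Drools implementation; the class name and methods (IdentityStrategySketch, write, read) are hypothetical.

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// Plain-Java sketch (not the Drools implementation) of the idea behind
// IdentityMarshallingStrategy: instead of serializing a user object, an
// integer id is written to the stream, and the object itself is kept in
// an in-memory map for the lifetime of the marshaller.
public class IdentityStrategySketch {
    private final Map<Object, Integer> objectToId = new IdentityHashMap<>();
    private final Map<Integer, Object> idToObject = new HashMap<>();
    private int nextId = 0;

    // "Marshal": record the object and return the id written to the stream.
    public int write(Object userObject) {
        Integer id = objectToId.get(userObject);
        if (id == null) {
            id = nextId++;
            objectToId.put(userObject, id);
            idToObject.put(id, userObject); // the strategy stays stateful
        }
        return id;
    }

    // "Unmarshal": resolve the id back to the original instance.
    public Object read(int id) {
        return idToObject.get(id);
    }

    public static void main(String[] args) {
        IdentityStrategySketch strategy = new IdentityStrategySketch();
        Object fact = new Object();
        int id = strategy.write(fact);
        System.out.println(strategy.read(id) == fact); // same instance back
    }
}
```

Because the map holds strong references, the sketch also illustrates why an identity strategy keeps every marshalled object reachable for the life of the Marshaller instance.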
In cases where more than one strategy is needed, the ObjectMarshallingStrategyAcceptor interface can be used. The Marshaller has a chain of strategies, and while reading or writing a user object it iterates the strategies, asking if they accept responsibility for marshalling the user object. One of the provided implementations is ClassFilterAcceptor, which allows strings and wildcards to be used to match class names. The default is "*.*", so in the above example the Identity Marshalling Strategy is used with a default "*.*" acceptor.
Example 5.27. IdentityMarshallingStrategy with Acceptor
ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategyAcceptor identityAcceptor =
kMarshallers.newClassFilterAcceptor( new String[] { "org.domain.pkg1.*" } );
ObjectMarshallingStrategy identityStrategy =
kMarshallers.newIdentityMarshallingStrategy( identityAcceptor );
ObjectMarshallingStrategy sms = kMarshallers.newSerializeMarshallingStrategy();
Marshaller marshaller =
kMarshallers.newMarshaller( kbase,
new ObjectMarshallingStrategy[]{ identityStrategy, sms } );
marshaller.marshall( baos, ksession );
baos.close();
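The acceptor chain described above, where the first strategy whose class filter accepts an object's class name handles it, can be sketched in plain Java. This is an illustrative sketch, not the Drools API; the Strategy class and the wildcard-to-regex translation are assumptions for demonstration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Plain-Java sketch (not the Drools API) of the acceptor chain described
// above: each strategy has a class-name filter built from wildcards such
// as "org.domain.pkg1.*", and the first strategy whose filter accepts the
// object's class is asked to marshal it.
public class AcceptorChainSketch {
    static class Strategy {
        final String name;
        final Pattern filter;

        Strategy(String name, String wildcard) {
            this.name = name;
            // Translate the wildcard into a regular expression:
            // "." becomes literal, "*" matches any run of characters.
            this.filter = Pattern.compile(
                wildcard.replace(".", "\\.").replace("*", ".*"));
        }

        boolean accepts(String className) {
            return filter.matcher(className).matches();
        }
    }

    public static void main(String[] args) {
        List<Strategy> chain = new ArrayList<>();
        chain.add(new Strategy("identity", "org.domain.pkg1.*"));
        chain.add(new Strategy("serialize", "*.*")); // catch-all default

        for (String cls : new String[] {
                "org.domain.pkg1.Person", "org.other.Order" }) {
            for (Strategy s : chain) {
                if (s.accepts(cls)) {
                    System.out.println(cls + " -> " + s.name);
                    break; // first accepting strategy wins
                }
            }
        }
    }
}
```

This mirrors Example 5.27: classes under org.domain.pkg1 fall to the identity strategy, everything else to the serialize strategy via the "*.*" catch-all.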
Example 5.28. Configuring a trackable timer job factory manager
KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption(TimerJobFactoryOption.get("trackable"));
KieSession ksession = kbase.newKieSession(ksconf, null);
5.5.4. KIE Persistence
Example 5.29. Simple example using transactions
KieServices kieServices = KieServices.Factory.get();
Environment env = kieServices.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY,
Persistence.createEntityManagerFactory( "emf-name" ) );
env.set( EnvironmentName.TRANSACTION_MANAGER,
TransactionManagerServices.getTransactionManager() );
// KieSessionConfiguration may be null, and a default will be used
KieSession ksession =
kieServices.getStoreServices().newKieSession( kbase, null, env );
int sessionId = ksession.getId();
UserTransaction ut =
(UserTransaction) new InitialContext().lookup( "java:comp/UserTransaction" );
ut.begin();
ksession.insert( data1 );
ksession.insert( data2 );
ksession.startProcess( "process1" );
ut.commit();
To use persistence, the Environment must be set with both the EntityManagerFactory and the TransactionManager. If a rollback occurs, the ksession state is also rolled back, so it is possible to continue to use it after a rollback. To load a previously persisted KieSession you need the id, as shown below:
Example 5.30. Loading a KieSession
KieSession ksession =
kieServices.getStoreServices().loadKieSession( sessionId, kbase, null, env );
Example 5.31. Configuring JPA
<persistence-unit name="org.drools.persistence.jpa" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/BitronixJTADataSource</jta-data-source>
<class>org.drools.persistence.info.SessionInfo</class>
<class>org.drools.persistence.info.WorkItemInfo</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.hbm2ddl.auto" value="update" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.transaction.manager_lookup_class"
value="org.hibernate.transaction.BTMTransactionManagerLookup" />
</properties>
</persistence-unit>
Example 5.32. Configuring JTA DataSource
PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName( "jdbc/BitronixJTADataSource" );
ds.setClassName( "org.h2.jdbcx.JdbcDataSource" );
ds.setMaxPoolSize( 3 );
ds.setAllowLocalTransactions( true );
ds.getDriverProperties().put( "user", "sa" );
ds.getDriverProperties().put( "password", "sasa" );
ds.getDriverProperties().put( "URL", "jdbc:h2:mem:mydb" );
ds.init();
Example 5.33. JNDI properties
java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory
Chapter 6. Rule Systems
6.1. Forward-Chaining

Figure 6.1. Forward Chaining Chart
6.2. Backward-Chaining
6.2.1. Backward-Chaining
Prolog is an example of a backward-chaining engine.

Figure 6.2. Backward Chaining Chart
Important
6.2.2. Backward-Chaining Systems
6.2.3. Cloning Transitive Closures

Figure 6.3. Reasoning Graph
Procedure 6.1. Configure Transitive Closures
- First, create some Java code to develop reasoning for transitive items; it inserts each of the locations as facts.
- Next, create the Location class; it holds the item and where it is located.
- Type the rules for the House example as depicted below:
ksession.insert( new Location("office", "house") );
ksession.insert( new Location("kitchen", "house") );
ksession.insert( new Location("knife", "kitchen") );
ksession.insert( new Location("cheese", "kitchen") );
ksession.insert( new Location("desk", "office") );
ksession.insert( new Location("chair", "office") );
ksession.insert( new Location("computer", "desk") );
ksession.insert( new Location("drawer", "desk") );
- A transitive design is created in which each item is in its designated location, such as a "desk" located in an "office."

Figure 6.4. Transitive Reasoning Graph of a House.
Note
There is currently no "key" item in a "drawer" location. This will become evident in a later topic.
6.2.4. Defining a Query
Procedure 6.2. Define a Query
- Create a query to look at the data inserted into the rules engine:
query isContainedIn( String x, String y )
    Location( x, y; )
    or
    ( Location( z, y; ) and isContainedIn( x, z; ) )
end
Notice how the query is recursive and calls "isContainedIn" from within itself.
- Create a rule to print out every string inserted into the system to see how things are implemented. The rule should resemble the following format:
rule "go"
    salience 10
when
    $s : String( )
then
    System.out.println( $s );
end
- Using Step 2 as a model, create a rule that calls upon the Step 1 query "isContainedIn."
rule "go1"
when
    String( this == "go1" )
    isContainedIn("office", "house"; )
then
    System.out.println( "office is in the house" );
end
The "go1" rule will fire when the first string is inserted into the engine. That is, it asks if the item "office" is in the location "house." Therefore, the Step 1 query is invoked by the previous rule when the "go1" String is inserted.
- Create the "go1" String, insert it into the engine, and call fireAllRules.
ksession.insert( "go1" );
ksession.fireAllRules();
---
go1
office is in the house
The --- line indicates the separation of the output of the engine from the firing of the "go" rule and the "go1" rule.
- "go1" is inserted
- Salience ensures it goes first
- The rule matches the query
6.2.5. Transitive Closure Example
Procedure 6.3. Create a Transitive Closure
- Create a Transitive Closure by implementing the following rule:
rule "go2"
when
    String( this == "go2" )
    isContainedIn("drawer", "house"; )
then
    System.out.println( "Drawer in the House" );
end
- Recall from the Cloning Transitive Closures topic that there was no instance of "drawer" in "house"; "drawer" was located in "desk."

Figure 6.5. Transitive Reasoning Graph of a Drawer.
- Use the previous query for this recursive information.
query isContainedIn( String x, String y )
    Location( x, y; )
    or
    ( Location( z, y; ) and isContainedIn( x, z; ) )
end
- Create the "go2" String, insert it into the engine, and call fireAllRules.
ksession.insert( "go2" );
ksession.fireAllRules();
---
go2
Drawer in the House
When the rule is fired, it correctly tells you "go2" has been inserted and that the "drawer" is in the "house."
- Check how the engine determined this outcome.
- The query has to recurse down several levels to determine this.
- Instead of using Location( x, y; ), the query uses the value of ( z, y; ) since "drawer" is not in "house."
- The z is currently unbound, which means it has no value and will return everything that is in the argument. y is currently bound to "house," so z will return "office" and "kitchen."
- Information is gathered from "office," and the query checks recursively whether the "drawer" is in the "office." The following query line is being called for these parameters:
isContainedIn( x, z; )
There is no instance of "drawer" in "office"; therefore, it does not match. With z being unbound, it will return data that is within the "office," and it will gather that z == desk.
isContainedIn( x == drawer, z == desk )
isContainedIn recurses three times. On the final recursion, an instance of "drawer" in the "desk" triggers a match.
Location( x == drawer, y == desk )
This matches on the first location and recurses back up, so we know that "drawer" is in the "desk," the "desk" is in the "office," and the "office" is in the "house"; therefore, the "drawer" is in the "house," and the query returns true.
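The relation the isContainedIn query computes can be sketched in plain Java. This is an illustrative sketch, not the Drools engine: it walks the containment chain upward from the item (each item here has exactly one direct location), whereas the engine's query recurses downward through z, but both compute the same transitive relation over the Location facts inserted earlier.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the relation the isContainedIn query computes.
// Each fact maps an item to its direct location, matching the Location
// facts inserted in the House example.
public class TransitiveClosureSketch {
    static final Map<String, String> LOCATION = new HashMap<>();
    static {
        LOCATION.put("office", "house");
        LOCATION.put("kitchen", "house");
        LOCATION.put("knife", "kitchen");
        LOCATION.put("cheese", "kitchen");
        LOCATION.put("desk", "office");
        LOCATION.put("chair", "office");
        LOCATION.put("computer", "desk");
        LOCATION.put("drawer", "desk");
    }

    static boolean isContainedIn(String x, String y) {
        String direct = LOCATION.get(x);
        if (direct == null) {
            return false;                // x is not located anywhere
        }
        if (direct.equals(y)) {
            return true;                 // direct Location( x, y; ) match
        }
        return isContainedIn(direct, y); // recurse one level up the chain
    }

    public static void main(String[] args) {
        System.out.println(isContainedIn("drawer", "house")); // true
        System.out.println(isContainedIn("key", "house"));    // false: no "key" fact yet
    }
}
```

As in the walkthrough above, "drawer" is found in "house" only through the chain drawer, desk, office, house; and "key" is not found at all until its Location fact is inserted in the Reactive Transitive Queries topic.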
6.2.6. Reactive Transitive Queries
Procedure 6.4. Create a Reactive Transitive Query
- Create a Reactive Transitive Query by implementing the following rule:
rule "go3"
when
    String( this == "go3" )
    isContainedIn("key", "office"; )
then
    System.out.println( "Key in the Office" );
end
Reactive Transitive Queries can ask a question even if the answer cannot be satisfied yet. If it is satisfied later, the query will return an answer.
Note
Recall from the Cloning Transitive Closures example that there was no "key" item in the system.
- Use the same query for this reactive information.
query isContainedIn( String x, String y )
    Location( x, y; )
    or
    ( Location( z, y; ) and isContainedIn( x, z; ) )
end
- Create the "go3" String, insert it into the engine, and call fireAllRules.
ksession.insert( "go3" );
ksession.fireAllRules();
---
go3
- "go3" is inserted
- fireAllRules(); is called
The first rule, which matches any String, prints "go3," but nothing else is returned because there is no answer yet; however, while "go3" remains in the system, the query will continuously wait until it is satisfied.
- Insert a new location of "key" in the "drawer":
ksession.insert( new Location("key", "drawer") );
ksession.fireAllRules();
---
Key in the Office
This new location satisfies the transitive closure because the query is monitoring the entire graph. In addition, the process now goes through four recursive levels to match and fire the rule.
6.2.7. Queries with Unbound Arguments
Procedure 6.5. Create an Unbound Argument's Query
- Create a Query with Unbound Arguments by implementing the following rule:
rule "go4"
when
    String( this == "go4" )
    isContainedIn(thing, "office"; )
then
    System.out.println( "thing " + thing + " is in the Office" );
end
This rule asks for everything in the "office," and it will report everything in all the rows below. The unbound argument (out variable thing) in this example will return every possible value; accordingly, it is very similar to the z value used in the Reactive Transitive Query example.
query isContainedIn( String x, String y )
    Location( x, y; )
    or
    ( Location( z, y; ) and isContainedIn( x, z; ) )
end
- Create the "go4" String, insert it into the engine, and call fireAllRules.
ksession.insert( "go4" );
ksession.fireAllRules();
---
go4
thing Key is in the Office
thing Computer is in the Office
thing Drawer is in the Office
thing Desk is in the Office
thing Chair is in the Office
When "go4" is inserted, it returns all the previous information that is transitively below "Office."
6.2.8. Multiple Unbound Arguments
Procedure 6.6. Creating Multiple Unbound Arguments
- Create a query with Multiple Unbound Arguments by implementing the following rule:
rule "go5"
when
    String( this == "go5" )
    isContainedIn(thing, location; )
then
    System.out.println( "thing " + thing + " is in " + location );
end
Both thing and location are unbound out variables, and without bound arguments, everything is called upon.
- Use the query for multiple unbound arguments.
query isContainedIn( String x, String y )
    Location( x, y; )
    or
    ( Location( z, y; ) and isContainedIn( x, z; ) )
end
- Create the "go5" String, insert it into the engine, and call fireAllRules.
ksession.insert( "go5" );
ksession.fireAllRules();
---
go5
thing Knife is in House
thing Cheese is in House
thing Key is in House
thing Computer is in House
thing Drawer is in House
thing Desk is in House
thing Chair is in House
thing Key is in Office
thing Computer is in Office
thing Drawer is in Office
thing Key is in Desk
thing Office is in House
thing Computer is in Desk
thing Knife is in Kitchen
thing Cheese is in Kitchen
thing Kitchen is in House
thing Key is in Drawer
thing Drawer is in Desk
thing Desk is in Office
thing Chair is in Office
When "go5" is called, it returns everything within everything.
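The "everything within everything" result can be sketched in plain Java by walking each item's containment chain upward and reporting every container along the way. This is an illustrative sketch, not the Drools engine; the enumeration order differs from the engine output above, but the same 20 pairs are produced (including the "key" fact inserted in the Reactive Transitive Queries step).

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of what the fully unbound call
// isContainedIn( thing, location; ) enumerates: every transitive
// (thing, location) pair derivable from the Location facts.
public class AllPairsSketch {
    static final Map<String, String> LOCATION = new LinkedHashMap<>();
    static {
        LOCATION.put("office", "house");
        LOCATION.put("kitchen", "house");
        LOCATION.put("knife", "kitchen");
        LOCATION.put("cheese", "kitchen");
        LOCATION.put("desk", "office");
        LOCATION.put("chair", "office");
        LOCATION.put("computer", "desk");
        LOCATION.put("drawer", "desk");
        LOCATION.put("key", "drawer"); // fact added in the Reactive Transitive Queries step
    }

    // For every item, walk up the containment chain and record each container.
    static List<String> allPairs() {
        List<String> pairs = new ArrayList<>();
        for (String thing : LOCATION.keySet()) {
            String location = LOCATION.get(thing);
            while (location != null) {
                pairs.add("thing " + thing + " is in " + location);
                location = LOCATION.get(location); // move one level up
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        allPairs().forEach(System.out::println);
    }
}
```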
Chapter 7. Rule Languages
7.1. Rule Overview
7.1.1. Overview
7.1.2. A rule file
7.1.3. The structure of a rule file
Example 7.1. Rules file
package package-name
imports
globals
functions
queries
rules
7.1.4. What is a rule
rule "name"
    attributes
when
    LHS
then
    RHS
end
Mostly, punctuation is not needed; even the double quotes for "name" are optional, as are newlines. Attributes are simple (always optional) hints about how the rule should behave. The LHS is the conditional part of the rule, which follows a certain syntax that is covered below. The RHS is basically a block that allows dialect-specific semantic code to be executed.
7.2. Rule Language Keywords
7.2.1. Hard Keywords
The hard keywords are true, false, and null.
7.2.2. Soft Keywords
7.2.3. List of Soft Keywords

Figure 7.1. Rule Attributes
Table 7.1. Soft Keywords
| Name | Default Value | Type | Description |
|---|---|---|---|
no-loop | false | Boolean | When a rule's consequence modifies a fact, it may cause the rule to activate again, causing an infinite loop. Setting 'no-loop' to "true" will skip the creation of another activation for the rule with the current set of facts. |
lock-on-active | false | Boolean | Whenever a 'ruleflow-group' becomes active or an 'agenda-group' receives the focus, any rule within that group that has 'lock-on-active' set to "true" will not be activated any more. Regardless of the origin of the update, the activation of a matching rule is discarded. This is a stronger version of 'no-loop' because the change is not only caused by the rule itself. It is ideal for calculation rules where you have a number of rules that modify a fact, and you do not want any rule re-matching and firing again. Only when the 'ruleflow-group' is no longer active or the 'agenda-group' loses the focus, those rules with 'lock-on-active' set to "true" become eligible again for their activations to be placed onto the agenda. |
salience | 0 | integer | Each rule has an integer salience attribute which defaults to zero and can be negative or positive. Salience is a form of priority where rules with higher salience values are given higher priority when ordered in the activation queue. BRMS also supports dynamic salience where you can use an expression involving bound variables like the following:
rule "Fire in rank order 1,2,.."
salience( -$rank )
when
Element( $rank : rank,... )
then
...
end
|
ruleflow-group | N/A | String | Ruleflow is a BRMS feature that lets you exercise control over the firing of rules. Rules that are assembled by the same 'ruleflow-group' identifier fire only when their group is active. This attribute has been merged with 'agenda-group' and the behaviours are basically the same. |
agenda-group | MAIN | String | Agenda groups allow the user to partition the agenda, which provides more execution control. Only rules in the agenda group that have acquired the focus are allowed to fire. This attribute has been merged with 'ruleflow-group' and the behaviours are basically the same. |
auto-focus | false | Boolean | When a rule is activated where the 'auto-focus' value is "true" and the rule's agenda group does not have focus yet, it is automatically given focus, allowing the rule to potentially fire. |
activation-group | N/A | String | Rules that belong to the same 'activation-group', identified by this attribute's String value, will fire exclusively. More precisely, the first rule in an 'activation-group' to fire will cancel all pending activations of all rules in the group, i.e., stop them from firing. |
dialect | specified by package | String | Java and MVEL are the possible values of the 'dialect' attribute. This attribute specifies the language to be used for any code expressions in the LHS or the RHS code block. While the 'dialect' can be specified at the package level, this attribute allows the package definition to be overridden for a rule. |
date-effective | N/A | String, date and time definition | A rule can only activate if the current date and time is after the 'date-effective' attribute. An example 'date-effective' attribute is displayed below:
rule "Start Exercising"
    date-effective "4-Sep-2014"
when
    $m : org.drools.compiler.Message()
then
    $m.setFired(true);
end |
date-expires | N/A | String, date and time definition | A rule cannot activate if the current date and time is after the 'date-expires' attribute. An example 'date-expires' attribute is displayed below:
rule "Run 4km"
    date-effective "4-Sep-2014"
    date-expires "9-Sep-2014"
when
    $m : org.drools.compiler.Message()
then
    $m.setFired(true);
end |
duration | no default | long | The 'duration' attribute dictates that the rule will fire after the specified duration, provided the rule is still "true". |
Note
7.3. Rule Language Comments
7.3.1. Comments
7.3.2. Single Line Comment Example
rule "Testing Comments"
when
// this is a single line comment
eval( true ) // this is a comment in the same line of a pattern
then
// this is a comment inside a semantic code block
end
7.3.3. Multi-Line Comment Example
rule "Test Multi-line Comments"
when
/* this is a multi-line comment
in the left hand side of a rule */
eval( true )
then
/* and this is a multi-line comment
in the right hand side of a rule */
end
7.4. Rule Language Messages
7.4.1. Error Messages
7.4.2. Error Message Format

Figure 7.2. Error Message Format Example
7.4.3. Error Messages Description
Table 7.2. Error Messages
| Error Message | Description | Example | |
|---|---|---|---|
|
[ERR 101] Line 4:4 no viable alternative at input 'exits' in rule one
|
Indicates when the parser came to a decision point but couldn't identify an alternative.
|
1: rule one
2: when
3:   exists Foo()
4:   exits Bar()
5: then
6: end | |
|
[ERR 101] Line 3:2 no viable alternative at input 'WHEN'
|
This message means the parser has encountered the token
WHEN (a hard keyword) which is in the wrong place, since the rule name is missing.
|
1: package org.drools;
2: rule
3: when
4: Object()
5: then
6: System.out.println("A RHS");
7: end
| |
|
[ERR 101] Line 0:-1 no viable alternative at input '<eof>' in rule simple_rule in pattern [name]
|
Indicates an unclosed quote, apostrophe, or parenthesis.
|
1: rule simple_rule
2: when
3:   Student( name == "Andy )
4: then
5: end | |
|
[ERR 102] Line 0:-1 mismatched input '<eof>' expecting ')' in rule simple_rule in pattern Bar
|
Indicates that the parser was looking for a particular symbol that it did not find at the current input position.
|
1: rule simple_rule
2: when
3:   foo3 : Bar( | |
|
[ERR 102] Line 0:-1 mismatched input '<eof>' expecting ')' in rule simple_rule in pattern [name]
|
This error is the result of an incomplete rule statement. Usually, when you get a 0:-1 position, it means that the parser reached the end of the source. To fix this problem, it is necessary to complete the rule statement.
|
1: package org.drools;
2:
3: rule "Avoid NPE on wrong syntax"
4: when
5: not( Cheese( ( type == "stilton", price == 10 ) || ( type == "brie", price == 15 ) ) from $cheeseList )
6: then
7: System.out.println("OK");
8: end
| |
|
[ERR 103] Line 7:0 rule 'rule_key' failed predicate: {(validateIdentifierKey(DroolsSoftKeywords.RULE))}? in rule
|
A validating semantic predicate evaluated to false. Usually these semantic predicates are used to identify soft keywords.
|
1: package nesting;
2: dialect "mvel"
3:
4: import org.drools.Person
5: import org.drools.Address
6:
7: fdsfdsfds
8:
9: rule "test something"
10: when
11:   p: Person( name=="Michael" )
12: then
13:   p.name = "other";
14:   System.out.println(p.name);
15: end | |
|
[ERR 104] Line 3:4 trailing semi-colon not allowed in rule simple_rule
|
This error is associated with the
eval clause, whose expression may not be terminated with a semicolon. This problem is simple to fix: just remove the semicolon.
|
1: rule simple_rule
2: when
3:   eval(abc();)
4: then
5: end | |
|
[ERR 105] Line 2:2 required (...)+ loop did not match anything at input 'aa' in template test_error
|
The recognizer came to a subrule in the grammar that must match an alternative at least once, but the subrule did not match anything. To fix this problem it is necessary to remove the numeric value as it is neither a valid data type which might begin a new template slot nor a possible start for any other rule file construct.
|
1: template test_error
2:   aa s 11;
3: end |
7.4.4. Package
7.4.5. Import Statements
Import statements work like import statements in Java. Note that classes from the java.lang package are imported automatically.
7.4.6. Using Globals
- Declare the global variable in the rules file and use it in rules. Example:
global java.util.List myGlobalList;
rule "Using a global"
when
    eval( true )
then
    myGlobalList.add( "Hello World" );
end
- Set the global value on the working memory. It is best practice to set all global values before asserting any fact to the working memory. Example:
List list = new ArrayList();
WorkingMemory wm = rulebase.newStatefulSession();
wm.setGlobal( "myGlobalList", list );
7.4.7. The From Element
7.4.8. Using Globals with an e-Mail Service
Procedure 7.1. Task
- Open the integration code that is calling the rule engine.
- Obtain your emailService object and then set it in the working memory.
- In the DRL, declare that you have a global of type emailService and give it the name "email".
- In your rule consequences, you can use things like email.sendSMS(number, message).
Warning
Globals are not designed to share data between rules and they should never be used for that purpose. Rules always reason and react to the working memory state, so if you want to pass data from rule to rule, assert the data as facts into the working memory.
Important
Do not set or change a global value from inside the rules. We recommend that you always set the value from your application using the working memory interface.
7.5. Domain Specific Languages (DSLs)
7.5.1. Domain Specific Languages
7.5.2. Using DSLs
7.5.3. DSL Example
Table 7.3. DSL Example
| Example | Description |
|---|---|
[when]Something is {colour}=Something(colour=="{colour}")
| [when] indicates the scope of the expression (that is, whether it is valid for the LHS or the RHS of a rule).
The part after the bracketed keyword is the expression that you use in the rule.
The part to the right of the equal sign ("=") is the mapping of the expression into the rule language. The form of this string depends on its destination, RHS or LHS. If it is for the LHS, then it ought to be a term according to the regular LHS syntax; if it is for the RHS then it might be a Java statement.
|
7.5.4. How the DSL Parser Works
- The DSL extracts the string values appearing where the expression contains variable names in brackets.
- The values obtained from these captures are interpolated wherever that name occurs on the right hand side of the mapping.
- The interpolated string replaces whatever was matched by the entire expression in the line of the DSL rule file.
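The three steps above can be sketched in plain Java using the mapping "[when]Something is {colour}=Something(colour=="{colour}")" from the earlier example. This is an illustrative sketch only; the real DSL compiler handles many entries, escapes, and transformation functions, and the class and method names here are hypothetical.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Plain-Java sketch of the DSL parser steps: capture the variable,
// interpolate it into the mapping's right-hand side, and replace the
// whole matched line with the interpolated string.
public class DslExpansionSketch {
    public static String expand(String dslLine) {
        // Step 1: "{colour}" in the expression becomes a capture group.
        Pattern expression = Pattern.compile("Something is (.*?)$");
        Matcher m = expression.matcher(dslLine);
        if (!m.matches()) {
            return dslLine; // no DSL entry matched; leave the line alone
        }
        // Step 2: interpolate the captured value wherever the variable
        // name occurs on the right-hand side of the mapping.
        String rhs = "Something(colour==\"{colour}\")";
        String expanded = rhs.replace("{colour}", m.group(1));
        // Step 3: the interpolated string replaces the matched line.
        return expanded;
    }

    public static void main(String[] args) {
        System.out.println(expand("Something is green"));
        // -> Something(colour=="green")
    }
}
```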
Note
7.5.5. The DSL Compiler
7.5.6. DSL Syntax Examples
Table 7.4. DSL Syntax Examples
| Name | Description | Example |
|---|---|---|
| Quotes | Use quotes for textual data that a rule editor may want to enter. You can also enclose the capture with words to ensure that the text is correctly matched. |
[when]something is "{color}"=Something(color=="{color}")
[when]another {state} thing=OtherThing(state=="{state}")
|
| Braces | In a DSL mapping, the braces "{" and "}" should only be used to enclose a variable definition or reference, resulting in a capture. If they should occur literally, either in the expression or within the replacement text on the right hand side, they must be escaped with a preceding backslash ("\"). |
[then]do something= if (foo) \{ doSomething(); \}
|
| Mapping with correct syntax example | n/a |
# This is a comment to be ignored.
[when]There is a person with name of "{name}"=Person(name=="{name}")
[when]Person is at least {age} years old and lives in "{location}"=
Person(age >= {age}, location=="{location}")
[then]Log "{message}"=System.out.println("{message}");
[when]And = and
|
| Expanded DSL example | n/a |
There is a person with name of "Kitty"
==> Person(name="Kitty")
Person is at least 42 years old and lives in "Atlanta"
==> Person(age >= 42, location="Atlanta")
Log "boo"
==> System.out.println("boo");
There is a person with name of "Bob" and Person is at least 30 years old and lives in "Utah"
==> Person(name="Bob") and Person(age >= 30, location="Utah")
|
Note
7.5.7. Chaining DSL Expressions
7.5.8. Adding Constraints to Facts
Table 7.5. Adding Constraints to Facts
| Name | Description | Example |
|---|---|---|
| Expressing LHS conditions |
The DSL facility allows you to add constraints to a pattern by a simple convention: if your DSL expression starts with a hyphen (minus character, "-"), it is assumed to be a field constraint and, consequently, it is added to the last pattern line preceding it.
In the example, the class
Cheese has these fields: type, price, age, and country. You can express some LHS conditions in normal DRL.
|
Cheese(age < 5, price == 20, type=="stilton", country=="ch") |
| DSL definitions |
The DSL definitions given in this example result in three DSL phrases which may be used to create any combination of constraint involving these fields.
|
[when]There is a Cheese with=Cheese()
[when]- age is less than {age}=age<{age}
[when]- type is '{type}'=type=='{type}'
[when]- country equal to '{country}'=country=='{country}'
|
| "-" |
The parser will pick up a line beginning with "-" and add it as a constraint to the preceding pattern, inserting a comma when it is required.
| There is a Cheese with
- age is less than 42
- type is 'stilton'
Cheese(age<42, type=='stilton') |
| Defining DSL phrases |
Defining DSL phrases for various operators and even a generic expression that handles any field constraint reduces the amount of DSL entries.
|
[when][]is less than or equal to=<=
[when][]is less than=<
[when][]is greater than or equal to=>=
[when][]is greater than=>
[when][]is equal to===
[when][]equals===
[when][]There is a Cheese with=Cheese()
[when][]- {field:\w*} {operator} {value:\d*}={field} {operator} {value} |
| DSL definition rule | n/a |
There is a Cheese with
- age is less than 42
- rating is greater than 50
- type equals 'stilton'
In this specific case, a phrase such as "is less than" is replaced by
<, and then the line matches the last DSL entry. This removes the hyphen, but the final result is still added as a constraint to the preceding pattern. After processing all of the lines, the resulting DRL text is:
Cheese(age<42, rating > 50, type=='stilton') |
Note
7.5.9. Tips for Developing DSLs
- Write representative samples of the rules your application requires and test them as you develop.
- Rules, both in DRL and in DSLR, refer to entities according to the data model representing the application data that should be subject to the reasoning process defined in rules.
- Writing rules is easier if most of the data model's types are facts.
- Mark variable parts as parameters. This provides reliable leads for useful DSL entries.
- You may postpone implementation decisions concerning conditions and actions during this first design phase by leaving certain conditional elements and actions in their DRL form, prefixing the line with a greater-than sign (">"). (This is also handy for inserting debugging statements.)
- New rules can be written by reusing the existing DSL definitions, or by adding a parameter to an existing condition or consequence entry.
- Keep the number of DSL entries small. Using parameters lets you apply the same DSL sentence for similar rule patterns or constraints.
7.5.10. DSL and DSLR Reference
- A line starting with "#" or "//" (with or without preceding white space) is treated as a comment. A comment line starting with "#/" is scanned for words requesting a debug option, see below.
- Any line starting with an opening bracket ("[") is assumed to be the first line of a DSL entry definition.
- Any other line is appended to the preceding DSL entry definition, with the line end replaced by a space.
7.5.11. The Make Up of a DSL Entry
- A scope definition, written as one of the keywords "when" or "condition", "then" or "consequence", "*" and "keyword", enclosed in brackets ("[" and "]"). This indicates whether the DSL entry is valid for the condition or the consequence of a rule, or both. A scope indication of "keyword" means that the entry has global significance, that is, it is recognized anywhere in a DSLR file.
- A type definition, written as a Java class name, enclosed in brackets. This part is optional unless the next part begins with an opening bracket. An empty pair of brackets is valid, too.
- A DSL expression consists of a (Java) regular expression, with any number of embedded variable definitions, terminated by an equal sign ("="). A variable definition is enclosed in braces ("{" and "}"). It consists of a variable name and two optional attachments, separated by colons (":"). If there is one attachment, it is a regular expression for matching text that is to be assigned to the variable. If there are two attachments, the first one is a hint for the GUI editor and the second one is the regular expression. Note that all characters that are "magic" in regular expressions must be escaped with a preceding backslash ("\") if they are to occur literally within the expression.
- The remaining part of the line after the delimiting equal sign is the replacement text for any DSLR text matching the regular expression. It may contain variable references, i.e., a variable name enclosed in braces. Optionally, the variable name may be followed by an exclamation mark ("!") and a transformation function; see below. Note that braces ("{" and "}") must be escaped with a preceding backslash ("\") if they are to occur literally within the replacement string.
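As a hypothetical illustration of a variable definition with both attachments (this entry is made up for this example, not taken from a shipped DSL), the first attachment supplies an editor hint and the second supplies the matching regular expression:

```
# "int" is only a hint for the GUI editor; \d+ actually constrains the match
[when][]is older than {age:int:\d+}=age > {age}
```

Only the second attachment affects expansion; the hint changes how the guided editor presents the field.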
7.5.12. Debug Options for DSL Expansion
Table 7.6. Debug Options for DSL Expansion
| Word | Description |
|---|---|
| result | Prints the resulting DRL text, with line numbers. |
| steps | Prints each expansion step of condition and consequence lines. |
| keyword | Dumps the internal representation of all DSL entries with scope "keyword". |
| when | Dumps the internal representation of all DSL entries with scope "when" or "*". |
| then | Dumps the internal representation of all DSL entries with scope "then" or "*". |
| usage | Displays a usage statistic of all DSL entries. |
7.5.13. DSL Definition Example
# Comment: DSL examples
#/ debug: display result and usage
# keyword definition: replaces "regula" by "rule"
[keyword][]regula=rule
# conditional element: "T" or "t", "a" or "an", convert matched word
[when][][Tt]here is an? {entity:\w+}=
${entity!lc}: {entity!ucfirst} ()
# consequence statement: convert matched word, literal braces
[then][]update {entity:\w+}=modify( ${entity!lc} )\{ \}
7.5.14. Transformation of a DSLR File
- The text is read into memory.
- Each of the "keyword" entries is applied to the entire text. The regular expression from the keyword definition is modified by replacing white space sequences with a pattern matching any number of white space characters, and by replacing variable definitions with a capture made from the regular expression provided with the definition, or with the default (".*?"). Then, the DSLR text is searched exhaustively for occurrences of strings matching the modified regular expression. Substrings of a matching string corresponding to variable captures are extracted and replace variable references in the corresponding replacement text, and this text replaces the matching string in the DSLR text.
- Sections of the DSLR text between "when" and "then", and between "then" and "end", respectively, are located and processed in a uniform manner, line by line, as described below.
For a line, each DSL entry pertaining to the line's section is taken in turn, in the order it appears in the DSL file. Its regular expression part is modified: white space is replaced by a pattern matching any number of white space characters, and variable definitions with a regular expression are replaced by a capture with this regular expression, its default being ".*?". If the resulting regular expression matches all or part of the line, the matched part is replaced by the suitably modified replacement text.
Modification of the replacement text is done by replacing variable references with the text corresponding to the regular expression capture. This text may be modified according to the string transformation function given in the variable reference; see below for details.
If there is a variable reference naming a variable that is not defined in the same entry, the expander substitutes a value bound to a variable of that name, provided it was defined in one of the preceding lines of the current rule.
- If a DSLR line in a condition is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a pattern CE, that is, a type name followed by a pair of parentheses. If this pair is empty, the expanded line (which should contain a valid constraint) is simply inserted; otherwise a comma (",") is inserted beforehand.
If a DSLR line in a consequence is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a "modify" statement, ending in a pair of braces ("{" and "}"). If this pair is empty, the expanded line (which should contain a valid method call) is simply inserted; otherwise a comma (",") is inserted beforehand.
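The per-line expansion described above can be sketched in plain Java. This is an illustrative re-implementation of the documented regex mechanics for one condition entry, not the actual Drools expander; the class and method names are made up for this example. It expands the entry `[when][]There is an? {entity:\w+}=${entity!lc}: {entity!ucfirst}()` by turning the expression part into a regular expression with a capture, then applying the "!lc" and "!ucfirst" transformation functions to the captured text:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DslExpansionSketch {

    // The expression part of the entry, with white space replaced by \s+
    // and the variable definition {entity:\w+} turned into a capture (\w+).
    private static final Pattern ENTRY =
            Pattern.compile("There\\s+is\\s+an?\\s+(\\w+)");

    public static String expand(String dslrLine) {
        Matcher m = ENTRY.matcher(dslrLine);
        if (!m.matches()) {
            return dslrLine; // the entry does not apply; leave the line unchanged
        }
        String entity = m.group(1);
        // Substitute the capture into the replacement text, applying the
        // transformation functions: !lc (lower case) and !ucfirst.
        String lc = entity.toLowerCase();
        String ucfirst = Character.toUpperCase(entity.charAt(0))
                + entity.substring(1).toLowerCase();
        return "$" + lc + ": " + ucfirst + "()";
    }

    public static void main(String[] args) {
        // "There is a Driver" expands to the DRL pattern "$driver: Driver()"
        System.out.println(expand("There is a Driver"));
    }
}
```

The real expander additionally handles multiple entries per section, cross-line variable bindings, and the hyphen continuation rules described in the next item.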
Note
7.5.15. String Transformation Functions
Table 7.7. String Transformation Functions
| Name | Description |
|---|---|
| uc | Converts all letters to upper case. |
| lc | Converts all letters to lower case. |
| ucfirst | Converts the first letter to upper case, and all other letters to lower case. |
| num | Extracts all digits and "-" from the string. If the last two digits in the original string are preceded by "." or ",", a decimal period is inserted in the corresponding position. |
| a?b/c | Compares the string with string a, and if they are equal, replaces it with b, otherwise with c. But c can be another triplet a, b, c, so that the entire structure is, in fact, a translation table. |
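The "num" function's behavior can be sketched as follows. This is an illustrative re-implementation of the table's description (keep digits and "-"; restore a decimal period when "." or "," preceded the last two digits of the original string), not the Drools source, and the class name is made up for this example:

```java
public class NumTransformSketch {

    public static String num(String s) {
        // Does "." or "," precede the last two digits of the original string?
        boolean decimal = s.matches(".*[.,]\\d\\d\\D*");
        // Keep only digits and "-".
        String digits = s.replaceAll("[^-0-9]", "");
        if (decimal && digits.length() >= 2) {
            // Re-insert the decimal period before the last two digits.
            return digits.substring(0, digits.length() - 2) + "."
                    + digits.substring(digits.length() - 2);
        }
        return digits;
    }

    public static void main(String[] args) {
        System.out.println(num("10,95 EUR")); // 10.95
        System.out.println(num("-42"));       // -42
    }
}
```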
7.5.16. Stringing DSL Transformation Functions
Table 7.8. Stringing DSL Transformation Functions
| Name | Description |
|---|---|
| .dsl | A file containing a DSL definition is customarily given the extension .dsl. It is passed to the Knowledge Builder with ResourceType.DSL. For a file using DSL definitions, the extension .dslr should be used; the Knowledge Builder then expects ResourceType.DSLR. The IDE, however, relies on file extensions to correctly recognize and work with your rules file. |
| DSL passing | The DSL must be passed to the Knowledge Builder ahead of any rules file using the DSL. For parsing and expanding a DSLR file, the DSL configuration is read and supplied to the parser. Thus, the parser can "recognize" the DSL expressions and transform them into native rule language expressions. |

Example for .dsl, stringing several DSL transformation functions:

# definitions for conditions
[when][]There is an? {entity}=${entity!lc}: {entity!ucfirst}()
[when][]- with an? {attr} greater than {amount}={attr} > {amount!num}
[when][]- with a {what} {attr}={attr} {what!positive?>0/negative?<0/zero?==0/ERROR}

Example for DSL passing:

KnowledgeBuilder kBuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
Resource dsl = ResourceFactory.newClassPathResource( dslPath, getClass() );
kBuilder.add( dsl, ResourceType.DSL );
Resource dslr = ResourceFactory.newClassPathResource( dslrPath, getClass() );
kBuilder.add( dslr, ResourceType.DSLR );
Chapter 8. Rule Commands
8.1. Available API
- http://fisheye.jboss.org/browse/JBossRules/trunk/drools-camel/src/test/resources/org/drools/camel/component/jaxb.mvt?r=HEAD
- http://fisheye.jboss.org/browse/JBossRules/trunk/drools-camel/src/test/resources/org/drools/camel/component/xstream.mvt?r=HEAD
XStream
- Marshalling: BatchExecutionHelperProviderImpl.newXStreamMarshaller().toXML(command);
- Unmarshalling: BatchExecutionHelperProviderImpl.newXStreamMarshaller().fromXML(xml)
JSON
- Marshalling: BatchExecutionHelper.newJSonMarshaller().toXML(command);
- Unmarshalling: BatchExecutionHelper.newJSonMarshaller().fromXML(xml)
JAXB
Using an XSD file to define the model
Options xjcOpts = new Options();
xjcOpts.setSchemaLanguage(Language.XMLSCHEMA);
JaxbConfiguration jaxbConfiguration = KnowledgeBuilderFactory.newJaxbConfiguration( xjcOpts, "xsd" );
kbuilder.add(ResourceFactory.newClassPathResource("person.xsd", getClass()), ResourceType.XSD, jaxbConfiguration);
KnowledgeBase kbase = kbuilder.newKnowledgeBase();
List<String> classesName = new ArrayList<String>();
classesName.add("org.drools.compiler.test.Person");
JAXBContext jaxbContext = KnowledgeBuilderHelper.newJAXBContext(classesName.toArray(new String[classesName.size()]), kbase);

Using a POJO model
- classNames: A List with the canonical name of the classes that you want to use in the marshalling/unmarshalling process.
- properties: JAXB custom properties
List<String> classNames = new ArrayList<String>();
classNames.add("org.drools.compiler.test.Person");
JAXBContext jaxbContext = DroolsJaxbHelperProviderImpl.createDroolsJaxbContext(classNames, null);
Marshaller marshaller = jaxbContext.createMarshaller();

8.2. Commands Supported
- BatchExecutionCommand
- InsertObjectCommand
- RetractCommand
- ModifyCommand
- GetObjectCommand
- InsertElementsCommand
- FireAllRulesCommand
- StartProcessCommand
- SignalEventCommand
- CompleteWorkItemCommand
- AbortWorkItemCommand
- QueryCommand
- SetGlobalCommand
- GetGlobalCommand
- GetObjectsCommand
Note
- name: String
- age: Integer
Note
- XStream
String xml = BatchExecutionHelper.newXStreamMarshaller().toXML(command);
- JSON
String xml = BatchExecutionHelper.newJSonMarshaller().toXML(command);
- JAXB
Marshaller marshaller = jaxbContext.createMarshaller();
StringWriter xml = new StringWriter();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.marshal(command, xml);
8.3. Commands
8.3.1. BatchExecutionCommand
- Description: The command that contains a list of commands, which will be sent and executed.
- Attributes
Table 8.1. BatchExecutionCommand attributes
| Name | Description | Required |
|---|---|---|
| lookup | Sets the knowledge session ID on which the commands are going to be executed. | true |
| commands | List of commands to be executed. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
InsertObjectCommand insertObjectCommand = new InsertObjectCommand(new Person("john", 25));
FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand();
command.getCommands().add(insertObjectCommand);
command.getCommands().add(fireAllRulesCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <insert>
    <org.drools.compiler.test.Person>
      <name>john</name>
      <age>25</age>
    </org.drools.compiler.test.Person>
  </insert>
  <fire-all-rules/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":[{"insert":{"object":{"org.drools.compiler.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <insert>
    <object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <age>25</age>
      <name>john</name>
    </object>
  </insert>
  <fire-all-rules max="-1"/>
</batch-execution>
8.3.2. InsertObjectCommand
- Description: Insert an object in the knowledge session.
- Attributes
Table 8.2. InsertObjectCommand attributes
| Name | Description | Required |
|---|---|---|
| object | The object to be inserted. | true |
| outIdentifier | ID to identify the FactHandle created in the object insertion and added to the execution results. | false |
| returnObject | Boolean to establish whether the object must be returned in the execution results. Default value: true. | false |
| entryPoint | Entry point for the insertion. | false |

- Command creation

List<Command> cmds = new ArrayList<Command>();
Command insertObjectCommand = CommandFactory.newInsert(new Person("john", 25), "john", false, null);
cmds.add( insertObjectCommand );
BatchExecutionCommand command = CommandFactory.createBatchExecution(cmds, "ksession1" );

- XML output
- XStream

<batch-execution lookup="ksession1">
  <insert out-identifier="john" entry-point="my stream" return-object="false">
    <org.drools.compiler.test.Person>
      <name>john</name>
      <age>25</age>
    </org.drools.compiler.test.Person>
  </insert>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"insert":{"entry-point":"my stream", "out-identifier":"john","return-object":false,"object":{"org.drools.compiler.test.Person":{"name":"john","age":25}}}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <insert out-identifier="john" entry-point="my stream" >
    <object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <age>25</age>
      <name>john</name>
    </object>
  </insert>
</batch-execution>
8.3.3. RetractCommand
- Description: Retract an object from the knowledge session.
- Attributes
Table 8.3. RetractCommand attributes
| Name | Description | Required |
|---|---|---|
| handle | The FactHandle associated with the object to be retracted. | true |

- Command creation: there are two options, with the same output result:
- Create the FactHandle from a string

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
RetractCommand retractCommand = new RetractCommand();
retractCommand.setFactHandleFromString("123:234:345:456:567");
command.getCommands().add(retractCommand);

- Set the FactHandle that you received when the object was inserted

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
RetractCommand retractCommand = new RetractCommand(factHandle);
command.getCommands().add(retractCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <retract fact-handle="0:234:345:456:567"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"retract":{"fact-handle":"0:234:345:456:567"}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <retract fact-handle="0:234:345:456:567"/>
</batch-execution>
8.3.4. ModifyCommand
- Description: Allows you to modify a previously inserted object in the knowledge session.
- Attributes
Table 8.4. ModifyCommand attributes
| Name | Description | Required |
|---|---|---|
| handle | The FactHandle associated with the object to be modified. | true |
| setters | List of setters for the object's modifications. | true |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
ModifyCommand modifyCommand = new ModifyCommand();
modifyCommand.setFactHandleFromString("123:234:345:456:567");
List<Setter> setters = new ArrayList<Setter>();
setters.add(new SetterImpl("age", "30"));
modifyCommand.setSetters(setters);
command.getCommands().add(modifyCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <modify fact-handle="0:234:345:456:567">
    <set accessor="age" value="30"/>
  </modify>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"modify":{"fact-handle":"0:234:345:456:567","setters":{"accessor":"age","value":30}}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <modify fact-handle="0:234:345:456:567">
    <set value="30" accessor="age"/>
  </modify>
</batch-execution>
8.3.5. GetObjectCommand
- Description: Used to get an object from a knowledge session
- Attributes
Table 8.5. GetObjectCommand attributes
| Name | Description | Required |
|---|---|---|
| factHandle | The FactHandle associated with the object to be retrieved. | true |
| outIdentifier | ID to identify the retrieved object in the execution results. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
GetObjectCommand getObjectCommand = new GetObjectCommand();
getObjectCommand.setFactHandleFromString("123:234:345:456:567");
getObjectCommand.setOutIdentifier("john");
command.getCommands().add(getObjectCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <get-object fact-handle="0:234:345:456:567" out-identifier="john"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"get-object":{"fact-handle":"0:234:345:456:567","out-identifier":"john"}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <get-object out-identifier="john" fact-handle="0:234:345:456:567"/>
</batch-execution>
8.3.6. InsertElementsCommand
- Description: Used to insert a list of objects.
- Attributes
Table 8.6. InsertElementsCommand attributes
| Name | Description | Required |
|---|---|---|
| objects | The list of objects to be inserted into the knowledge session. | true |
| outIdentifier | ID to identify the FactHandle created in the object insertion and added to the execution results. | false |
| returnObject | Boolean to establish whether the object must be returned in the execution results. Default value: true. | false |
| entryPoint | Entry point for the insertion. | false |

- Command creation

List<Command> cmds = new ArrayList<Command>();
List<Object> objects = new ArrayList<Object>();
objects.add(new Person("john", 25));
objects.add(new Person("sarah", 35));
Command insertElementsCommand = CommandFactory.newInsertElements( objects );
cmds.add( insertElementsCommand );
BatchExecutionCommand command = CommandFactory.createBatchExecution(cmds, "ksession1" );

- XML output
- XStream

<batch-execution lookup="ksession1">
  <insert-elements>
    <org.drools.compiler.test.Person>
      <name>john</name>
      <age>25</age>
    </org.drools.compiler.test.Person>
    <org.drools.compiler.test.Person>
      <name>sarah</name>
      <age>35</age>
    </org.drools.compiler.test.Person>
  </insert-elements>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"insert-elements":{"objects":[{"containedObject":{"@class":"org.drools.compiler.test.Person","name":"john","age":25}},{"containedObject":{"@class":"Person","name":"sarah","age":35}}]}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <insert-elements return-objects="true">
    <list>
      <element xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <age>25</age>
        <name>john</name>
      </element>
      <element xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <age>35</age>
        <name>sarah</name>
      </element>
    </list>
  </insert-elements>
</batch-execution>
8.3.7. FireAllRulesCommand
- Description: Allows the execution of the rule activations that have been created.
- Attributes
Table 8.7. FireAllRulesCommand attributes
| Name | Description | Required |
|---|---|---|
| max | The maximum number of rule activations to be executed. The default, -1, does not put any restriction on execution. | false |
| outIdentifier | Adds the number of fired rule activations to the execution results. | false |
| agendaFilter | Allows the rules to be executed using an agenda filter. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand();
fireAllRulesCommand.setMax(10);
fireAllRulesCommand.setOutIdentifier("firedActivations");
command.getCommands().add(fireAllRulesCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <fire-all-rules max="10" out-identifier="firedActivations"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"fire-all-rules":{"max":10,"out-identifier":"firedActivations"}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <fire-all-rules out-identifier="firedActivations" max="10"/>
</batch-execution>
8.3.8. StartProcessCommand
- Description: Allows you to start a process using its ID. You can also pass parameters and initial data to be inserted.
- Attributes
Table 8.8. StartProcessCommand attributes
| Name | Description | Required |
|---|---|---|
| processId | The ID of the process to be started. | true |
| parameters | A Map<String, Object> to pass parameters in the process startup. | false |
| data | A list of objects to be inserted into the knowledge session before the process startup. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
StartProcessCommand startProcessCommand = new StartProcessCommand();
startProcessCommand.setProcessId("org.drools.task.processOne");
command.getCommands().add(startProcessCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <start-process processId="org.drools.task.processOne"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"start-process":{"process-id":"org.drools.task.processOne"}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <start-process processId="org.drools.task.processOne">
    <parameter/>
  </start-process>
</batch-execution>
8.3.9. SignalEventCommand
- Description: Send a signal event.
- Attributes
Table 8.9. SignalEventCommand attributes
| Name | Description | Required |
|---|---|---|
| event-type | | true |
| processInstanceId | | false |
| event | | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
SignalEventCommand signalEventCommand = new SignalEventCommand();
signalEventCommand.setProcessInstanceId(1001);
signalEventCommand.setEventType("start");
signalEventCommand.setEvent(new Person("john", 25));
command.getCommands().add(signalEventCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <signal-event process-instance-id="1001" event-type="start">
    <org.drools.pipeline.camel.Person>
      <name>john</name>
      <age>25</age>
    </org.drools.pipeline.camel.Person>
  </signal-event>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"signal-event":{"process-instance-id":1001,"@event-type":"start","event-type":"start","object":{"org.drools.pipeline.camel.Person":{"name":"john","age":25}}}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <signal-event event-type="start" process-instance-id="1001">
    <event xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <age>25</age>
      <name>john</name>
    </event>
  </signal-event>
</batch-execution>
8.3.10. CompleteWorkItemCommand
- Description: Allows you to complete a WorkItem.
- Attributes
Table 8.10. CompleteWorkItemCommand attributes
| Name | Description | Required |
|---|---|---|
| workItemId | The ID of the WorkItem to be completed. | true |
| results | | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
CompleteWorkItemCommand completeWorkItemCommand = new CompleteWorkItemCommand();
completeWorkItemCommand.setWorkItemId(1001);
command.getCommands().add(completeWorkItemCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <complete-work-item id="1001"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"complete-work-item":{"id":1001}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <complete-work-item id="1001"/>
</batch-execution>
8.3.11. AbortWorkItemCommand
- Description: Allows you to abort a WorkItem; the same as session.getWorkItemManager().abortWorkItem(workItemId).
- Attributes
Table 8.11. AbortWorkItemCommand attributes
| Name | Description | Required |
|---|---|---|
| workItemId | The ID of the WorkItem to be aborted. | true |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
AbortWorkItemCommand abortWorkItemCommand = new AbortWorkItemCommand();
abortWorkItemCommand.setWorkItemId(1001);
command.getCommands().add(abortWorkItemCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <abort-work-item id="1001"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"abort-work-item":{"id":1001}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <abort-work-item id="1001"/>
</batch-execution>
8.3.12. QueryCommand
- Description: Executes a query defined in the knowledge base.
- Attributes
Table 8.12. QueryCommand attributes
| Name | Description | Required |
|---|---|---|
| name | The query name. | true |
| outIdentifier | The identifier of the query results. The query results are added to the execution results with this identifier. | false |
| arguments | A list of objects to be passed as query parameters. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
QueryCommand queryCommand = new QueryCommand();
queryCommand.setName("persons");
queryCommand.setOutIdentifier("persons");
command.getCommands().add(queryCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <query out-identifier="persons" name="persons"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"query":{"out-identifier":"persons","name":"persons"}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <query name="persons" out-identifier="persons"/>
</batch-execution>
8.3.13. SetGlobalCommand
- Description: Allows you to set a global.
- Attributes
Table 8.13. SetGlobalCommand attributes
| Name | Description | Required |
|---|---|---|
| identifier | The identifier of the global defined in the knowledge base. | true |
| object | The object to be set into the global. | false |
| out | A boolean to add, or not, the set global result into the execution results. | false |
| outIdentifier | The identifier of the global execution result. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
SetGlobalCommand setGlobalCommand = new SetGlobalCommand();
setGlobalCommand.setIdentifier("helper");
setGlobalCommand.setObject(new Person("kyle", 30));
setGlobalCommand.setOut(true);
setGlobalCommand.setOutIdentifier("output");
command.getCommands().add(setGlobalCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <set-global identifier="helper" out-identifier="output">
    <org.drools.compiler.test.Person>
      <name>kyle</name>
      <age>30</age>
    </org.drools.compiler.test.Person>
  </set-global>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"set-global":{"identifier":"helper","out-identifier":"output","object":{"org.drools.compiler.test.Person":{"name":"kyle","age":30}}}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <set-global out="true" out-identifier="output" identifier="helper">
    <object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <age>30</age>
      <name>kyle</name>
    </object>
  </set-global>
</batch-execution>
8.3.14. GetGlobalCommand
- Description: Allows you to retrieve a previously defined global.
- Attributes
Table 8.14. GetGlobalCommand attributes
| Name | Description | Required |
|---|---|---|
| identifier | The identifier of the global defined in the knowledge base. | true |
| outIdentifier | The identifier to be used in the execution results. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
GetGlobalCommand getGlobalCommand = new GetGlobalCommand();
getGlobalCommand.setIdentifier("helper");
getGlobalCommand.setOutIdentifier("helperOutput");
command.getCommands().add(getGlobalCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <get-global identifier="helper" out-identifier="helperOutput"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"get-global":{"identifier":"helper","out-identifier":"helperOutput"}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <get-global out-identifier="helperOutput" identifier="helper"/>
</batch-execution>
8.3.15. GetObjectsCommand
- Description: Returns all the objects from the current session as a Collection.
- Attributes
Table 8.15. GetObjectsCommand attributes
| Name | Description | Required |
|---|---|---|
| objectFilter | An ObjectFilter to filter the objects returned from the current session. | false |
| outIdentifier | The identifier to be used in the execution results. | false |

- Command creation

BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
GetObjectsCommand getObjectsCommand = new GetObjectsCommand();
getObjectsCommand.setOutIdentifier("objects");
command.getCommands().add(getObjectsCommand);

- XML output
- XStream

<batch-execution lookup="ksession1">
  <get-objects out-identifier="objects"/>
</batch-execution>

- JSON

{"batch-execution":{"lookup":"ksession1","commands":{"get-objects":{"out-identifier":"objects"}}}}

- JAXB

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <get-objects out-identifier="objects"/>
</batch-execution>
Chapter 9. XML
9.1. The XML Format
Warning
9.2. XML Rule Example
<?xml version="1.0" encoding="UTF-8"?>
<package name="com.sample"
xmlns="http://drools.org/drools-5.0"
xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
xs:schemaLocation="http://drools.org/drools-5.0 drools-5.0.xsd">
<import name="java.util.HashMap" />
<import name="org.drools.*" />
<global identifier="x" type="com.sample.X" />
<global identifier="yada" type="com.sample.Yada" />
<function return-type="void" name="myFunc">
<parameter identifier="foo" type="Bar" />
<parameter identifier="bada" type="Bing" />
<body>
System.out.println("hello world");
</body>
</function>
<rule name="simple_rule">
<rule-attribute name="salience" value="10" />
<rule-attribute name="no-loop" value="true" />
<rule-attribute name="agenda-group" value="agenda-group" />
<rule-attribute name="activation-group" value="activation-group" />
<lhs>
<pattern identifier="foo2" object-type="Bar" >
<or-constraint-connective>
<and-constraint-connective>
<field-constraint field-name="a">
<or-restriction-connective>
<and-restriction-connective>
<literal-restriction evaluator=">" value="60" />
<literal-restriction evaluator="<" value="70" />
</and-restriction-connective>
<and-restriction-connective>
<literal-restriction evaluator="<" value="50" />
<literal-restriction evaluator=">" value="55" />
</and-restriction-connective>
</or-restriction-connective>
</field-constraint>
<field-constraint field-name="a3">
<literal-restriction evaluator="==" value="black" />
</field-constraint>
</and-constraint-connective>
<and-constraint-connective>
<field-constraint field-name="a">
<literal-restriction evaluator="==" value="40" />
</field-constraint>
<field-constraint field-name="a3">
<literal-restriction evaluator="==" value="pink" />
</field-constraint>
</and-constraint-connective>
<and-constraint-connective>
<field-constraint field-name="a">
<literal-restriction evaluator="==" value="12"/>
</field-constraint>
<field-constraint field-name="a3">
<or-restriction-connective>
<literal-restriction evaluator="==" value="yellow"/>
<literal-restriction evaluator="==" value="blue" />
</or-restriction-connective>
</field-constraint>
</and-constraint-connective>
</or-constraint-connective>
</pattern>
<not>
<pattern object-type="Person">
<field-constraint field-name="likes">
<variable-restriction evaluator="==" identifier="type"/>
</field-constraint>
</pattern>
<exists>
<pattern object-type="Person">
<field-constraint field-name="likes">
<variable-restriction evaluator="==" identifier="type"/>
</field-constraint>
</pattern>
</exists>
</not>
<or-conditional-element>
<pattern identifier="foo3" object-type="Bar" >
<field-constraint field-name="a">
<or-restriction-connective>
<literal-restriction evaluator="==" value="3" />
<literal-restriction evaluator="==" value="4" />
</or-restriction-connective>
</field-constraint>
<field-constraint field-name="a3">
<literal-restriction evaluator="==" value="hello" />
</field-constraint>
<field-constraint field-name="a4">
<literal-restriction evaluator="==" value="null" />
</field-constraint>
</pattern>
<pattern identifier="foo4" object-type="Bar" >
<field-binding field-name="a" identifier="a4" />
<field-constraint field-name="a">
<literal-restriction evaluator="!=" value="4" />
<literal-restriction evaluator="!=" value="5" />
</field-constraint>
</pattern>
</or-conditional-element>
<pattern identifier="foo5" object-type="Bar" >
<field-constraint field-name="b">
<or-restriction-connective>
<return-value-restriction evaluator="==" >a4 + 1</return-value-restriction>
<variable-restriction evaluator=">" identifier="a4" />
<qualified-identifier-restriction evaluator="==">
org.drools.Bar.BAR_ENUM_VALUE
</qualified-identifier-restriction>
</or-restriction-connective>
</field-constraint>
</pattern>
<pattern identifier="foo6" object-type="Bar" >
<field-binding field-name="a" identifier="a4" />
<field-constraint field-name="b">
<literal-restriction evaluator="==" value="6" />
</field-constraint>
</pattern>
</lhs>
<rhs>
if ( a == b ) {
assert( foo3 );
} else {
retract( foo4 );
}
System.out.println( a4 );
</rhs>
</rule>
</package>
9.3. XML Elements
Table 9.1. XML Elements
| Name | Description |
|---|---|
| global |
Defines global objects that can be referred to in the rules.
|
| function |
Contains a function declaration for a function to be used in the rules. You have to specify a return type, a unique name, and parameters; the body holds a snippet of code.
|
| import |
Imports the types you wish to use in the rule.
|
9.4. Detail of a Rule Element
<rule name="simple_rule">
<rule-attribute name="salience" value="10" />
<rule-attribute name="no-loop" value="true" />
<rule-attribute name="agenda-group" value="agenda-group" />
<rule-attribute name="activation-group" value="activation-group" />
<lhs>
<pattern identifier="cheese" object-type="Cheese">
<from>
<accumulate>
<pattern object-type="Person"></pattern>
<init>
int total = 0;
</init>
<action>
total += $cheese.getPrice();
</action>
<result>
new Integer( total );
</result>
</accumulate>
</from>
</pattern>
<pattern identifier="max" object-type="Number">
<from>
<accumulate>
<pattern identifier="cheese" object-type="Cheese"></pattern>
<external-function evaluator="max" expression="$price"/>
</accumulate>
</from>
</pattern>
</lhs>
<rhs>
list1.add( $cheese );
</rhs>
</rule>
9.5. XML Rule Elements
Table 9.2. XML Rule Elements
| Element | Description |
|---|---|
| Pattern | This allows you to specify a type (class) and perhaps bind a variable to an instance of that class. Nested under the pattern object are constraints and restrictions that have to be met. The Predicate and Return Value constraints allow Java expressions to be embedded. |
| Conditional elements (not, exists, and, or) | These work like their DRL counterparts. Elements that are nested under an "and" element are logically "anded" together. Likewise with "or" (and you can nest things further). "Exists" and "Not" work around patterns, to check for the existence or nonexistence of a fact meeting the pattern's constraints. |
| Eval | Allows the execution of a valid snippet of Java code as long as it evaluates to a boolean (do not end it with a semi-colon, as it is just a fragment). This can include calling a function. The Eval is less efficient than the column constraints, as the rule engine has to evaluate it each time, but it is a "catch all" feature for when you cannot express what you need to do with column constraints. |
9.6. Automatic Transforming Between XML and DRL
9.7. Classes for Automatic Transforming Between XML and DRL
- DrlDumper - for exporting DRL
- DrlParser - for reading DRL
- XmlPackageReader - for reading XML
Note
Chapter 10. Objects and Interfaces
10.1. Globals
10.2. Working With Globals
Procedure 10.1. Task
- To start implementing globals into the Working Memory, declare a global in a rules file and back it up with a Java object:
global java.util.List list
- With the Knowledge Base now aware of the global identifier and its type, you can call ksession.setGlobal() with the global's name and an object (for any session) to associate the object with the global:
List list = new ArrayList();
ksession.setGlobal("list", list);
Important
Failure to declare the global type and identifier in DRL code will result in an exception being thrown from this call.
- Set the global before it is used in the evaluation of a rule. Failure to do so results in a NullPointerException.
10.3. Resolving Globals
- getGlobals()
- The Stateless Knowledge Session method getGlobals() returns a Globals instance which provides access to the session's globals. These are shared for all execution calls. Exercise caution regarding mutable globals because execution calls can be executing simultaneously in different threads.
- Delegates
- Using a delegate is another way of providing global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection mapping identifiers to values. Identifiers in this internal collection have priority over any supplied delegate. If an identifier cannot be found in this internal collection, the delegate global (if any) will be used.
- Execution
- Execution scoped globals use a Command to set a global, which is then passed to the CommandExecutor.
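The lookup order described above (the internal collection first, then the delegate) can be sketched in plain Java. This is an illustrative model only, not the Drools classes; the class and method names below are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of global resolution: values set directly on the
// session take priority; a delegate is consulted only when the identifier
// is not found in the internal collection.
public class GlobalResolverSketch {
    private final Map<String, Object> internal = new HashMap<>();
    private final Map<String, Object> delegate;

    public GlobalResolverSketch(Map<String, Object> delegate) {
        this.delegate = delegate;
    }

    public void setGlobal(String identifier, Object value) {
        internal.put(identifier, value); // mirrors setGlobal(String, Object)
    }

    public Object resolveGlobal(String identifier) {
        if (internal.containsKey(identifier)) {
            return internal.get(identifier); // internal collection wins
        }
        return delegate.get(identifier);     // fall back to the delegate
    }
}
```

Note that once an identifier has been set on the session, the delegate can no longer shadow it, which matches the priority rule stated above.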
10.4. Session Scoped Global Example
StatelessKieSession ksession = kbase.newStatelessKnowledgeSession();
// Set a global hbnSession, that can be used for DB interactions in the rules.
ksession.setGlobal( "hbnSession", hibernateSession );
// Execute while being able to resolve the "hbnSession" identifier.
ksession.execute( collection );
10.5. StatefulRuleSessions
The StatefulRuleSession property is inherited by the StatefulKnowledgeSession and provides the rule-related methods that are relevant from outside of the engine.
10.6. AgendaFilter Objects
AgendaFilter objects are optional implementations of the filter interface which are used to allow or deny the firing of an activation. What is filtered depends on the implementation.
10.7. Using the AgendaFilter
Procedure 10.2. Task
- To use a filter, specify it while calling fireAllRules(). The following example permits only rules ending in the string "Test". All others will be filtered out:
ksession.fireAllRules( new RuleNameEndsWithAgendaFilter( "Test" ) );
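The filtering idea itself is simple to model outside of Drools. The following is an illustrative plain-Java sketch, not the Drools API; the Filter interface and NameEndsWithFilter class are hypothetical stand-ins for AgendaFilter and RuleNameEndsWithAgendaFilter:

```java
// Illustrative sketch of an agenda filter: a predicate consulted before
// each rule firing decides whether the activation is permitted.
public class AgendaFilterSketch {
    interface Filter {
        boolean accept(String ruleName);
    }

    static class NameEndsWithFilter implements Filter {
        private final String suffix;
        NameEndsWithFilter(String suffix) { this.suffix = suffix; }
        public boolean accept(String ruleName) {
            return ruleName.endsWith(suffix);
        }
    }

    public static void main(String[] args) {
        Filter filter = new NameEndsWithFilter("Test");
        System.out.println(filter.accept("ValidationTest")); // permitted
        System.out.println(filter.accept("ApplyDiscount"));  // filtered out
    }
}
```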
10.8. Rule Engine Phases
- Working Memory Actions
- This is where most of the work takes place, either in the Consequence (the RHS itself) or the main Java application process. Once the Consequence has finished or the main Java application process calls fireAllRules(), the engine switches to the Agenda Evaluation phase.
- Agenda Evaluation
- This attempts to select a rule to fire. If no rule is found, it exits. Otherwise it fires the found rule, switching the phase back to Working Memory Actions.
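The two-phase cycle can be sketched as a toy model in plain Java. This is not the engine itself, only an illustration of the loop: the agenda holds activations, Agenda Evaluation pops one and fires it, and a firing may queue further work before control returns to the loop:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the two-phase cycle: Working Memory Actions queue
// activations; Agenda Evaluation pops one, fires it, and the loop
// continues until no activation remains.
public class PhaseLoopSketch {
    public static int fireAllRules(Deque<Runnable> agenda) {
        int fired = 0;
        while (!agenda.isEmpty()) {      // Agenda Evaluation: select a rule
            Runnable activation = agenda.poll();
            activation.run();            // fire it (Working Memory Actions)
            fired++;
        }
        return fired;                    // no rule found: exit
    }

    public static void main(String[] args) {
        Deque<Runnable> agenda = new ArrayDeque<>();
        agenda.add(() -> System.out.println("rule A fired"));
        // a consequence may itself schedule more work on the agenda:
        agenda.add(() -> agenda.add(() -> System.out.println("rule B fired")));
        System.out.println(fireAllRules(agenda) + " activations fired");
    }
}
```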
10.9. The Event Model
10.10. The KnowledgeRuntimeEventManager
The KnowledgeRuntimeEventManager interface is implemented by the KnowledgeRuntime, which provides two interfaces, WorkingMemoryEventManager and ProcessEventManager.
10.11. The WorkingMemoryEventManager
The WorkingMemoryEventManager allows listeners to be added and removed, so that events for the working memory and the agenda can be listened to.
10.12. Adding an AgendaEventListener
ksession.addEventListener( new DefaultAgendaEventListener() {
public void afterActivationFired(AfterActivationFiredEvent event) {
super.afterActivationFired( event );
System.out.println( event );
}
});
10.13. Printing Working Memory Events
ksession.addEventListener( new DebugWorkingMemoryEventListener() );
10.14. KnowledgeRuntimeEvents
All emitted events implement the KnowledgeRuntimeEvent interface, which can be used to retrieve the actual KnowledgeRuntime the event originated from.
10.15. Supported Events for the KnowledgeRuntimeEvent Interface
- ActivationCreatedEvent
- ActivationCancelledEvent
- BeforeActivationFiredEvent
- AfterActivationFiredEvent
- AgendaGroupPushedEvent
- AgendaGroupPoppedEvent
- ObjectInsertEvent
- ObjectRetractedEvent
- ObjectUpdatedEvent
- ProcessCompletedEvent
- ProcessNodeLeftEvent
- ProcessNodeTriggeredEvent
- ProcessStartEvent
10.16. The KnowledgeRuntimeLogger
10.17. Enabling a FileLogger
KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory.newFileLogger( ksession, "logdir/mylogfile" );
...
logger.close();
10.18. Using StatelessKnowledgeSession in JBoss Rules
The StatelessKnowledgeSession wraps the StatefulKnowledgeSession instead of extending it. Its main focus is on decision service type scenarios, and it avoids the need to call dispose(). Stateless sessions do not support iterative insertions or calling fireAllRules() from Java code. Calling execute() is a single-shot operation that internally instantiates a StatefulKnowledgeSession, adds all the user data, executes user commands, calls fireAllRules(), and then calls dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that is required. The CommandExecutor and BatchExecution are discussed in detail in their own section.
10.19. Performing a StatelessKnowledgeSession Execution with a Collection
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newFileSystemResource( fileName ), ResourceType.DRL );
if (kbuilder.hasErrors() ) {
System.out.println( kbuilder.getErrors() );
} else {
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );
StatelessKieSession ksession = kbase.newStatelessKnowledgeSession();
ksession.execute( collection );
}
10.20. Performing a StatelessKnowledgeSession Execution with the InsertElements Command
ksession.execute( CommandFactory.newInsertElements( collection ) );
Note
To insert the collection itself, rather than its individual elements, use CommandFactory.newInsert(collection).
10.21. The BatchExecutionHelper
The CommandFactory creates the supported commands, all of which can be marshaled using XStream and the BatchExecutionHelper. The BatchExecutionHelper provides details on the XML format as well as how to use JBoss Rules Pipeline to automate the marshaling of BatchExecution and ExecutionResults.
10.22. The CommandExecutor Interface
The CommandExecutor interface allows users to export data using "out" parameters. This means that inserted facts, globals, and query results can all be returned using this interface.
10.23. Out Identifiers
// Set up a list of commands
List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) );
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );
// Execute the list
ExecutionResults results = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
// Retrieve the ArrayList
results.getValue( "list1" );
// Retrieve the inserted Person fact
results.getValue( "person" );
// Retrieve the query as a QueryResults instance.
results.getValue( "Get People" );
Chapter 11. Complex Event Processing
11.1. Introduction to Complex Event Processing
- On an algorithmic trading application: Take an action if the security price increases X% above the day's opening price. The price increases are denoted by events on a stock trade application.
- On a monitoring application: Take an action if the temperature in the server room increases X degrees in Y minutes. The sensor readings are denoted by events.
- Both business rules and event processing require seamless integration with the enterprise infrastructure and applications. This is particularly important with regard to life-cycle management, auditing, and security.
- Both business rules and event processing have functional requirements like pattern matching and non-functional requirements like response time limits and query/rule explanations.
Note
- They usually process large numbers of events, but only a small percentage of the events are of interest.
- The events are usually immutable, as they represent a record of change in state.
- The rules and queries run against events and must react to detected event patterns.
- There are usually strong temporal relationships between related events.
- Individual events are not important. The system is concerned with patterns of related events and the relationships between them.
- It is often necessary to perform composition and aggregation of events.
- Support events, with their proper semantics, as first class citizens.
- Allow detection, correlation, aggregation, and composition of events.
- Support processing streams of events.
- Support temporal constraints in order to model the temporal relationships between events.
- Support sliding windows of interesting events.
- Support a session-scoped unified clock.
- Support the required volumes of events for complex event processing use cases.
- Support reactive rules.
- Support adapters for event input into the engine (pipeline).
Chapter 12. Features of JBoss BRMS Complex Event Processing
12.1. Events
- Events are immutable
- An event is a record of change which has occurred at some time in the past, and as such it cannot be changed.
Note
The rules engine does not enforce immutability on the Java objects representing events; this makes event data enrichment possible. The application should be able to populate un-populated event attributes, which can be used to enrich the event with inferred data; however, event attributes that have already been populated should not be changed. - Events have strong temporal constraints
- Rules involving events usually require the correlation of multiple events that occur at different points in time relative to each other.
- Events have managed life-cycles
- Because events are immutable and have temporal constraints, they are usually only of interest for a specified period of time. This means the engine can automatically manage the life-cycle of events.
- Events can use sliding windows
- It is possible to define and use sliding windows with events since all events have timestamps associated with them. Therefore, sliding windows allow the creation of rules on aggregations of values over a time period.
12.2. Event Declaration
To declare a fact type as an event, assign the @role meta-data tag with the event parameter to the fact. The @role meta-data tag can accept two possible values:
- fact: Assigning the fact role declares the type is to be handled as a regular fact. Fact is the default role.
- event: Assigning the event role declares the type is to be handled as an event.
The following example declares that the StockTick fact type will be handled as an event:
Example 12.1. Declaring a Fact Type as an Event
import some.package.StockTick

declare StockTick
    @role( event )
end
If StockTick were a fact type declared in the DRL instead of in a pre-existing class, the code would be as follows:
Example 12.2. Declaring a Fact Type and Assigning it to an Event Role
declare StockTick
    @role( event )
    datetime : java.util.Date
    symbol : String
    price : double
end
12.3. Event Meta-Data
- @role
- @timestamp
- @duration
- @expires
Example 12.3. The VoiceCall Fact Class
/**
* A class that represents a voice call in
* a Telecom domain model
*/
public class VoiceCall {
private String originNumber;
private String destinationNumber;
private Date callDateTime;
private long callDuration; // in milliseconds
// constructors, getters, and setters
}
- @role
- The @role meta-data tag indicates whether a given fact type is a regular fact or an event. It accepts either fact or event as a parameter. The default is fact.
@role( <fact|event> )
Example 12.4. Declaring VoiceCall as an Event Type
declare VoiceCall
    @role( event )
end
- @timestamp
- A timestamp is automatically assigned to every event. By default, the time is provided by the session clock and assigned to the event at insertion into the working memory. Events can have their own timestamp attribute, which can be included by telling the engine to use the attribute's timestamp instead of the session clock. To use the attribute's timestamp, use the attribute name as the parameter for the @timestamp tag.
@timestamp( <attributeName> )
Example 12.5. Declaring the VoiceCall Timestamp Attribute
declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
end
- @duration
- JBoss BRMS Complex Event Processing supports both point-in-time and interval-based events. A point-in-time event is represented as an interval-based event with a duration of zero time units. By default, every event has a duration of zero. To assign a different duration to an event, use the attribute name as the parameter for the @duration tag.
@duration( <attributeName> )
Example 12.6. Declaring the VoiceCall Duration Attribute
declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
    @duration( callDuration )
end
- @expires
- Events may be set to expire automatically after a specific duration in the working memory. By default, this happens when the event can no longer match and activate any of the current rules. You can also explicitly define when an event should expire. The @expires tag is only used when the engine is running in stream mode.
@expires( <timeOffset> )
The value of timeOffset is a temporal interval that sets the relative duration of the event:
[#d][#h][#m][#s][#[ms]]
All parameters are optional and the # parameter should be replaced by the appropriate value. To declare that the VoiceCall facts should expire one hour and thirty-five minutes after insertion into the working memory, use the following:
Example 12.7. Declaring the Expiration Offset for the VoiceCall Events
declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
    @duration( callDuration )
    @expires( 1h35m )
end
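The timeOffset grammar can be read mechanically. As a rough plain-Java sketch (not part of the Drools API; the TimeOffsetParser class is hypothetical), a converter from an offset string such as "1h35m" to milliseconds might look like this:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: converts a timeOffset string matching the grammar
// [#d][#h][#m][#s][#[ms]] into a duration in milliseconds.
public class TimeOffsetParser {
    private static final Pattern OFFSET = Pattern.compile(
        "(?:(\\d+)d)?(?:(\\d+)h)?(?:(\\d+)m)?(?:(\\d+)s)?(?:(\\d+)ms)?");

    public static long toMillis(String offset) {
        Matcher m = OFFSET.matcher(offset);
        if (!m.matches()) {
            throw new IllegalArgumentException("Bad offset: " + offset);
        }
        long millis = 0;
        millis += part(m.group(1)) * 24L * 60 * 60 * 1000; // days
        millis += part(m.group(2)) * 60L * 60 * 1000;      // hours
        millis += part(m.group(3)) * 60L * 1000;           // minutes
        millis += part(m.group(4)) * 1000L;                // seconds
        millis += part(m.group(5));                        // milliseconds
        return millis;
    }

    private static long part(String g) {
        return g == null ? 0L : Long.parseLong(g);
    }

    public static void main(String[] args) {
        System.out.println(toMillis("1h35m")); // 5700000
    }
}
```

For example, the @expires( 1h35m ) declaration above corresponds to 1h = 3,600,000 ms plus 35m = 2,100,000 ms, giving 5,700,000 ms.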
12.4. Session Clock
- Rules testing: Testing always requires a controlled environment, and when the tests include rules with temporal constraints, it is necessary to control the input rules, facts, and the flow of time.
- Regular execution: A rules engine that reacts to events in real time needs a real-time clock.
- Special environments: Specific environments may have specific time control requirements. For instance, clustered environments may require clock synchronization or JEE environments may require you to use an application server-provided clock.
- Rules replay or simulation: In order to replay or simulate scenarios, it is necessary that the application controls the flow of time.
12.5. Available Clock Implementations
- Real-Time Clock
- The real-time clock is the default implementation based on the system clock. The real-time clock uses the system clock to determine the current time for timestamps. To explicitly configure the engine to use the real-time clock, set the session configuration parameter to realtime:
KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption( ClockTypeOption.get("realtime") );
- Pseudo-Clock
- The pseudo-clock is useful for testing temporal rules since it can be controlled by the application. To explicitly configure the engine to use the pseudo-clock, set the session configuration parameter to pseudo:
KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption( ClockTypeOption.get("pseudo") );
This example shows how to control the pseudo-clock:
KieSessionConfiguration conf = KieServices.Factory.get().newKieSessionConfiguration();
conf.setOption( ClockTypeOption.get( "pseudo" ) );
KieSession session = kbase.newKieSession( conf, null );
SessionPseudoClock clock = session.getSessionClock();
// then, while inserting facts, advance the clock as necessary:
FactHandle handle1 = session.insert( tick1 );
clock.advanceTime( 10, TimeUnit.SECONDS );
FactHandle handle2 = session.insert( tick2 );
clock.advanceTime( 30, TimeUnit.SECONDS );
FactHandle handle3 = session.insert( tick3 );
12.6. Event Processing Modes
12.7. Cloud Mode
- No need for clock synchronization since there is no notion of time.
- No requirement on ordering events since the engine looks at the events as an unordered cloud against which the engine tries to match rules.
KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration(); config.setOption( EventProcessingOption.CLOUD );
drools.eventProcessingMode = cloud
12.8. Stream Mode
- Events in each stream must be ordered chronologically.
- A session clock must be present to synchronize event streams.
Note
KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration(); config.setOption( EventProcessingOption.STREAM );
drools.eventProcessingMode = stream
12.9. Support for Event Streams
- Events in the stream are ordered by timestamp. The timestamps may have different semantics for different streams, but they are always ordered internally.
- There is usually a high volume of events in the stream.
- Atomic events contained in the streams are rarely useful by themselves.
- Streams are either homogeneous (they contain a single type of event) or heterogeneous (they contain events of different types).
12.10. Declaring and Using Entry Points
Example 12.8. Example ATM Rule
rule "authorize withdraw"
when
WithdrawRequest( $ai : accountId, $am : amount ) from entry-point "ATM Stream"
CheckingAccount( accountId == $ai, balance > $am )
then
// authorize withdraw
end
This rule matches WithdrawRequest events coming from the "ATM Stream" entry point.
The rule joins an event (WithdrawRequest) from the stream with a fact from the main working memory (CheckingAccount).
Example 12.9. Using Multiple Streams
rule "apply fee on withdraws on branches"
when
WithdrawRequest( $ai : accountId, processed == true ) from entry-point "Branch Stream"
CheckingAccount( accountId == $ai )
then
// apply a $2 fee on the account
end
This rule matches events of the same type (WithdrawRequest) as the example ATM rule but from a different stream. Events inserted into the "ATM Stream" will never match the pattern on the second rule, which is tied to the "Branch Stream"; accordingly, events inserted into the "Branch Stream" will never match the pattern on the example ATM rule, which is tied to the "ATM Stream".
Example 12.10. Inserting Facts into an Entry Point
// create your rulebase and your session as usual
KieSession session = ...

// get a reference to the entry point
WorkingMemoryEntryPoint atmStream = session.getWorkingMemoryEntryPoint( "ATM Stream" );

// and start inserting your facts into the entry point
atmStream.insert( aWithdrawRequest );
12.11. Negative Pattern in Stream Mode
Example 12.11. A Rule with a Negative Pattern
rule "Sound the alarm"
when
$f : FireDetected( )
not( SprinklerActivated( ) )
then
// sound the alarm
end
Example 12.12. A Rule with a Negative Pattern, Temporal Constraints, and an Explicit Duration Parameter
rule "Sound the alarm"
duration( 10s )
when
$f : FireDetected( )
not( SprinklerActivated( this after[0s,10s] $f ) )
then
// sound the alarm
end
Example 12.13. A Rule with a Negative Pattern with Temporal Constraints
rule "Sound the alarm"
when
$f : FireDetected( )
not( SprinklerActivated( this after[0s,10s] $f ) )
then
// sound the alarm
end
Example 12.14. Excluding Bound Events in Negative Patterns
rule "Sound the alarm"
when
$h: Heartbeat( ) from entry-point "MonitoringStream"
not( Heartbeat( this != $h, this after[0s,10s] $h ) from entry-point "MonitoringStream" )
then
// Sound the alarm
end
12.12. Temporal Reasoning
12.12.1. Temporal Reasoning
Note
12.12.2. Temporal Operations
12.12.2.1. Temporal Operations
- After
- Before
- Coincides
- During
- Finishes
- Finishes By
- Includes
- Meets
- Met By
- Overlaps
- Overlapped By
- Starts
- Started By
12.12.2.2. After
The after operator correlates two events and matches when the temporal distance (the time between the two events) from the current event to the event being correlated falls into the distance range declared for the operator.
$eventA : EventA( this after[ 3m30s, 4m ] $eventB )
This pattern matches if the temporal distance between the time when $eventB finished and the time when $eventA started is between the lower limit of three minutes and thirty seconds and the upper limit of four minutes. This can be represented as follows:
3m30s <= $eventA.startTimestamp - $eventB.endTimestamp <= 4m
The after operator accepts one or two optional parameters:
- If two values are defined, the interval starts on the first value (3 minutes and 30 seconds in the example) and ends on the second value (4 minutes in the example).
- If only one value is defined, the interval starts on the provided value and runs indefinitely with no end time.
- If no value is defined, the interval starts at one millisecond and runs indefinitely with no end time.
The after operator also accepts negative temporal distances:
$eventA : EventA( this after[ -3m30s, -2m ] $eventB )
If the first value is greater than the second value, the engine automatically reverses them. The following two patterns are considered to have the same semantics:
$eventA : EventA( this after[ -3m30s, -2m ] $eventB )
$eventA : EventA( this after[ -2m, -3m30s ] $eventB )
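As a rough plain-Java sketch (illustrative only, not the Drools implementation; the AfterOperator class is hypothetical), the after check reduces to a single comparison on timestamps, with the bounds swapped when given in descending order:

```java
// Hypothetical helper mirroring the "after" relation described above:
// matches when lower <= a.start - b.end <= upper (all in milliseconds).
// If the bounds are supplied in the wrong order, they are swapped, as
// the engine does for negative ranges.
public class AfterOperator {
    public static boolean after(long aStart, long bEnd, long lower, long upper) {
        if (lower > upper) { // reverse a range such as [-2m, -3m30s]
            long tmp = lower;
            lower = upper;
            upper = tmp;
        }
        long distance = aStart - bEnd;
        return lower <= distance && distance <= upper;
    }

    public static void main(String[] args) {
        // $eventB ends at t=0; $eventA starts 3m45s later: inside [3m30s, 4m]
        System.out.println(after(225_000L, 0L, 210_000L, 240_000L));
    }
}
```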
12.12.2.3. Before
The before operator correlates two events and matches when the temporal distance (time between the two events) from the event being correlated to the current event falls within the distance range declared for the operator.
$eventA : EventA( this before[ 3m30s, 4m ] $eventB )
This pattern matches if the temporal distance between the time when $eventA finished and the time when $eventB started is between the lower limit of three minutes and thirty seconds and the upper limit of four minutes. This can be represented as follows:
3m30s <= $eventB.startTimestamp - $eventA.endTimestamp <= 4m
The before operator accepts one or two optional parameters:
- If two values are defined, the interval starts on the first value (3 minutes and 30 seconds in the example) and ends on the second value (4 minutes in the example).
- If only one value is defined, the interval starts on the provided value and runs indefinitely with no end time.
- If no value is defined, the interval starts at one millisecond and runs indefinitely with no end time.
The before operator also accepts negative temporal distances:
$eventA : EventA( this before[ -3m30s, -2m ] $eventB )
If the first value is greater than the second value, the engine automatically reverses them. The following two patterns are considered to have the same semantics:
$eventA : EventA( this before[ -3m30s, -2m ] $eventB )
$eventA : EventA( this before[ -2m, -3m30s ] $eventB )
12.12.2.4. Coincides
The coincides operator correlates two events and matches when both events happen at the same time.
$eventA : EventA( this coincides $eventB )
This pattern matches when the start timestamps of $eventA and $eventB are identical and the end timestamps of both $eventA and $eventB are also identical.
The coincides operator accepts optional thresholds for the distance between the events' start times and the events' end times, so the events do not have to start at exactly the same time or end at exactly the same time, but they need to be within the provided thresholds. The following rules apply when defining thresholds for the coincides operator:
- If only one parameter is given, it is used to set the threshold for both the start and end times of both events.
- If two parameters are given, the first is used as a threshold for the start time and the second one is used as a threshold for the end time.
$eventA : EventA( this coincides[15s, 10s] $eventB )
abs( $eventA.startTimestamp - $eventB.startTimestamp ) <= 15s && abs( $eventA.endTimestamp - $eventB.endTimestamp ) <= 10s
Warning
The coincides operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
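The threshold form of coincides reduces to two absolute-difference checks on the start and end timestamps. The following is an illustrative plain-Java sketch, not the Drools implementation; the CoincidesSketch class is hypothetical:

```java
// Illustrative check for "coincides[startThreshold, endThreshold]":
// both the start distance and the end distance must stay within their
// thresholds. Negative thresholds are rejected, as in the engine.
public class CoincidesSketch {
    public static boolean coincides(long aStart, long aEnd,
                                    long bStart, long bEnd,
                                    long startThreshold, long endThreshold) {
        if (startThreshold < 0 || endThreshold < 0) {
            throw new IllegalArgumentException("negative intervals not accepted");
        }
        return Math.abs(aStart - bStart) <= startThreshold
            && Math.abs(aEnd - bEnd) <= endThreshold;
    }

    public static void main(String[] args) {
        // thresholds of 15s on the start times and 10s on the end times,
        // matching the coincides[15s, 10s] example above
        System.out.println(coincides(0, 60_000, 12_000, 55_000, 15_000, 10_000));
    }
}
```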
12.12.2.5. During
The during operator correlates two events and matches when the current event happens during the event being correlated.
$eventA : EventA( this during $eventB )
This pattern matches when $eventA starts after $eventB starts and ends before $eventB ends.
$eventB.startTimestamp < $eventA.startTimestamp <= $eventA.endTimestamp < $eventB.endTimestamp
The during operator accepts one, two, or four optional parameters:
- If one value is defined, this value will represent the maximum distance between the start times of the two events and the maximum distance between the end times of the two events.
- If two values are defined, these values represent a threshold that the current event's start time and end time must occur between in relation to the correlated event's start and end times.If the values 5s and 10s are provided, the current event must start between 5 and 10 seconds after the correlated event, and similarly the current event must end between 5 and 10 seconds before the correlated event.
- If four values are defined, the first and second values will be used as the minimum and maximum distances between the starting times of the events, and the third and fourth values will be used as the minimum and maximum distances between the end times of the two events.
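The parameter variants above can be sketched as a plain-Java check. This is illustrative only, not the Drools implementation; the DuringSketch class is hypothetical, and the four-value form subsumes the two-value form:

```java
// Illustrative check for "during[minStart, maxStart, minEnd, maxEnd]":
// eventA must start between minStart and maxStart after eventB starts,
// and end between minEnd and maxEnd before eventB ends.
public class DuringSketch {
    public static boolean during(long aStart, long aEnd,
                                 long bStart, long bEnd,
                                 long minStart, long maxStart,
                                 long minEnd, long maxEnd) {
        long startGap = aStart - bStart; // how long after B's start A starts
        long endGap = bEnd - aEnd;       // how long before B's end A ends
        return startGap >= minStart && startGap <= maxStart
            && endGap >= minEnd && endGap <= maxEnd;
    }

    // Two-value form: both gaps must fall inside [min, max].
    public static boolean during(long aStart, long aEnd,
                                 long bStart, long bEnd,
                                 long min, long max) {
        return during(aStart, aEnd, bStart, bEnd, min, max, min, max);
    }

    public static void main(String[] args) {
        // A starts 7s after B starts and ends 8s before B ends; bounds [5s, 10s]
        System.out.println(during(7_000, 52_000, 0, 60_000, 5_000, 10_000));
    }
}
```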
12.12.2.6. Finishes
The finishes operator correlates two events and matches when the current event's start timestamp post-dates the correlated event's start timestamp and both events end simultaneously.
$eventA : EventA( this finishes $eventB )
This pattern matches when $eventA starts after $eventB starts and ends at the same time as $eventB ends.
$eventB.startTimestamp < $eventA.startTimestamp && $eventA.endTimestamp == $eventB.endTimestamp
The finishes operator accepts one optional parameter. If defined, the optional parameter sets the maximum time allowed between the end times of the two events.
$eventA : EventA( this finishes[ 5s ] $eventB )
$eventB.startTimestamp < $eventA.startTimestamp && abs( $eventA.endTimestamp - $eventB.endTimestamp ) <= 5s
Warning
The finishes operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
12.12.2.7. Finishes By
The finishedby operator correlates two events and matches when the current event's start time predates the correlated event's start time but both events end simultaneously. The finishedby operator is the symmetrical opposite of the finishes operator.
$eventA : EventA( this finishedby $eventB )
This pattern matches when $eventA starts before $eventB starts and ends at the same time as $eventB ends.
$eventA.startTimestamp < $eventB.startTimestamp && $eventA.endTimestamp == $eventB.endTimestamp
The finishedby operator accepts one optional parameter. If defined, the optional parameter sets the maximum time allowed between the end times of the two events.
$eventA : EventA( this finishedby[ 5s ] $eventB )
$eventA.startTimestamp < $eventB.startTimestamp && abs( $eventA.endTimestamp - $eventB.endTimestamp ) <= 5s
Warning
The finishedby operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
12.12.2.8. Includes
The includes operator examines two events and matches when the event being correlated happens during the current event. It is the symmetrical opposite of the during operator.
$eventA : EventA( this includes $eventB )
This pattern matches when $eventB starts after $eventA starts and ends before $eventA ends.
$eventA.startTimestamp < $eventB.startTimestamp <= $eventB.endTimestamp < $eventA.endTimestamp
The includes operator accepts one, two, or four optional parameters:
- If one value is defined, this value will represent the maximum distance between the start times of the two events and the maximum distance between the end times of the two events.
- If two values are defined, these values represent a threshold that the current event's start time and end time must occur between in relation to the correlated event's start and end times.If the values 5s and 10s are provided, the current event must start between 5 and 10 seconds after the correlated event, and similarly the current event must end between 5 and 10 seconds before the correlated event.
- If four values are defined, the first and second values will be used as the minimum and maximum distances between the starting times of the events, and the third and fourth values will be used as the minimum and maximum distances between the end times of the two events.
12.12.2.9. Meets
The meets operator correlates two events and matches when the current event ends at the same time as the correlated event starts.
$eventA : EventA( this meets $eventB )
This pattern matches when $eventA ends at the same time as $eventB starts.
abs( $eventB.startTimestamp - $eventA.endTimestamp ) == 0
The meets operator accepts one optional parameter. If defined, it determines the maximum time allowed between the end time of the current event and the start time of the correlated event.
$eventA : EventA( this meets[ 5s ] $eventB )
abs( $eventB.startTimestamp - $eventA.endTimestamp) <= 5s
Warning
The meets operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
12.12.2.10. Met By
The metby operator correlates two events and matches when the current event starts at the same time as the correlated event ends.
$eventA : EventA( this metby $eventB )
This pattern matches when $eventA starts at the same time as $eventB ends.
abs( $eventA.startTimestamp - $eventB.endTimestamp ) == 0
The metby operator accepts one optional parameter. If defined, it sets the maximum distance between the end time of the correlated event and the start time of the current event.
$eventA : EventA( this metby[ 5s ] $eventB )
abs( $eventA.startTimestamp - $eventB.endTimestamp) <= 5s
Warning
The metby operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
12.12.2.11. Overlaps
The overlaps operator correlates two events and matches when the current event starts before the correlated event starts and ends after the correlated event starts, but it ends before the correlated event ends.
$eventA : EventA( this overlaps $eventB )
$eventA.startTimestamp < $eventB.startTimestamp < $eventA.endTimestamp < $eventB.endTimestamp
The overlaps operator accepts one or two optional parameters:
- If one parameter is defined, it will define the maximum distance between the start time of the correlated event and the end time of the current event.
- If two values are defined, the first value will be the minimum distance, and the second value will be the maximum distance between the start time of the correlated event and the end time of the current event.
12.12.2.12. Overlapped By
The overlappedby operator correlates two events and matches when the correlated event starts before the current event starts, and the correlated event ends after the current event starts but before the current event ends.
$eventA : EventA( this overlappedby $eventB )
$eventB.startTimestamp < $eventA.startTimestamp < $eventB.endTimestamp < $eventA.endTimestamp
The overlappedby operator accepts one or two optional parameters:
- If one parameter is defined, it sets the maximum distance between the start time of the current event and the end time of the correlated event.
- If two values are defined, the first value is the minimum distance and the second value is the maximum distance between the start time of the current event and the end time of the correlated event.
12.12.2.13. Starts
The starts operator correlates two events and matches when they start at the same time, but the current event ends before the correlated event ends.
$eventA : EventA( this starts $eventB )
$eventA and $eventB start at the same time, and $eventA ends before $eventB ends.
$eventA.startTimestamp == $eventB.startTimestamp && $eventA.endTimestamp < $eventB.endTimestamp
The starts operator accepts one optional parameter. If defined, it determines the maximum distance between the start times of the events in order for the operator to still match:
$eventA : EventA( this starts[ 5s ] $eventB )
abs( $eventA.startTimestamp - $eventB.startTimestamp ) <= 5s && $eventA.endTimestamp < $eventB.endTimestamp
Warning
The starts operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
12.12.2.14. Started By
The startedby operator correlates two events. It matches when both events start at the same time and the correlated event ends before the current event ends.
$eventA : EventA( this startedby $eventB )
$eventA and $eventB start at the same time, and $eventB ends before $eventA ends.
$eventA.startTimestamp == $eventB.startTimestamp && $eventA.endTimestamp > $eventB.endTimestamp
The startedby operator accepts one optional parameter. If defined, it sets the maximum distance between the start times of the two events in order for the operator to still match:
$eventA : EventA( this startedby[ 5s ] $eventB )
abs( $eventA.startTimestamp - $eventB.startTimestamp ) <= 5s && $eventA.endTimestamp > $eventB.endTimestamp
Warning
The startedby operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
12.13. Sliding Windows
12.13.1. Sliding Time Windows
StockTick() over window:time( 2m )
You use the over keyword to associate windows with patterns.
Example 12.15. Average Value over Time
rule "Sound the alarm in case temperature rises above threshold"
when
TemperatureThreshold( $max : max )
Number( doubleValue > $max ) from accumulate(
SensorReading( $temp : temperature ) over window:time( 10m ),
average( $temp ) )
then
// sound the alarm
end
The rules engine automatically discards any SensorReading more than ten minutes old and keeps re-calculating the average.
12.13.2. Sliding Length Windows
StockTick( company == "RHT" ) over window:length( 10 )
Example 12.16. Average Value over Length
rule "Sound the alarm in case temperature rises above threshold"
when
TemperatureThreshold( $max : max )
Number( doubleValue > $max ) from accumulate(
SensorReading( $temp : temperature ) over window:length( 100 ),
average( $temp ) )
then
// sound the alarm
end
Note
12.14. Memory Management for Events
12.14.1. Memory Management for Events
- Explicitly
- Event expiration can be explicitly set with the @expires tag.
- Implicitly
- The rules engine can analyze the temporal constraints in rules to determine the window of interest for events.
12.14.2. Explicit Expiration
Explicit expiration is set in the declare statement with the metadata @expires tag.
Example 12.17. Declaring Explicit Expiration
declare StockTick
@expires( 30m )
end
For StockTick events, the rules engine automatically removes any StockTick events from the session after the defined expiration time if no rules still need the events.
12.14.3. Inferred Expiration
Example 12.18. A Rule with Temporal Constraints
rule "correlate orders"
when
$bo : BuyOrder( $id : id )
$ae : AckOrder( id == $id, this after[0,10s] $bo )
then
// do something
end
When a BuyOrder event occurs, the rules engine needs to store the event for up to ten seconds to wait for the matching AckOrder event, making the implicit expiration offset for BuyOrder events ten seconds. An AckOrder event can only match an existing BuyOrder event, making its implicit expiration offset zero seconds.
Chapter 13. REST API
- Knowledge Store (Artifact Repository) REST API calls are calls to the static data (definitions) and are asynchronous, that is, they continue running after the call as a job. These calls return a job ID, which can be used after the REST API call was performed to request the job status and verify whether the job finished successfully. Parameters of these calls are provided in the form of JSON entities.
- Deployment REST API calls are asynchronous or synchronous, depending on the operation performed. These calls perform actions on the deployments or retrieve information about one or more deployments.
- Runtime REST API calls are calls to the Execution Server and to the Process Execution Engine, Task Execution Engine, and Business Rule Engine. They are synchronous and return the requested data as JAXB objects.
http://SERVER_ADDRESS:PORT/business-central/rest/REQUEST_BODY
Note
13.1. Knowledge Store REST API
13.1.1. Job calls
- ACCEPTED: the job was accepted and is being processed.
- BAD_REQUEST: the request was not accepted as it contained incorrect content.
- RESOURCE_NOT_EXIST: the requested resource (path) does not exist.
- DUPLICATE_RESOURCE: the resource already exists.
- SERVER_ERROR: an error on the server occurred.
- SUCCESS: the job finished successfully.
- FAIL: the job failed.
- APPROVED: the job was approved.
- DENIED: the job was denied.
- GONE: the job ID could not be found. A job can be GONE in the following cases:
- The job was explicitly removed.
- The job finished and has been deleted from the status cache (the job is removed from status cache after the cache has reached its maximum capacity).
- The job never existed.
The following job calls are provided:
- [GET] /jobs/{jobID}
- returns the job status [GET]
Example 13.1. Response of the job call on a repository clone request
"{"status":"SUCCESS","jobId":"1377770574783-27","result":"Alias: testInstallAndDeployProject, Scheme: git, Uri: git://testInstallAndDeployProject","lastModified":1377770578194,"detailedResult":null}"
- [DELETE] /jobs/{jobID}
- removes the job [DELETE]
13.1.2. Repository calls
The following repositories calls are provided:
- [GET] /repositories
- This returns a list of the repositories in the Knowledge Store as a JSON entity [GET]
Example 13.2. Response of the repositories call
[{"name":"bpms-assets","description":"generic assets","userName":null,"password":null,"requestType":null,"gitURL":"git://bpms-assets"},{"name":"loanProject","description":"Loan processes and rules","userName":null,"password":null,"requestType":null,"gitURL":"git://loansProject"}]
- [DELETE] /repositories/{repositoryName}
- This removes the repository from the Knowledge Store [DELETE]
- [POST] /repositories/
- This creates or clones the repository defined by the JSON entity [POST]
Example 13.3. JSON entity with repository details of a repository to be cloned
{"name":"myClonedRepository", "description":"", "userName":"", "password":"", "requestType":"clone", "gitURL":"git://localhost/example-repository"}
- [POST] /repositories/{repositoryName}/projects/
- This creates a project in the repository [POST]
Example 13.4. Request body that defines the project to be created
"{"name":"myProject","description": "my project"}"
- [DELETE] /repositories/{repositoryName}/projects/
- This deletes the project in the repository [DELETE]
Example 13.5. Request body that defines the project to be deleted
"{"name":"myProject","description": "my project"}"
13.1.3. Organizational unit calls
The following organizationalunits calls are provided:
- [GET] /organizationalunits/
- This returns a list of all the organizational units [GET].
- [POST] /organizationalunits/
- This creates an organizational unit in the Knowledge Store [POST]. The organizational unit is defined as a JSON entity. This consumes an OrganizationalUnit instance and returns a CreateOrganizationalUnitRequest instance.
Example 13.6. Organizational unit in JSON
{ "name":"testgroup", "description":"", "owner":"tester", "repositories":["testGroupRepository"] }
- [POST] /organizationalunits/{organizationalUnitName}/repositories/{repositoryName}
- This adds the repository to the organizational unit [POST]. It returns an AddRepositoryToOrganizationalUnitRequest instance.
Note
13.1.4. Maven calls
The following maven calls are provided:
- [POST] /repositories/{repositoryName}/projects/{projectName}/maven/compile/
- This compiles the project (equivalent to mvn compile) [POST]. It consumes a BuildConfig instance, which must be supplied but is not needed for the operation and may be left blank. It returns a CompileProjectRequest instance.
- [POST] /repositories/{repositoryName}/projects/{projectName}/maven/install/
- This installs the project (equivalent to mvn install) [POST]. It consumes a BuildConfig instance, which must be supplied but is not needed for the operation and may be left blank. It returns an InstallProjectRequest instance.
- [POST] /repositories/{repositoryName}/projects/{projectName}/maven/test/
- This compiles the project and runs the tests [POST]. It consumes a BuildConfig instance and returns a TestProjectRequest instance.
- [POST] /repositories/{repositoryName}/projects/{projectName}/maven/deploy/
- This deploys the project (equivalent to mvn deploy) [POST]. It consumes a BuildConfig instance, which must be supplied but is not needed for the operation and may be left blank. It returns a DeployProjectRequest instance.
13.2. Deployment REST API
Note
[\w\.-]+(:[\w\.-]+){2,2}(:[\w\.-]*){0,2}
- [A-Z]
- [a-z]
- [0-9]
- _
- .
- -
- Group Id
- Artifact Id
- Version
- kbase Id (optional)
- ksession Id (optional)
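The deploymentId format described above can be verified against the regular expression with java.util.regex. This is a minimal sketch (the helper class and sample IDs are hypothetical):

```java
import java.util.regex.Pattern;

public class DeploymentIdCheck {

    // The deploymentId pattern from the text:
    // groupId:artifactId:version with optional kbase and ksession segments.
    static final Pattern DEPLOYMENT_ID =
        Pattern.compile("[\\w\\.-]+(:[\\w\\.-]+){2,2}(:[\\w\\.-]*){0,2}");

    public static boolean isValid(String id) {
        return DEPLOYMENT_ID.matcher(id).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("org.example:loanProject:1.0"));              // true
        System.out.println(isValid("org.example:loanProject:1.0:kbase1:ksession1")); // true
        System.out.println(isValid("org.example"));                              // false
    }
}
```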
13.2.1. Asynchronous calls
/deployment/{deploymentId}/deploy
/deployment/{deploymentId}/undeploy
- The posted request may have been successfully accepted, but the actual operation (deploying or undeploying the deployment unit) may still fail.
- The deployment information retrieved on calling the GET operations may even have changed (including the status of the deployment unit).
13.2.2. Deployment calls
- /deployment/
- returns a list of all available deployed instances [GET]
- /deployment/{deploymentId}
- returns a JaxbDeploymentUnit instance containing the information (including the configuration) of the deployment unit [GET]
- /deployment/{deploymentId}/deploy
- deploys the deployment unit referenced by the deploymentId and returns a JaxbDeploymentJobResult instance with the status of the request [POST]
- /deployment/{deploymentId}/undeploy
- undeploys the deployment unit referenced by the deploymentId and returns a JaxbDeploymentJobResult instance with the status of the request [POST]
Note
- An identical job has already been submitted to the queue and has not yet completed.
- The amount of (deploy/undeploy) jobs submitted but not yet processed exceeds the job cache size.
13.3. Runtime REST API
Parameters are appended to the REQUEST_BODY after the ? symbol, together with the parameter value; for example, rest/task/query?workItemId=393 returns a TaskSummary list of all tasks based on the work item with ID 393. Note that parameters and their values are case-sensitive.
Map parameters are prefixed with the map_ keyword; for example,
map_age=5000
results in the map entry { "age" => Long.parseLong("5000") }.
Example 13.7. A GET call that returns all tasks to a locally running application using curl
curl -v -H 'Accept: application/json' -u eko 'localhost:8080/kie/rest/tasks/'
Alternatively, in Java, obtain a newRuntimeEngine() object from the RemoteRestSessionFactory. The RuntimeEngine can then be used to create a KieSession.
Example 13.8. A GET call that returns task details to a locally running application in Java with the direct tasks/TASKID request
public Task getTaskInstanceInfo(long taskId) throws Exception {
URL address = new URL(url + "/task/" + taskId);
ClientRequest restRequest = createRequest(address);
ClientResponse<InputStream> taskResponse = restRequest.get(InputStream.class);
JAXBContext jaxbTaskContext = JAXBContext.newInstance(JaxbTaskResponse.class);
StreamSource source = new StreamSource(taskResponse.getEntity());
return jaxbTaskContext.createUnmarshaller().unmarshal(source, JaxbTaskResponse.class).getValue();
}
private ClientRequest createRequest(URL address) {
return getClientRequestFactory().createRequest(address.toExternalForm());
}
private ClientRequestFactory getClientRequestFactory() {
DefaultHttpClient httpClient = new DefaultHttpClient();
httpClient.getCredentialsProvider().setCredentials(new AuthScope(AuthScope.ANY_HOST,
AuthScope.ANY_PORT, AuthScope.ANY_REALM), new UsernamePasswordCredentials(userId, password));
ClientExecutor clientExecutor = new ApacheHttpClient4Executor(httpClient);
return new ClientRequestFactory(clientExecutor, ResteasyProviderFactory.getInstance());
}
To perform multiple operations on a task, consider using the execute call (refer to Section 13.3.4, “Execute operations”).
13.3.1. Usage Information
13.3.1.1. Pagination
The pagination parameters are:
- page or p: the number of the page to be returned (by default set to 1, that is, page number 1 is returned)
- pageSize or s: the number of items per page (default value 10)
The following calls accept pagination parameters:
/task/query
/history/instance
/history/instance/{id: [0-9]+}
/history/instance/{id: [0-9]+}/child
/history/instance/{id: [0-9]+}/node
/history/instance/{id: [0-9]+}/node/{id: [a-zA-Z0-9-:\\.]+}
/history/instance/{id: [0-9]+}/variable/
/history/instance/{id: [0-9]+}/variable/{id: [a-zA-Z0-9-:\\.]+}
/history/process/{id: [a-zA-Z0-9-:\\.]+}
Example 13.9. REST request body with the pagination parameter
/history/instance?page=3&pageSize=20
/history/instance?p=3&s=20
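The page/pageSize arithmetic can be sketched with plain lists. The helper class below is a hypothetical illustration (pages are 1-based, matching the defaults described above):

```java
import java.util.Arrays;
import java.util.List;

public class Pagination {

    // Returns the items for the given 1-based page, mirroring the
    // page/pageSize (p/s) query parameters described in the text.
    public static <T> List<T> page(List<T> items, int page, int pageSize) {
        int from = Math.min((page - 1) * pageSize, items.size());
        int to = Math.min(from + pageSize, items.size());
        return items.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5);
        System.out.println(page(ids, 2, 2)); // [3, 4]
    }
}
```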
13.3.1.2. Object data type parameters
The following value formats are recognized:
- \d+i: Integer
- \d+l: Long
Example 13.10. REST request body with the Integer mySignal parameter
/rest/runtime/business-central/process/org.jbpm.test/instance/2/signal?mySignal=1234i
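The suffix convention above can be sketched as follows. The helper class is hypothetical, and only the i and l suffixes from the text are handled; any other value is left as a String:

```java
public class TypedParam {

    // Parses a query parameter value using the suffix convention:
    // a trailing 'i' yields an Integer, a trailing 'l' a Long.
    public static Object parse(String value) {
        if (value.matches("\\d+i")) {
            return Integer.valueOf(value.substring(0, value.length() - 1));
        }
        if (value.matches("\\d+l")) {
            return Long.valueOf(value.substring(0, value.length() - 1));
        }
        return value; // no suffix: treated as a plain String
    }

    public static void main(String[] args) {
        System.out.println(parse("1234i")); // 1234 (as an Integer)
        System.out.println(parse("5000l")); // 5000 (as a Long)
    }
}
```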
To pass parameters of other data types, use the startProcess command in the execute call (refer to Section 13.3.4, “Execute operations”).
13.3.2. Runtime calls
Runtime calls can alternatively be performed as execute calls on /runtime/{deploymentId}/execute/{CommandObject} (refer to Section 13.3.4, “Execute operations”).
13.3.2.1. Process calls
The /runtime/{deploymentId}/process/ calls are sent to the Process Execution Engine.
The following process calls are provided:
- /runtime/{deploymentId}/process/{procDefID}/start
- creates and starts a Process instance of the provided Process definition [POST]
- /runtime/{deploymentId}/process/instance/{procInstanceID}
- returns the details of the given Process instance [GET]
- /runtime/{deploymentId}/process/instance/{procInstanceID}/signal
- sends a signal event to the given Process instance [POST]
The call accepts query map parameters with the signal details.
Example 13.11. A local signal invocation and its REST version
ksession.signalEvent("MySignal", "value", 23l);
curl -v -u admin 'localhost:8080/business-central/rest/runtime/myDeployment/process/instance/23/signal?signal=MySignal&event=value'
- /runtime/{deploymentId}/process/instance/{procInstanceID}/abort
- aborts the Process instance [POST]
- /runtime/{deploymentId}/process/instance/{procInstanceID}/variables
- returns the variables of the Process instance [GET]
Variables are returned as JaxbVariablesResponse objects. Note that the returned variable values are strings.
13.3.2.2. Signal calls
The signal/ calls send a signal defined by the provided query map parameters either to the deployment or to a particular process instance.
The following signal calls are provided:
- /runtime/{deploymentId}/process/instance/{procInstanceID}/signal
- sends a signal to the given process instance [POST]
See the previous subsection for an example of this call.
- /runtime/{deploymentId}/signal
- This operation takes a signal and an event query parameter and sends a signal to the deployment [POST].
- The signal parameter value is used as the name of the signal. This parameter is required.
- The event parameter value is used as the value of the event. This value may use the number query parameter syntax described earlier.
Example 13.12. Signal Call Example
/runtime/{deploymentId}/signal?signal={signalCode}
This call is equivalent to the ksession.signalEvent("signalName", eventValue) method.
13.3.2.3. Work item calls
The /runtime/{deploymentId}/workitem/ calls allow you to complete or abort a particular work item.
The following work item calls are provided:
- /runtime/{deploymentId}/workitem/{workItemID}/complete
- completes the given work item [POST]
The call accepts query map parameters containing information about the results.
Example 13.13. A local invocation and its REST version
Map<String, Object> results = new HashMap<String, Object>();
results.put("one", "done");
results.put("two", 2);
kieSession.getWorkItemManager().completeWorkItem(23l, results);
curl -v -u admin 'localhost:8080/business-central/rest/runtime/myDeployment/workitem/23/complete?map_one=done&map_two=2i'
- /runtime/{deploymentId}/workitem/{workItemID}/abort
- aborts the given work item [POST]
13.3.2.4. History calls
The /history/ calls administer logs of process instances, their nodes, and process variables.
Note
While the /history/ calls specified in 6.0.0.GA of BPMS are still available, as of 6.0.1.GA the /history/ calls have been made independent of any deployment, which is also reflected in the URLs used.
The following history calls are provided:
- /history/clear
- clears all process, variable, and node logs [POST]
- /history/instances
- returns logs of all Process instances [GET]
- /history/instance/{procInstanceID}
- returns all logs of the Process instance (including child logs) [GET]
- /history/instance/{procInstanceID}/child
- returns logs of child Process instances [GET]
- /history/instance/{procInstanceID}/node
- returns logs of all nodes of the Process instance [GET]
- /history/instance/{procInstanceID}/node/{nodeID}
- returns logs of the node of the Process instance [GET]
- /history/instance/{procInstanceID}/variable
- returns variables of the Process instance with their values [GET]
- /history/instance/{procInstanceID}/variable/{variableID}
- returns the logs of the process instance that contain the given variable ID [GET]
- /history/process/{procInstanceID}
- returns the logs of the given Process instance excluding logs of its nodes and variables [GET]
History calls that search by variable
These calls accept the activeProcesses parameter, which limits the selection to information from active process instances.
- /history/variable/{varId}
- returns the variable logs of the specified process variable [GET]
- /history/variable/{varId}/instances
- returns the process instance logs for processes that contain the specified process variable [GET]
- /history/variable/{varId}/value/{value}
- returns the variable logs for the specified process variable with the specified value [GET]
Example 13.14. A local invocation and its REST version
auditLogService.findVariableInstancesByNameAndValue("countVar", "three", true);
curl -v -u admin 'localhost:8080/business-central/rest/history/variable/countVar/value/three?activeProcesses=true'
- /history/variable/{varId}/value/{value}/instances
- returns the process instance logs for process instances that contain the specified process variable with the specified value [GET]
13.3.2.5. Calls to process variables
The /runtime/{deploymentId}/withvars/ calls allow you to work with Process variables. Note that all variable values are returned as strings in the JaxbVariablesResponse object.
The following withvars calls are provided:
- /runtime/{deploymentId}/withvars/process/{procDefinitionID}/start
- creates and starts a Process instance and returns the Process instance with its variables [POST]
Note that even if a passed variable is not defined in the underlying Process definition, it is created and initialized with the passed value.
- /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}
- returns Process instance with its variables [GET]
- /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}/signal
- sends a signal event to the Process instance (accepts query map parameters) [POST]
13.3.3. Task calls
The following task calls are provided:
- /task/{taskId: \\d+}
- returns the task in JAXB format [GET]
Further call paths are provided to perform other actions on tasks; refer to Section 13.3.3.1, “Task ID operations”.
- /task/query
- returns a TaskSummary list [GET]
Further call paths are provided to perform other actions on task/query; refer to Section 13.3.3.3, “Query operations”.
- /task/content/{contentId: \\d+}
- returns the task content in the JAXB format [GET]
For further information, refer to Section 13.3.3.2, “Content operations”.
13.3.3.1. Task ID operations
The task/{taskId: \\d+}/ACTION calls allow you to execute an action on the given task (if no action is defined, the call is a GET call that returns the JAXB representation of the task).
Table 13.1. Task Actions
| Action | Description |
|---|---|
| activate | activate the task [POST] |
| claim | claim the task [POST] (The user authenticated in the REST call claims the task.) |
| claimnextavailable | claim the next available task [POST] (This operation claims the next available task assigned to the user.) |
| complete | complete the task [POST] (accepts query map parameters) |
| delegate | delegate the task [POST] (Requires a targetId query parameter, which identifies the user or group to which the task is delegated.) |
| exit | exit the task [POST] |
| fail | fail the task [POST] |
| forward | forward the task [POST] |
| release | release the task [POST] |
| resume | resume the task [POST] |
| skip | skip the task [POST] |
| start | start the task [POST] |
| stop | stop the task [POST] |
| suspend | suspend the task [POST] |
| nominate | nominate the task [POST] (Requires at least one user or group query parameter, which identifies the user(s) or group(s) nominated for the task.) |
13.3.3.2. Content operations
The task/content/{contentId: \\d+} and task/{taskId: \\d+}/content operations return the serialized content associated with the given task.
Task content is stored serialized with the org.jbpm.services.task.utils.ContentMarshallerHelper class.
Because clients may not have access to the org.jbpm.services.task.utils.ContentMarshallerHelper class, they cannot deserialize the task content directly. When using the REST call to obtain task content, the content is therefore first deserialized using the ContentMarshallerHelper class and then serialized with the common Java serialization mechanism. This works only when the following conditions are met:
- The requested objects are instances of a class that implements the Serializable interface. In the case of Map objects, they only contain values that implement the Serializable interface.
- The objects are not instances of a local class, an anonymous class, or arrays of a local or anonymous class.
- The object classes are present on the class path of the server.
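As an illustration of the common Java serialization step, the following sketch round-trips a Serializable map the way a client might deserialize returned task content. The helper class is hypothetical and stands in for the client side only:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class ContentRoundTrip {

    // Round-trips an object through the common Java serialization
    // mechanism. This succeeds only for Serializable content, matching
    // the conditions listed above.
    public static Object roundTrip(Serializable content) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(content);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return in.readObject();
            }
        } catch (Exception e) {
            throw new IllegalStateException("content is not serializable", e);
        }
    }

    public static void main(String[] args) {
        HashMap<String, Object> content = new HashMap<>();
        content.put("approved", Boolean.TRUE); // Boolean implements Serializable
        Map<?, ?> copy = (Map<?, ?>) roundTrip(content);
        System.out.println(copy.get("approved")); // true
    }
}
```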
13.3.3.3. Query operations
The /task/query call is a GET call that returns a TaskSummary list of the tasks that meet the criteria defined in the call parameters. Note that you can use the pagination feature to define the amount of data to be returned.
Parameters
The following parameters can be used with the task/query call:
- workItemId: returns only tasks based on the work item.
- taskId: returns only the task with the particular ID.
- businessAdministrator: returns tasks with the identified business administrator.
- potentialOwner: returns tasks that can be claimed by the potentialOwner user.
- status: returns tasks that are in the given status (Created, Ready, Reserved, InProgress, Suspended, Completed, Failed, Error, Exited, or Obsolete).
- taskOwner: returns tasks assigned to the particular user.
- processInstanceId: returns tasks generated by the Process instance.
- union: specifies whether the query should return the union or the intersection of the parameters.
Example 13.15. Query usage
Get the tasks based on the work items with IDs 3, 4, and 5:
http://server:port/rest/task/query?workItemId=3&workItemId=4&workItemId=5
Get the intersection of the tasks based on work item 11 and the task with ID 27:
http://server:port/rest/task/query?workItemId=11&taskId=27
The union parameter is used here so that the union of the two queries (the work item ID query and the task ID query) is returned:
http://server:port/rest/task/query?workItemId=11&taskId=27&union=true
Get the tasks whose status is `Created` and whose potential owner is `Bob`. Note that the letter case of the status parameter value is case-insensitive.
http://server:port/rest/task/query?status=creAted&potentialOwner=Bob
Get the tasks whose status is `Created` and whose potential owner is `bob`. Note that the potential owner parameter is case-sensitive: `bob` is not the same user ID as `Bob`!
http://server:port/rest/task/query?status=created&potentialOwner=bob
Get the tasks of process instance 201 with potential owner `bob` whose status is `Created` or `Ready`:
http://server:port/rest/task/query?status=created&status=ready&potentialOwner=bob&processInstanceId=201
Of the following tasks, only the first two match the query:
- process instance id 201, potential owner `bob`, status `Ready`
- process instance id 201, potential owner `bob`, status `Created`
- process instance id 183, potential owner `bob`, status `Created`
- process instance id 201, potential owner `mary`, status `Ready`
- process instance id 201, potential owner `bob`, status `Complete`
Usage
The following parameters may be entered multiple times: workItemId, taskId, businessAdministrator, potentialOwner, taskOwner, and processInstanceId. If the status parameter is entered multiple times, the query returns the union of tasks that have any of the status values, intersected with the tasks that satisfy the other criteria.
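The union/intersection semantics can be sketched with plain sets. The helper class below is hypothetical, and task IDs stand in for the TaskSummary results of each query parameter:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class QuerySemantics {

    // Combines the result sets of two query parameters either as an
    // intersection (the default) or as a union (union=true), mirroring
    // the behavior described for the task/query call.
    public static Set<Long> combine(Set<Long> a, Set<Long> b, boolean union) {
        Set<Long> result = new HashSet<>(a);
        if (union) {
            result.addAll(b);
        } else {
            result.retainAll(b);
        }
        return result;
    }

    public static void main(String[] args) {
        Set<Long> byWorkItem = new HashSet<>(Arrays.asList(11L, 12L));
        Set<Long> byTaskId = new HashSet<>(Arrays.asList(27L));
        System.out.println(combine(byWorkItem, byTaskId, false)); // []
        System.out.println(combine(byWorkItem, byTaskId, true));  // union of both id sets
    }
}
```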
The language parameter is required; if it is not defined, the en-UK value is used. The parameter can be defined only once.
13.3.4. Execute operations
Important
The execute operations were created in order to support the Java Remote Runtime API, and as a result these calls are also available to the user. However, all of the functionality that these operations expose can be accessed more easily either via other REST operations or via the Java Remote Runtime API.
Multiple commands can be submitted in a single execute operation. This is the only way to have the REST API process multiple commands in one operation.
The execute call takes a JaxbCommandsRequest object as its parameter. The JaxbCommandsRequest object contains a List of org.kie.api.command.Command objects (the commands are "stored" in the JaxbCommandsRequest object as Strings and sent via the execute REST call). The JaxbCommandsRequest parameters are the deploymentId, the processInstanceId if applicable, and a Command object.
public List<JaxbCommandResponse<?>> executeCommand(String deploymentId, List<Command<?>> commands) throws Exception {
URL address = new URL(url + "/runtime/" + deploymentId + "/execute");
ClientRequest restRequest = createRequest(address);
JaxbCommandsRequest commandMessage = new JaxbCommandsRequest(deploymentId, commands);
String body = JaxbSerializationProvider.convertJaxbObjectToString(commandMessage);
restRequest.body(MediaType.APPLICATION_XML, body);
ClientResponse<JaxbCommandsResponse> responseObj = restRequest.post(JaxbCommandsResponse.class);
checkResponse(responseObj);
JaxbCommandsResponse cmdsResp = responseObj.getEntity();
return cmdsResp.getResponses();
}
private ClientRequest createRequest(URL address) {
return getClientRequestFactory().createRequest(address.toExternalForm());
}
private ClientRequestFactory getClientRequestFactory() {
DefaultHttpClient httpClient = new DefaultHttpClient();
httpClient.getCredentialsProvider().setCredentials(new AuthScope(AuthScope.ANY_HOST,
AuthScope.ANY_PORT, AuthScope.ANY_REALM), new UsernamePasswordCredentials(userId, password));
ClientExecutor clientExecutor = new ApacheHttpClient4Executor(httpClient);
return new ClientRequestFactory(clientExecutor, ResteasyProviderFactory.getInstance());
}
Figure 13.1. Method implementing the execute REST call
The following tables list the commands that the execute operation will accept. See the constructor and set methods on the actual command classes for further information about which parameters these commands accept.
| AbortWorkItemCommand | SignalEventCommand |
| CompleteWorkItemCommand | StartCorrelatedProcessCommand |
| GetWorkItemCommand | StartProcessCommand |
| AbortProcessInstanceCommand | GetVariableCommand |
| GetProcessIdsCommand | GetFactCountCommand |
| GetProcessInstanceByCorrelationKeyCommand | GetGlobalCommand |
| GetProcessInstanceCommand | GetIdCommand |
| GetProcessInstancesCommand | FireAllRulesCommand |
| SetProcessInstanceVariablesCommand |
| ActivateTaskCommand | GetTaskAssignedAsPotentialOwnerCommand |
| AddTaskCommand | GetTaskByWorkItemIdCommand |
| CancelDeadlineCommand | GetTaskCommand |
| ClaimNextAvailableTaskCommand | GetTasksByProcessInstanceIdCommand |
| ClaimTaskCommand | GetTasksByStatusByProcessInstanceIdCommand |
| CompleteTaskCommand | GetTasksOwnedCommand |
| CompositeCommand | NominateTaskCommand |
| DelegateTaskCommand | ProcessSubTaskCommand |
| ExecuteTaskRulesCommand | ReleaseTaskCommand |
| ExitTaskCommand | ResumeTaskCommand |
| FailTaskCommand | SkipTaskCommand |
| ForwardTaskCommand | StartTaskCommand |
| GetAttachmentCommand | StopTaskCommand |
| GetContentCommand | SuspendTaskCommand |
| GetTaskAssignedAsBusinessAdminCommand |
| ClearHistoryLogsCommand | FindSubProcessInstancesCommand |
| FindActiveProcessInstancesCommand | FindSubProcessInstancesCommand |
| FindNodeInstancesCommand | FindVariableInstancesByNameCommand |
| FindProcessInstanceCommand | FindVariableInstancesCommand |
| FindProcessInstancesCommand |
13.4. REST summary
http://server:port/business-central/rest
Table 13.2. Knowledge Store REST calls
| URL Template | Type | Description |
|---|---|---|
| /jobs/{jobID} | GET | return the job status |
| /jobs/{jobID} | DELETE | remove the job |
| /organizationalunits | GET | return a list of organizational units |
| /organizationalunits | POST |
create an organizational unit in the Knowledge Store described by the JSON
OrganizationalUnit entity
|
| /organizationalunits/{organizationalUnitName}/repositories/{repositoryName} | POST | add a repository to an organizational unit |
| /repositories/ | POST |
add the repository to the organizational unit described by the JSON
RepositoryRequest entity
|
| /repositories | GET | return the repositories in the Knowledge Store |
| /repositories/{repositoryName} | DELETE | remove the repository from the Knowledge Store |
| /repositories/ | POST | create or clone the repository defined by the JSON RepositoryRequest entity |
| /repositories/{repositoryName}/projects/ | POST | create the project defined by the JSON entity in the repository |
| /repositories/{repositoryName}/projects/{projectName}/maven/compile/ | POST | compile the project |
| /repositories/{repositoryName}/projects/{projectName}/maven/install | POST | install the project |
| /repositories/{repositoryName}/projects/{projectName}/maven/test/ | POST |
compile the project and run tests as part of compilation
|
| /repositories/{repositoryName}/projects/{projectName}/maven/deploy/ | POST | deploy the project |
Table 13.3. runtime REST calls
| URL Template | Type | Description |
|---|---|---|
| /runtime/{deploymentId}/process/{procDefID}/start | POST | start a process instance based on the Process definition (accepts query map parameters) |
| /runtime/{deploymentId}/process/instance/{procInstanceID} | GET | return a process instance details |
| /runtime/{deploymentId}/process/instance/{procInstanceID}/abort | POST | abort the process instance |
| /runtime/{deploymentId}/process/instance/{procInstanceID}/signal | POST | send a signal event to process instance (accepts query map parameters) |
| /runtime/{deploymentId}/process/instance/{procInstanceID}/variable/{varId} | GET | return a variable from a process instance |
| /runtime/{deploymentId}/signal/{signalCode} | POST | send a signal event to deployment |
| /runtime/{deploymentId}/workitem/{workItemID}/complete | POST | complete a work item (accepts query map parameters) |
| /runtime/{deploymentId}/workitem/{workItemID}/abort | POST | abort a work item |
| /runtime/{deploymentId}/withvars/process/{procDefinitionID}/start | POST | start a process instance and return the process instance with its variables; note that even if a passed variable is not defined in the underlying process definition, it is created and initialized with the passed value |
| /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}/ | GET | return a process instance with its variables |
| /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}/signal | POST | send a signal event to the process instance (accepts query map parameters) |
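For orientation, the sketch below expands one of these URL templates into a concrete call URL. It is only an illustration: the /rest/ context path, the map_ prefix used here for "query map" parameters, and the host, deployment id, and process id are assumptions for the example, not values defined by this guide.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RestUrlExample {

    // Expand the start-process template from Table 13.3 and append query map
    // parameters that initialize process variables. All concrete values here
    // (host, "test-project", "evaluation", the map_ prefix) are illustrative.
    static String startProcessUrl(String base, String deploymentId, String procDefId,
                                  Map<String, String> vars) {
        StringBuilder url = new StringBuilder(base)
            .append("runtime/").append(deploymentId)
            .append("/process/").append(procDefId).append("/start");
        char sep = '?';
        for (Map.Entry<String, String> e : vars.entrySet()) {
            url.append(sep).append("map_").append(e.getKey()).append('=').append(e.getValue());
            sep = '&';
        }
        return url.toString();
    }

    public static void main(String[] args) {
        Map<String, String> vars = new LinkedHashMap<>();
        vars.put("employee", "krisv");
        // prints the fully expanded POST URL for starting a process instance
        System.out.println(startProcessUrl("http://localhost:8080/business-central/rest/",
                                           "test-project", "evaluation", vars));
    }
}
```

The actual HTTP POST (with BASIC authentication) is then issued against the assembled URL; only the URL assembly is shown here.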
Table 13.4. task REST calls
| URL Template | Type | Description |
|---|---|---|
| /task/query | GET | return a TaskSummary list |
| /task/content/{contentID} | GET | return the content of a task |
| /task/{taskID} | GET | return the task |
| /task/{taskID}/activate | POST | activate the task |
| /task/{taskID}/claim | POST | claim the task |
| /task/{taskID}/claimnextavailable | POST | claim the next available task |
| /task/{taskID}/complete | POST | complete the task (accepts query map parameters) |
| /task/{taskID}/delegate | POST | delegate the task |
| /task/{taskID}/exit | POST | exit the task |
| /task/{taskID}/fail | POST | fail the task |
| /task/{taskID}/forward | POST | forward the task |
| /task/{taskID}/nominate | POST | nominate the task |
| /task/{taskID}/release | POST | release the task |
| /task/{taskID}/resume | POST | resume the task (after suspending) |
| /task/{taskID}/skip | POST | skip the task |
| /task/{taskID}/start | POST | start the task |
| /task/{taskID}/stop | POST | stop the task |
| /task/{taskID}/suspend | POST | suspend the task |
| /task/{taskID}/content | GET | return the content of a task |
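Putting the task operations above together, a typical "work on a task" interaction issues POST calls in lifecycle order: claim, then start, then complete. The sketch below only assembles the URLs from the templates in Table 13.4; the /rest/ context path, host, and task id are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.List;

public class TaskLifecycleUrls {

    // Expand the Table 13.4 templates for a typical happy-path task lifecycle.
    // The base URL and task id are supplied by the caller; nothing is fixed here.
    static List<String> lifecycleUrls(String base, long taskId) {
        return Arrays.asList(
            base + "task/" + taskId + "/claim",
            base + "task/" + taskId + "/start",
            base + "task/" + taskId + "/complete");
    }

    public static void main(String[] args) {
        // prints the three POST URLs, in the order they would be called
        for (String url : lifecycleUrls("http://localhost:8080/business-central/rest/", 7)) {
            System.out.println(url);
        }
    }
}
```

Each of these URLs would be invoked with an HTTP POST by the task's (potential) owner; operations such as release or fail follow the same URL pattern.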
Table 13.5. history REST calls
| URL Template | Type | Description |
|---|---|---|
| /history/clear/ | POST | delete all process, node and history records |
| /history/instances | GET | return the list of all process instance history records |
| /history/instance/{procInstId} | GET | return a list of process instance history records for a process instance |
| /history/instance/{procInstId}/child | GET | return a list of process instance history records for the subprocesses of the process instance |
| /history/instance/{procInstId}/node | GET | return a list of node history records for a process instance |
| /history/instance/{procInstId}/node/{nodeId} | GET | return a list of node history records for a node in a process instance |
| /history/instance/{procInstId}/variable | GET | return a list of variable history records for a process instance |
| /history/instance/{procInstId}/variable/{variableId} | GET | return a list of variable history records for a variable in a process instance |
| /history/process/{procDefId} | GET | return a list of process instance history records for process instances using a given process definition |
| /history/variable/{varId} | GET | return a list of variable history records for a variable |
| /history/variable/{varId}/instances | GET | return a list of process instance history records for process instances that contain a variable with the given variable id |
| /history/variable/{varId}/value/{value} | GET | return a list of variable history records for variable(s) with the given variable id and given value |
| /history/variable/{varId}/value/{value}/instances | GET | return a list of process instance history records for process instances with the specified variable that contains the specified variable value |
Table 13.6. deployment REST calls
| URL Template | Type | Description |
|---|---|---|
| /deployment | GET | return a list of (deployed) deployments |
| /deployment/{deploymentId} | GET | return the status and information about the deployment |
| /deployment/{deploymentId}/deploy | POST | submit a request to deploy a deployment |
| /deployment/{deploymentId}/undeploy | POST | submit a request to undeploy a deployment |
13.5. JMS
13.5.1. JMS Queue Setup
The JMS API uses the following queues:
- jms/queue/KIE.SESSION
- jms/queue/KIE.TASK
- jms/queue/KIE.RESPONSE
The KIE.SESSION and KIE.TASK queues should be used to send request messages to the JMS API. Command response messages will then be placed on the KIE.RESPONSE queue. Command request messages that involve starting and managing business processes should be sent to the KIE.SESSION queue, and command request messages that involve managing human tasks should be sent to the KIE.TASK queue.
Although there are two request queues, KIE.SESSION and KIE.TASK, this is only in order to provide multiple input queues so as to optimize processing: command request messages are processed in the same manner regardless of which queue they are sent to. However, in some cases users may send many more requests involving human tasks than requests involving business processes, and may not want the processing of business process-related request messages to be delayed by the human task messages. By sending the appropriate command request messages to the appropriate queues, this problem can be avoided.
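The routing convention described above can be captured in a small helper. This is only a sketch: the class-name check is a simplified stand-in for inspecting the real command types, and the command names are examples from the Remote API.

```java
public class QueueRouter {

    // Process-related commands go to KIE.SESSION; human-task commands go to
    // KIE.TASK. Responses always arrive on KIE.RESPONSE. Matching on the class
    // name is an illustrative simplification, not how the server dispatches.
    static String requestQueueFor(String commandClassName) {
        if (commandClassName.contains("Task")) {
            return "jms/queue/KIE.TASK";
        }
        return "jms/queue/KIE.SESSION";
    }

    public static void main(String[] args) {
        System.out.println(requestQueueFor("StartProcessCommand"));
        System.out.println(requestQueueFor("GetTaskAssignedAsPotentialOwnerCommand"));
    }
}
```

Either request queue produces identical processing; the split only lets heavy human-task traffic avoid delaying business-process requests.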
Requests sent to the JMS API must be serialized as a JaxbCommandsRequest object. At the moment, only XML serialization (as opposed to JSON or protobuf, for example) is supported.
13.5.2. Serialization issues
A user-defined class must satisfy the following in order to be properly serialized and deserialized by the JMS or REST API:

- The user-defined class must be correctly annotated with JAXB annotations, including the following:
  - The user-defined class must be annotated with a javax.xml.bind.annotation.XmlRootElement annotation with a non-empty name value.
  - All fields or getter/setter methods must be annotated with a javax.xml.bind.annotation.XmlElement or javax.xml.bind.annotation.XmlAttribute annotation.

  Furthermore, the following usage of JAXB annotations is recommended:
  - Annotate the user-defined class with a javax.xml.bind.annotation.XmlAccessorType annotation specifying that fields should be used (javax.xml.bind.annotation.XmlAccessType.FIELD). This also means that you should annotate the fields (instead of the getter or setter methods) with @XmlElement or @XmlAttribute annotations.
  - Fields annotated with @XmlElement or @XmlAttribute annotations should also be annotated with javax.xml.bind.annotation.XmlSchemaType annotations specifying the type of the field, even if the fields contain primitive values.
  - Use objects to store primitive values. For example, use the java.lang.Integer class for storing an integer value, and not the int type. This way it will always be obvious whether the field is storing a value.
- The user-defined class definition must implement a no-arg constructor.
- Any fields in the user-defined class must either be object primitives (such as a Long or String) or otherwise be objects that satisfy the first two requirements in this list (correct usage of JAXB annotations and a no-arg constructor).
- The class definition must be included in the deployment jar of the deployment that the JMS message content is meant for.
- The sender must set a "deploymentId" string property on the JMS bytes message to the name of the deploymentId. This property is necessary in order to be able to load the proper classes from the deployment itself before deserializing the message on the server side.
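The recommendation to store primitives as wrapper objects exists because a wrapper field distinguishes "never set" from "set to zero". A minimal sketch of this point (the Person class and its age field are hypothetical; in a real model class the field would additionally carry the @XmlElement and @XmlSchemaType annotations described above):

```java
public class WrapperFieldExample {

    // Hypothetical model class: 'age' uses java.lang.Integer rather than int,
    // so an unset field is visibly null instead of silently defaulting to 0.
    // In a real JAXB class this field would be annotated with @XmlElement and
    // @XmlSchemaType, and the class with @XmlRootElement and @XmlAccessorType.
    static class Person {
        private Integer age;
        public Person() { }                 // required no-arg constructor
        public Integer getAge() { return age; }
        public void setAge(Integer age) { this.age = age; }
    }

    public static void main(String[] args) {
        Person p = new Person();
        System.out.println(p.getAge());     // null: the field was never set
        p.setAge(0);
        System.out.println(p.getAge());     // 0: the field was explicitly set
    }
}
```

Had age been declared as a primitive int, the deserialized object could not distinguish a missing value from a genuine zero.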
Note
13.5.3. Example JMS Usage
// normal java imports skipped
import org.drools.core.command.runtime.process.StartProcessCommand;
import org.jbpm.services.task.commands.GetTaskAssignedAsPotentialOwnerCommand;
import org.kie.api.command.Command;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.model.TaskSummary;
import org.kie.services.client.api.command.exception.RemoteCommunicationException;
import org.kie.services.client.serialization.JaxbSerializationProvider;
import org.kie.services.client.serialization.SerializationConstants;
import org.kie.services.client.serialization.SerializationException;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandResponse;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandsRequest;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandsResponse;
import org.kie.services.client.serialization.jaxb.rest.JaxbExceptionResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DocumentationJmsExamples {

  protected static final Logger logger = LoggerFactory.getLogger(DocumentationJmsExamples.class);

  public void sendAndReceiveJmsMessage() {
    String USER = "charlie";
    String PASSWORD = "ch0c0licious";
    String DEPLOYMENT_ID = "test-project";
    String PROCESS_ID_1 = "oompa-processing";
    URL serverUrl;
    try {
      serverUrl = new URL("http://localhost:8080/business-central/");
    } catch (MalformedURLException murle) {
      logger.error("Malformed URL for the server instance!", murle);
      return;
    }

    // Create JaxbCommandsRequest instance and add commands
    Command<?> cmd = new StartProcessCommand(PROCESS_ID_1);
    int oompaProcessingResultIndex = 0;
    JaxbCommandsRequest req = new JaxbCommandsRequest(DEPLOYMENT_ID, cmd);
    req.getCommands().add(new GetTaskAssignedAsPotentialOwnerCommand(USER, "en-UK"));
    int loompaMonitoringResultIndex = 1;

    // Get JNDI context from server
    InitialContext context = getRemoteJbossInitialContext(serverUrl, USER, PASSWORD);

    // Create JMS connection
    ConnectionFactory connectionFactory;
    try {
      connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
    } catch (NamingException ne) {
      throw new RuntimeException("Unable to lookup JMS connection factory.", ne);
    }

    // Setup queues
    Queue sendQueue, responseQueue;
    try {
      sendQueue = (Queue) context.lookup("jms/queue/KIE.SESSION");
      responseQueue = (Queue) context.lookup("jms/queue/KIE.RESPONSE");
    } catch (NamingException ne) {
      throw new RuntimeException("Unable to lookup send or response queue", ne);
    }

    // Send command request
    Long processInstanceId = null; // needed if you're doing an operation on a PER_PROCESS_INSTANCE deployment
    String humanTaskUser = USER;
    JaxbCommandsResponse cmdResponse = sendJmsCommands(
        DEPLOYMENT_ID, processInstanceId, humanTaskUser, req,
        connectionFactory, sendQueue, responseQueue,
        USER, PASSWORD, 5);

    // Retrieve results
    ProcessInstance oompaProcInst = null;
    List<TaskSummary> charliesTasks = null;
    for (JaxbCommandResponse<?> response : cmdResponse.getResponses()) {
      if (response instanceof JaxbExceptionResponse) {
        // something went wrong on the server side
        JaxbExceptionResponse exceptionResponse = (JaxbExceptionResponse) response;
        throw new RuntimeException(exceptionResponse.getMessage());
      }
      if (response.getIndex() == oompaProcessingResultIndex) {
        oompaProcInst = (ProcessInstance) response.getResult();
      } else if (response.getIndex() == loompaMonitoringResultIndex) {
        charliesTasks = (List<TaskSummary>) response.getResult();
      }
    }
  }

  private JaxbCommandsResponse sendJmsCommands(String deploymentId, Long processInstanceId, String user,
      JaxbCommandsRequest req, ConnectionFactory factory, Queue sendQueue, Queue responseQueue,
      String jmsUser, String jmsPassword, int timeout) {
    req.setProcessInstanceId(processInstanceId);
    req.setUser(user);

    Connection connection = null;
    Session session = null;
    String corrId = UUID.randomUUID().toString();
    String selector = "JMSCorrelationID = '" + corrId + "'";
    JaxbCommandsResponse cmdResponses = null;
    try {
      // setup
      MessageProducer producer;
      MessageConsumer consumer;
      try {
        if (jmsPassword != null) {
          connection = factory.createConnection(jmsUser, jmsPassword);
        } else {
          connection = factory.createConnection();
        }
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(sendQueue);
        consumer = session.createConsumer(responseQueue, selector);
        connection.start();
      } catch (JMSException jmse) {
        throw new RemoteCommunicationException("Unable to setup a JMS connection.", jmse);
      }

      JaxbSerializationProvider serializationProvider = new JaxbSerializationProvider();
      // if necessary, add user-created classes here:
      // xmlSerializer.addJaxbClasses(MyType.class, AnotherJaxbAnnotatedType.class);

      // Create msg
      BytesMessage msg;
      try {
        msg = session.createBytesMessage();
        // set properties
        msg.setJMSCorrelationID(corrId);
        msg.setIntProperty(SerializationConstants.SERIALIZATION_TYPE_PROPERTY_NAME,
            JaxbSerializationProvider.JMS_SERIALIZATION_TYPE);
        Collection<Class<?>> extraJaxbClasses = serializationProvider.getExtraJaxbClasses();
        if (!extraJaxbClasses.isEmpty()) {
          String extraJaxbClassesPropertyValue = JaxbSerializationProvider
              .classSetToCommaSeperatedString(extraJaxbClasses);
          msg.setStringProperty(SerializationConstants.EXTRA_JAXB_CLASSES_PROPERTY_NAME,
              extraJaxbClassesPropertyValue);
          msg.setStringProperty(SerializationConstants.DEPLOYMENT_ID_PROPERTY_NAME, deploymentId);
        }
        // serialize request
        String xmlStr = serializationProvider.serialize(req);
        msg.writeUTF(xmlStr);
      } catch (JMSException jmse) {
        throw new RemoteCommunicationException("Unable to create and fill a JMS message.", jmse);
      } catch (SerializationException se) {
        throw new RemoteCommunicationException("Unable to serialize JMS message.", se.getCause());
      }

      // send
      try {
        producer.send(msg);
      } catch (JMSException jmse) {
        throw new RemoteCommunicationException("Unable to send a JMS message.", jmse);
      }

      // receive
      Message response;
      try {
        response = consumer.receive(timeout);
      } catch (JMSException jmse) {
        throw new RemoteCommunicationException("Unable to receive or retrieve the JMS response.", jmse);
      }
      if (response == null) {
        logger.warn("Response is empty, leaving");
        return null;
      }

      // extract response
      assert response != null : "Response is empty.";
      try {
        String xmlStr = ((BytesMessage) response).readUTF();
        cmdResponses = (JaxbCommandsResponse) serializationProvider.deserialize(xmlStr);
      } catch (JMSException jmse) {
        throw new RemoteCommunicationException("Unable to extract "
            + JaxbCommandsResponse.class.getSimpleName() + " instance from JMS response.", jmse);
      } catch (SerializationException se) {
        throw new RemoteCommunicationException("Unable to extract "
            + JaxbCommandsResponse.class.getSimpleName() + " instance from JMS response.", se.getCause());
      }
      assert cmdResponses != null : "Jaxb Cmd Response was null!";
    } finally {
      if (connection != null) {
        try {
          connection.close();
          session.close();
        } catch (JMSException jmse) {
          logger.warn("Unable to close connection or session!", jmse);
        }
      }
    }
    return cmdResponses;
  }

  private InitialContext getRemoteJbossInitialContext(URL url, String user, String password) {
    Properties initialProps = new Properties();
    initialProps.setProperty(InitialContext.INITIAL_CONTEXT_FACTORY,
        "org.jboss.naming.remote.client.InitialContextFactory");
    String jbossServerHostName = url.getHost();
    initialProps.setProperty(InitialContext.PROVIDER_URL, "remote://" + jbossServerHostName + ":4447");
    initialProps.setProperty(InitialContext.SECURITY_PRINCIPAL, user);
    initialProps.setProperty(InitialContext.SECURITY_CREDENTIALS, password);

    for (Object keyObj : initialProps.keySet()) {
      String key = (String) keyObj;
      System.setProperty(key, (String) initialProps.get(key));
    }
    try {
      return new InitialContext(initialProps);
    } catch (NamingException e) {
      throw new RemoteCommunicationException("Unable to create " + InitialContext.class.getSimpleName(), e);
    }
  }
}
These classes can all be found in the kie-services-client and the kie-services-jaxb JARs.

The JaxbCommandsRequest instance is the "holder" object in which you can place all of the commands you want to execute in a particular request. By using the JaxbCommandsRequest.getCommands() method, you can retrieve the list of commands in order to add more commands to the request.

A deployment id is required for command request messages that deal with business processes. Command request messages that only contain human task-related commands do not require a deployment id.

Note that the JMS message sent to the remote JMS API must be constructed as shown in the example above.

The same serialization mechanism used to serialize the request message will be used to serialize the response message.

To match a response to its initial command, use the index field of the returned JaxbCommandResponse instances. This index field will match the index of the initial command. Because not all commands return a result, it is possible to send three commands in a command request message and receive a command response message that includes only one JaxbCommandResponse message, with an index value of 1. That index then identifies it as the response to the second command.

Since many of the results returned by various commands are not serializable, the jBPM JMS Remote API converts these results into JAXB equivalents, all of which implement the JaxbCommandResponse interface. The JaxbCommandResponse.getResult() method then returns the JAXB equivalent to the actual result, which will conform to the interface of the result.

For example, in the code above, the StartProcessCommand returns a ProcessInstance. In order to return this object to the requester, the ProcessInstance is converted to a JaxbProcessInstanceResponse and then added as a JaxbCommandResponse to the command response message. The same applies to the List<TaskSummary> that is returned by the GetTaskAssignedAsPotentialOwnerCommand.

However, not all methods that can be called on a normal ProcessInstance can be called on the JaxbProcessInstanceResponse, because the JaxbProcessInstanceResponse is simply a representation of a ProcessInstance object. This applies to various other command responses as well. In particular, methods which require an active (backing) KieSession, such as ProcessInstance.getProcess() or ProcessInstance.signalEvent(String type, Object event), will throw an UnsupportedOperationException.
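The index-matching rule described above can be sketched independently of the jBPM classes. The Response type below is a hypothetical stand-in for JaxbCommandResponse, used only to show the bookkeeping:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ResponseIndexExample {

    // Hypothetical stand-in for JaxbCommandResponse: carries the index of the
    // command it answers plus the (JAXB-converted) result.
    static class Response {
        final int index;
        final Object result;
        Response(int index, Object result) { this.index = index; this.result = result; }
    }

    // Map each response back to the position of the command that produced it.
    // Commands without results simply have no entry, which is why several
    // commands can yield a single response whose index is not zero.
    static Map<Integer, Object> byCommandIndex(List<Response> responses) {
        Map<Integer, Object> results = new HashMap<>();
        for (Response r : responses) {
            results.put(r.index, r.result);
        }
        return results;
    }

    public static void main(String[] args) {
        // Three commands were sent, but only the second produced a result.
        Map<Integer, Object> results = byCommandIndex(Arrays.asList(new Response(1, "task-list")));
        System.out.println(results.containsKey(0)); // false: first command returned nothing
        System.out.println(results.get(1));         // task-list
    }
}
```

The same pattern applies to the real JaxbCommandResponse.getIndex() values returned by the Remote API.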
13.6. Remote Java API
The Remote Java API provides KieSession, TaskService and AuditLogService interfaces to the JMS and REST APIs.

The client instances take care of the underlying transport, so you can interact with a remote server as if through a local KieSession or TaskService interface, without having to deal with the underlying transport and serialization details.

Important

While the KieSession, TaskService and AuditLogService instances provided by the Remote Java API may "look" and "feel" like local instances of the same interfaces, please make sure to remember that these instances are only wrappers around a REST or JMS client that interacts with a remote REST or JMS API.

If an operation fails on the server, the client instance throws a RuntimeException indicating that the REST call failed. This is different from the behaviour of a "real" (or local) instance of a KieSession, TaskService or AuditLogService, because the exception a local instance throws will relate to how the operation failed. Also, while local instances require different handling (such as having to dispose of a KieSession), client instances provided by the Remote Java API hold no state and thus do not require any special handling.

Operations that would fail locally with a specific exception (for example, a TaskService.claim(taskId, userId) operation when called by a user who is not a potential owner) will now throw a RuntimeException instead when the requested operation fails on the server.

Client instances are created with either the RemoteRestRuntimeEngineFactory or the RemoteJmsRuntimeEngineFactory, both of which are instances of the RemoteRuntimeEngineFactory interface.

Configuration happens when creating the RemoteRuntimeEngineFactory instance: there are a number of different constructors for both the JMS and REST implementations that allow the configuration of such things as the base URL of the REST API, the JMS queue locations, or the timeout while waiting for responses.
Remote Java API Methods
- RemoteRuntimeEngine RemoteRuntimeEngineFactory.newRuntimeEngine(): instantiates a new RemoteRuntimeEngine (client) instance.
- KieSession RemoteRuntimeEngine.getKieSession(): instantiates a new (client) KieSession instance.
- TaskService RemoteRuntimeEngine.getTaskService(): instantiates a new (client) TaskService instance.
- AuditLogService RemoteRuntimeEngine.getAuditLogService(): instantiates a new (client) AuditLogService instance.
Note
The RemoteRuntimeEngineFactory.addExtraJaxbClasses(Collection<Class<?>> extraJaxbClasses) method can now only be called on the builder. This method adds extra classes to the classpath available to the serialization mechanisms. When passing instances of user-defined classes in a Remote Java API call, it is important to have added the classes via this method first so that the class instances can be serialized correctly.
13.6.1. The REST Remote Java RuntimeEngine Factory
The RemoteRestRuntimeEngineFactory class is the starting point for building and configuring a new RuntimeEngine instance that can interact with the remote API. The main use of this class is to create REST builder instances using the newBuilder() method. These builder instances are then used either to directly create a RuntimeEngine instance that will act as a client to the remote REST API, or to create an instance of this factory. The table below lists the methods available in the RemoteRestRuntimeEngineBuilder class:
Table 13.7. RemoteRestRuntimeEngineBuilder Methods
| Method Name | Parameter Type | Description |
|---|---|---|
| addDeploymentId | java.lang.String | The name (id) of the deployment the RuntimeEngine should interact with. |
| addUrl | java.net.URL | The URL of the deployed business-central or BPMS instance, for example: http://localhost:8080/business-central/ |
| addUserName | java.lang.String | The user name needed to access the REST API. |
| addPassword | java.lang.String | The password needed to access the REST API. |
| addProcessInstanceId | long | The id of the process instance the RuntimeEngine should interact with. |
| addTimeout | int | The maximum number of seconds to wait for a response from the server. |
| addExtraJaxbClasses | Class | Adds extra classes to the classpath available to the serialization mechanisms. |
The following example illustrates how the Remote Java API can be used with the REST API.
import java.net.URL;
import java.util.List;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;
import org.kie.services.client.api.RemoteRestRuntimeEngineFactory;
import org.kie.services.client.api.command.RemoteRuntimeEngine;
public void javaRemoteApiRestExample(String deploymentId, URL baseUrl, String user, String password) {
  // The baseUrl should contain a URL similar to "http://localhost:8080/business-central/"
  RemoteRestRuntimeEngineFactory remoteRestRuntimeEngineFactory = RemoteRestRuntimeEngineFactory.newBuilder()
      .addDeploymentId(deploymentId)
      .addUrl(baseUrl)
      .addUserName(user)
      .addPassword(password)
      .addTimeout(5) // maximum seconds to wait for a response
      .build();
  RemoteRuntimeEngine engine = remoteRestRuntimeEngineFactory.newRuntimeEngine();
  // Create KieSession and TaskService instances and use them
  KieSession ksession = engine.getKieSession();
  TaskService taskService = engine.getTaskService();
  // Each operation on a KieSession, TaskService or AuditLogService (client) instance
  // sends a request for the operation to the server side and waits for the response.
  // If something goes wrong on the server side, the client will throw an exception.
  ProcessInstance processInstance
      = ksession.startProcess("com.burns.reactor.maintenance.cycle");
  long procId = processInstance.getId();
  String taskUserId = user;
  List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner(user, "en-UK");
  long taskId = -1;
  for (TaskSummary task : tasks) {
    if (task.getProcessInstanceId() == procId) {
      taskId = task.getId();
    }
  }
  if (taskId == -1) {
    throw new IllegalStateException("Unable to find task for " + user + " in process instance " + procId);
  }
  taskService.start(taskId, taskUserId);
}
13.6.2. Custom Model Objects and Remote API
Note
Procedure 13.1. Accessing custom model objects using the Remote API
- Make sure that the custom model objects have been installed into the local Maven repository of the project that they are part of (by building the project successfully).
- If your client application is a Maven based project, include the custom model objects project as a Maven dependency in the pom.xml configuration file of the client application:

<dependency>
  <groupId>${groupid}</groupId>
  <artifactId>${artifactid}</artifactId>
  <version>${version}</version>
</dependency>

The value of these fields can be found in your Project Editor within Business Central: → on the main menu and then → from the perspective menu.
- If the client application is NOT a Maven based project, download the BPMS project, which includes the model classes, from Business Central by clicking on → . Add the jar file of the project to the build path of your client application so that the model object classes can be found and used.
- You can now use the custom model objects within your client application and invoke methods on them using the Remote API. The following listing shows an example of this, where Person is a custom model object.

import java.util.HashMap;
import java.util.Map;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.services.client.api.RemoteRestRuntimeEngineFactory;
import org.kie.services.client.api.command.RemoteRuntimeEngine;

// the rest of the code here . . .

// the following code in a method
RemoteRestRuntimeEngineFactory factory = RemoteRestRuntimeEngineFactory.newBuilder()
    .addUrl(url)
    .addUserName(username)
    .addPassword(password)
    .addDeploymentId(deploymentId)
    .addExtraJaxbClasses(new Class[]{UserDefinedClass.class, AnotherUserDefinedClass.class})
    .build();
RemoteRuntimeEngine runtimeEngine = factory.newRuntimeEngine();
KieSession ksession = runtimeEngine.getKieSession();
Map<String, Object> params = new HashMap<String, Object>();
Person person = new Person();
person.setName("anton");
params.put("pVar", person);
ProcessInstance pi = ksession.startProcess(PROCESS_ID, params);

Make sure that your client application has imported the correct BPMS libraries for the example to work.
13.6.3. The JMS Remote Java RuntimeEngine Factory
The RemoteJmsRuntimeEngineFactory works similarly to the REST variation in that it is a starting point for building and configuring a new RuntimeEngine instance that can interact with the remote API. The main use of this class is to create JMS builder instances using the newBuilder() method. These builder instances are then used either to directly create a RuntimeEngine instance that will act as a client to the remote JMS API, or to create an instance of this factory. The table below lists the methods available on the RemoteJmsRuntimeEngineFactoryBuilder:
Table 13.8. RemoteJmsRuntimeEngineFactoryBuilder Methods
| Method Name | Parameter Type | Description |
|---|---|---|
| addDeploymentId | java.lang.String | The name (id) of the deployment the RuntimeEngine should interact with. |
| addProcessInstanceId | long | The id of the process instance the RuntimeEngine should interact with. |
| addUserName | java.lang.String | The user name needed to access the JMS queues (in your application server configuration). |
| addPassword | java.lang.String | The password needed to access the JMS queues (in your application server configuration). |
| addTimeout | int | The maximum number of seconds to wait for a response from the server. |
| addExtraJaxbClasses | Class | Adds extra classes to the classpath available to the serialization mechanisms. |
| addRemoteInitialContext | javax.naming.InitialContext | A remote InitialContext instance (created using JNDI) from the server. |
| addConnectionFactory | javax.jms.ConnectionFactory | A ConnectionFactory instance used to connect to the ksessionQueue or taskQueue. |
| addKieSessionQueue | javax.jms.Queue | An instance of the Queue for requests relating to the process instance. |
| addTaskServiceQueue | javax.jms.Queue | An instance of the Queue for requests relating to task service usage. |
| addResponseQueue | javax.jms.Queue | An instance of the Queue used to receive responses. |
| addJbossServerUrl | java.net.URL | The URL of the JBoss Server. |
| addJbossServerHostName | java.lang.String | The hostname of the JBoss Server. |
| addHostName | java.lang.String | The hostname of the JMS queues. |
| addJmsConnectorPort | int | The port for the JMS Connector. |
| addKeystorePassword | java.lang.String | The JMS Keystore Password. |
| addKeystoreLocation | java.lang.String | The JMS Keystore Location. |
| addTruststorePassword | java.lang.String | The JMS Truststore Password. |
| addTruststoreLocation | java.lang.String | The JMS Truststore Location. |
Example Usage
import java.util.List;
import javax.naming.InitialContext;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;
import org.kie.services.client.api.RemoteJmsRuntimeEngineFactory;
import org.kie.services.client.api.command.RemoteRuntimeEngine;
public void javaRemoteApiJmsExample(String deploymentId, Long processInstanceId, String user, String password,
    InitialContext remoteInitialContext) {
  // remoteInitialContext is a remote InitialContext instance (created using JNDI) from the server
  // create a factory class with all the values
  RemoteJmsRuntimeEngineFactory jmsRuntimeFactory =
      RemoteJmsRuntimeEngineFactory.newBuilder()
          .addDeploymentId(deploymentId)
          .addProcessInstanceId(processInstanceId)
          .addUserName(user)
          .addPassword(password)
          .addRemoteInitialContext(remoteInitialContext)
          .addTimeout(3)
          .addExtraJaxbClasses(MyType.class)
          .useSsl(false)
          .build();
  RemoteRuntimeEngine engine = jmsRuntimeFactory.newRuntimeEngine();
  // Create KieSession and TaskService instances and use them
  KieSession ksession = engine.getKieSession();
  TaskService taskService = engine.getTaskService();
  // Each operation on a KieSession, TaskService or AuditLogService (client) instance
  // sends a request for the operation to the server side and waits for the response.
  // If something goes wrong on the server side, the client will throw an exception.
  ProcessInstance processInstance
      = ksession.startProcess("com.burns.reactor.maintenance.cycle");
  long procId = processInstance.getId();
  String taskUserId = user;
  List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner(user, "en-UK");
  long taskId = -1;
  for (TaskSummary task : tasks) {
    if (task.getProcessInstanceId() == procId) {
      taskId = task.getId();
    }
  }
  if (taskId == -1) {
    throw new IllegalStateException("Unable to find task for " + user + " in process instance " + procId);
  }
  taskService.start(taskId, taskUserId);
}
Sending and receiving JMS messages
The sendAndReceiveJmsMessage example below creates the JaxbCommandsRequest instance and adds commands from the user. In addition, it retrieves the JNDI context from the server, creates a JMS connection, and so on.
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.model.TaskSummary;
public void sendAndReceiveJmsMessage() {
String USER = "charlie";
String PASSWORD = "ch0c0licious";
String DEPLOYMENT_ID = "test-project";
String PROCESS_ID_1 = "oompa-processing";
URL serverUrl;
try {
serverUrl = new URL("http://localhost:8080/business-central/");
} catch (MalformedURLException murle) {
logger.error("Malformed URL for the server instance!", murle);
return;
}
// Create JaxbCommandsRequest instance and add commands
Command<?> cmd = new StartProcessCommand(PROCESS_ID_1);
int oompaProcessingResultIndex = 0;
JaxbCommandsRequest req = new JaxbCommandsRequest(DEPLOYMENT_ID, cmd);
req.getCommands().add(new GetTaskAssignedAsPotentialOwnerCommand(USER));
int loompaMonitoringResultIndex = 1;
// Get JNDI context from server
InitialContext context = getRemoteJbossInitialContext(serverUrl, USER, PASSWORD);
// Create JMS connection
ConnectionFactory connectionFactory;
try {
connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
} catch (NamingException ne) {
throw new RuntimeException("Unable to lookup JMS connection factory.", ne);
}
// Setup queues
Queue sendQueue, responseQueue;
try {
sendQueue = (Queue) context.lookup("jms/queue/KIE.SESSION");
responseQueue = (Queue) context.lookup("jms/queue/KIE.RESPONSE");
} catch (NamingException ne) {
throw new RuntimeException("Unable to lookup send or response queue", ne);
}
// Send command request
Long processInstanceId = null; // needed if you're doing an operation on a PER_PROCESS_INSTANCE deployment
String humanTaskUser = USER;
JaxbCommandsResponse cmdResponse = sendJmsCommands(
DEPLOYMENT_ID, processInstanceId, humanTaskUser, req,
connectionFactory, sendQueue, responseQueue,
USER, PASSWORD, 5);
// Retrieve results
ProcessInstance oompaProcInst = null;
List<TaskSummary> charliesTasks = null;
for (JaxbCommandResponse<?> response : cmdResponse.getResponses()) {
if (response instanceof JaxbExceptionResponse) {
// something went wrong on the server side
JaxbExceptionResponse exceptionResponse = (JaxbExceptionResponse) response;
throw new RuntimeException(exceptionResponse.getMessage());
}
if (response.getIndex() == oompaProcessingResultIndex) {
oompaProcInst = (ProcessInstance) response.getResult();
} else if (response.getIndex() == loompaMonitoringResultIndex) {
charliesTasks = (List<TaskSummary>) response.getResult();
}
}
}

Sending JMS commands
The sendJmsCommands example below is a continuation of the previous example. It registers any user-created classes for serialization, sends the request, receives the response, and extracts the command results.
private JaxbCommandsResponse sendJmsCommands(String deploymentId, Long processInstanceId, String user,
JaxbCommandsRequest req, ConnectionFactory factory, Queue sendQueue, Queue responseQueue, String jmsUser,
String jmsPassword, int timeout) {
req.setProcessInstanceId(processInstanceId);
req.setUser(user);
Connection connection = null;
Session session = null;
String corrId = UUID.randomUUID().toString();
String selector = "JMSCorrelationID = '" + corrId + "'";
JaxbCommandsResponse cmdResponses = null;
try {
// setup
MessageProducer producer;
MessageConsumer consumer;
try {
if (jmsPassword != null) {
connection = factory.createConnection(jmsUser, jmsPassword);
} else {
connection = factory.createConnection();
}
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
producer = session.createProducer(sendQueue);
consumer = session.createConsumer(responseQueue, selector);
connection.start();
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to setup a JMS connection.", jmse);
}
JaxbSerializationProvider serializationProvider = new JaxbSerializationProvider();
// if necessary, add user-created classes here:
// serializationProvider.addJaxbClasses(MyType.class, AnotherJaxbAnnotatedType.class);
// Create msg
BytesMessage msg;
try {
msg = session.createBytesMessage();
// serialize request
String xmlStr = serializationProvider.serialize(req);
msg.writeUTF(xmlStr);
// set properties
msg.setJMSCorrelationID(corrId);
msg.setIntProperty(SerializationConstants.SERIALIZATION_TYPE_PROPERTY_NAME, JaxbSerializationProvider.JMS_SERIALIZATION_TYPE);
Collection<Class<?>> extraJaxbClasses = serializationProvider.getExtraJaxbClasses();
if (!extraJaxbClasses.isEmpty()) {
String extraJaxbClassesPropertyValue = JaxbSerializationProvider
.classSetToCommaSeperatedString(extraJaxbClasses);
msg.setStringProperty(SerializationConstants.EXTRA_JAXB_CLASSES_PROPERTY_NAME, extraJaxbClassesPropertyValue);
msg.setStringProperty(SerializationConstants.DEPLOYMENT_ID_PROPERTY_NAME, deploymentId);
}
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to create and fill a JMS message.", jmse);
} catch (SerializationException se) {
throw new RemoteCommunicationException("Unable to serialize the JMS message.", se.getCause());
}
// send
try {
producer.send(msg);
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to send a JMS message.", jmse);
}
// receive
Message response;
try {
response = consumer.receive(timeout);
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to receive or retrieve the JMS response.", jmse);
}
if (response == null) {
logger.warn("Response is empty, leaving");
return null;
}
// extract response
try {
String xmlStr = ((BytesMessage) response).readUTF();
cmdResponses = (JaxbCommandsResponse) serializationProvider.deserialize(xmlStr);
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to extract " + JaxbCommandsResponse.class.getSimpleName()
+ " instance from JMS response.", jmse);
} catch (SerializationException se) {
throw new RemoteCommunicationException("Unable to extract " + JaxbCommandsResponse.class.getSimpleName()
+ " instance from JMS response.", se.getCause());
}
assert cmdResponses != null : "Jaxb Cmd Response was null!";
} finally {
if (connection != null) {
try {
// close the session before the connection; if connection.close()
// throws first, the session would otherwise never be closed
if (session != null) {
session.close();
}
connection.close();
} catch (JMSException jmse) {
logger.warn("Unable to close connection or session!", jmse);
}
}
}
return cmdResponses;
}

Configuration using an InitialContext instance
To configure the RemoteJmsRuntimeEngineFactory with an InitialContext instance as a parameter on Red Hat JBoss EAP 6, it is necessary to first retrieve the (remote) InitialContext instance from the remote server. The following code illustrates how to do this.
private InitialContext getRemoteJbossInitialContext(URL url, String user, String password) {
Properties initialProps = new Properties();
initialProps.setProperty(InitialContext.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
String jbossServerHostName = url.getHost();
initialProps.setProperty(InitialContext.PROVIDER_URL, "remote://"+ jbossServerHostName + ":4447");
initialProps.setProperty(InitialContext.SECURITY_PRINCIPAL, user);
initialProps.setProperty(InitialContext.SECURITY_CREDENTIALS, password);
for (Object keyObj : initialProps.keySet()) {
String key = (String) keyObj;
System.setProperty(key, (String) initialProps.get(key));
}
try {
return new InitialContext(initialProps);
} catch (NamingException e) {
throw new RemoteCommunicationException("Unable to create " + InitialContext.class.getSimpleName(), e);
}
}

13.6.4. Supported Methods
The Remote Java API provides client-side implementations of the RuntimeEngine, KieSession, TaskService, and AuditLogService interfaces, but only the methods listed in the tables below are supported. Calling any other method throws an UnsupportedOperationException explaining that the called method is not available.
Table 13.9. Available process-related KieSession methods
| Returns | Method signature | Description |
|---|---|---|
| void | abortProcessInstance(long processInstanceId) | Abort the process instance |
| ProcessInstance | getProcessInstance(long processInstanceId) | Return the process instance |
| ProcessInstance | getProcessInstance(long processInstanceId, boolean readonly) | Return the process instance |
| Collection<ProcessInstance> | getProcessInstances() | Return all (active) process instances |
| void | signalEvent(String type, Object event) | Signal all (active) process instances |
| void | signalEvent(String type, Object event, long processInstanceId) | Signal the process instance |
| ProcessInstance | startProcess(String processId) | Start a new process and return the process instance (if the process instance has not immediately completed) |
| ProcessInstance | startProcess(String processId, Map<String, Object> parameters) | Start a new process and return the process instance (if the process instance has not immediately completed) |
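As the two signalEvent signatures above show, a signal can either be broadcast to every active process instance or targeted at a single one by id. The following pure-Java sketch models just that distinction; MiniProcessRegistry is a hypothetical stand-in for illustration, not the real KieSession from kie-api:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model only: mimics the targeted vs. broadcast
// signalEvent(...) semantics from the table above. All names here
// are hypothetical; the real API lives in kie-api.
class MiniProcessRegistry {
    private final Map<Long, List<String>> signalsByInstance = new HashMap<>();
    private long nextId = 1;

    // startProcess(processId): create a new (empty) instance record
    long startProcess(String processId) {
        long id = nextId++;
        signalsByInstance.put(id, new ArrayList<>());
        return id;
    }

    // signalEvent(type, event): broadcast to all active instances
    void signalEvent(String type, Object event) {
        for (List<String> signals : signalsByInstance.values()) {
            signals.add(type);
        }
    }

    // signalEvent(type, event, processInstanceId): target one instance
    void signalEvent(String type, Object event, long processInstanceId) {
        signalsByInstance.get(processInstanceId).add(type);
    }

    List<String> signalsFor(long id) {
        return signalsByInstance.get(id);
    }
}
```

The same pattern applies to abortProcessInstance versus getProcessInstances: operations either name a single instance id or act on the whole active set.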
Table 13.10. Available rules-related KieSession methods
| Returns | Method signature | Description |
|---|---|---|
| Long | getFactCount() | Return the total fact count |
| Object | getGlobal(String identifier) | Return a global fact |
| void | setGlobal(String identifier, Object value) | Set a global fact |
Table 13.11. Available WorkItemManager methods
| Returns | Method signature | Description |
|---|---|---|
| void | abortWorkItem(long id) | Abort the work item |
| void | completeWorkItem(long id, Map<String, Object> results) | Complete the work item |
| void | registerWorkItemHandler(String workItemName, WorkItemHandler handler) | Register a work item handler |
| WorkItem | getWorkItem(long workItemId) | Return the work item |
Table 13.12. Available task operation TaskService methods
| Returns | Method signature | Description |
|---|---|---|
| Long | addTask(Task task, Map<String, Object> params) | Add a new task |
| void | activate(long taskId, String userId) | Activate a task |
| void | claim(long taskId, String userId) | Claim a task |
| void | claimNextAvailable(String userId, String language) | Claim the next available task for a user |
| void | complete(long taskId, String userId, Map<String, Object> data) | Complete a task |
| void | delegate(long taskId, String userId, String targetUserId) | Delegate a task |
| void | exit(long taskId, String userId) | Exit a task |
| void | fail(long taskId, String userId, Map<String, Object> faultData) | Fail a task |
| void | forward(long taskId, String userId, String targetEntityId) | Forward a task |
| void | nominate(long taskId, String userId, List<OrganizationalEntity> potentialOwners) | Nominate a task |
| void | release(long taskId, String userId) | Release a task |
| void | resume(long taskId, String userId) | Resume a task |
| void | skip(long taskId, String userId) | Skip a task |
| void | start(long taskId, String userId) | Start a task |
| void | stop(long taskId, String userId) | Stop a task |
| void | suspend(long taskId, String userId) | Suspend a task |
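The task operations above drive the WS-HumanTask lifecycle that jBPM implements: claim moves a Ready task to Reserved, start moves it to InProgress, complete finishes it, and release returns a Reserved task to Ready. A minimal pure-Java model of those four transitions follows; it is illustrative only, since the real org.kie.api.task.TaskService handles many more states and permission checks:

```java
// Illustrative task lifecycle model mirroring four of the operations
// in the table above. Not the real TaskService.
class MiniTask {
    enum Status { READY, RESERVED, IN_PROGRESS, COMPLETED }

    Status status = Status.READY;
    String owner;

    // claim: Ready -> Reserved, recording the actual owner
    void claim(String userId) {
        if (status != Status.READY) throw new IllegalStateException("claim requires Ready");
        status = Status.RESERVED;
        owner = userId;
    }

    // start: Reserved -> InProgress
    void start(String userId) {
        if (status != Status.RESERVED) throw new IllegalStateException("start requires Reserved");
        status = Status.IN_PROGRESS;
    }

    // complete: InProgress -> Completed
    void complete(String userId) {
        if (status != Status.IN_PROGRESS) throw new IllegalStateException("complete requires InProgress");
        status = Status.COMPLETED;
    }

    // release: Reserved -> Ready, clearing the owner
    void release(String userId) {
        if (status != Status.RESERVED) throw new IllegalStateException("release requires Reserved");
        status = Status.READY;
        owner = null;
    }
}
```

This is why the sendAndReceiveJmsMessage example first queries tasks assigned as potential owner: a task must be claimed and started before it can be completed.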
Table 13.13. Available task retrieval and query TaskService methods
| Returns | Method signature |
|---|---|
| Task | getTaskByWorkItemId(long workItemId) |
| Task | getTaskById(long taskId) |
| List<TaskSummary> | getTasksAssignedAsBusinessAdministrator(String userId, String language) |
| List<TaskSummary> | getTasksAssignedAsPotentialOwner(String userId, String language) |
| List<TaskSummary> | getTasksAssignedAsPotentialOwnerByStatus(String userId, List<Status> status, String language) |
| List<TaskSummary> | getTasksOwned(String userId, String language) |
| List<TaskSummary> | getTasksOwnedByStatus(String userId, List<Status> status, String language) |
| List<TaskSummary> | getTasksByStatusByProcessInstanceId(long processInstanceId, List<Status> status, String language) |
| List<TaskSummary> | getTasksByProcessInstanceId(long processInstanceId) |
| Content | getContentById(long contentId) |
| Attachment | getAttachmentById(long attachId) |
Table 13.14. Available AuditLogService methods
| Returns | Method signature |
|---|---|
| List<ProcessInstanceLog> | findProcessInstances() |
| List<ProcessInstanceLog> | findProcessInstances(String processId) |
| List<ProcessInstanceLog> | findActiveProcessInstances(String processId) |
| ProcessInstanceLog | findProcessInstance(long processInstanceId) |
| List<ProcessInstanceLog> | findSubProcessInstances(long processInstanceId) |
| List<NodeInstanceLog> | findNodeInstances(long processInstanceId) |
| List<NodeInstanceLog> | findNodeInstances(long processInstanceId, String nodeId) |
| List<VariableInstanceLog> | findVariableInstances(long processInstanceId) |
| List<VariableInstanceLog> | findVariableInstances(long processInstanceId, String variableId) |
| List<VariableInstanceLog> | findVariableInstancesByName(String variableId, boolean onlyActiveProcesses) |
| List<VariableInstanceLog> | findVariableInstancesByNameAndValue(String variableId, String value, boolean onlyActiveProcesses) |
| void | clear() |
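The audit queries above are all views over the same history tables; for example, findActiveProcessInstances(processId) is findProcessInstances(processId) restricted to instances that have not yet ended. A pure-Java sketch of that relationship follows, where Log is a hypothetical stand-in for ProcessInstanceLog and a null end date means the instance is still active:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for ProcessInstanceLog: only the fields needed
// to illustrate the findProcessInstances / findActiveProcessInstances
// distinction. Not the real jBPM audit entity.
class Log {
    final long id;
    final String processId;
    final Date end; // null = instance has not ended yet

    Log(long id, String processId, Date end) {
        this.id = id;
        this.processId = processId;
        this.end = end;
    }
}

class MiniAuditLog {
    final List<Log> logs = new ArrayList<>();

    // All instances (finished or not) of the given process definition
    List<Log> findProcessInstances(String processId) {
        return logs.stream()
                   .filter(l -> l.processId.equals(processId))
                   .collect(Collectors.toList());
    }

    // Same query, restricted to instances that have not ended
    List<Log> findActiveProcessInstances(String processId) {
        return findProcessInstances(processId).stream()
                                              .filter(l -> l.end == null)
                                              .collect(Collectors.toList());
    }
}
```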
Appendix A. JARs and Libraries included in Red Hat JBoss BPM Suite
Table A.1. Drools JARs
| JAR Name | Description |
|---|---|
| org.drools:drools-compiler:jar:6.0.3-redhat-6 | The Drools compiler jar provides facilities to compile various rule representations (DRL, DSL, etc.) into a corresponding internal structure. In addition, it provides the CDI extension allowing usage of @KSession and other annotations. It is the main artifact used when embedding the Drools engine. |
| org.drools:drools-persistence-jpa:jar:6.0.3-redhat-6 | Provides Drools with the ability to persist a KieSession into a database using the Java Persistence API (JPA). |
| org.drools:drools-core:jar:6.0.3-redhat-6 | The core Drools engine jar; contains classes required for runtime execution of rules. |
| org.drools:drools-decisiontables:jar:6.0.3-redhat-6 | Includes classes responsible for parsing and compiling decision tables into plain DRL. This jar has to be included if you are building a kjar with decision tables in its resources. |
| org.drools:drools-verifier:jar:6.0.3-redhat-6 | Drools Verifier analyses the quality of Drools rules and reports any issues. Used internally by Business Central. |
| org.drools:drools-templates:jar:6.0.3-redhat-6 | Includes classes responsible for parsing and compiling a provided data provider and rule template into final DRL. This library is required for using decision tables and score cards. |
Table A.2. jBPM Libraries
| Library Name | Description |
|---|---|
| org.jbpm:jbpm-audit:jar:6.0.3-redhat-6 | This library audits history information about processes into the database. Among others, it includes entity classes such as ProcessInstanceLog that map to the history tables, a specific AuditService implementation (JPAAuditLogService) that allows users to query the history tables, and the database logger that stores audit-related information in the database. |
| org.jbpm:jbpm-bpmn2:jar:6.0.3-redhat-6 | Internal representation of BPMN elements, including the BPMN validator and parser. It also includes a few basic WorkItemHandlers. When building a kjar that includes BPMN processes, this library has to be explicitly added to the classpath. The process is understood as a static model. |
| org.jbpm:jbpm-executor:jar:6.0.3-redhat-6 | Executor service which can be used for Asynchronous Task execution. This library is mandatory if you use Asynchronous Task execution in your project. |
| org.jbpm:jbpm-flow-builder:jar:6.0.3-redhat-6 | Compiler of BPMN processes. When building with kie-maven-plugin, the jbpm-bpmn2 dependency already brings this library onto the classpath. |
| org.jbpm:jbpm-flow:jar:6.0.3-redhat-6 | Internal representation of instantiated process / workflow / ruleflow. This is actually the core engine library that performs the workflow execution. |
| org.jbpm:jbpm-human-task-audit:jar:6.0.3-redhat-6 | Library which audits Task related events - start, claim, complete, etc. into the database (table TaskEvent). |
| org.jbpm:jbpm-human-task-core:jar:6.0.3-redhat-6 | Includes core Human Task Services. API, its implementation, listeners, persistence, commands and more. |
| org.jbpm:jbpm-human-task-workitems:jar:6.0.3-redhat-6 | Implementation of Human Task Work Item handler with necessary utility and helper classes. |
| org.jbpm:jbpm-kie-services:jar:6.0.3-redhat-6 | Core implementation of services that encapsulate the core engine, task service, and runtime manager APIs into service-oriented components for easier pluggability into custom systems. The base of the execution server provided by KIE Workbench/Business Central. |
| org.jbpm:jbpm-persistence-jpa:jar:6.0.3-redhat-6 | Provides classes which store process runtime information into the database using JPA. |
| org.jbpm:jbpm-runtime-manager:jar:6.0.3-redhat-6 | Provides the Runtime Manager API, which allows developers to interact with processes and tasks. |
| org.jbpm:jbpm-shared-services:jar:6.0.3-redhat-6 | Part of the services subsystem that simplifies interaction with the process engine within a dynamic environment. This library includes classes that can be used for asynchronous interactions with Business Central. Add this library if you use jbpm-executor. |
| org.jbpm:jbpm-test:jar:6.0.3-redhat-6 | jBPM test framework for unit testing of processes. Add this jar if you implement your test code using the jBPM test framework. |
| org.jbpm:jbpm-workitems:jar:6.0.3-redhat-6 | Includes all WorkItemHandlers provided by jBPM, for example RestWorkItemHandler and WebServiceWorkItemHandler. Some of them are also supported in JBoss BPM Suite (those whose WorkItems are visible by default in the Designer palette) and some are not. |
Table A.3. KIE Libraries
| Library Name | Description |
|---|---|
| org.kie:kie-api:jar:6.0.3-redhat-6 | The Drools and jBPM public API which is backwards compatible between releases. |
| org.kie:kie-ci:jar:6.0.3-redhat-6 | Allows loading a KIE module for further usage, that is, a KieBase or KieSession can be created from the KIE module. A KIE module is in essence a standard jar project built by Maven, so the kie-ci library embeds Maven in order to load KIE modules. This library has to be included if you use kjar functionality. |
| org.kie:kie-internal:jar:6.0.3-redhat-6 | The Drools and jBPM internal API, which might not be backwards compatible between releases. Any usage of classes located in this library should be discussed with Red Hat to determine whether it is supported. |
| org.kie:kie-spring:jar:6.0.3-redhat-6 | This library has to be included if you want to use jBPM/Spring integration capabilities. |
| org.kie.remote:kie-services-client:jar:6.0.3-redhat-6 | Native Java remote client for remote interaction with the Business Central server (REST, JMS). This library has to be included if you use the native Java remote client. |
| org.kie.remote:kie-services-jaxb:jar:6.0.3-redhat-6 | JAXB version of various entity classes. This library is required for remote interactions with the Business Central server. |
| org.kie.remote:kie-services-remote:jar:6.0.3-redhat-6 | Server-side implementation of the REST API. This library is required for remote interaction with the Business Central server. |
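These libraries are consumed as Maven dependencies in the same way as the commons-logging example in Chapter 1. For example, to depend on kie-api at the version listed in the table above (the appropriate scope depends on how your application is deployed):

```xml
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-api</artifactId>
  <version>6.0.3-redhat-6</version>
</dependency>
```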
Appendix B. Revision History
| Revision | Date |
|---|---|
| Revision 1.0.0-40 | Thu Jul 23 2015 |