Developing process services in Red Hat Process Automation Manager

Red Hat Process Automation Manager 7.9

Abstract

This document describes how to develop process services and case definitions with Red Hat Process Automation Manager using Business Process Model and Notation (BPMN) 2.0 models. This document also describes concepts and options for process and case management.

Preface

As a developer of business processes, you can use Red Hat Process Automation Manager to develop process services and case definitions using Business Process Model and Notation (BPMN) 2.0 models. BPMN process models are graphical representations of the steps required to achieve a business goal. For more information about BPMN, see the Object Management Group (OMG) Business Process Model and Notation 2.0 specification.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Part I. Designing business processes in Business Central

As a business processes developer, you can use Business Central in Red Hat Process Automation Manager to design business processes to meet specific business requirements. This document describes business processes and the concepts and options for creating them using the process designer in Red Hat Process Automation Manager. This document also describes the BPMN2 elements in Red Hat Process Automation Manager. For more details about BPMN2, see the Business Process Model and Notation Version 2.0 specification.

Prerequisites

Chapter 1. Business processes

A business process is a diagram that describes the order for a series of steps that must be executed and consists of predefined nodes and connections. Each node represents one step in the process while the connections specify how to transition from one node to another.

A typical business process consists of the following components:

  • The header section that comprises global elements such as the name of the process, imports, and variables
  • The nodes section that contains all the different nodes that are part of the process
  • The connections section that links these nodes to each other to create a flow chart

Figure 1.1. Business process

This image shows the steps of "self evaluation" through the project manager and HR manager.

Red Hat Process Automation Manager contains the legacy process designer and the new process designer for creating business process diagrams. The new process designer has an improved layout and feature set and continues to be developed. Until all features of the legacy process designer are completely implemented in the new process designer, both designers are available in Business Central for you to use.

Note

The legacy process designer in Business Central is deprecated in Red Hat Process Automation Manager 7.9.1. It will be removed in a future Red Hat Process Automation Manager release. The legacy process designer will not receive any new enhancements or features. If you intend to use the new process designer, start migrating your processes to the new designer. Create all new processes in the new process designer. For information about migrating to the new designer, see Managing projects in Business Central.

Chapter 2. Business Process Modeling and Notation Version 2.0

The Business Process Modeling and Notation Version 2.0 (BPMN2) specification is an Object Management Group (OMG) specification that defines standards for graphically representing a business process, defines execution semantics for the elements, and provides process definitions in XML format.

A process is defined by its process definition, which exists in a knowledge base and is identified by its ID.

Table 2.1. General process properties

Label | Description

Name

Enter the name of the process.

Documentation

Describes the process. The text in this field is included in the process documentation, if applicable.

ID

Enter an identifier for this process, for example orderItems.

Package

Enter the package location for this process in your Red Hat Process Automation Manager project, for example org.acme.

ProcessType

Specify whether the process is public or private. (Currently not supported.)

Version

Enter the artifact version for the process.

Ad hoc

Select this option if this process is an ad hoc subprocess.

Process Instance Description

Enter a description of the purpose of the process.

Imports

Click to open the Imports window and add any data type classes required for your process.

Executable

Select this option to make the process an executable part of your Red Hat Process Automation Manager project.

SLA Due Date

Enter the service level agreement (SLA) expiration date.

Process Variables

Add any process variables for the process. Process variables are visible within the specific process instance. Process variables are initialized at process creation and destroyed on process completion. Variable tags provide greater control over variable behavior, for example whether the variable is tagged as required or readonly. For more information about variable tags, see Designing business processes in Business Central.

Metadata Attributes

Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present.

Global Variables

Add any global variables for the process. Global variables are visible to all process instances and assets in a project. Global variables are typically used by business rules and constraints and are created dynamically by the rules or constraints.

A process is a container for a set of modeling elements. It contains elements that specify the execution workflow of a business process or its parts using flow objects and flows. Each process has its own BPMN2 diagram. Red Hat Process Automation Manager contains the new process designer for creating BPMN2 diagrams and the legacy process designer for opening older BPMN2 diagrams with the .bpmn2 extension. The new process designer has an improved layout and feature set and continues to be developed. By default, new diagrams are created in the new process designer.

2.1. Red Hat Process Automation Manager support for BPMN2

With Red Hat Process Automation Manager, you can model your business processes using the BPMN 2.0 standard. You can then use Red Hat Process Automation Manager to run, manage, and monitor these business processes. The full BPMN 2.0 specification also includes details on how to represent items such as choreographies and collaboration. However, Red Hat Process Automation Manager uses only the parts of the specification that you can use to specify executable processes. This includes almost all elements and attributes as defined in the Common Executable subclass of the BPMN2 specification, extended with some additional elements and attributes.

The following table lists the symbols used in the support tables below to indicate whether a BPMN2 element is supported in both the legacy and new process designers, in the legacy process designer only, or not supported.

Table 2.2. Support status key

Key | Description

✓ | Supported in the legacy and new process designer

✓ (legacy) | Supported in the legacy process designer only

✗ | Not supported

Elements that have no entry (shown as — in the tables) do not exist in the BPMN2 specification.

Table 2.3. BPMN2 catching events

Element Name      | Start | Intermediate
None              | ✓     | —
Message           | ✓     | ✓
Timer             | ✓     | ✓
Error             | ✓     | ✓
Escalation        | ✓     | ✓
Cancel            | —     | ✗
Compensation      | ✓     | ✓
Conditional       | ✓     | ✓
Link              | —     | ✗
Signal            | ✓     | ✓
Multiple          | ✗     | ✗
Parallel Multiple | ✗     | ✗

Table 2.4. BPMN2 throwing and non-interrupting events

Element Name      | Throwing: End | Throwing: Intermediate | Non-interrupting: Start | Non-interrupting: Intermediate
None              | ✓ | — | — | —
Message           | ✓ | ✓ | ✓ | ✓
Timer             | — | — | ✓ | ✓
Error             | ✓ | — | — | —
Escalation        | ✓ | ✓ | ✓ | ✓
Cancel            | ✗ | ✗ | — | ✗
Compensation      | ✓ | ✓ | — | —
Conditional       | — | — | ✓ | ✓
Link              | — | ✗ | — | —
Signal            | ✓ | ✓ | ✓ | ✓
Terminate         | ✓ | — | — | —
Multiple          | ✗ | ✗ | ✗ | ✗
Parallel Multiple | — | — | ✗ | ✗

Table 2.5. BPMN2 elements

Element type | Element | Supported
Task | Business rule | ✓
     | Script | ✓
     | User task | ✓
     | Service task | ✓
Subprocesses, including multiple instance subprocesses | Embedded | ✓
     | Ad hoc | ✓
     | Reusable | ✓
     | Event | ✓
Gateways | Inclusive | ✓
     | Exclusive | ✓
     | Parallel | ✓
     | Event-based | ✓
     | Complex | ✗
Connecting objects | Sequence flows | ✓
     | Association flows | ✓
Swimlanes | Swimlanes | ✓
Artifacts | Group | ✓ (legacy)
     | Text annotation | ✓
     | Data object | ✓

For more information about the background and applications of BPMN2, see the OMG Business Process Model and Notation (BPMN) Version 2.0 specification.

2.2. BPMN2 events in process designer

An event is something that happens to a business process. BPMN2 supports three categories of events:

  • Start
  • End
  • Intermediate

A start event catches an event trigger, an end event throws an event trigger, and an intermediate event can both catch and throw event triggers.

The following business process diagram shows examples of events:

In this example, the following events occurred:

  • The ATM Card Inserted signal start event is triggered when the signal is received.
  • The timeout intermediate event is an interrupting event based on a timer trigger. This means that the Wait for PIN subprocess is canceled when the timer event is triggered.
  • Depending on the inputs to the process, either end event associated with the Validate User Pin task or the end event associated with the Inform User of Timeout task ends the process.

2.2.1. Start events

Use start events to indicate the start of a business process. A start event cannot have an incoming sequence flow and must have only one outgoing sequence flow. You can use none start events in top-level processes, embedded subprocesses, callable subprocesses, and event subprocesses.

All start events, with the exception of the none start event, are catch events. For example, a signal start event starts the process only when the referenced signal (event trigger) is received. You can configure start events in event subprocesses to be interrupting or non-interrupting. An interrupting start event for an event subprocess stops or interrupts the execution of the containing or parent process. A non-interrupting start event does not stop or interrupt the execution of the containing or parent process.

Table 2.6. Start events

Start event type | Top-level | Subprocess: Interrupting | Subprocess: Non-interrupting
None         | ✓ | — | —
Conditional  | ✓ | ✓ | ✓
Compensation | ✓ | ✓ | —
Error        | — | ✓ | —
Escalation   | ✓ | ✓ | ✓
Message      | ✓ | ✓ | ✓
Signal       | ✓ | ✓ | ✓
Timer        | ✓ | ✓ | ✓

In this table, ✓ indicates that the start event type is available in the given context and — indicates that it is not.

None

The none start event is a start event without a trigger condition. A process or a subprocess can contain at most one none start event, which is triggered on process or subprocess start by default, and the outgoing flow is taken immediately.

When you use a none start event in a subprocess, the execution of the process flow is transferred from the parent process into the subprocess and the none start event is triggered. This means that the token (the current location within the process flow) is passed from the parent process into the subprocess activity and the none start event of the subprocess generates a token of its own.

Conditional

The conditional start event is a start event with a Boolean condition definition. The execution is triggered when the condition is first evaluated to false and then to true. The process execution starts only if the condition is evaluated to true after the start event has been instantiated.

A process can contain multiple conditional start events.

Compensation

A compensation start event is used to start a compensation event subprocess when using a subprocess as the target activity of a compensation intermediate event.

Error

A process or subprocess can contain multiple error start events, which are triggered when an error object with a particular ErrorRef property is received. The error object can be produced by an error end event. It indicates an incorrect process ending. The process instance with the error start event starts execution after it has received the respective error object. The error start event is executed immediately upon receiving the error object and its outgoing flow is taken.

Escalation

The escalation start event is a start event that is triggered by an escalation with a particular escalation code. Processes can contain multiple escalation start events. The process instance with an escalation start event starts its execution when it receives the defined escalation object. The process is instantiated and the escalation start event is executed immediately and its outgoing flow is taken.

Message

A process or an event subprocess can contain multiple message start events, which are triggered by a particular message. The process instance with a message start event only starts its execution from this event after it has received the respective message. After the message is received, the process is instantiated and its message start event is executed immediately (its outgoing flow is taken).

Because a message can be consumed by an arbitrary number of processes and process elements, including no elements, one message can trigger multiple message start events and therefore instantiate multiple processes.

Signal

The signal start event is triggered by a signal with a particular signal code. A process can contain multiple signal start events. The signal start event only starts its execution within the process instance after the instance has received the respective signal. Then, the signal start event is executed and its outgoing flow is taken.
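For example, a signal can be delivered to waiting process instances through the KIE API. The following is a minimal sketch, assuming a KieSession obtained from your runtime environment; the signal name "ATM Card Inserted" matches the signal start event in the earlier example diagram, and the payload object is illustrative:

import org.kie.api.runtime.KieSession;

public class SignalSender {

    // Delivers the named signal to every process definition and process instance in the
    // given session that is waiting on a matching signal start or signal catch event.
    public static void sendCardInsertedSignal(KieSession kieSession, Object payload) {
        kieSession.signalEvent("ATM Card Inserted", payload);
    }
}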

Timer

The timer start event is a start event with a timing mechanism. A process can contain multiple timer start events, which are triggered at the start of the process, after which the timing mechanism is applied.

When you use a timer start event in a subprocess, execution of the process flow is transferred from the parent process into the subprocess and the timer start event is triggered. The token is taken from the parent subprocess activity and the timer start event of the subprocess is triggered and waits for the timer to trigger. After the time defined by the timing definition has been reached, the outgoing flow is taken.

2.2.2. Intermediate events

Intermediate events drive the flow of a business process. Intermediate events are used to either catch or throw an event during the execution of the business process. These events are placed between the start and end events and can also be used on the boundary of an activity, like a subprocess or a human task, as a catch event. The boundary catch events can be configured as interrupting or non-interrupting. An interrupting boundary catch event cancels the bound activity whereas a non-interrupting event does not.

An intermediate event handles a particular situation that occurs during process execution. The situation is a trigger for an intermediate event. In a process, intermediate events with one outgoing flow can be placed on an activity boundary.

If the event occurs while the activity is being executed, the event is triggered and its outgoing flow is taken. One activity may have multiple boundary intermediate events. Note that depending on the behavior you require from the activity with the boundary intermediate event, you can use either of the following intermediate event types:

  • Interrupting: The activity execution is interrupted and the execution of the intermediate event is triggered.
  • Non-interrupting: The intermediate event is triggered and the activity execution continues.

Table 2.7. Intermediate events

Intermediate event type | Catching | Boundary: Interrupting | Boundary: Non-interrupting | Throwing
Message      | ✓ | ✓ | ✓ | ✓
Timer        | ✓ | ✓ | ✓ | —
Error        | — | ✓ | — | —
Signal       | ✓ | ✓ | ✓ | ✓
Conditional  | ✓ | ✓ | ✓ | —
Compensation | ✓ | ✓ | — | ✓
Escalation   | ✓ | ✓ | ✓ | ✓
Link         | ✓ | — | — | ✓

In this table, ✓ indicates that the intermediate event type is available in the given context and — indicates that it is not.

Message

A message intermediate event is an intermediate event that enables you to manage a message object. Use one of the following events:

  • A throwing message intermediate event produces a message object based on the defined properties.
  • A catching message intermediate event listens for a message object with the defined properties.

Timer

A timer intermediate event enables you to delay workflow execution or to trigger the workflow execution periodically. It represents a timer that can trigger one or multiple times after a specified period of time. When the timer intermediate event is triggered, the timer condition, which is the defined time, is checked and the outgoing flow is taken. When the timer intermediate event is placed in the process workflow, it has one incoming flow and one outgoing flow. Its execution starts when the incoming flow transfers to the event. When a timer intermediate event is placed on an activity boundary, the execution is triggered at the same time as the activity execution.

The timer is canceled if the timer element is canceled, for example by completing or aborting the enclosing process instance.
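For example, in the BPMN2 XML the timer definition is stored as a timerEventDefinition on the event. The following is a minimal sketch of an intermediate catch timer that fires once after ten minutes; the element ID, name, and the ISO-8601 duration PT10M are illustrative values:

<bpmn2:intermediateCatchEvent id="_waitTenMinutes" name="Wait 10 minutes">
  <bpmn2:timerEventDefinition>
    <!-- timeDuration fires the timer once after the given delay; a timeCycle element would define a repeating trigger instead -->
    <bpmn2:timeDuration xsi:type="bpmn2:tFormalExpression">PT10M</bpmn2:timeDuration>
  </bpmn2:timerEventDefinition>
</bpmn2:intermediateCatchEvent>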

Conditional

A conditional intermediate event is an intermediate event with a boolean condition as its trigger. The event triggers further workflow execution when the condition evaluates to true and its outgoing flow is taken.

The event must define the Expression property. When a conditional intermediate event is placed in the process workflow, it has one incoming flow, one outgoing flow, and its execution starts when the incoming flow transfers to the event. When a conditional intermediate event is placed on an activity boundary, the execution is triggered at the same time as the activity execution. Note that if the event is non-interrupting, the event triggers continuously while the condition is true.

Signal

A signal intermediate event enables you to produce or consume a signal object. Use either of the following options:

  • A throwing signal intermediate event produces a signal object based on the defined properties.
  • A catching signal intermediate event listens for a signal object with the defined properties.

Error

An error intermediate event is an intermediate event that can be used only on an activity boundary. It enables the process to react to an error end event in the respective activity. The activity must not be atomic. When the activity finishes with an error end event that produces an error object with the respective ErrorCode property, the error intermediate event catches the error object and execution continues to its outgoing flow.

Compensation

A compensation intermediate event is a boundary event attached to an activity in a transaction subprocess, which can finish with a compensation end event or a cancel end event. The compensation intermediate event must be associated with a flow that is connected to the compensation activity.

The activity associated with the boundary compensation intermediate event is executed if the transaction subprocess finishes with the compensation end event. The execution continues with the respective flow.

Escalation

An escalation intermediate event is an intermediate event that enables you to produce or consume an escalation object. Depending on the action the event element should perform, you need to use either of the following options:

  • A throwing escalation intermediate event produces an escalation object based on the defined properties.
  • A catching escalation intermediate event listens for an escalation object with the defined properties.

Link

A link intermediate event enables you to produce or consume a link object. Use either of the following options:

  • A throwing link intermediate event produces a link object based on the defined properties.
  • A catching link intermediate event listens for a link object with the defined properties.

2.2.3. End events

End events are used to end a business process and may not have any outgoing sequence flows. There may be multiple end events in a business process. All end events, with the exception of the none and terminate end events, are throw events.

End events indicate the completion of a business process. An end event is a node that ends a particular workflow. It has one or more incoming sequence flows and no outgoing flow.

A process must contain at least one end event.

During run time, an end event finishes the process workflow. The end event can finish only the workflow that reached it, or all workflows in the process instance, depending on the end event type.

Table 2.8. End events

None

The none end event specifies that no other special behavior is associated with the end of the process.

Message

When a flow enters a message end event, the flow finishes and the end event produces a message as defined in its properties.

Signal

A throwing signal end event is used to finish a process or subprocess flow. When the execution flow enters the element, the execution flow finishes and produces a signal identified by its SignalRef property.

Error

The throwing error end event finishes the incoming workflow, which means it consumes the incoming token, and produces an error object. Any other running workflows in the process or subprocess remain unaffected.

Compensation

A compensation end event is used to finish a transaction subprocess and trigger the compensation defined by the compensation intermediate event attached to the boundary of the subprocess activities.

Escalation

The escalation end event finishes the incoming workflow, which means it consumes the incoming token, and produces an escalation signal as defined in its properties, triggering the escalation process.

Terminate

The terminate end event finishes all execution flows in the specified process instance. Activities being executed are canceled. The subprocess instance terminates if it reaches a terminate end event.

2.3. BPMN2 tasks in process designer

A task is an automatic activity that is defined in the process model and is the smallest unit of work in a process flow. The following task types defined in the BPMN2 specification are available in the Red Hat Process Automation Manager process designer palette:

  • Business rule task
  • Script task
  • User task
  • Service task
  • None task

Table 2.9. Task

Business rule task

bpmn business rule task

Script task

bpmn script task

User task

bpmn user task

Service task

bpmn service task

None task

bpmn none task

In addition, the BPMN2 specification provides the ability to create custom tasks. For more information about custom tasks, see Section 2.4, “BPMN2 custom tasks in process designer”.

Business rule task

A business rule task defines a way to make a decision either through a DMN model or a rule flow group.

bpmn business rule task

When a process reaches a business rule task defined by a DMN model, the process engine executes the DMN model decision with the inputs provided.

When a process reaches a business rule task defined by a rule flow group, the process engine begins executing the rules in the defined rule flow group. When there are no more active rules in the rule flow group, the execution continues to the next element. During the rule flow group execution, new activations belonging to the active rule flow group can be added to the agenda because these activations are changed by other rules.
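For example, DRL rules are associated with a business rule task through their ruleflow-group attribute. The following is a minimal sketch; the group name validate-order and the Order fact type are illustrative and must match your own project assets:

// Rule that fires only while the "validate-order" rule flow group is active,
// that is, while the corresponding business rule task is being executed.
rule "Approve small orders"
    ruleflow-group "validate-order"
when
    $order : Order( total < 1000 )
then
    $order.setApproved( true );
    update( $order );
end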

Script task

A script task represents a script to be executed during the process execution.

bpmn script task

The associated script can access process variables and global variables. Review the following list before using a script task:

  • Avoid low-level implementation details in the process. A script task can be used to manipulate variables, but consider using a service task or a custom task when modelling more complex operations.
  • Ensure that the script can be executed immediately; otherwise, use an asynchronous service task.
  • Avoid contacting external services through a script task. Use a service task to model communication with an external service.
  • Ensure scripts do not throw exceptions. Runtime exceptions should be caught and managed, for example, inside the script or transformed into signals or errors that can then be handled inside the process.

When a script task is reached during execution, the script is executed and the outgoing flow is taken.
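For example, a script task body written in the Java dialect can read and update process variables through the predefined kcontext variable, which is the org.kie.api.runtime.process.ProcessContext of the running instance. The variable names customerName and greeting are illustrative:

// Read a process variable, derive a value, and store the result in another process variable.
String customerName = (String) kcontext.getVariable("customerName");
String greeting = "Hello, " + customerName;
kcontext.setVariable("greeting", greeting);

// The context also exposes the running process instance, for example for logging.
System.out.println("Prepared greeting for process instance " + kcontext.getProcessInstance().getId());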

User task

User tasks are tasks in the process workflow that cannot be performed automatically by the system and therefore require the intervention of a human user, the actor.

bpmn user task

On execution, the User task element is instantiated as a task that appears in the list of tasks of one or more actors. If a User task element defines the Groups attribute, it is displayed in task lists of all users that are members of the group. Any user who is a member of the group can claim the task.

After it is claimed, the task disappears from the task list of the other users.

User tasks are implemented as domain-specific tasks and serve as a base for custom tasks.

Service task

Service tasks are tasks that do not require human interaction. They are completed automatically by an external software service.

bpmn service task

None task

None tasks are completed on activation. This is a conceptual model only. A none task is never actually executed by an IT system.

bpmn none task

2.4. BPMN2 custom tasks in process designer

The BPMN2 specification supports the ability to extend the bpmn2:task element to create custom tasks in a software implementation. Similar to standard BPMN tasks, custom tasks identify actions to be completed in a business process model, but they also include specialized functionality, such as compatibility with an external service of a specific type (REST, email, or web service) or checkpoint behavior within a process (milestone).

Red Hat Process Automation Manager provides the following predefined custom tasks under Custom Tasks in the BPMN modeler palette:

Table 2.10. Supported custom tasks

Custom task type | Custom task node

Rest

bpmn rest custom task

Email

bpmn email custom task

Log

bpmn log custom task

WebService

bpmn webservice custom task

Milestone

bpmn milestone

DecisionTask

bpmn decision task custom

BusinessRuleTask

bpmn business rule custom task

KafkaPublishMessages

bpmn kafkapublishmessages task

For more information about enabling or disabling custom tasks in Business Central, see Chapter 54, Managing custom tasks in Business Central.

In the BPMN modeler, you can configure the following general properties for a selected custom task:

Table 2.11. General custom task properties

Label | Description

Name

Identifies the name of the task. You can also double-click the task node to edit the name.

Documentation

Describes the task. The text in this field is included in the process documentation, if applicable.

Is Async

Determines whether this task is invoked asynchronously.

AdHoc Autostart

Determines whether this is an ad hoc task that is started automatically. This option enables the task to automatically start when the process is created instead of being started by a signal event.

On Entry Action

Defines a Java, JavaScript, or MVEL script that directs an action at the start of the task.

On Exit Action

Defines a Java, JavaScript, or MVEL script that directs an action at the end of the task.

SLA Due Date

Specifies the duration (string type) when the service level agreement (SLA) expires. You can specify the duration in days, minutes, seconds, and milliseconds. For example, a value of 1m in the SLA due date field indicates one minute.

Assignments

Defines data input and output for the task.

Rest

A rest custom task is used to invoke a remote RESTful service or perform an HTTP request from a process.

bpmn rest custom task

To use the rest custom task, you can set the URL, HTTP method, and credentials in the process modeler. When a process reaches a rest custom task, it generates an HTTP request and returns the response as a string.

You can click Assignments in the Properties panel to open the REST Data I/O window. In the REST Data I/O window, you can configure the data input and output as required. For example, to execute a rest custom task, enter the following data inputs in Data Inputs and Assignments fields:

  • Url: Endpoint URL for the REST service. This attribute is mandatory.
  • Method: Method of the endpoint called, such as GET or POST. The default value is GET.
  • ContentType: Data type when sending data. This attribute is mandatory for POST and PUT requests.
  • ContentTypeCharset: Character set for the ContentType.
  • Content: Data you want to send. This attribute is supported for backward compatibility; use the ContentData attribute instead.
  • ContentData: Data you want to send. This attribute is mandatory for POST and PUT requests.
  • ConnectTimeout: Connection timeout (in seconds). The default value is 60 seconds.
  • ReadTimeout: Timeout (in seconds) on response. The default value is 60 seconds.
  • Username: User name for authentication.
  • Password: Password for authentication.
  • AuthUrl: URL that is handling authentication.
  • AuthType: Type of URL that is handling authentication.
  • HandleResponseErrors (Optional): Instructs the handler to throw errors for unsuccessful response codes (codes other than 2XX).
  • ResultClass: Valid name of the class to which the response is unmarshalled. If not provided, then the raw response is returned in a string format.
  • AcceptHeader: Value of the accept header.
  • AcceptCharset: Character set of the accept header.
  • Headers: Headers to pass for REST call, such as content-type=text/html.

You can add the following data output in Data Outputs and Assignments to store the output of the task execution:

  • Result: Output variable (object type) of the rest custom task.
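For example, a minimal set of assignments for a simple GET call might look as follows; the endpoint URL and the restResult process variable are illustrative values:

Data Inputs and Assignments:
    Url                  = https://example.com/api/orders/1234
    Method               = GET
    HandleResponseErrors = true

Data Outputs and Assignments:
    Result -> restResult    (process variable that receives the response)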

Email

An email custom task is used to send an email from a process. It has an email body associated with it.

bpmn email custom task

When an email custom task is activated, the email data is assigned to the data input property of the task. An email custom task completes when the associated email is sent.

You can click Assignments in the Properties panel to open the Email Data I/O window. In the Email Data I/O window, you can configure the data input as required. For example, to execute an email custom task, enter the following data inputs in Data Inputs and Assignments fields:

  • Body: Body of the email.
  • From: Email address of the sender.
  • Subject: Subject of the email.
  • To: Email address of the recipient. You can specify multiple email addresses separated by semicolon (;).
  • Template (Optional): Template to generate the body of the email. If entered, the Template attribute overrides the Body parameter.
  • Reply-To: Email address to which the reply message is sent.
  • Cc: Email address of the copied recipient. You can specify multiple email addresses separated by semicolon (;).
  • Bcc: Email address of the blind copied recipient. You can specify multiple email addresses separated by semicolon (;).
  • Attachments: Email attachment to send along with the email.
  • Debug: Flag to enable the debug logging.

Log

A log custom task is used to log a message from a process. When a business process reaches a log custom task, the message data is assigned to the data input property.

bpmn log custom task

A log custom task completes when the associated message is logged. You can click Assignments in the Properties panel to open the Log Data I/O window. In the Log Data I/O window, you can configure the data input as required. For example, to execute a log custom task, enter the following data inputs in Data Inputs and Assignments fields:

  • Message: Log message from the process.

WebService

A web service custom task is used to invoke a web service from a process. This custom task serves as a web service client with the web service response stored as a string.

bpmn webservice custom task

To invoke a web service from a process, you must use the correct task type. You can click Assignments in the Properties panel to open the WS Data I/O window. In the WS Data I/O window, you can configure the data input and output as required. For example, to execute a web service task, enter the following data inputs in Data Inputs and Assignments fields:

  • Endpoint: Endpoint location of the web service to invoke.
  • Interface: Name of a service, such as Weather.
  • Mode: Mode of a service, such as SYNC, ASYNC, or ONEWAY.
  • Namespace: Namespace of the web service, such as http://ws.cdyne.com/WeatherWS/.
  • Operation: Method name to call.
  • Parameter: Object or array to be sent for the operation.
  • Url: URL of the web service, such as http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL.

You can add the following data output in Data Outputs and Assignments to store the output of the task execution:

  • Result: Output variable (object type) of the web service task.

Milestone

A milestone represents a single point of achievement within a process instance. You can use milestones to flag certain events to trigger other tasks or track the progress of the process.

bpmn milestone

Milestones are useful for Key Performance Indicator (KPI) tracking or for identifying the tasks that are still to be completed. Milestones can occur at the end of a stage in a process or they can be the result of achieving other milestones.

Milestones can reach the following states during process execution:

  • Active: A milestone condition has been defined for the milestone node but it has not been met.
  • Completed: A milestone condition has been met (if applicable), the milestone has been achieved, and the process can proceed to the next task or can end.

You can click Assignments in the Properties panel to open the Milestone Data I/O window. In the Milestone Data I/O window, you can configure the data input as required. For example, to execute a milestone, enter the following data inputs in Data Inputs and Assignments fields:

  • Condition: Condition for the milestone to meet. For example, you can enter a Java expression (string data type) that uses a process variable.
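For example, in a case project the milestone condition is typically written as a constraint on the case file data. The following is a minimal sketch, assuming a hypothetical case file item named orderShipped:

org.kie.api.runtime.process.CaseData(data.get("orderShipped") == true)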

DecisionTask

A decision task is used to execute a DMN diagram and invoke a decision engine service from a process. By default, a decision task maps to the DMN decision.

bpmn decision task custom

You can use decision tasks to make an operational decision in a process. Decision tasks are useful for identifying key decisions in a process that need to be made.

You can click Assignments in the Properties panel to open the Decision Task Data I/O window. In the Decision Task Data I/O window, you can configure the data input as required. For example, to execute a decision task, enter the following data inputs in Data Inputs and Assignments fields:

  • Decision: Decision for a process to make.
  • Language: Language of the decision task, defaults to DMN.
  • Model: Name of the DMN model.
  • Namespace: Namespace of the DMN model.

BusinessRuleTask

A business rule task is used to evaluate a DRL rule and invoke a decision engine service from a process. By default, a business rule task maps to the DRL rules.

bpmn business rule custom task

You can use business rule tasks to evaluate key business rules in a business process. You can click Assignments in the Properties panel to open the Business Rule Task Data I/O window. In the Business Rule Task Data I/O window, you can configure the data input as required. For example, to execute a business rule task, enter the following data inputs in Data Inputs and Assignments fields:

  • KieSessionName: Name of the KIE session.
  • KieSessionType: Type of the KIE session.
  • Language: Language of the business rule task, defaults to DRL.

KafkaPublishMessages

A Kafka work item is used to send events to a Kafka topic. This custom task includes a work item handler, which uses the Kafka producer to send messages to a specific Kafka server topic. For example, the KafkaPublishMessages task publishes messages from a process to a Kafka topic.

bpmn kafkapublishmessages task

You can click Assignments in the Properties panel to open the KafkaPublishMessages Data I/O window. In the KafkaPublishMessages Data I/O window, you can configure the data input and output as required. For example, to execute a Kafka work item, enter the following data inputs in Data Inputs and Assignments fields:

  • Key: Key of the Kafka message to be sent.
  • Topic: Name of a Kafka topic.
  • Value: Value of the Kafka message to be sent.

You can add the following data output in Data Outputs and Assignments to store the output of the work item execution:

  • Result: Output variable (string type) of the work item.

2.5. BPMN2 subprocesses in process designer

A subprocess is an activity that contains nodes. You can embed part of the main process within a subprocess. You can also include variable definitions within the subprocess. These variables are accessible to all nodes inside the subprocess.

A subprocess must have one incoming connection and one outgoing connection. A terminate end event inside a subprocess ends the subprocess instance but does not automatically end the parent process instance. A subprocess ends when there are no more active elements in it.

The following subprocess types are supported in Red Hat Process Automation Manager:

  • Embedded subprocess, which is a part of the parent process execution and shares its data
  • Ad hoc subprocess, which has no strict element execution order
  • Reusable subprocess, which is independent from its parent process
  • Event subprocess, which is only triggered on a start event or a timer
  • Multi-instance subprocess

In the following example, the Place Order subprocess checks whether sufficient stock is available to place the order and updates the stock information if the order can be placed. The customer is then notified through the main process based on whether or not the order was placed.

subprocess

Embedded subprocess

An embedded subprocess encapsulates a part of the process. It must contain a start event and at least one end event. Note that the element enables you to define local subprocess variables that are accessible to all elements inside this container.

AdHoc subprocess

An ad hoc subprocess or process contains a number of embedded inner activities and is intended to be executed with a more flexible ordering compared to the typical routing of processes. Unlike regular processes, an ad hoc subprocess does not contain a complete, structured BPMN2 diagram description, for example, from start event to end event. Instead, the ad hoc subprocess contains only activities, sequence flows, gateways, and intermediate events. An ad hoc subprocess can also contain data objects and data associations. The activities within the ad hoc subprocesses are not required to have incoming and outgoing sequence flows. However, you can specify sequence flows between some of the contained activities. When used, sequence flows provide the same ordering constraints as in a regular process. To have any meaning, intermediate events must have outgoing sequence flows and they can be triggered multiple times while the ad hoc subprocess is active.

Reusable subprocess

Reusable subprocesses appear collapsed within the parent process. To configure a reusable subprocess, select the reusable subprocess, click the Properties icon, and expand Implementation/Execution. Set the following properties:

  • Called Element: The ID of the subprocess that the activity calls and instantiates.
  • Independent: If selected, the subprocess is started as an independent process. If not selected, the active subprocess is canceled when the parent process is terminated.
  • Abort Parent: If selected, non-independent reusable subprocesses can abort the parent process when there is an error during the execution of the called process instance, for example, when there is an error invoking the subprocess or when the subprocess instance is aborted. This property is visible only when the Independent property is not selected. The following rules apply:

    • If the reusable subprocess is independent, Abort parent is not available.
    • If the reusable subprocess is not independent, Abort parent is available.
  • Wait for completion: If selected, the specified On Exit Action is not performed until the called subprocess instance is terminated. The parent process execution continues when the On Exit Action completes. This property is selected (set to true) by default.
  • Is Async: Select if the task should be invoked asynchronously and cannot be executed instantly.
  • Multiple Instance: Select to execute the subprocess elements a specified number of times. If selected, the following options are available:

    • MI Execution mode: Indicates if the multiple instances execute in parallel or sequentially. If set to Sequential, new instances are not created until the previous instance completes.
    • MI Collection input: Select a variable that represents a collection of elements for which new instances are created. The subprocess is instantiated as many times as the size of the collection.
    • MI Data Input: Specifies the name of the variable containing the selected element in the collection. The variable is used to access elements in the collection.
    • MI Collection output: Optional variable that represents the collection of elements that will gather the output of the multi-instance node.
    • MI Data Output: Specifies the name of the variable that is added to the output collection that you selected in the MI Collection output property.
    • MI Completion Condition (mvel): MVEL expression that is evaluated on each completed instance to check if the specified multiple instance node can complete. If it evaluates to true, all remaining instances are canceled.
  • On Entry Action: A Java or MVEL script that specifies an action at the start of the task.
  • On Exit Action: A Java or MVEL script that specifies an action at the end of the task.
  • SLA Due Date: The date that the service level agreement (SLA) expires.

Figure 2.1. Reusable subprocess properties

A screenshot of Subprocess properties

Event subprocess

An event subprocess becomes active when its start event is triggered. It can interrupt the parent process context or run in parallel with it.

With no outgoing or incoming connections, only an event or a timer can trigger the subprocess. The subprocess is not part of the regular control flow. Although self-contained, it is executed in the context of the bounding process.

Use an event subprocess within a process flow to handle events that happen outside of the main process flow. For example, while booking a flight, two events may occur:

  • Cancel booking (interrupting)
  • Check booking status (non-interrupting)

You can model both of these events using the event subprocess.

Multiple instance subprocess

A multiple instance subprocess is instantiated multiple times when its execution is triggered. The instances are created sequentially, which means a new subprocess instance is created only after the previous instance has finished.

A multiple instances subprocess has one incoming connection and one outgoing connection.

2.6. BPMN2 gateways in process designer

Gateways are used to create or synchronize branches in the workflow using a set of conditions called the gating mechanism. BPMN2 supports two types of gateways:

  • Converging gateways, merging multiple flows into one flow
  • Diverging gateways, splitting one flow into multiple flows

One gateway cannot have multiple incoming and multiple outgoing flows.

In the following business process diagram, the XOR gateway takes only the outgoing flow whose condition evaluates to true:

In this example, the customer details are verified by a user and the process is assigned to a user for approval. If approved, an approval notification is sent to the user. If the request is rejected, a rejection notification is sent to the user.

Table 2.12. Gateway elements

Element type | Icon

Exclusive (XOR)

bpmn gateway exclusive

Inclusive

bpmn gateway inclusive

Parallel

bpmn gateway parallel

Event

bpmn gateway event

Exclusive

In an exclusive diverging gateway, only the first outgoing flow whose condition evaluates to true is chosen. In a converging gateway, the next node is triggered for each triggered incoming flow.

The gateway triggers exactly one outgoing flow. The flow with the constraint evaluated to true and the lowest priority number is taken.

Important

Ensure that at least one of the outgoing flows evaluates to true at run time. Otherwise, the process instance terminates with a runtime exception.

The converging gateway enables a workflow branch to continue to its outgoing flow as soon as it reaches the gateway. When one of the incoming flows triggers the gateway, the workflow continues to the outgoing flow of the gateway. If it is triggered from more than one incoming flow, it triggers the next node for each trigger.
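For example, each outgoing sequence flow of the diverging gateway carries a condition expression that is evaluated against process variables. The following is a minimal sketch of a condition in the Java dialect, assuming a Boolean process variable named approved:

// Condition expression on an outgoing sequence flow; the flow is taken when the expression returns true.
return approved != null && approved;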

Inclusive

With an inclusive diverging gateway, the incoming flow is taken and all outgoing flows that evaluate to true are taken. Connections with lower priority numbers are triggered before triggering higher priority connections. Priorities are evaluated but the BPMN2 specification does not guarantee the priority order. Avoid depending on the priority attribute in your workflow.

Important

Ensure that at least one of the outgoing flows evaluates to true at run time. Otherwise, the process instance terminates with a runtime exception.

A converging inclusive gateway merges all incoming flows previously created by an inclusive diverging gateway. It acts as a synchronizing entry point for the inclusive gateway branches.

Parallel

Use a parallel gateway to synchronize and create parallel flows. With a parallel diverging gateway, the incoming flow is taken and all outgoing flows are taken simultaneously. With a converging parallel gateway, the gateway waits until all incoming flows have entered and only then triggers the outgoing flow.

Event

An event-based gateway is only diverging and enables you to react to possible events as opposed to the data-based exclusive gateway, which reacts to the process data. The outgoing flow is taken based on the event that occurs. Only one outgoing flow is taken at a time. The gateway might act as a start event, where the process is instantiated only if one of the intermediate events connected to the event-based gateway occurs.

2.7. BPMN2 connecting objects in process designer

Connecting objects create an association between two BPMN2 elements. When a connecting object is directed, the association is sequential and indicates that one of the elements is executed immediately before the other, within an instance of the process. Connecting objects can start and end at the top, bottom, right, or left of the process elements being associated. The OMG BPMN2 specification allows you to use your discretion, placing connecting objects in a way that makes the process behavior easy to understand and follow.

BPMN2 supports two main types of connecting objects:

  • Sequence flows: Connect elements of a process and define the order in which those elements are executed within an instance.
  • Association flows: Connect the elements of a process without execution semantics. Association flows can be undirected or unidirectional.
Note

The new process designer supports only undirected association flows. The legacy process designer also supports unidirectional association flows.

2.8. BPMN2 swimlanes in process designer

Swimlanes are process elements that visually group tasks related to one group or user. You can use user tasks in combination with swimlanes to assign multiple user tasks to the same actor through the Autoclaim property of the swimlane. When a potential owner from a group claims the first task in a swimlane, the other tasks in that swimlane are assigned directly to the same owner, so the remaining members of the group do not need to claim them. The Autoclaim property enables this automatic assignment of the tasks related to a swimlane.

Note

If the remaining user tasks in a swimlane contain multiple predefined ActorIds, then the user tasks are not assigned automatically.

In the following example, an analyst lane consists of two user tasks:

The Group field in the Update Customer Details and Resolve Customer Issue tasks contains the value analyst. When the process is started and the Update Customer Details task is claimed, started, or completed by an analyst, the Resolve Customer Issue task is claimed and assigned to the user who completed the first task. However, if only the Update Customer Details task contains the analyst group assignment and the second task contains no user or group assignments, the process stops after the first task completes.

You can disable the Autoclaim property of the swimlanes. If the Autoclaim property is disabled, the tasks related to a swimlane are not assigned automatically. By default, the value of the Autoclaim property is set to true. If needed, you can also change the default value for the Autoclaim property in the project settings in Business Central or by using the deployment descriptor file.

To change the default value of the Autoclaim property of swimlanes in Business Central:

  1. Go to project Settings.
  2. Open Deployment → Environment entries.
  3. Enter the following values in the given fields:

    • Name - Autoclaim
    • Value - "false"

If you want to set the environment entry in the XML deployment descriptor, add the following code to the kie-deployment-descriptor.xml file:

<environment-entries>
  ..
    <environment-entry>
        <resolver>mvel</resolver>
        <identifier>new String ("false")</identifier>
        <parameters/>
        <name>Autoclaim</name>
    </environment-entry>
  ..
</environment-entries>

2.9. BPMN2 artifacts in process designer

Artifacts are used to provide additional information about a process. An artifact is any object depicted in the BPMN2 diagram that is not part of the process workflow. Artifacts have no incoming or outgoing flow objects. The purpose of artifacts is to provide additional information required to understand the diagram. The artifacts table lists the artifacts supported in the legacy process designer.

Table 2.13. Artifacts

Artifact type | Description

Group

Organizes tasks or processes that have significance in the overall process. Group artifacts are not supported in the new process designer.

Text annotation

Provides additional textual information for the BPMN2 diagram.

Data object

Displays the data flowing through a process in the BPMN2 diagram.

2.9.1. Creating data object

Data objects represent, for example, documents used in a process in physical and digital form. Data objects appear as a page with a folded top right corner. The following procedure is a generic overview of creating a data object.

Note

Red Hat Process Automation Manager 7.9.1 provides limited support for data objects, which excludes support for data inputs, data outputs, and associations.

Procedure

  1. Create a business process.
  2. In the process designer, select Artifacts → Data Object from the tool palette.
  3. Either drag and drop a data object onto the process designer canvas or click a blank area of the canvas.
  4. If necessary, in the upper-right corner of the screen, click the Properties icon.
  5. Add or define the data object information listed in the following table as required.

    Table 2.14. Data object parameters

    Label | Description

    Name

    The name of the data object. You can also double-click the data object shape to edit the name.

    Type

    Select a type of the data object.

  6. Click Save.

Chapter 3. Creating a business process in Business Central

The process designer is the Red Hat Process Automation Manager process modeler. The output of the modeler is a BPMN 2.0 process definition file. The definition is used as input for the Red Hat Process Automation Manager process engine, which creates a process instance based on the definition.

The procedures in this section provide a general overview of how to create a simple business process. For a more detailed business process example, see Getting started with business processes.

Prerequisites

Procedure

  1. In Business Central, go to Menu → Design → Projects.
  2. Click the project name to open the project’s asset list.
  3. Click Add Asset → Business Process.
  4. In the Create new Business Process wizard, enter the following values:

    • Business Process: New business process name
    • Package: Package location for your new business process, for example com.myspace.myProject
  5. Click Ok to open the process designer.
  6. In the upper-right corner, click the Properties icon and add your business process property information, such as process data and variables:

    1. Scroll down and expand Process Data.
    2. Click the plus icon next to Process Variables and define the process variables that you want to use in your business process.

    Table 3.1. General process properties

    Label | Description

    Name

    Enter the name of the process.

    Documentation

    Describes the process. The text in this field is included in the process documentation, if applicable.

    ID

    Enter an identifier for this process, such as orderItems.

    Package

    Enter the package location for this process in your Red Hat Process Automation Manager project, such as org.acme.

    ProcessType

    Specify whether the process is public or private (or null, if not applicable).

    Version

    Enter the artifact version for the process.

    Ad hoc

    Select this option if this process is an ad hoc subprocess.

    Process Instance Description

    Enter a description of the purpose of the process.

    Imports

    Click to open the Imports window and add any data object classes required for your process.

    Executable

    Select this option to make the process an executable part of your Red Hat Process Automation Manager project.

    SLA Due Date

    Enter the service level agreement (SLA) expiration date.

    Process Variables

    Add any process variables for the process. Process variables are visible within the specific process instance. Process variables are initialized at process creation and destroyed on process completion. Variable Tags provide greater control over variable behavior, for example whether the variable is required or readonly. For more information about variable tags, see Chapter 4, Variables.

    Metadata Attributes

    Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present.

    Global Variables

    Add any global variables for the process. Global variables are visible to all process instances and assets in a project. Global variables are typically used by business rules and constraints, and are created dynamically by the rules or constraints.

    The Metadata Attributes entries are similar to Process Variables tags in that they enable new metaData extensions to BPMN diagrams. However, process variable tags modify the behavior of specific process variables, such as whether a certain variable is required or readonly, whereas metadata attributes are key-value definitions that modify the behavior of the overall process.

    For example, the following custom metadata attribute riskLevel and value low in a BPMN process correspond to a custom event listener for starting the process:

    Figure 3.1. Example metadata attribute and value in the BPMN modeler

    Image of custom metadata attribute and value

    Example metadata attribute and value in the BPMN file

    <bpmn2:process id="approvals" name="approvals" isExecutable="true" processType="Public">
      <bpmn2:extensionElements>
        <tns:metaData name="riskLevel">
          <tns:metaValue><![CDATA[low]]></tns:metaValue>
        </tns:metaData>
      </bpmn2:extensionElements>

    Example event listener with metadata value

    public class MyListener implements ProcessEventListener {
        ...
        @Override
        public void beforeProcessStarted(ProcessStartedEvent event) {
            Map<String, Object> metadata = event.getProcessInstance().getProcess().getMetaData();
            if ("low".equals(metadata.get("riskLevel"))) {
                // Implement some action for that metadata attribute
            }
        }
    }

  7. In the process designer canvas, use the left toolbar to drag and drop BPMN components to define your business process logic, connections, events, tasks, or other elements.

    Note

    A task or event in Red Hat Process Automation Manager expects one incoming and one outgoing flow. If you want to design a business process with multiple incoming or multiple outgoing flows, consider redesigning the business process using gateways. Gateways make it apparent which sequence flow is being executed and are therefore considered a best practice for multiple connections.

    However, if you must use multiple connections for a task or an event, set the JVM (Java virtual machine) system property jbpm.enable.multi.con to true. When Business Central and KIE Server run on different servers, ensure that the jbpm.enable.multi.con system property is enabled on both of them; otherwise, the process engine throws an exception.
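    For example, in an embedded engine the property can be set programmatically before the engine starts; in a server environment it is typically passed as a -D argument at startup. The following is a minimal sketch, not specific to any particular server configuration:

    // Allow multiple incoming and outgoing sequence flows on tasks and events.
    // Equivalent to passing -Djbpm.enable.multi.con=true when starting the JVM.
    System.setProperty("jbpm.enable.multi.con", "true");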

  8. After you add and define all components of the business process, click Save to save the completed business process.

3.1. Creating business rules tasks

Business rules tasks are used to make decisions through a Decision Model and Notation (DMN) model or rule flow group.

Procedure

  1. Create a business process.
  2. In the process designer, select the Activities tool from the tool palette.
  3. Select Business Rule.
  4. Click a blank area of the process designer canvas.
  5. If necessary, in the upper-right corner of the screen, click the Properties icon.
  6. Add or define the task information listed in the following table as required.

    Table 3.2. Business rule task parameters

    LabelDescription

    Name

    The name of the business rule task. You can also double-click the business rule task shape to edit the name.

    Rule Language

    The output language for the task. Select Decision Model and Notation (DMN) or Drools (DRL).

    Rule Flow Group

    The rule flow group associated with this business task. Select a rule flow group from the list or specify a new rule flow group.

    On Entry Action

    A Java, JavaScript, or MVEL script that specifies an action at the start of the task.

    On Exit Action

    A Java, JavaScript, or MVEL script that specifies an action at the end of the task.

    Is Async

    Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example a task performed by an outside service.

    AdHoc Autostart

    Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management.

    SLA Due Date

    The date that the service level agreement (SLA) expires.

    Assignments

    Click to add local variables.

  7. Click Save.

3.2. Creating script tasks

Script tasks are used to execute a piece of code written in Java, JavaScript, or MVEL. They contain code snippets that specify the action of the script task. You can include global and process variables in your scripts.

Note that MVEL accepts any valid Java code and additionally provides support for nested access of parameters. For example, the MVEL equivalent of the Java call person.getName() is person.name. MVEL also provides other improvements over Java, and MVEL expressions are generally more convenient for business users.
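For example, a script task body can read a process variable and print a message. The following minimal sketch assumes a process variable named person backed by a hypothetical Person data object; kcontext is the predefined ProcessContext variable that is available in scripts and actions:

// Java script task body: read the "person" process variable and print a greeting
Person person = (Person) kcontext.getVariable("person");
System.out.println("Hello " + person.getName() + "!");

// The equivalent MVEL script can use nested property access:
// System.out.println("Hello " + person.name + "!");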

Procedure

  1. Create a business process.
  2. In the process designer, select the Activities tool from the tool palette.
  3. Select Script.
  4. Click a blank area of the process designer canvas.
  5. If necessary, in the upper-right corner of the screen, click the Properties icon.
  6. Add or define the task information listed in the following table as required.

    Table 3.3. Script task parameters

    LabelDescription

    Name

    The name of the script task. You can also double-click the script task shape to edit the name.

    Documentation

    Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation.

    Script

    Enter a script in Java, JavaScript, or MVEL to be executed by the task, and select the script type.

    Is Async

    Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example a task performed by an outside service.

    AdHoc Autostart

    Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management.

  7. Click Save.

3.3. Creating service tasks

A service task is a task that executes an action based on a web service call or a Java class method. Examples of service tasks include sending an email and logging a message when these tasks are performed by systems. You can define the parameters (input) and results (output) that are associated with a service task. A service task should have one incoming connection and one outgoing connection.

Procedure

  1. In Business Central, select the Admin icon in the top-right corner of the screen and select Artifacts.
  2. Click Upload to open the Artifact upload window.
  3. Choose the .jar file and click the Upload button.

    Important

    The .jar file contains data types (data objects) and Java classes for web service and Java service tasks respectively.

  4. Create a project you want to use.
  5. Go to your project Settings → Dependencies.
  6. Click Add from repository, locate the uploaded .jar file, and click Select.
  7. Open your project Settings → Work Item Handler.
  8. Enter the following values in the given fields:

    • Name - Service Task
    • Value - new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession, classLoader)
  9. Save the project.

    Example of creating web service task

    The default implementation of a service task in the BPMN2 specification is a web service. The web service support is based on the Apache CXF dynamic client, which provides a dedicated service task handler that implements the WorkItemHandler interface:

    org.jbpm.process.workitem.bpmn2.ServiceTaskHandler

    To create a service task that uses a web service, you must configure the web service:

    1. Create a business process.
    2. If necessary, in the upper-right corner of the screen, click the Properties icon.
    3. Click the icon in the Imports property to open the Imports window.
    4. Click +Add next to the WSDL Imports to import the required WSDL (Web Services Description Language) values. For example:

    5. In the process designer, select the Activities tool from the tool palette.
    6. Select Service Task.
    7. Click a blank area of the process designer canvas.
    8. Add or define the task information listed in the following table as required.

      Table 3.4. Web service task parameters

      LabelDescription

      Name

      The name of the service task. You can also double-click the service task shape to edit the name.

      Documentation

      Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation.

      Implementation

      Specify a web service.

      Interface

      The service used to implement the script, such as CountriesPortService.

      Operation

      The operation that is called by the interface, such as getCountry.

      Assignments

      Click to add local variables.

      AdHoc Autostart

      Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management.

      Is Async

      Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example a task performed by an outside service.

      Is Multiple Instance

      Select if this task has multiple instances.

      MI Execution mode

      Select if the multiple instances execute in parallel or sequentially.

      MI Collection input

      Specify a variable that represents a collection of elements for which new instances are created, such as inputCountryNames.

      MI Data Input

      Specify the input data assignment that is transferred to a web service, such as Parameter.

      MI Collection output

      The array list in which values returned from the web service task are stored, such as outputCountries.

      MI Data Output

      Specify the output data assignment for the web service task, which stores the result of class execution on the server, such as Result.

      MI Completion Condition (mvel)

      Specify the MVEL expression that is evaluated on each completed instance to check if the specified multiple instance node can complete.

      On Entry Action

      A Java, JavaScript, or MVEL script that specifies an action at the start of the task.

      On Exit Action

      A Java, JavaScript, or MVEL script that specifies an action at the end of the task.

      SLA Due Date

      The date that the service level agreement (SLA) expires.

    Example of creating Java service task

    When you create a service task using a Java method, the method can contain only one parameter and returns a single value. To create a service task using a Java method, you must add the Java class to the dependencies of the project.
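    For example, a class that matches the illustrative Interface and Operation values in Table 3.5 might look like the following minimal sketch (the package, class, and method names are hypothetical):

    package org.xyz;

    public class HelloWorld {

        // A Java service task method accepts a single parameter and returns a single value
        public String sayHello(String name) {
            return "Hello " + name + "!";
        }
    }

    To configure the Java service task, complete the following steps: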

    1. Create a business process.
    2. In the process designer, select the Activities tool from the tool palette.
    3. Select Service Task.
    4. Click a blank area of the process designer canvas.
    5. If necessary, in the upper-right corner of the screen, click the Properties icon.
    6. Add or define the task information listed in the following table as required.

      Table 3.5. Java service task parameters

      LabelDescription

      Name

      The name of the service task. You can also double-click the service task shape to edit the name.

      Documentation

      Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation.

      Implementation

      Specify that the task is implemented in Java.

      Interface

      The class used to implement the script, such as org.xyz.HelloWorld.

      Operation

      The method that is called by the interface, such as sayHello.

      Assignments

      Click to add local variables.

      AdHoc Autostart

      Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management.

      Is Async

      Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example a task performed by an outside service.

      Is Multiple Instance

      Select if this task has multiple instances.

      MI Execution mode

      Select if the multiple instances execute in parallel or sequentially.

      MI Collection input

      Specify a variable that represents a collection of elements for which new instances are created, such as InputCollection.

      MI Data Input

      Specify the input data assignment that is transferred to a Java class. For example, you can set the input data assignments as Parameter and ParameterType. ParameterType represents the type of Parameter and sends arguments to the execution of the Java method.

      MI Collection output

      The array list in which values returned from the Java class are stored, such as OutputCollection.

      MI Data Output

      Specify the output data assignment for the Java service task, which stores the result of class execution on the server, such as Result.

      MI Completion Condition (mvel)

      Specify the MVEL expression that is evaluated on each completed instance to check whether the specified multiple instance node can complete. For example, OutputCollection.size() == 3 completes the multiple instance node after three instances have completed.

      On Entry Action

      A Java, JavaScript, or MVEL script that specifies an action at the start of the task.

      On Exit Action

      A Java, JavaScript, or MVEL script that specifies an action at the end of the task.

      SLA Due Date

      The date that the service level agreement (SLA) expires.

  10. Click Save.

3.4. Creating user tasks

User tasks are used to include human actions as input to the business process.

Procedure

  1. Create a business process.
  2. In the process designer, select the Activities tool from the tool palette.
  3. Select User.
  4. Either drag and drop a user task onto the process designer canvas or click a blank area of the canvas.
  5. If necessary, in the upper-right corner of the screen, click the Properties icon.
  6. Add or define the task information listed in the following table as required.

    Table 3.6. User task parameters

    LabelDescription

    Name

    The name of the user task. You can also double-click the user task shape to edit the name.

    Documentation

    Enter a description of the task. The text in this field is included in the process documentation. Click the Documentation tab in the upper-left side of the process designer canvas to view the process documentation.

    Task Name

    The name of the human task.

    Subject

    Enter a subject for the task.

    Actors

    The actors responsible for executing the human task. Click Add to add a row then select an actor from the list or click New to add a new actor.

    Groups

    The groups responsible for executing the human task. Click Add to add a row then select a group from the list or click New to add a new group.

    Assignments

    Local variables for this task. Click to open the Task Data I/O window then add data inputs and outputs as required.

    Reassignments

    Specify a different actor to complete this task.

    Notifications

    Click to specify notifications associated with the task.

    Is Async

    Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example a task performed by an outside service.

    Skippable

    Select if this task is not mandatory.

    Priority

    Specify a priority for the task.

    Description

    Enter a description for the human task.

    Created By

    The user that created this task.

    AdHoc Autostart

    Select if this is an ad hoc task that should be started automatically. AdHoc Autostart enables the task to automatically start when the process or case instance is created instead of being started by a start task. It is often used in case management.

    Multiple Instance

    Select if this task has multiple instances.

    On Entry Action

    A Java, JavaScript, or MVEL script that specifies an action at the start of the task.

    On Exit Action

    A Java, JavaScript, or MVEL script that specifies an action at the end of the task.

    Content

    The content of the script.

    SLA Due Date

    The date that the service level agreement (SLA) expires.

  7. Click Save.

3.5. BPMN2 user task life cycle in process designer

You can trigger a user task element during the process instance execution to create a user task. The user task service of the task execution engine executes the user task instance. The process instance continues the execution only when the associated user task is completed or aborted. A user task life cycle is as follows:

  • When a process instance enters a user task element, the user task is in the Created stage.
  • The Created stage is transient; the user task enters the Ready stage immediately. The task appears in the task list of all the actors who are allowed to execute the task.
  • When an actor claims the user task, the task becomes Reserved.
Note

If a user task has a single potential actor, the task is assigned to that actor upon creation.

  • When an actor who claimed the user task starts the execution, the status of the user task changes to InProgress.
  • Once an actor completes the user task, the status changes to Completed or Failed depending on the execution outcome.
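The following minimal sketch illustrates these transitions through the TaskService API, assuming a runtime engine has already been obtained; the user ID and locale are illustrative:

TaskService taskService = runtimeEngine.getTaskService();

// Ready -> Reserved -> InProgress -> Completed
List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
long taskId = tasks.get(0).getId();

taskService.claim(taskId, "john");           // Ready -> Reserved
taskService.start(taskId, "john");           // Reserved -> InProgress
taskService.complete(taskId, "john", null);  // InProgress -> Completed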

There are also several other life cycle methods, including:

  • Delegating or forwarding a user task so the user task is assigned to another actor.
  • Revoking a user task, so that the user task is no longer claimed by a single actor and is available to all actors who are allowed to take it.
  • Suspending and resuming a user task.
  • Stopping a user task that is in progress.
  • Skipping a user task, in which case the task is not executed.

For more information about the user task life cycle, see the Web Services Human Task specification.

3.6. BPMN2 task permission matrix in process designer

The user task permission matrix summarizes the actions that are allowed for specific user roles. The user roles are as follows:

  • Potential owner: User who can claim the task, including a task that was claimed earlier and then released or forwarded. Tasks with Ready status can be claimed, and the potential owner then becomes the actual owner of the task.
  • Actual owner: User who claims the task and progresses the task to completion or failure.
  • Business administrator: Super user who can modify the status or progress with the task at any point of the task life cycle.

The following permission matrix represents the authorizations for all operations that modify a task.

  • + indicates that the user role is allowed to do the specified operation.
  • - indicates that the user role is not allowed to do the specified operation, or the operation does not apply to the user’s role.

Table 3.7. Main operations permissions matrix

Operation    Potential owner    Actual owner    Business administrator

activate     -                  -               +
claim        +                  -               +
complete     -                  +               +
delegate     +                  +               +
fail         -                  +               +
forward      +                  +               +
nominate     -                  -               +
release      -                  +               +
remove       -                  -               +
resume       +                  +               +
skip         +                  +               +
start        +                  +               +
stop         -                  +               +
suspend      +                  +               +

3.7. Making a copy of a business process

You can make a copy of a business process in Business Central and modify the copied process as needed.

Procedure

  1. In the business process designer, click Copy in the upper-right toolbar.
  2. In the Make a Copy window, enter a new name for the copied business process, select the target package, and optionally add a comment.
  3. Click Make a Copy.
  4. Modify the copied business process as needed and click Save to save the updated business process.

3.8. Resizing elements and using the zoom function to view business processes

You can resize individual elements in a business process and zoom in or out to modify the view of your business process.

Procedure

  1. In the business process designer, select the element and click the red dot in the lower-right corner of the element.
  2. Drag the red dot to resize the element.

    Figure 3.2. Resize an element

    Resizing an element
  3. To zoom in or out to view the entire diagram, click the plus or minus sign on the lower-right side of the canvas.

    Figure 3.3. Enlarge or shrink a business process

    Zooming to view the entire diagram

3.9. Generating process documentation in Business Central

In the process designer in Business Central, you can view and print a report of the process definition. The process documentation summarizes the components, data, and visual flow of the process in a format (PDF) that you can print and share more easily.

Procedure

  1. In Business Central, navigate to a project that contains a business process and select the process.
  2. In the process designer, click the Documentation tab to view the summary of the process file, and click Print in the top-right corner of the window to print the PDF report.

    Figure 3.4. Generate process documentation

    Process documentation view

Chapter 4. Variables

Variables store data that is used during runtime. Process designer uses three types of variables:

Global variables

Global variables are visible to all process instances and assets in a particular session. They are intended to be used primarily by business rules and by constraints and are created dynamically by rules or constraints.

Process variables

Process variables are defined as properties in the BPMN2 definition file and are visible within the process instance. They are initialized at process creation and destroyed on process completion.

Local variables

Local variables are associated with and available within specific process elements, such as activities. They are initialized when the element context is initialized, that is, when the execution workflow enters the node and execution of the onEntry action has finished, if applicable. They are destroyed when the element context is destroyed, that is, when the execution workflow leaves the element.

An element, such as a process, sub-process, or task, can access only variables in its own and parent contexts. An element cannot access a variable defined in the element’s child element. Therefore, when an element requires access to a variable during runtime, its own context is searched first.

If the variable cannot be found directly in the element’s context, the immediate parent context is searched. The search continues until the process context is reached. In case of global variables, the search is performed directly on the session container.

If the variable cannot be found, a read access request returns null and a write access produces an error message, and the process continues its execution. Variables are searched for based on their ID.
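For example, when a variable is read from an action or script, the search starts in the context of the current element and continues up through the parent contexts. The following is a minimal sketch for an on-entry, on-exit, or script task action; the variable name is illustrative:

// kcontext is the predefined ProcessContext variable available in actions and scripts.
// The lookup starts in the local context and continues up to the process context.
Object approver = kcontext.getVariable("approver");
if (approver == null) {
    // The variable was not found in any accessible context
    System.out.println("Variable 'approver' is not set");
}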

4.1. Variable tags

For greater control over variable behavior, you can tag process variables and local variables in the BPMN process file. Tags are simple string values that you add as metadata to a specific variable.

Red Hat Process Automation Manager supports the following tags for process variables and local variables:

  • required: Sets the variable as a requirement in order to start a process instance. If a process instance starts without the required variable, Red Hat Process Automation Manager generates a VariableViolationException error.
  • readonly: Indicates that the variable is for informational purposes only and can be set only once during process instance execution. If the value of a read-only variable is modified at any time, Red Hat Process Automation Manager generates a VariableViolationException error.
  • restricted: A special tag that is used with the VariableGuardProcessEventListener to indicate that permission to modify the variable is granted only if the user has the required role.

    VariableGuardProcessEventListener is extended from DefaultProcessEventListener and supports two different constructors:

    • VariableGuardProcessEventListener

      public VariableGuardProcessEventListener(String requiredRole, IdentityProvider identityProvider) {
          this("restricted", requiredRole, identityProvider);
      }
    • VariableGuardProcessEventListener

      public VariableGuardProcessEventListener(String tag, String requiredRole, IdentityProvider identityProvider) {
          this.tag = tag;
          this.requiredRole = requiredRole;
          this.identityProvider = identityProvider;
      }

      Therefore, you must add an event listener to the session with the allowed role name and identity provider that returns the user role as shown in the following example:

      ksession.addEventListener(new VariableGuardProcessEventListener("AdminRole", myIdentityProvider));

    In the previous example, the VariableGuardProcessEventListener verifies whether a variable is tagged with a security constraint tag (restricted). If the user does not have the required role, Red Hat Process Automation Manager generates a VariableViolationException error.

Note

The variable tags that appear in the Business Central UI, for example internal, input, output, business-relevant, and tracked, are not supported in Red Hat Process Automation Manager.

You can add the tag directly to the BPMN process source file as a customTags metadata property with the tag value defined in the format <![CDATA[TAG_NAME]]>.

For example, the following BPMN process applies the required tag to an approver process variable:

Figure 4.1. Example variable tagged in the BPMN modeler

Image of variable tags in BPMN modeler

Example variable tagged in a BPMN file

<bpmn2:property id="approver" itemSubjectRef="ItemDefinition_9" name="approver">
  <bpmn2:extensionElements>
    <tns:metaData name="customTags">
      <tns:metaValue><![CDATA[required]]></tns:metaValue>
    </tns:metaData>
  </bpmn2:extensionElements>
</bpmn2:property>

You can use more than one tag for a variable where applicable. You can also define custom variable tags in your BPMN files to make variable data available to Red Hat Process Automation Manager process event listeners. Custom tags do not influence the Red Hat Process Automation Manager runtime as the standard variable tags do and are for informational purposes only. You define custom variable tags in the same customTags metadata property format that you use for standard Red Hat Process Automation Manager variable tags.
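For example, a process event listener can react to a hypothetical custom tag named audited. The following minimal sketch assumes that the ProcessVariableChangedEvent.getTags() method is available in your kie-api version:

import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessVariableChangedEvent;

public class AuditedVariableListener extends DefaultProcessEventListener {

    @Override
    public void afterVariableChanged(ProcessVariableChangedEvent event) {
        // React only to variables that carry the hypothetical "audited" custom tag
        if (event.getTags().contains("audited")) {
            System.out.println("Audited variable changed: " + event.getVariableId()
                    + " = " + event.getNewValue());
        }
    }
}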

4.2. Defining global variables

Global variables exist in a knowledge session and can be accessed and are shared by all assets in that session. They belong to the particular session of the Knowledge Base and they are used to pass information to the engine. Every global variable defines its ID and item subject reference. The ID serves as the variable name and must be unique within the process definition. The item subject reference defines the data type the variable stores.

Important

The rules are evaluated at the moment the fact is inserted. Therefore, if you are using a global variable to constrain a fact pattern and the global is not set, the system returns a NullPointerException.

Global variables are initialized either when the process with the variable definition is added to the session or when the session is initialized with globals as its parameters.

Values of global variables can typically be changed during the assignment, which is a mapping between a process variable and an activity variable. The global variable is then associated with the local activity context or a local activity variable, or is accessed by a direct call to the variable from a child context.
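For example, when you run the process programmatically, you can register a global on the session before the process instance starts. The following is a minimal sketch; the global name, type, and process ID are illustrative:

// Register a global named "log" on the session so that rules and constraints can use it
List<String> log = new ArrayList<>();
ksession.setGlobal("log", log);

ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");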

Prerequisites

  • You have created a project in Business Central and it contains at least one business process asset.

Procedure

  1. Open a business process asset.
  2. Click a blank area of the process designer canvas.
  3. Click the Properties icon on the upper-right side of the screen to open the Properties panel.
  4. If necessary, expand the Process section.
  5. In the Global Variables sub-section, click the plus icon.
  6. Enter a name for the variable in the Name box.
  7. Select a data type from the Data Type menu.

4.3. Defining process variables

Process variables are defined as properties in the BPMN2 definition file and are visible within the process instance. They are initialized at process creation and destroyed on process completion.

A process variable is a variable that exists in a process context and can be accessed by its process or its child elements. Process variables belong to a particular process instance and cannot be accessed by other process instances. Every process variable defines its ID and item subject reference: the ID serves as the variable name and must be unique within the process definition. The item subject reference defines the data type the variable stores.

Process variables are initialized when the process instance is created. Their values can be changed by process activities using the Assignment, when the variable is associated with the local activity context or a local activity variable, or by a direct call to the variable from a child context.

Note that process variables should be mapped to local variables.
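For example, process variables can be initialized by passing a parameter map when the process instance is started. The following is a minimal sketch; the process ID and variable names are illustrative:

// Initialize the "approver" and "amount" process variables at instance creation
Map<String, Object> params = new HashMap<>();
params.put("approver", "john");
params.put("amount", 1500);

ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello", params);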

Prerequisites

  • You have created a project in Business Central and it contains at least one business process asset.

Procedure

  1. Open a business process asset.
  2. Click a blank area of the process designer canvas.
  3. Click the Properties icon on the upper-right side of the screen to open the Properties panel.
  4. If necessary, expand the Process Data section.
  5. In the Process Variables sub-section, click the plus icon.
  6. Enter a name for the variable in the Name box.
  7. Select a data type from the Data Type menu.

4.4. Defining local variables

Local variables are available within their process element, such as an activity. They are initialized when the element context is initialized, that is, when the execution workflow enters the node and execution of the onEntry action has finished, if applicable. They are destroyed when the element context is destroyed, that is, when the execution workflow leaves the element.

Values of local variables can be mapped to global or process variables. This enables you to maintain relative independence of the parent element that accommodates the local variable. Such isolation might help prevent technical exceptions.

A local variable is a variable that exists in a child element context of a process and can be accessed only from within this context. Local variables belong to the particular element of a process.

For tasks, with the exception of the Script task, you can define Data Input Assignments and Data Output Assignments in the Assignments property. Data Input Assignment defines variables that enter the Task and therefore provide the entry data needed for the task execution. The Data Output Assignments can refer to the context of the Task after execution to acquire output data.

User Tasks present data related to the actor that is executing the User Task. Additionally, User Tasks also request the actor to provide result data related to the execution.

To request and provide the data, use task forms and map the data in the Data Input Assignment parameter to a variable. Map the data provided by the user in the Data Output Assignment parameter if you want to preserve the data as output.

Prerequisites

  • You have created a project in Business Central and it contains at least one business process asset that has at least one task that is not a script task.

Procedure

  1. Open a business process asset.
  2. Select a task that is not a script task.
  3. Click the Properties icon on the upper-right side of the screen to open the Properties panel.
  4. Click the box under the Assignments sub-section. The Task Data I/O dialog box opens.
  5. Click Add next to Data Inputs and Assignments or Data Outputs and Assignments.
  6. Enter a name for the local variable in the Name box.
  7. Select a data type from the Data Type menu.
  8. Select a source or target then click Save.

Chapter 5. Constraints

A constraint is a boolean expression that is evaluated when an element containing a constraint is executed. You can use constraints in various parts of your process, such as in a diverging gateway.

Red Hat Process Automation Manager supports the following two types of constraints:

  • Code constraints: Constraints that are defined in Java, JavaScript, Drools, or MVEL. Code constraints can access the data in the working memory, including the global and process variables. The following code constraint examples contain person as a variable in a process:

    Example Java code constraint

    return person.getAge() > 20;

    Example MVEL code constraint

    return person.age > 20;

    Example Javascript code constraint

    person.age > 20

  • Rule constraints: Constraints that are defined in the form of DRL rule conditions. Rule constraints can access the data in the working memory, including global variables. However, rule constraints cannot access variables in a process directly; instead, they access them through the process instance. To retrieve the reference of the parent process instance, use the processInstance variable of the type WorkflowProcessInstance.

    Note

    You can insert a process instance into the session and update it if necessary, for example, using Java code or an on-entry, on-exit, or explicit action in your process.

    The following example shows a rule constraint that searches for a person with the same name as the value of the name variable in the process.

    Example rule constraint with process variable assignment

    processInstance : WorkflowProcessInstance()
    Person( name == ( processInstance.getVariable("name") ) )
    # add more constraints here ...

Chapter 6. Deploying a business process in Business Central

After you design your business process in Business Central, you can build and deploy your project in Business Central to make the process available to KIE Server.

Prerequisites

Procedure

  1. In Business Central, go to MenuDesignProjects.
  2. Click the project that you want to deploy.
  3. Click Deploy.

    Note

    You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The next time you deploy or redeploy the built KJAR, the previous deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option is disabled and you can click Deploy only to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server.

    To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production. To configure the deployment behavior for a corresponding project in Business Central, go to project SettingsGeneral SettingsVersion and toggle the Development Mode option. By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode.

    To review project deployment details, click View deployment details in the deployment banner at the top of the screen or in the Deploy drop-down menu. This option directs you to the MenuDeployExecution Servers page.

Chapter 7. Executing a business process in Business Central

After you build and deploy the project that contains your business process, you can execute the defined functionality for the business process.

As an example, this procedure uses the Mortgage_Process sample project in Business Central. In this scenario, you input data into a mortgage application form acting as the mortgage broker. The MortgageApprovalProcess business process runs and determines whether or not the applicant has offered an acceptable down payment based on the decision rules defined in the project. The business process either ends the rule testing or requests that the applicant increase the down payment to proceed. If the application passes the business rule testing, the bank approver reviews the application and either approves or denies the loan.

Prerequisites

Procedure

  1. In Business Central, go to MenuProjects and select a space. The default space is MySpace.
  2. In the upper-right corner of the window, click the arrow next to Add Project and select Try Samples.
  3. Select the Mortgage_Process sample and click Ok.
  4. On the project page, select Mortgage_Process.
  5. On the Mortgage_Process page, click Build.
  6. After the project has built, click Deploy.
  7. Go to MenuManageProcess Definitions.
  8. Click anywhere in the MortgageApprovalProcess row to view the process details.
  9. Click the Diagram tab to view the business process diagram in the editor.
  10. Click New Process Instance to open the Application form and input the following values into the form fields:

    • Down Payment: 30000
    • Years of amortization: 10
    • Name: Ivo
    • Annual Income: 60000
    • SSN: 123456789
    • Age of property: 8
    • Address of property: Brno
    • Locale: Rural
    • Property Sale Price: 50000
  11. Click Submit to start a new process instance. After starting the process instance, the Instance Details view opens.
  12. Click the Diagram tab to view the process flow within the process diagram. The state of the process is highlighted as it moves through each task.
  13. Click MenuManageTasks.

    For this example, the user or users working on the corresponding tasks are members of the following groups:

    • approver: For the Qualify task
    • broker: For the Correct Data and Increase Down Payment tasks
    • manager: For the Final Approval task
  14. As the approver, review the Qualify task information, click Claim and then Start to start the task, and then select Is mortgage application in limit? and click Complete to complete the task flow.
  15. In the Tasks page, click anywhere in the Final Approval row to open the Final Approval task.
  16. Click Claim to claim responsibility for the task, and click Complete to finalize the loan approval process.
Note

The Save and Release buttons are only used to either pause the approval process and save the instance if you are waiting on a field value, or to release the task for another user to modify.

Chapter 8. Testing a business process

A business process can be updated dynamically, which can cause errors. Therefore, testing a business process is also a part of the business process life cycle, similar to any other development artifact.

The unit test for a business process ensures that the process behaves as expected in a specific use case. For example, you can test an output based on a particular input. To simplify unit testing, Red Hat Process Automation Manager includes the org.jbpm.test.JbpmJUnitBaseTestCase class.

JbpmJUnitBaseTestCase serves as a base test case class for Red Hat Process Automation Manager related tests and provides the following usage areas:

  • JUnit life cycle methods

    Table 8.1. JUnit life cycle methods

    MethodDescription

    setUp

    This method is annotated as @Before. It configures a data source and EntityManagerFactory and deletes the session ID of a singleton.

    tearDown

    This method is annotated as @After. It removes history, closes EntityManagerFactory and a data source, and disposes RuntimeManager and RuntimeEngines.

  • Knowledge base and knowledge session management methods: To create a session, create RuntimeManager and RuntimeEngine. Use the following methods to create and dispose RuntimeManager:

    Table 8.2. RuntimeManager and RuntimeEngine management methods

    MethodDescription

    createRuntimeManager

    Creates RuntimeManager for a given set of assets and selected strategy.

    disposeRuntimeManager

    Disposes RuntimeManager that is active in the scope of the test.

    getRuntimeEngine

    Creates new RuntimeEngine for the given context.

  • Assertions: To test the state of assets, use the following methods:

    Table 8.3. Assertion methods

    AssertionDescription

    assertProcessInstanceActive(long processInstanceId, KieSession ksession)

    Verifies whether a process instance with the given processInstanceId is active.

    assertProcessInstanceCompleted(long processInstanceId)

    Verifies whether a process instance with the given processInstanceId is completed. You can use this method if session persistence is enabled, otherwise use assertProcessInstanceNotActive(long processInstanceId, KieSession ksession).

    assertProcessInstanceAborted(long processInstanceId)

    Verifies whether a process instance with the given processInstanceId is aborted. You can use this method if session persistence is enabled, otherwise use assertProcessInstanceNotActive(long processInstanceId, KieSession ksession).

    assertNodeExists(ProcessInstance process, String…​ nodeNames)

    Verifies whether the specified process contains the given nodes.

    assertNodeActive(long processInstanceId, KieSession ksession, String…​ name)

    Verifies whether a process instance with the given processInstanceId contains at least one active node with the specified node names.

    assertNodeTriggered(long processInstanceId, String…​ nodeNames)

    Verifies whether a node instance is triggered for each given node during the execution of the specified process instance.

    assertProcessVarExists(ProcessInstance process, String…​ processVarNames)

    Verifies whether the given process contains the specified process variables.

    assertProcessNameEquals(ProcessInstance process, String name)

    Verifies whether the given name matches the specified process name.

    assertVersionEquals(ProcessInstance process, String version)

    Verifies whether the given process version matches the specified process version.

  • Helper methods: Use the following methods to create a new RuntimeManager and RuntimeEngine for a given set of processes with or without using persistence. For more information about persistence, see Process engine in Red Hat Process Automation Manager.

    Table 8.4. Helper methods

    MethodDescription

    setupPoolingDataSource

    Configures a data source.

    getDs

    Returns the configured data source.

    getEmf

    Returns the configured EntityManagerFactory.

    getTestWorkItemHandler

    Returns a test work item handler that can be registered in addition to the default work item handler.

    clearHistory

    Clears the history log.

The following example contains a start event, a script task, and an end event. The example JUnit test creates a new session, starts the hello.bpmn process, and verifies whether the process instance is completed and the StartProcess, Hello, and EndProcess nodes are executed.

Figure 8.1. Example JUnit Test of hello.bpmn Process

Example JUnit Test Process
public class ProcessPersistenceTest extends JbpmJUnitBaseTestCase {

    public ProcessPersistenceTest() {
        super(true, true);
    }

    @Test
    public void testProcess() {

        createRuntimeManager("hello.bpmn");

        RuntimeEngine runtimeEngine = getRuntimeEngine();

        KieSession ksession = runtimeEngine.getKieSession();

        ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");

        assertProcessInstanceNotActive(processInstance.getId(), ksession);

        assertNodeTriggered(processInstance.getId(), "StartProcess", "Hello", "EndProcess");
    }
}

JbpmJUnitBaseTestCase supports all predefined RuntimeManager strategies as part of the unit testing. Therefore, it is enough to specify the strategy that is used when you create a RuntimeManager as part of a single test. The following example shows the use of the PerProcessInstance strategy in a task service to manage user tasks:

public class ProcessHumanTaskTest extends JbpmJUnitBaseTestCase {

    private static final Logger logger = LoggerFactory.getLogger(ProcessHumanTaskTest.class);

    public ProcessHumanTaskTest() {
        super(true, false);
    }

    @Test
    public void testProcessProcessInstanceStrategy() {
        RuntimeManager manager = createRuntimeManager(Strategy.PROCESS_INSTANCE, "manager", "humantask.bpmn");
        RuntimeEngine runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get());
        KieSession ksession = runtimeEngine.getKieSession();
        TaskService taskService = runtimeEngine.getTaskService();

        int ksessionID = ksession.getId();
        ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");

        assertProcessInstanceActive(processInstance.getId(), ksession);
        assertNodeTriggered(processInstance.getId(), "Start", "Task 1");

        manager.disposeRuntimeEngine(runtimeEngine);
        runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get(processInstance.getId()));

        ksession = runtimeEngine.getKieSession();
        taskService = runtimeEngine.getTaskService();

        assertEquals(ksessionID, ksession.getId());

        // let John execute Task 1
        List<TaskSummary> list = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
        TaskSummary task = list.get(0);
        logger.info("John is executing task {}", task.getName());
        taskService.start(task.getId(), "john");
        taskService.complete(task.getId(), "john", null);

        assertNodeTriggered(processInstance.getId(), "Task 2");

        // let Mary execute Task 2
        list = taskService.getTasksAssignedAsPotentialOwner("mary", "en-UK");
        task = list.get(0);
        logger.info("Mary is executing task {}", task.getName());
        taskService.start(task.getId(), "mary");
        taskService.complete(task.getId(), "mary", null);

        assertNodeTriggered(processInstance.getId(), "End");
        assertProcessInstanceNotActive(processInstance.getId(), ksession);
    }
}

8.1. Testing integration with external services

Business processes often include the invocation of external services. Unit testing of a business process enables you to register test handlers that verify whether the specific services are requested correctly, and also provide test responses for the requested services.

To test the interaction with external services, use the default TestWorkItemHandler handler. You can register the TestWorkItemHandler to collect all the work items of a particular type. Also, TestWorkItemHandler contains data related to a task. A work item represents one unit of work, such as sending a specific email or invoking a specific service. The TestWorkItemHandler verifies whether a specific work item is requested during an execution of a process, and the associated data is correct.

The following example shows how to verify an email task and whether an exception is raised if the email is not sent. The unit test uses a test handler that is executed when an email is requested and enables you to test the data related to the email, such as the sender and recipient. Once the abortWorkItem() method notifies the engine about the email delivery failure, the unit test verifies that the process handles such a case by generating an error and logging the action. In this case, the process instance is eventually aborted.

Figure 8.2. Example email process

Example email process for testing
public void testProcess2() {

    createRuntimeManager("sample-process.bpmn");

    RuntimeEngine runtimeEngine = getRuntimeEngine();

    KieSession ksession = runtimeEngine.getKieSession();

    TestWorkItemHandler testHandler = getTestWorkItemHandler();

    ksession.getWorkItemManager().registerWorkItemHandler("Email", testHandler);

    ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello2");

    assertProcessInstanceActive(processInstance.getId(), ksession);
    assertNodeTriggered(processInstance.getId(), "StartProcess", "Email");

    WorkItem workItem = testHandler.getWorkItem();
    assertNotNull(workItem);
    assertEquals("Email", workItem.getName());
    assertEquals("me@mail.com", workItem.getParameter("From"));
    assertEquals("you@mail.com", workItem.getParameter("To"));

    ksession.getWorkItemManager().abortWorkItem(workItem.getId());
    assertProcessInstanceNotActive(processInstance.getId(), ksession);
    assertNodeTriggered(processInstance.getId(), "Gateway", "Failed", "Error");

}

Chapter 9. Managing log files

Red Hat Process Automation Manager automatically removes the following runtime data as part of required maintenance:

  • Process instance data, which is removed upon process instance completion.
  • Work item data, which is removed upon work item completion.
  • Task instance data, which is removed upon completion of a process to which the given task belongs.

Runtime data that is not cleaned automatically includes session information data, whose removal depends on the selected runtime strategy:

  • Singleton strategy ensures that runtime data of session information is not automatically removed.
  • Per request strategy allows automatic removal when a request is terminated.
  • Per process instance strategy automatically removes session information data when the process instance that is mapped to the session is completed or aborted.

To keep track of process instances, Red Hat Process Automation Manager provides audit data tables. There are two ways to manage and maintain the audit data tables: automatic cleanup jobs and manual cleanup.

9.1. Setting up automatic cleanup job

You can set up an automatic cleanup job in Business Central.

Procedure

  1. In Business Central, go to Manage > Jobs.
  2. Click New Job.
  3. Enter values for Business Key, Due On, and Retries fields.
  4. Enter the following command into the Type field.

    org.jbpm.executor.commands.LogCleanupCommand
  5. To use the parameters, complete the following steps:

    For the full list of parameters, see Section 9.3, “Removing logs from the database”.

    1. Open the Advanced tab.
    2. Click Add Parameter.
    3. Enter a parameter in the Key column and enter a parameter value in the Value column.
  6. Click Create.

The automatic cleanup job is created.

9.2. Manual cleanup

To perform manual cleanup, you can use the audit API. The audit API is divided into the following areas:

Table 9.1. Audit API areas

NameDescription

Process audit

It is used to clean up process, node and variable logs that are accessible in the jbpm-audit module.

For example, you can access the module as follows: org.jbpm.process.audit.JPAAuditLogService

Task audit

It is used to clean up tasks and events that are accessible in the jbpm-human-task-audit module.

For example, you can access the module as follows: org.jbpm.services.task.audit.service.TaskJPAAuditService

Executor jobs

It is used to clean up executor jobs and errors that are accessible in the jbpm-executor module.

For example, you can access the module as follows: org.jbpm.executor.impl.jpa.ExecutorJPAAuditService
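For example, the process audit area can be cleaned up programmatically. The following is a minimal sketch, assuming an EntityManagerFactory (emf) configured for the engine persistence unit:

import org.jbpm.process.audit.JPAAuditLogService;

// Remove process, node, and variable logs through the process audit API
JPAAuditLogService auditService = new JPAAuditLogService(emf);
auditService.clear();
auditService.dispose();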

9.3. Removing logs from the database

Use the LogCleanupCommand executor command to clean up data that occupies database space. The LogCleanupCommand consists of logic to automatically clean up all or selected data.

There are several configuration options that you can use with the LogCleanupCommand:

Table 9.2. LogCleanupCommand parameters table

NameDescriptionIs Exclusive

SkipProcessLog

Indicates whether cleanup of process instance, node instance, and process variable logs is skipped when the command runs. The default value is false.

No, it is used with other parameters.

SkipTaskLog

Indicates if the task audit and event log cleanup are skipped. The default value is false.

No, it is used with other parameters.

SkipExecutorLog

Indicates if Red Hat Process Automation Manager executor entries cleanup is skipped. The default value is false.

No, it is used with other parameters.

SingleRun

Indicates if a job routine runs only once. The default value is false.

No, it is used with other parameters.

NextRun

Schedules the next job execution. The default value is 24h.

For example, set to 12h for jobs to be executed every 12 hours. The schedule is ignored if you set SingleRun to true, unless you set both SingleRun and NextRun. If both are set, the NextRun schedule takes priority. The ISO format can be used to set the precise date.

No, it is used with other parameters.

OlderThan

Logs that are older than the specified date are removed. The date format is YYYY-MM-DD. Usually, this parameter is used for single run jobs.

Yes, it is not used with OlderThanPeriod parameter.

OlderThanPeriod

Logs that are older than the specified timer expression are removed. For example, set 30d to remove logs, which are older than 30 days.

Yes, it is not used with OlderThan parameter.

ForProcess

Specifies process definition ID for logs that are removed.

No, it is used with other parameters.

ForDeployment

Specifies deployment ID of the logs that are removed.

No, it is used with other parameters.

EmfName

Persistence unit name that is used to perform delete operation.

Not applicable

Note

LogCleanupCommand does not remove any active instances, such as running process instances, task instances, or executor jobs.
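The same parameters can also be supplied when you schedule LogCleanupCommand programmatically through the executor API. The following is a minimal sketch; the parameter values are illustrative and an ExecutorService instance is assumed to be available:

import org.kie.api.executor.CommandContext;

// Schedule a single cleanup run that removes logs older than 30 days
CommandContext ctx = new CommandContext();
ctx.setData("SingleRun", "true");
ctx.setData("OlderThanPeriod", "30d");
executorService.scheduleRequest("org.jbpm.executor.commands.LogCleanupCommand", ctx);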

Chapter 10. Process definitions and process instances in Business Central

A process definition is a Business Process Model and Notation (BPMN) 2.0 file that serves as a container for a process and its BPMN diagram. The process definition shows all of the available information about the business process, such as any associated subprocesses or the number of users and groups that are participating in the selected definition.

A process definition also defines the import entry for imported processes that the process definition uses, and the relationship entries.

BPMN2 source of a process definition

<definitions id="Definition"
               targetNamespace="http://www.jboss.org/drools"
               typeLanguage="http://www.java.com/javaTypes"
               expressionLanguage="http://www.mvel.org/2.0"
               xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"Rule Task
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
               xmlns:g="http://www.jboss.org/drools/flow/gpd"
               xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
               xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
               xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
               xmlns:tns="http://www.jboss.org/drools">

    <process>
      PROCESS
    </process>

    <bpmndi:BPMNDiagram>
     BPMN DIAGRAM DEFINITION
    </bpmndi:BPMNDiagram>

    </definitions>

After you have created, configured, and deployed your project that includes your business processes, you can view the list of all the process definitions in Business Central MenuManageProcess Definitions. You can refresh the list of deployed process definitions at any time by clicking the refresh button in the upper-right corner.

The process definition list shows all the available process definitions that are deployed into the platform. Click any of the process definitions listed to show the corresponding process definition details. This displays information about the process definition, such as if there is a sub-process associated with it, or how many users and groups exist in the process definition. The Diagram tab in the process definition details page contains the BPMN2-based diagram of the process definition.

Within each selected process definition, you can start a new process instance for the process definition by clicking the New Process Instance button in the upper-right corner. Process instances that you start from the available process definitions are listed in MenuManageProcess Instances.

You can also define the default pagination option for all users under the Manage drop-down menu (Process Definition, Process Instances, Tasks, Jobs, and Execution Errors) and in MenuTrackTask Inbox.

For more information about process and task administration in Business Central, see Managing and monitoring business processes in Business Central.

10.1. Starting a process instance from the process definitions page

You can start a process instance in MenuManageProcess Definitions. This is useful for environments where you are working with several projects or process definitions at the same time.

Prerequisites

  • A project with a process definition has been deployed in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Definitions.
  2. Select the process definition for which you want to start a new process instance from the list. The details page of the definition opens.
  3. Click New Process Instance in the upper-right corner to start a new process instance.
  4. Provide any required information for the process instance.
  5. Click Submit to create the process instance.
  6. View the new process instance in MenuManageProcess Instances.

10.2. Starting a process instance from the process instances page

You can create new process instances or view the list of all the running process instances in MenuManageProcess Instances.

Prerequisites

  • A project with a process definition has been deployed in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. Click New Process Instance in the upper-right corner and select the process definition for which you want to start a new process instance from the drop-down list.
  3. Provide any information required to start a new process instance.
  4. Click Start to create the process instance.

    The new process instance appears in the Manage Process Instances list.

10.3. Process definitions in XML

You can create processes directly in XML format using the BPMN 2.0 specifications. The syntax of these XML processes is defined using the BPMN 2.0 XML Schema Definition.

A process XML file consists of the following core sections:

  • process: This is the top part of the process XML that contains the definition of the different nodes and their properties. The process XML file consists of exactly one <process> element. This element contains parameters related to the process (its type, name, ID, and package name), and consists of three subsections: a header section where process-level information such as variables, globals, imports, and lanes are defined, a nodes section that defines each of the nodes in the process, and a connections section that contains the connections between all the nodes in the process.
  • BPMNDiagram: This is the lower part of the process XML file that contains all graphical information, such as the location of the nodes. The nodes section contains a specific element for each node and defines the various parameters and any sub-elements for that node type.

The following process XML file fragment shows a simple process that contains a sequence of a start event, a script task that prints "Hello World" to the console, and an end event:

<?xml version="1.0" encoding="UTF-8"?>

<definitions
  id="Definition"
  targetNamespace="http://www.jboss.org/drools"
  typeLanguage="http://www.java.com/javaTypes"
  expressionLanguage="http://www.mvel.org/2.0"
  xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
  xmlns:g="http://www.jboss.org/drools/flow/gpd"
  xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
  xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
  xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
  xmlns:tns="http://www.jboss.org/drools">

  <process processType="Private" isExecutable="true" id="com.sample.hello" name="Hello Process">
    <!-- nodes -->
    <startEvent id="_1" name="Start" />

    <scriptTask id="_2" name="Hello">
      <script>System.out.println("Hello World");</script>
    </scriptTask>

    <endEvent id="_3" name="End" >
      <terminateEventDefinition/>
    </endEvent>

    <!-- connections -->

    <sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
    <sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />
  </process>

  <bpmndi:BPMNDiagram>
    <bpmndi:BPMNPlane bpmnElement="com.sample.hello" >

      <bpmndi:BPMNShape bpmnElement="_1" >
        <dc:Bounds x="16" y="16" width="48" height="48" />
      </bpmndi:BPMNShape>

      <bpmndi:BPMNShape bpmnElement="_2" >
        <dc:Bounds x="96" y="16" width="80" height="48" />
      </bpmndi:BPMNShape>

      <bpmndi:BPMNShape bpmnElement="_3" >
        <dc:Bounds x="208" y="16" width="48" height="48" />
      </bpmndi:BPMNShape>

      <bpmndi:BPMNEdge bpmnElement="_1-_2" >
        <di:waypoint x="40" y="40" />
        <di:waypoint x="136" y="40" />
      </bpmndi:BPMNEdge>

      <bpmndi:BPMNEdge bpmnElement="_2-_3" >
        <di:waypoint x="136" y="40" />
        <di:waypoint x="232" y="40" />
      </bpmndi:BPMNEdge>

    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>

</definitions>
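As a minimal illustration of how a definition like this can be executed outside Business Central, the following Java sketch loads the project KIE base from the class path and starts the process by its ID. It assumes that the BPMN2 file is packaged in a KJAR with a default KIE session defined in kmodule.xml; the class name is illustrative only.

Example of starting the process programmatically (illustrative sketch)

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

public class HelloProcessRunner {

    public static void main(String[] args) {
        // Load the KIE container from the class path (kmodule.xml and BPMN2 resources)
        KieServices kieServices = KieServices.Factory.get();
        KieContainer kieContainer = kieServices.getKieClasspathContainer();
        KieSession kieSession = kieContainer.newKieSession();

        try {
            // Start the process defined above by its process ID
            ProcessInstance instance = kieSession.startProcess("com.sample.hello");
            System.out.println("Process finished with state: " + instance.getState());
        } finally {
            kieSession.dispose();
        }
    }
}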

Chapter 11. Forms in Business Central

A form is a layout definition for a page, defined as HTML, that is displayed as a dialog window to the user during process and task instantiation. Task forms acquire data from a user for both the process and task instance execution, whereas process forms take input and output from process variables.

The input is then mapped to the task using the data input assignment, which you can use inside of a task. When the task is completed, the data is mapped as a data output assignment to provide the data to the parent process instance.

11.1. Form Modeler

Red Hat Process Automation Manager provides a custom editor for defining forms called Form Modeler. With Form Modeler, you can generate forms for data objects, task forms, and process start forms without writing code. Form Modeler includes a widget library for binding multiple data types and a callback mechanism to send notifications when form values change. Form Modeler uses bean-based validation and supports binding form fields to static or dynamic models.

Form Modeler includes the following features:

  • Form modeling user interface for forms
  • Form auto-generation from the data model or Java objects
  • Data binding for Java objects
  • Formula and expressions
  • Customized forms layouts
  • Forms embedding

Form Modeler comes with predefined field types that you place onto the canvas to create a form.

Figure 11.1. Example mortgage loan application form


11.2. Generating process and task forms in Business Central

You can generate a process form from your business process that is displayed at process instantiation to the user who instantiated the process. You can also generate a task form from your business process that is displayed at user task instantiation, when the execution flow reaches the task, to the actor of the user task.

Procedure

  1. In Business Central, go to MenuDesignProjects.
  2. Click the project name to open the asset view and then click the business process name.
  3. In the process designer, click the process task that you want to create a form for (if applicable).
  4. In the upper-right toolbar, click the Form Generation icon and select the forms that you want to generate:

    • Generate process form: Generates the form for the entire process. This is the initial form that a user must complete when the process instance is started.
    • Generate all forms: Generates the form for the entire process and for all user tasks.
    • Generate forms for selection: Generates the forms for the selected user task nodes.

    Figure 11.2. Form generation menu


    The forms are created in the root directory of your project.

  5. Go to the root directory of your project in Business Central, click the new form name, and use the Form Modeler to customize the form to meet your requirements.

11.3. Manually creating forms in Business Central

You can create task and process forms manually from your project asset view. This is an alternative to generating forms from your business process. For example, Form Modeler supports creating forms from external data objects.

Procedure

  1. In Business Central, go to MenuDesignProjects and click the project name.
  2. Click Add AssetForm.
  3. Provide the following information in the Create new Form window:

    • Form name (must be unique)
    • Package name
    • Model type: Select either Business Process or Data Object.

      • For the Business Process model type, select your business process from the Select Process drop-down menu, and then select the form that you want to create from the Select Form drop-down menu.
      • For the Data Object model type, select one of your project data objects from the Select Data Object from Project drop-down menu.
  4. Click Ok to open the Form Modeler.
  5. In the Components view on the left side of the Form Modeler, expand the Model Fields and Form Controls menus and create a new form by dragging your required fields and form controls to the canvas.
  6. Click Save to save your changes.

11.4. Document attachments in a form or process

Red Hat Process Automation Manager supports document attachments in forms using the Document form field. With the Document form field, you can upload documents that are required as part of a form or process.

To enable document attachments in forms and processes, complete the following procedures:

  1. Set the document marshalling strategy.
  2. Create a document variable in the business process.
  3. Map the task inputs and outputs to the document variable.

11.4.1. Setting the document marshalling strategy

The document marshalling strategy for your project determines where documents are stored for use with forms and processes. The default document marshalling strategy in Red Hat Process Automation Manager is org.jbpm.document.marshalling.DocumentMarshallingStrategy. This strategy uses a DocumentStorageServiceImpl class that stores documents locally in your PROJECT_HOME/.docs folder. You can set this document marshalling strategy or a custom document marshalling strategy for your project in Business Central or in the kie-deployment-descriptor.xml file.

Procedure

  1. In Business Central, go to MenuDesignProjects.
  2. Select a project. The project Assets window opens.
  3. Click the Settings tab.

    Figure 11.3. Settings tab

  4. Click DeploymentsMarshalling StrategiesAdd Marshalling Strategy.
  5. In the Name field, enter the identifier of a document marshalling strategy, and in the Resolver drop-down menu, select the corresponding resolver type:

    • For single documents: Enter org.jbpm.document.marshalling.DocumentMarshallingStrategy as the document marshalling strategy and set the resolver type to Reflection.
    • For multiple documents: Enter new org.jbpm.document.marshalling.DocumentCollectionImplMarshallingStrategy(new org.jbpm.document.marshalling.DocumentMarshallingStrategy()) as the document marshalling strategy and set the resolver type to MVEL.
    • For custom document support: Enter the identifier of the custom document marshalling strategy and select the relevant resolver type.
  6. Click Test to validate your deployment descriptor file.
  7. Click Deploy to build and deploy the updated project.

    Alternatively, if you are not using Business Central, you can navigate to PROJECT_HOME/src/main/resources/META-INF/kie-deployment-descriptor.xml (if applicable) and edit the deployment descriptor file with the required <marshalling-strategies> elements.

  8. Click Save.

Example deployment descriptor file with document marshalling strategy for multiple documents

<deployment-descriptor
    xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <persistence-unit>org.jbpm.domain</persistence-unit>
  <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
  <audit-mode>JPA</audit-mode>
  <persistence-mode>JPA</persistence-mode>
  <runtime-strategy>SINGLETON</runtime-strategy>
  <marshalling-strategies>
    <marshalling-strategy>
      <resolver>mvel</resolver>
      <identifier>new org.jbpm.document.marshalling.DocumentCollectionImplMarshallingStrategy(new org.jbpm.document.marshalling.DocumentMarshallingStrategy());</identifier>
    </marshalling-strategy>
  </marshalling-strategies>
</deployment-descriptor>

11.4.1.1. Using a custom document marshalling strategy for a content management system (CMS)

The document marshalling strategy for your project determines where documents are stored for use with forms and processes. The default document marshalling strategy in Red Hat Process Automation Manager is org.jbpm.document.marshalling.DocumentMarshallingStrategy. This strategy uses a DocumentStorageServiceImpl class that stores documents locally in your PROJECT_HOME/.docs folder. If you want to store form and process documents in a custom location, such as in a centralized content management system (CMS), add a custom document marshalling strategy to your project. You can set this document marshalling strategy in Business Central or in the kie-deployment-descriptor.xml file directly.

Procedure

  1. Create a custom marshalling strategy .java file that includes an implementation of the org.kie.api.marshalling.ObjectMarshallingStrategy interface. This interface enables you to implement the variable persistence required for your custom document marshalling strategy.

    The following methods in this interface help you create your strategy:

    • boolean accept(Object object): Determines if the specified object can be marshalled by the strategy
    • byte[] marshal(Context context, ObjectOutputStream os, Object object): Marshals the specified object and returns the marshalled object as byte[]
    • Object unmarshal(Context context, ObjectInputStream is, byte[] object, ClassLoader classloader): Reads the object received as byte[] and returns the unmarshalled object
    • void write(ObjectOutputStream os, Object object): Same as the marshal method, provided for backward compatibility
    • Object read(ObjectInputStream os): Same as the unmarshal method, provided for backward compatibility

    The following code sample is an example ObjectMarshallingStrategy implementation for storing and retrieving data from a Content Management Interoperability Services (CMIS) system:

    Example implementation for storing and retrieving data from a CMIS system

    package org.jbpm.integration.cmis.impl;
    
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.HashMap;
    
    import org.apache.chemistry.opencmis.client.api.Folder;
    import org.apache.chemistry.opencmis.client.api.Session;
    import org.apache.chemistry.opencmis.commons.data.ContentStream;
    import org.apache.commons.io.IOUtils;
    import org.drools.core.common.DroolsObjectInputStream;
    import org.jbpm.document.Document;
    import org.jbpm.integration.cmis.UpdateMode;
    
    import org.kie.api.marshalling.ObjectMarshallingStrategy;
    
    public class OpenCMISPlaceholderResolverStrategy extends OpenCMISSupport implements ObjectMarshallingStrategy {
    
    	private String user;
    	private String password;
    	private String url;
    	private String repository;
    	private String contentUrl;
    	private UpdateMode mode = UpdateMode.OVERRIDE;
    
    	public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository) {
    		this.user = user;
    		this.password = password;
    		this.url = url;
    		this.repository = repository;
    	}
    
    	public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, UpdateMode mode) {
    		this.user = user;
    		this.password = password;
    		this.url = url;
    		this.repository = repository;
    		this.mode = mode;
    	}
    
    	   public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, String contentUrl) {
    	        this.user = user;
    	        this.password = password;
    	        this.url = url;
    	        this.repository = repository;
    	        this.contentUrl = contentUrl;
    	    }
    
    	    public OpenCMISPlaceholderResolverStrategy(String user, String password, String url, String repository, String contentUrl, UpdateMode mode) {
    	        this.user = user;
    	        this.password = password;
    	        this.url = url;
    	        this.repository = repository;
    	        this.contentUrl = contentUrl;
    	        this.mode = mode;
    	    }
    
    	public boolean accept(Object object) {
    		if (object instanceof Document) {
    			return true;
    		}
    		return false;
    	}
    
    	public byte[] marshal(Context context, ObjectOutputStream os, Object object) throws IOException {
    		Document document = (Document) object;
    		Session session = getRepositorySession(user, password, url, repository);
    		try {
    			if (document.getContent() != null) {
    				String type = getType(document);
    				if (document.getIdentifier() == null || document.getIdentifier().isEmpty()) {
    					String location = getLocation(document);
    
    					Folder parent = findFolderForPath(session, location);
    					if (parent == null) {
    						parent = createFolder(session, null, location);
    					}
    					org.apache.chemistry.opencmis.client.api.Document doc = createDocument(session, parent, document.getName(), type, document.getContent());
    					document.setIdentifier(doc.getId());
    					document.addAttribute("updated", "true");
    				} else {
    					if (document.getContent() != null && "true".equals(document.getAttribute("updated"))) {
    						org.apache.chemistry.opencmis.client.api.Document doc = updateDocument(session, document.getIdentifier(), type, document.getContent(), mode);
    
    						document.setIdentifier(doc.getId());
    						document.addAttribute("updated", "false");
    					}
    				}
    			}
    			ByteArrayOutputStream buff = new ByteArrayOutputStream();
    	        ObjectOutputStream oos = new ObjectOutputStream( buff );
    	        oos.writeUTF(document.getIdentifier());
    	        oos.writeUTF(object.getClass().getCanonicalName());
    	        oos.close();
    	        return buff.toByteArray();
    		} finally {
    			session.clear();
    		}
    	}
    
    	public Object unmarshal(Context context, ObjectInputStream ois, byte[] object, ClassLoader classloader) throws IOException, ClassNotFoundException {
    		DroolsObjectInputStream is = new DroolsObjectInputStream( new ByteArrayInputStream( object ), classloader );
    		String objectId = is.readUTF();
    		String canonicalName = is.readUTF();
    		Session session = getRepositorySession(user, password, url, repository);
    		try {
    			org.apache.chemistry.opencmis.client.api.Document doc = (org.apache.chemistry.opencmis.client.api.Document) findObjectForId(session, objectId);
    			Document document = (Document) Class.forName(canonicalName).newInstance();
    			document.setAttributes(new HashMap<String, String>());
    
    			document.setIdentifier(objectId);
    			document.setName(doc.getName());
    			document.setLastModified(doc.getLastModificationDate().getTime());
    			document.setSize(doc.getContentStreamLength());
    			document.addAttribute("location", getFolderName(doc.getParents()) + getPathAsString(doc.getPaths()));
    			if (doc.getContentStream() != null && contentUrl == null) {
    				ContentStream stream = doc.getContentStream();
    				document.setContent(IOUtils.toByteArray(stream.getStream()));
    				document.addAttribute("updated", "false");
    				document.addAttribute("type", stream.getMimeType());
    			} else {
    			    document.setLink(contentUrl + document.getIdentifier());
    			}
    			return document;
    		} catch(Exception e) {
    			throw new RuntimeException("Cannot read document from CMIS", e);
    		} finally {
    			is.close();
    			session.clear();
    		}
    	}
    
    	public Context createContext() {
    		return null;
    	}
    
    	// For backward compatibility with previous serialization mechanism
    	public void write(ObjectOutputStream os, Object object) throws IOException {
    		Document document = (Document) object;
    		Session session = getRepositorySession(user, password, url, repository);
    		try {
    			if (document.getContent() != null) {
    				String type = document.getAttribute("type");
    				if (document.getIdentifier() == null) {
    					String location = document.getAttribute("location");
    
    					Folder parent = findFolderForPath(session, location);
    					if (parent == null) {
    						parent = createFolder(session, null, location);
    					}
    					org.apache.chemistry.opencmis.client.api.Document doc = createDocument(session, parent, document.getName(), type, document.getContent());
    					document.setIdentifier(doc.getId());
    					document.addAttribute("updated", "false");
    				} else {
    					if (document.getContent() != null && "true".equals(document.getAttribute("updated"))) {
    						org.apache.chemistry.opencmis.client.api.Document doc = updateDocument(session, document.getIdentifier(), type, document.getContent(), mode);
    
    						document.setIdentifier(doc.getId());
    						document.addAttribute("updated", "false");
    					}
    				}
    			}
    			ByteArrayOutputStream buff = new ByteArrayOutputStream();
    	        ObjectOutputStream oos = new ObjectOutputStream( buff );
    	        oos.writeUTF(document.getIdentifier());
    	        oos.writeUTF(object.getClass().getCanonicalName());
    	        oos.close();
    		} finally {
    			session.clear();
    		}
    	}
    
    	public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException {
    		String objectId = os.readUTF();
    		String canonicalName = os.readUTF();
    		Session session = getRepositorySession(user, password, url, repository);
    		try {
    			org.apache.chemistry.opencmis.client.api.Document doc = (org.apache.chemistry.opencmis.client.api.Document) findObjectForId(session, objectId);
    			Document document = (Document) Class.forName(canonicalName).newInstance();
    
    			document.setIdentifier(objectId);
    			document.setName(doc.getName());
    			document.addAttribute("location", getFolderName(doc.getParents()) + getPathAsString(doc.getPaths()));
    			if (doc.getContentStream() != null) {
    				ContentStream stream = doc.getContentStream();
    				document.setContent(IOUtils.toByteArray(stream.getStream()));
    				document.addAttribute("updated", "false");
    				document.addAttribute("type", stream.getMimeType());
    			}
    			return document;
    		} catch(Exception e) {
    			throw new RuntimeException("Cannot read document from CMIS", e);
    		} finally {
    			session.clear();
    		}
    	}
    
    }

  2. In Business Central, go to MenuDesignProjects.
  3. Click the project name and click Settings.

    Figure 11.4. Settings tab

  4. Click DeploymentsMarshalling StrategiesAdd Marshalling Strategy.
  5. In the Name field, enter the identifier of the custom document marshalling strategy, such as org.jbpm.integration.cmis.impl.OpenCMISPlaceholderResolverStrategy in this example.
  6. Select the relevant option from the Resolver drop-down menu, such as Reflection in this example.
  7. Click Test to validate your deployment descriptor file.
  8. Click Deploy to build and deploy the updated project.

    Alternatively, if you are not using Business Central, you can navigate to PROJECT_HOME/src/main/resources/META-INF/kie-deployment-descriptor.xml (if applicable) and edit the deployment descriptor file with the required <marshalling-strategies> elements.

    Example deployment descriptor file with custom document marshalling strategy

    <deployment-descriptor
        xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <persistence-unit>org.jbpm.domain</persistence-unit>
      <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
      <audit-mode>JPA</audit-mode>
      <persistence-mode>JPA</persistence-mode>
      <runtime-strategy>SINGLETON</runtime-strategy>
      <marshalling-strategies>
        <marshalling-strategy>
          <resolver>reflection</resolver>
          <identifier>
            org.jbpm.integration.cmis.impl.OpenCMISPlaceholderResolverStrategy
          </identifier>
        </marshalling-strategy>
      </marshalling-strategies>
    </deployment-descriptor>

  9. To enable documents stored in a custom location to be attached to forms and processes, create a document variable in the relevant processes and map task inputs and outputs to that document variable in Business Central.

11.4.2. Creating a document variable in a business process

After you set a document marshalling strategy, create a document variable in the related process to upload documents to a human task and for the document or documents to be visible in the Process Instances view in Business Central.

Prerequisites

Procedure

  1. In Business Central, go to MenuDesignProjects.
  2. Click the project name to open the asset view and click the business process name.
  3. Click the canvas and click diagram properties on the right side of the window to open the Properties panel.
  4. Expand Process Data, click the add (+) button, and enter the following values:

    • Name: document
    • Custom Type: org.jbpm.document.Document for a single document or org.jbpm.document.DocumentCollection for multiple documents
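If you start processes programmatically rather than through a form, the document variable can be populated through the KIE API. The following sketch is illustrative only: the process ID com.sample.documentprocess and the file content are assumptions, and DocumentImpl is used as the default Document implementation provided with jBPM.

Example of setting a Document process variable when starting a process (illustrative sketch)

import java.nio.charset.StandardCharsets;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import org.jbpm.document.Document;
import org.jbpm.document.service.impl.DocumentImpl;
import org.kie.api.runtime.KieSession;

public class DocumentProcessStarter {

    public void startWithDocument(KieSession kieSession) {
        byte[] content = "example attachment content".getBytes(StandardCharsets.UTF_8);

        // Build the document object that is bound to the "document" process variable
        Document attachment = new DocumentImpl();
        attachment.setName("example.txt");
        attachment.setSize(content.length);
        attachment.setLastModified(new Date());
        attachment.setContent(content);

        Map<String, Object> params = new HashMap<>();
        params.put("document", attachment);

        // "com.sample.documentprocess" is a hypothetical process ID
        kieSession.startProcess("com.sample.documentprocess", params);
    }
}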

11.4.3. Mapping task inputs and outputs to the document variable

If you want to view or modify the attachments inside of task forms, create assignments inside of the task inputs and outputs.

Prerequisites

  • You have a project that contains a business process asset that has at least one user task.

Procedure

  1. In Business Central, go to MenuDesignProjects.
  2. Click the project name to open the asset view and click the business process name.
  3. Click a user task and click diagram properties on the right side of the window to open the Properties panel.
  4. Expand Implementation/Execution and, next to Assignments, click the edit icon to open the Data I/O window.
  5. Next to Data Inputs and Assignments, click Add and enter the following values:

    • Name: taskdoc_in
    • Data Type: org.jbpm.document.Document for a single document or org.jbpm.document.DocumentCollection for multiple documents
    • Source: document
  6. Next to Data Outputs and Assignments, click Add and enter the following values:

    • Name: taskdoc_out
    • Data Type: org.jbpm.document.Document for a single document or org.jbpm.document.DocumentCollection for multiple documents
    • Target: document

    The Source and Target fields contain the name of the process variable you created earlier.

  7. Click Save.

Chapter 12. Advanced process concepts and tasks

12.1. Invoking a Decision Model and Notation (DMN) service in a business process

You can use Decision Model and Notation (DMN) to model a decision service graphically in a decision requirements diagram (DRD) in Business Central and then invoke that DMN service as part of a business process in Business Central. Business processes interact with DMN services by identifying the DMN service and mapping business data between DMN inputs and the business process properties.

As an illustration, this procedure uses an example TrainStation project that defines train routing logic. This example project contains the following data object and DMN components designed in Business Central for the routing decision logic:

Example Train object

import java.math.BigDecimal;

public class Train {

     private String departureStation;

     private String destinationStation;

     private BigDecimal railNumber;

     // Getters and setters
}

Figure 12.1. Example Compute Rail DMN model


Figure 12.2. Example Rail DMN decision table


Figure 12.3. Example tTrain DMN data type


For more information about creating DMN models in Business Central, see Designing a decision service using DMN models.

Prerequisites

  • All required data objects and DMN model components are defined in the project.

Procedure

  1. In Business Central, go to MenuDesignProjects and click the project name.
  2. Select or create the business process asset in which you want to invoke the DMN service.
  3. In the process designer, use the left toolbar to drag and drop BPMN components as usual to define your overall business process logic, connections, events, tasks, or other elements.
  4. To incorporate a DMN service in the business process, add a Business Rule task from the left toolbar or from the start-node options and insert the task in the relevant location in the process flow.

    For this example, the following Accept Train business process incorporates the DMN service in the Route To Rail node:

    Figure 12.4. Example Accept Train business process with a DMN service

  5. Select the business rule task node that you want to use for the DMN service, click Properties in the upper-right corner of the process designer, and under Implementation/Execution, define the following fields:

    • Rule Language: Select DMN.
    • Namespace: Enter the unique namespace from the DMN model file. Example: https://www.drools.org/kie-dmn
    • Decision Name: Enter the name of the DMN decision node that you want to invoke in the selected process node. Example: Rail
    • DMN Model Name: Enter the DMN model name. Example: Compute Rail

      Important

      When you explore the root node, ensure that the Namespace and DMN Model Name fields contain the same values in the BPMN diagram as in the DMN model.

  6. Under Data AssignmentsAssignments, click the Edit icon and add the DMN input and output data to define the mapping between the DMN service and the process data.

    For the Route To Rail DMN service node in this example, you add an input assignment for Train that corresponds to the input node in the DMN model, and add an output assignment for Rail that corresponds to the decision node in the DMN model. The Data Type must match the type that you set for that node in the DMN model, and the Source and Target definition is the relevant variable or field for the specified object.

    Figure 12.5. Example input and output mapping for the Route To Rail DMN service node

  7. Click Save to save the input and output data assignments.
  8. Define the remainder of your business process according to how you want the completed DMN service to be handled.

    For this example, the PropertiesImplementation/ExecutionOn Exit Action value is set to the following code to store the rail number after the Route To Rail DMN service is complete:

    Example code for On Exit Action

    train.setRailNumber(rail);

    If the rail number is not computed, the process reaches a No Appropriate Rail end error node that is defined with the following condition expression:

    Figure 12.6. Example condition for No Appropriate Rail end error node


    If the rail number is computed, the process reaches an Accept Train script task that is defined with the following condition expression:

    Figure 12.7. Example condition for Accept Train script task node


    The Accept Train script task also uses the following script in PropertiesImplementation/ExecutionScript to print a message about the train route and current rail:

    com.myspace.trainstation.Train t =
        (com.myspace.trainstation.Train) kcontext.getVariable("train");
    System.out.println("Train from: " + t.getDepartureStation() +
                       ", to: " + t.getDestinationStation() +
                       ",  is on rail: " + t.getRailNumber());
  9. After you define your business process with the incorporated DMN service, save your process in the process designer, deploy the project, and run the corresponding process definition to invoke the DMN service.

    For this example, when you deploy the TrainStation project and run the corresponding process definition, you open the process instance form for the Accept Train process definition and set the departure station and destination station fields to test the execution:

    Figure 12.8. Example process instance form for the Accept Train process definition


    After the process is executed, a message appears in the server log with the train route that you specified:

    Example server log output for the Accept Train process

    Train from: Zagreb, to: Belgrade,  is on rail: 1
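If you start the Accept Train process programmatically instead of through the process instance form, the train process variable can be supplied directly when the process instance is created. The following sketch is illustrative only: the KieSession is assumed to be available from your deployed TrainStation project, and the process ID trainstation.accept-train is a hypothetical value.

Example of starting the Accept Train process with a train variable (illustrative sketch)

import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.KieSession;

import com.myspace.trainstation.Train;

public class AcceptTrainStarter {

    public void startAcceptTrain(KieSession kieSession) {
        // Populate the Train data object used by the DMN input and the process scripts
        Train train = new Train();
        train.setDepartureStation("Zagreb");
        train.setDestinationStation("Belgrade");

        Map<String, Object> params = new HashMap<>();
        params.put("train", train); // matches the "train" process variable in the example

        // "trainstation.accept-train" is a hypothetical process ID
        kieSession.startProcess("trainstation.accept-train", params);
    }
}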

Chapter 13. Additional resources

Part II. Interacting with processes and tasks

As a knowledge worker, you use Business Central in Red Hat Process Automation Manager to run processes and tasks of the business process application developed by citizen developers. A business process is a series of steps that are executed as defined in the process flow. To effectively interact with processes and tasks, you must have a clear understanding of the business process and be able to determine the current step of a process or task. You can start and stop tasks; search and filter tasks and process instances; delegate, claim, and release tasks; set a due date and priority of tasks; view and add comments to tasks; and view the task history log.

Prerequisites

Chapter 14. Business processes in Business Central

A business process application created by a citizen developer in Business Central depicts the flow of the business process as a series of steps. Each step executes according to the process flow chart. A process can consist of one or more smaller discrete tasks. As a knowledge worker, you work on processes and tasks that occur during business process execution.

As an example, using Red Hat Process Automation Manager, the mortgage department of a financial institution can automate the complete business process for a mortgage loan. When a new mortgage request comes in, a new process instance is created in the mortgage application. Because all requests follow the same set of rules for processing, consistency in every step is ensured. This results in an efficient process that reduces processing time and effort.

14.1. Knowledge worker user

Consider the example of a customer account representative processing mortgage loan requests at a financial institution. As a customer account representative, you can perform the following tasks:

  • Accept and decline mortgage requests
  • Search and filter through requests
  • Delegate, claim, and release requests
  • Set a due date and priority on requests
  • View and comment on requests
  • View the request history log

Chapter 15. Knowledge worker tasks in Business Central

A task is a part of the business process flow that a given user can claim and perform. You can handle tasks in MenuTrackTask Inbox in Business Central. It displays the task list for the logged-in user. A task can be assigned to a particular user, multiple users, or to a group of users. If a task is assigned to multiple users or a group of users, it is visible in the task lists of all the users and any user can claim the task. When a task is claimed by a user, it is removed from the task list of other users.

15.1. Starting a task

You can start user tasks in MenuManageTasks and in MenuTrackTask Inbox in Business Central.

Note

Ensure that you are logged in and have appropriate permissions for starting and stopping tasks.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the Work tab of the task page, click Start. Once you start a task, its status changes to InProgress.

    You can view the status of tasks on the Task Inbox as well as on the Manage Tasks page.

Note

Only users with the process-admin role can view the task list on the Manage Tasks page. Users with the admin role can access the Manage Tasks page; however, they see only an empty task list.

15.2. Stopping a task

You can stop user tasks from the Tasks and Task Inbox pages.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the Work tab of the task page, click Complete.

15.3. Delegating a task

After tasks are created in Business Central, you can delegate them to others.

Note

A user assigned with any role can delegate, claim, or release tasks visible to the user. On the Task Inbox page, the Actual Owner column displays the name of the current owner of the task.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the task page, click the Assignments tab.
  4. In the User field, enter the name of the user or group you want to delegate the task to.
  5. Click Delegate. Once a task is delegated, the owner of the task changes.

15.4. Claiming a task

After tasks are created in Business Central, you can claim the released tasks. A user can claim a task from the Task Inbox page only if the task is assigned to a group the user belongs to.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the Work tab of the task page, click Claim.
  4. To claim the released task from the Task Inbox page, do one of the following:

    • Click Claim from the three dots in the Actions column.
    • Click Claim and Work from the three dots in the Actions column to open, view, and modify the details of a task.

The user who claims a task becomes the owner of the task.

15.5. Releasing a task

After tasks are created in Business Central, you can release your tasks for others to claim.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the task page, click Release. A released task has no owner.

15.6. Bulk actions on tasks

On the Tasks and Task Inbox pages in Business Central, you can perform bulk actions on multiple tasks in a single operation.

Note

If a specified bulk action is not permitted based on the task status, a notification is displayed and the operation is not executed on that particular task.

15.6.1. Claiming tasks in bulk

After you create tasks in Business Central, you can claim the available tasks in bulk.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To claim the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Claim.
  4. To confirm, click Claim on the Claim selected tasks window.

For each task selected, a notification is displayed showing the result.

15.6.2. Releasing tasks in bulk

You can release your owned tasks in bulk for others to claim.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To release the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Release.
  4. To confirm, click Release on the Release selected tasks window.

For each task selected, a notification is displayed showing the result.

15.6.3. Resuming tasks in bulk

If there are suspended tasks in Business Central, you can resume them in bulk.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To resume the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Resume.
  4. To confirm, click Resume on the Resume selected tasks window.

For each task selected, a notification is displayed showing the result.

15.6.4. Suspending tasks in bulk

After you create tasks in Business Central, you can suspend the tasks in bulk.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To suspend the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Suspend.
  4. To confirm, click Suspend on the Suspend selected tasks window.

For each task selected, a notification is displayed showing the result.

15.6.5. Reassigning tasks in bulk

After you create tasks in Business Central, you can reassign your tasks in bulk and delegate them to others.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To reassign the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Reassign.
  4. In the Tasks reassignment window, enter the user ID of the user to whom you want to reassign the tasks.
  5. Click Delegate.

For each task selected, a notification is displayed showing the result.

Chapter 16. Task filtering in Business Central

Business Central provides built-in filters to help you search tasks. You can filter tasks by attributes such as Status, Filter By, Process Definition Id, and Created On. It is also possible to create custom task filters using the Advanced Filters option. The newly created custom filter is added to the Saved Filters pane, which is accessible by clicking on the star icon on the left of the Task Inbox page.

16.1. Managing task list columns

In the task list on the Task Inbox and Manage Tasks windows, you can specify what columns to view and you can change the order of columns to better manage task information.

Note

Only users with the process-admin role can view the task list on the Manage Tasks page. Users with the admin role can access the Manage Tasks page; however, they see only an empty task list.

Procedure

  1. In Business Central, go to MenuManageTasks or MenuTrackTask Inbox.
  2. On the Manage Tasks or Task Inbox page, click the Show/hide columns icon to the right of Bulk Actions.
  3. Select or deselect columns to display. As you make changes to the list, columns in the task list appear or disappear.
  4. To rearrange the columns, drag the column heading to a new position. Note that your pointer must change to the icon shown in the following illustration before you can drag the column:

    column icon
  5. To save your changes as a filter, click Save Filters, enter a name, and click Save.
  6. To use your new filter, click the Saved Filters icon (star) on the left of the screen and select your filter from the list.

16.2. Filtering tasks using basic filters

Business Central provides basic filters for filtering and searching through tasks based on their attributes such as Status, Filter By, Process Definition Id, and Created On.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the filter icon on the left of the page to expand the Filters pane and select the filters you want to use:

    • Status: Filter tasks based on their status.
    • Filter By: Filter tasks based on Id, Task, Correlation Key, Actual Owner, or Process Instance Description attribute.
    • Process Definition Id: Filter tasks based on process definition IDs.
    • Created On: Filter tasks based on their creation date.

You can use the Advanced Filters option to create custom filters in Business Central.

16.3. Filtering tasks using advanced filters

You can create custom task filters using the Advanced Filters option in Business Central.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the advanced filters icon on the left of the page to expand the Advanced Filters panel.
  3. In the Advanced Filters panel, enter the filter name and description, and click Add New.
  4. Select an attribute from the Select column drop-down list, such as Name. The content of the drop-down changes to Name != value1.
  5. Click the drop-down again and choose the required logical query. For the Name attribute, choose equals to.
  6. Change the value of the text field to the name of the task you want to filter.

    Note

    The name must match the value defined in the business process of the project.

  7. Click Save and the tasks are filtered according to the filter definition.
  8. Click the star icon to open the Saved Filters pane.

    In the Saved Filters pane, you can view the saved advanced filters.

16.4. Managing tasks using default filter

You can set a task filter as a default filter using the Saved Filter option in Business Central. A default filter is executed every time the user opens the page.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox or go to MenuManageTasks.
  2. On the Task Inbox page or the Manage Tasks page, click the star icon on the left of the page to expand the Saved Filters panel.

    In the Saved Filters panel, you can view the saved advanced filters.

    Default filter selection for Tasks or Task Inbox


  3. In the Saved Filters panel, set a saved task filter as the default filter.

16.5. Viewing task variables using basic filters

Business Central provides basic filters to view task variables in Manage Tasks and Task Inbox. You can view the task variables of the task as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageTasks or go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the filter icon on the left of the page to expand the Filters panel.
  3. In the Filters panel, select the Task Name.

    The filter is applied to the current task list.

  4. Click Show/hide columns in the upper right of the task list to display the task variables of the specified task.
  5. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

16.6. Viewing task variables using advanced filters

You can use the Advanced Filters option in Business Central to view task variables in Manage Tasks and Task Inbox. When you create a filter with the task defined, you can view the task variables of the task as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageTasks or go to MenuTrackTask Inbox.
  2. On the Manage Tasks page or the Task Inbox page, click the advanced filters icon to expand the Advanced Filters panel.
  3. In the Advanced Filters panel, enter the name and description of the filter, and click Add New.
  4. From the Select column list, select the name attribute. The value will change to name != value1.
  5. From the Select column list, select equals to for the logical query.
  6. In the text field, enter the name of the task.
  7. Click Save and the filter is applied on the current task list.
  8. Click Show/hide columns in the upper right of the task list to display the task variables of the specified task.
  9. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

Chapter 17. Process instance filtering in Business Central

Business Central provides basic and advanced filters to help you filter and search through process instances. You can filter processes by attributes such as State, Errors, Filter By, Name, Start Date, and Last update. You can also create custom filters using the Advanced Filters option. The newly created custom filter is added to the Saved Filters pane, which is accessible by clicking the star icon on the left of the Manage Process Instances page.

Note

All users except those with manager or rest-all roles can access and filter process instances in Business Central.

17.1. Filtering process instances using basic filters

Business Central provides basic filters for filtering and searching through process instances based on their attributes such as State, Errors, Filter By, Name, Start Date, and Last update.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the filter icon on the left of the page to expand the Filters pane and select the filters you want to use:

    • State: Filter process instances based on their state (Active, Aborted, Completed, Pending, and Suspended).
    • Errors: Filter process instances based on whether they contain at least one error or no errors.
    • Filter By: Filter process instances based on Id, Initiator, Correlation Key, or Description attribute.
    • Name: Filter process instances based on process definition name.
    • Definition ID: The ID of the process definition.
    • Deployment ID: The ID of the deployment.
    • SLA Compliance: SLA compliance status (Aborted, Met, N/A, Pending, and Violated).
    • Parent Process ID: The ID of the parent process instance.
    • Start Date: Filter process instances based on their creation date.
    • Last update: Filter process instances based on their last modified date.

You can also use the Advanced Filters option to create custom filters in Business Central.

17.2. Filtering process instances using advanced filters

You can create custom process instance filters using the Advanced Filters option in Business Central.

Procedure

  1. In Business Central, click MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the Advanced Filters icon.
  3. In the Advanced Filters pane, enter the name and description of the filter, and click Add New.
  4. Select an attribute from the Select column drop-down list, for example, processName. The content of the drop-down changes to processName != value1.
  5. Click the drop-down again and choose the required logical query. For the processName attribute, choose equals to.
  6. Change the value of the text field to the name of the process you want to filter.

    Note

    The processName must match the value defined in the business process of the project.

  7. Click Save and the processes are filtered according to the filter definition.
  8. Click the star icon to open the Saved Filters pane.

    In the Saved Filters pane, you can view all the saved advanced filters.

17.3. Managing process instances using default filter

You can set a process instance filter as a default filter using the Saved Filter option in Business Central. A default filter is executed every time the user opens the page.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the star icon on the left of the page to expand the Saved Filters panel.

    In the Saved Filters panel, you can view the saved advanced filters.

    Default filter selection for Process Instances


  3. In the Saved Filters panel, set a saved process instance filter as the default filter.

17.4. Viewing process instance variables using basic filters

Business Central provides basic filters to view process instance variables. You can view the process instance variables of the process as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the filter icon on the left of the page to expand the Filters panel.
  3. In the Filters panel, select the Definition Id.

    The filter is applied on the current process instance list.

  4. Click Show/hide columns in the upper right of the process instances list to display the process instance variables of the specified process definition.
  5. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

17.5. Viewing process instance variables using advanced filters

You can use the Advanced Filters option in Business Central to view process instance variables. When you create a filter over the column processId, you can view the process instance variables of the process as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the advanced filters icon to expand the Advanced Filters panel.
  3. In the Advanced Filters panel, enter the name and description of the filter, and click Add New.
  4. From the Select column list, select the processId attribute. The value will change to processId != value1.
  5. From the Select column list, select equals to for the logical query.
  6. In the text field, enter the process ID you want to filter by.
  7. Click Save and the filter is applied on the current process instance list.
  8. Click Show/hide columns in the upper right of the process instances list to display the process instance variables of the specified process definition.
  9. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

Chapter 18. Configuring emails in task notification

Previously, you could send task notifications only to users or groups of users in Business Central. Now you can also specify email addresses directly.

Prerequisites

You have created a project in Business Central.

Procedure

  1. Create a business process.

    For more information about creating a business process in Business Central, see Chapter 3, Creating a business process in Business Central.

  2. Create a user task.

    For more information about creating a user task in Business Central, see Section 3.4, “Creating user tasks”.

  3. In the upper-right corner of the screen, click the Properties icon.
  4. Expand Implementation/Execution and click btn assign next to Notifications, to open the Notifications window.
  5. Click Add.
  6. In the Notifications window, enter an email address in the To: email(s) field to set the recipients of the task notification emails.

    You can add multiple email addresses separated by commas.

  7. Enter the subject and body of the email.
  8. Click Ok.

    You can see the added email addresses in the To: email(s) column in the Notifications window.

  9. Click Ok.

Chapter 19. Setting the due date and priority of a task

You can set the priority, due date, and time of a task in Business Central from the Task Inbox page. Note that not all users have permissions to set the priority and due date of a task.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the task page, click the Details tab.
  4. In the Due Date field, select the required date from the calendar and the due time from the drop-down list.
  5. In the Priority field, select the required priority.
  6. Click Update.

Chapter 20. Viewing and adding comments to a task

You can add comments to a task and also view the existing comments of a task in Business Central.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the task page, click the Work tab or the Comments tab.
  4. In the Comment field, enter a comment related to the task and click the Add Comment icon.

    All task-related comments are displayed in tabular form on both the Work and Comments tabs.

Note

To select or clear the Show task comments at work tab check box, go to the Business Central home page, click the Settings icon and select the Process Administration option. Only users with the admin role have access to enable or disable this feature.

Chapter 21. Viewing the history log of a task

You can view the history log of a task in Business Central from the Logs tab of the task. The history log lists all the events in the "Date Time: Task event" format.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the task to open it.
  3. On the task page, click the Logs tab.

    All events that take place during the task life cycle are listed in the Logs tab.

Chapter 22. Viewing the history log of a process instance

You can view the history log of a process instance in Business Central from its Logs tab. The log lists all the events in the Date Time: Event Node Type: Event Type format.

You can filter the logs based on Event Node Type and Event Type. You can also view the details of the human nodes in the logs.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Process Instances page, click the process instance whose log you want to view.
  3. On the instance page, click the Logs tab.
  4. Select the required check boxes in the Event Node Type and Event Type panes to filter the log as needed.
  5. To view additional information regarding human nodes, expand Details.
  6. Click Reset to revert to the default filter selection.

    All events that occur in a process instance life cycle are listed in the Logs tab.

Part III. Managing and monitoring business processes in Business Central

As a process administrator, you can use Business Central in Red Hat Process Automation Manager to manage and monitor process instances and tasks running on a number of projects. From Business Central you can start a new process instance, verify the state of all process instances, and abort processes. You can view the list of jobs and tasks associated with your processes, as well as understand and communicate any process errors.

Prerequisites

Chapter 23. Process monitoring

Red Hat Process Automation Manager provides real-time monitoring for your business processes and includes the following capabilities:

  • Business managers can monitor processes in real time.
  • Customers can monitor the current status of their requests.
  • Administrators can easily monitor any errors related to process execution.

Chapter 24. Process definitions and process instances in Business Central

A process definition is a Business Process Model and Notation (BPMN) 2.0 file that serves as a container for a process and its BPMN diagram. The process definition shows all of the available information about the business process, such as any associated subprocesses or the number of users and groups that are participating in the selected definition.

A process definition also defines the import entry for imported processes that the process definition uses, and the relationship entries.

BPMN2 source of a process definition

<definitions id="Definition"
               targetNamespace="http://www.jboss.org/drools"
               typeLanguage="http://www.java.com/javaTypes"
               expressionLanguage="http://www.mvel.org/2.0"
               xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"Rule Task
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
               xmlns:g="http://www.jboss.org/drools/flow/gpd"
               xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
               xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
               xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
               xmlns:tns="http://www.jboss.org/drools">

    <process>
      PROCESS
    </process>

    <bpmndi:BPMNDiagram>
     BPMN DIAGRAM DEFINITION
    </bpmndi:BPMNDiagram>

    </definitions>

After you have created, configured, and deployed your project that includes your business processes, you can view the list of all the process definitions in Business Central MenuManageProcess Definitions. You can refresh the list of deployed process definitions at any time by clicking the refresh button in the upper-right corner.

The process definition list shows all the available process definitions that are deployed into the platform. Click any of the process definitions listed to show the corresponding process definition details. This displays information about the process definition, such as if there is a sub-process associated with it, or how many users and groups exist in the process definition. The Diagram tab in the process definition details page contains the BPMN2-based diagram of the process definition.

Within each selected process definition, you can start a new process instance for the process definition by clicking the New Process Instance button in the upper-right corner. Process instances that you start from the available process definitions are listed in MenuManageProcess Instances.

You can also define the default pagination option for all users under the Manage drop-down menu (Process Definition, Process Instances, Tasks, Jobs, and Execution Errors) and in MenuTrackTask Inbox.

24.1. Starting a process instance from the process definitions page

You can start a process instance in MenuManageProcess Definitions. This is useful for environments where you are working with several projects or process definitions at the same time.

Prerequisites

  • A project with a process definition has been deployed in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Definitions.
  2. Select the process definition for which you want to start a new process instance from the list. The details page of the definition opens.
  3. Click New Process Instance in the upper-right corner to start a new process instance.
  4. Provide any required information for the process instance.
  5. Click Submit to create the process instance.
  6. View the new process instance in MenuManageProcess Instances.

24.2. Starting a process instance from the process instances page

You can create new process instances or view the list of all the running process instances in MenuManageProcess Instances.

Prerequisites

  • A project with a process definition has been deployed in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. Click New Process Instance in the upper-right corner and select the process definition for which you want to start a new process instance from the drop-down list.
  3. Provide any information required to start a new process instance.
  4. Click Start to create the process instance.

    The new process instance appears in the Manage Process Instances list.
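
The procedures above start process instances through the Business Central user interface. If your environment also exposes a KIE Server, the same operation can be performed programmatically with the KIE Server Java client API. The following is a minimal sketch, assuming a KIE Server at localhost:8080 and placeholder credentials, container ID (evaluation), process ID (evaluation.myProcess), and variable names; adapt these values to your deployment and verify the client API against your product version.

import java.util.HashMap;
import java.util.Map;

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class StartProcessInstanceExample {

    public static void main(String[] args) {
        // Placeholder KIE Server URL and credentials.
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "user", "password");
        config.setMarshallingFormat(MarshallingFormat.JSON);
        KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config);

        ProcessServicesClient processClient = kieServicesClient.getServicesClient(ProcessServicesClient.class);

        // The variables map plays the role of the form you fill in after clicking New Process Instance.
        Map<String, Object> variables = new HashMap<>();
        variables.put("employee", "wbadmin");

        // "evaluation" and "evaluation.myProcess" are placeholder container and process IDs.
        Long processInstanceId = processClient.startProcess("evaluation", "evaluation.myProcess", variables);
        System.out.println("Started process instance " + processInstanceId);
    }
}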

24.3. Generating process documentation in Business Central

In the process designer in Business Central, you can view and print a report of the process definition. The process documentation summarizes the components, data, and visual flow of the process in a format (PDF) that you can print and share more easily.

Procedure

  1. In Business Central, navigate to a project that contains a business process and select the process.
  2. In the process designer, click the Documentation tab to view the summary of the process file, and click Print in the top-right corner of the window to print the PDF report.

    Figure 24.1. Generate process documentation


Chapter 25. Process instance management

To view process instances, in Business Central, click MenuManageProcess Instances. Each row in the Manage Process Instances list represents a process instance from a particular process definition. Each execution is differentiated from all the others by the internal state of the information that the process is manipulating. Click on a process instance to view the corresponding tabs with runtime information related to the process.

Figure 25.1. Process instance tab view

Process instance tab view
  • Instance Details: Provides an overview of what is going on inside the process instance. It displays the current state of the instance and the current activity that is being executed.
  • Process Variables: Displays all of the process variables that are being manipulated by the instance, with the exception of the variables that contain documents. You can edit the process variable value and view its history.
  • Documents: Displays process documents if the process contains a variable of the type org.jbpm.Document. This enables access, download, and manipulation of the attached documents.
  • Logs: Displays process instance logs for the end users. For more information, see Interacting with processes and tasks.
  • Diagram: Tracks the progress of the process instance through the BPMN2 diagram. The node or nodes of the process flow that are in progress are highlighted in red. Reusable subprocesses appear collapsed within the parent process. Double-click on the reusable subprocess node to open its diagram from the parent process diagram.

For information on user credentials and conditions to be met to access KIE Server runtime data, see Planning a Red Hat Process Automation Manager installation.

25.1. Process instance filtering

For process instances in MenuManageProcess Instances, you can use the Filters and Advanced Filters panels to sort process instances as needed.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the Filters icon on the left of the page to select the filters that you want to use:

    • State: Filter process instances based on their state (Active, Aborted, Completed, Pending, and Suspended).
    • Errors: Filter process instances by whether they contain at least one error or no errors.
    • Filter By: Filter process instances based on the following attributes:

      • Id: Filter by process instance ID.

        Input: Numeric

      • Initiator: Filter by the user ID of the process instance initiator.

        The user ID is a unique value, and depends on the ID management system.

        Input: String

      • Correlation key: Filter by correlation key.

        Input: String

      • Description: Filter by process instance description.

        Input: String

    • Name: Filter process instances based on process definition name.
    • Definition ID: The ID of the process definition.
    • Deployment ID: The ID of the deployment to which the process instance belongs.
    • SLA Compliance: SLA compliance status (Aborted, Met, N/A, Pending, and Violated).
    • Parent Process ID: The ID of the parent process.
    • Start Date: Filter process instances based on their creation date.
    • Last update: Filter process instances based on their last modified date.

You can also use the Advanced Filters option to create custom filters in Business Central.

25.2. Creating a custom process instance list

You can view the list of all the running process instances in MenuManageProcess Instances in Business Central. From this page, you can manage the instances during run time and monitor their execution. You can customize which columns are displayed, the number of rows displayed per page, and filter the results. You can also create a custom process instance list.

Prerequisites

  • A project with a process definition has been deployed in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. In the Manage Process Instances page, click the advanced filters icon on the left to open the list of process instance Advanced Filters options.
  3. In the Advanced Filters panel, enter the name and description of the filter that you want to use for your custom process instance list, and click Add New.
  4. From the list of filter values, select the parameters and values to configure the custom process instance list, and click Save.

    A new filter is created and immediately applied to the process instances list. The filter is also saved in the Saved Filters list. You can access saved filters by clicking the star icon on the left side of the Manage Process Instances page.

25.3. Managing process instances using a default filter

You can set a process instance filter as a default filter using the Saved Filter option in Business Central. A default filter is applied every time the user opens the page.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the star icon on the left of the page to expand the Saved Filters panel.

    In the Saved Filters panel, you can view the saved advanced filters.

    Default filter selection for Process Instances

    Default filter selection for Process Instances

  3. In the Saved Filters panel, set a saved process instance filter as the default filter.

25.4. Viewing process instance variables using basic filters

Business Central provides basic filters to view process instance variables. You can view the process instance variables of the process as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the filter icon on the left of the page to expand the Filters panel.
  3. In the Filters panel, select the Definition Id and select a definition ID from the list.

    The filter is applied to the current process instance list.

  4. Click the columns icon (to the right of Bulk Actions) in the upper-right of the screen to display or hide columns in the process instances table.
  5. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

25.5. Viewing process instance variables using advanced filters

You can use the Advanced Filters option in Business Central to view process instance variables. When you create a filter over the column processId, you can view the process instance variables of the process as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. On the Manage Process Instances page, click the advanced filters icon to expand the Advanced Filters panel.
  3. In the Advanced Filters panel, enter the name and description of the filter, and click Add New.
  4. From the Select column list, select the processId attribute. The value will change to processId != value1.
  5. From the Select column list, select equals to as the logical operator.
  6. In the text field, enter the process ID.
  7. Click Save. The filter is applied to the current process instance list.
  8. Click the columns icon (to the right of Bulk Actions) in the upper-right of the process instances list to display the process instance variables of the specified process ID.
  9. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

25.6. Aborting a process instance using Business Central

If a process instance becomes obsolete, you can abort the process instance in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Instances to view the list of available process instances.
  2. Select the process instance you want to abort from the list.
  3. In the process details page, click the Abort button in the upper-right corner.

25.7. Signaling process instances from Business Central

You can signal a process instance from Business Central.

Prerequisites

  • A project with a process definition has been deployed in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Instances.
  2. Locate the required process instance, click the Actions button and select Signal from the drop-down menu.
  3. Fill the following fields:

    • Signal Name: Corresponds to the SignalRef or MessageRef attributes of the signal. This field is required.

      Note

      You can also send a Message event to the process by adding the Message- prefix in front of the MessageRef value.

    • Signal Data: Corresponds to data accompanying the signal. This field is optional.
Note

When using the Business Central user interface, you can only signal Signal intermediate catch events.

25.8. Asynchronous signal events

When several process instances from different process definitions are waiting for the same signal, they are executed sequentially in the same thread (see the sketch after the following list). However, if one of those process instances throws a runtime exception, all the other process instances are affected and the transaction is usually rolled back. To avoid this situation, Red Hat Process Automation Manager supports asynchronous signal events for the following:

  • Throwing intermediate signal events
  • End events
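
For reference, the following minimal sketch shows how a single signal is broadcast from an embedded KieSession to every process instance waiting on it; this is the synchronous scenario described above, where all waiting instances are resumed in the same thread and transaction. The signal name orderReady and the payload are placeholders.

import org.kie.api.runtime.KieSession;

public class SignalBroadcastExample {

    // Sends one signal that resumes every process instance in this session that is
    // waiting on the "orderReady" signal. Without asynchronous signal events, the
    // waiting instances run sequentially in the calling thread, so an exception in
    // one of them can roll back the whole transaction.
    public static void broadcast(KieSession ksession, Object payload) {
        ksession.signalEvent("orderReady", payload);
    }
}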

25.8.1. Configuring asynchronous signals for intermediate events

Intermediate events drive the flow of a business process. Intermediate events are used to either catch or throw an event during the execution of the business process. An intermediate event handles a particular situation that occurs during process execution. A throwing signal intermediate event produces a signal object based on the defined properties.

You can configure an asynchronous signal for intermediate events in Business Central.

Prerequisites

  • You have created a project in Business Central and it contains at least one business process asset.
  • A project with a process definition has been deployed in Business Central.

Procedure

  1. Open a business process asset.
  2. In the process designer canvas, drag and drop the Intermediate Signal from the left toolbar.
  3. In the upper-right corner, click the Properties icon to open the Properties panel.
  4. Expand Data Assignments.
  5. Click the box under the Assignments sub-section. The Task Data I/O dialog box opens.
  6. Click Add next to Data Inputs and Assignments.
  7. In the Name field, enter async as the name of the throw event.
  8. Leave the Data Type and Source fields blank.
  9. Click OK.

This automatically sets the executor service on each session, ensuring that each process instance is signaled in a separate transaction.

25.8.2. Configuring asynchronous signals for end events

End events indicate the completion of a business process. All end events, with the exception of the none and terminate end events, are throw events. A throwing signal end event is used to finish a process or subprocess flow. When the execution flow enters the element, the execution flow finishes and produces a signal identified by its SignalRef property.

You can configure an asynchronous signal for end events in Business Central.

Prerequisites

  • You have created a project in Business Central and it contains at least one business process asset.
  • A project with a process definition has been deployed in Business Central.

Procedure

  1. Open a business process asset.
  2. In the process designer canvas, drag and drop the End Signal from the left toolbar.
  3. In the upper-right corner, click the Properties icon to open the Properties panel.
  4. Expand Data Assignments.
  5. Click the box under the Assignments sub-section. The Task Data I/O dialog box opens.
  6. Click Add next to Data Inputs and Assignments.
  7. In the Name field, enter async as the name of the throw event.
  8. Leave the Data Type and Source fields blank.
  9. Click OK.

This automatically sets the executor service on each session, ensuring that each process instance is signaled in a separate transaction.

25.9. Process instance operations

The process instance administration API exposes the following operations for the process engine and individual process instances (a client API sketch follows this list):

  • get process nodes - by process instance id: Returns all nodes, including all embedded subprocesses that exist in the process instance. You must retrieve the nodes from the specified process instance to ensure that the node exists and includes a valid ID so that it can be used by other administration operations.
  • cancel node instance - by process instance id and node instance id: Cancels a node instance within a process instance using the process and node instance IDs.
  • retrigger node instance - by process instance id and node instance id: Re-triggers a node instance by canceling the active node instance and creating a new node instance of the same type using the process and node instance IDs.
  • update timer - by process instance id and timer id: Updates the timer expiration of an active timer based on the time elapsed since the timer was scheduled. For example, if a timer was initially created with a delay of one hour and after thirty minutes you set it to update in two hours, it expires in one and a half hours from the time it was updated.

    • delay: The duration after the timer expires.
    • period: The interval between the timer expiration for cycle timers.
    • repeat limit: Limits the number of expirations for cycle timers.
  • update timer relative to current time - by process instance id and timer id: Updates the timer expiration of an active timer based on the current time. For example, if a timer was initially created with a delay of one hour and after thirty minutes you set it to update in two hours, it expires in two hours from the time it was updated.
  • list timer instances - by process instance id: Returns all active timers for a specified process instance.
  • trigger node - by process instance id and node id: Triggers any node in a process instance at any time.
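
For illustration, the following minimal sketch exercises a few of these operations through the KIE Server Java client. The ProcessAdminServicesClient class and the method signatures shown here mirror the operations listed above, but treat them as assumptions and verify them against the client API shipped with your version; the container ID, process instance ID, timer ID, and node instance ID are placeholders.

import java.util.Collection;

import org.kie.server.api.model.admin.ProcessNode;
import org.kie.server.api.model.admin.TimerInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class ProcessInstanceAdminExample {

    public static void inspectAndAdjust(KieServicesClient client, String containerId, Long processInstanceId) {
        ProcessAdminServicesClient adminClient = client.getServicesClient(ProcessAdminServicesClient.class);

        // get process nodes - by process instance id
        Collection<ProcessNode> nodes = adminClient.getProcessNodes(containerId, processInstanceId);
        System.out.println("Nodes in process instance: " + nodes.size());

        // list timer instances - by process instance id
        Collection<TimerInstance> timers = adminClient.getTimerInstances(containerId, processInstanceId);
        System.out.println("Active timers: " + timers.size());

        // update timer - by process instance id and timer id:
        // reschedule timer 1 to expire two hours after it was originally scheduled
        // (delay and period are assumed to be in seconds; verify the units for your version)
        adminClient.updateTimer(containerId, processInstanceId, 1L, 7200, 0, 0);

        // cancel node instance - by process instance id and node instance id (placeholder node instance ID 2)
        adminClient.cancelNodeInstance(containerId, processInstanceId, 2L);
    }
}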

Chapter 26. Task management

Tasks that are assigned to the current user appear in MenuTrackTask Inbox in Business Central. You can click a task to open and begin working on it.

A user task can be assigned to a particular user, multiple users, or to a group. If assigned to multiple users or a group it appears in the task lists of all assigned users and any of the possible actors can claim the task. When a task is assigned to another user it no longer appears in your Task Inbox.

Task inbox

Business administrators can view and manage all user tasks from the Tasks page in Business Central, located under MenuManageTasks. Users with the admin or process-admin role can access the Tasks page but do not have access rights to view and manage tasks by default.

To manage all the tasks, a user must be specified as a process administrator by meeting any of the following conditions:

  • User is specified as task admin user. The default value is Administrator.
  • User belongs to the task administrators group. The default value is Administrators.

You can configure the user and user group assignment with the org.jbpm.ht.admin.user and org.jbpm.ht.admin.group system properties.
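
If you run the process engine embedded in your own application, a minimal sketch of setting these properties could look like the following; in a Business Central or KIE Server installation they are normally passed as JVM options (for example -Dorg.jbpm.ht.admin.user=Administrator) rather than set in code.

public class TaskAdministrationConfig {

    public static void configureTaskAdministrators() {
        // Must be set before the process engine (and its task service) starts.
        // The values shown are the defaults described above.
        System.setProperty("org.jbpm.ht.admin.user", "Administrator");
        System.setProperty("org.jbpm.ht.admin.group", "Administrators");
    }
}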

You can open, view, and modify the details of a task, such as the due date, the priority, or the task description, by clicking a task in the list. The following tabs are available in the task page:

Task details
  • Work: Displays basic details about the task and the task owner. You can click the Claim button to claim the task. To undo the claim process, click the Release button.
  • Details: Displays information such as task description, status, and due date.
  • Assignments: Displays the current owner of the task and enables you to delegate the task to another person or group.
  • Comments: Displays comments added by task user(s). You can delete an existing comment and add a new comment.
  • Admin: Displays the potential owners of the task and enables you to forward the task to another person or group. It also displays the actual owner of the task, to whom you can send a reminder.
  • Logs: Displays task logs containing task life cycle events (such as task started, claimed, or completed) and updates made to task fields (such as the task due date and priority).

You can filter the tasks based on the filter parameters available by clicking the Filters icon on the left side of the page. For more information about filtering, see Section 26.1, “Task filtering”.

In addition to these, you can create custom filters to filter tasks based on the query parameters you define. For more information about custom tasks filters, see Section 26.2, “Creating custom task filters”.

26.1. Task filtering

For tasks in MenuManageTasks and in MenuTrackTask Inbox, you can use the Filters and Advanced Filters panels to sort tasks as needed.

Figure 26.1. Filtering Tasks - Default View

Filtering Tasks - Default View

The Manage Tasks page is only available to administrators and process administrators.

You can filter tasks by the following attributes in the Filters panel:

Status

Filter by task status. You can select more than one status to display results that meet any of the selected states. Removing the status filter displays all tasks, regardless of status.

The following filter states are available:

  • Completed
  • Created
  • Error
  • Exited
  • Failed
  • InProgress
  • Obsolete
  • Ready
  • Reserved
  • Suspended
Id

Filter by process instance ID.

Input: Numeric

Task

Filter by task name.

Input: String

Correlation key

Filter by correlation key.

Input: String

Actual Owner

Filter by the task owner.

The actual owner refers to the user responsible for executing the task. The search is based on user ID, which is a unique value and depends on the ID management system.

Input: String

Process Instance Description

Filter by process instance description.

Input: String

Task Name
Filter by task name.
Process Definition Id
Filter by process definition Id.
SLA Compliance

Filter by SLA compliance state.

The following filter states are available:

  • Aborted
  • Met
  • N/A
  • Pending
  • Violated
Created On

Filtering by date or time.

This filter has the following quick filter options:

  • Last Hour
  • Today
  • Last 24 Hours
  • Last 7 Days
  • Last 30 Days
  • Custom

    Selecting Custom date and time filtering opens a calendar tool for selecting a date and time range.

    Figure 26.2. Search by Date

    Search by Date Range

26.2. Creating custom task filters

You can create a custom task filter based on a provided query in MenuManageTasks, or in MenuTrackTask Inbox for tasks assigned to the current user.

Procedure

  1. In Business Central, go to MenuManageTasks
  2. In the Manage Tasks page, click the advanced filters icon on the left to open the list of Advanced Filters options.
  3. In the Advanced Filters panel, enter the name and description of the filter, and click Add New.
  4. In the Select column drop-down menu, choose name.

    The content of the drop-down menu changes to name != value1.

  5. Click the drop-down menu again and choose equals to.
  6. Rewrite the value of the text field to the name of the task that you want to filter. Note that the name must match the value defined in the associated business process.

  7. Click Ok to save the custom task filter.


    After you apply the filter with a specified restriction, the set of configurable columns is based on the specific custom task filter.

26.3. Managing tasks using a default filter

You can set a task filter as a default filter using the Saved Filter option in Business Central. A default filter is applied every time the user opens the page.

Procedure

  1. In Business Central, go to MenuTrackTask Inbox or go to MenuManageTasks
  2. On the Task Inbox page or the Manage Tasks page, click the star icon on the left of the page to expand the Saved Filters panel.

    In the Saved Filters panel, you can view the saved advanced filters.

    Default filter selection for Tasks or Task Inbox

    Default filter selection for Tasks or Task Inbox

  3. In the Saved Filters panel, set a saved task filter as the default filter.

26.4. Viewing task variables using basic filters

Business Central provides basic filters to view task variables in Manage Tasks and Task Inbox. You can view the task variables of the task as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageTasks or go to MenuTrackTask Inbox.
  2. On the Task Inbox page, click the filter icon on the left of the page to expand the Filters panel.
  3. In the Filters panel, select the Task Name.

    The filter is applied to the current task list.

  4. Click Show/hide columns in the upper-right of the tasks list to display the task variables of the specified task.
  5. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

26.5. Viewing task variables using advanced filters

You can use the Advanced Filters option in Business Central to view task variables in Manage Tasks and Task Inbox. When you create a filter with the task name defined, you can view the task variables of the task as columns using Show/hide columns.

Procedure

  1. In Business Central, go to MenuManageTasks or go to MenuTrackTask Inbox.
  2. On the Manage Tasks page or the Task Inbox page, click the advanced filters icon to expand the Advanced Filters panel.
  3. In the Advanced Filters panel, enter the name and description of the filter, and click Add New.
  4. From the Select column list, select the name attribute. The value will change to name != value1.
  5. From the Select column list, select equals to as the logical operator.
  6. In the text field, enter the name of the task.
  7. Click Save. The filter is applied to the current task list.
  8. Click Show/hide columns in the upper-right of the tasks list to display the task variables of the specified task.
  9. Click the star icon to open the Saved Filters panel.

    In the Saved Filters panel, you can view all the saved advanced filters.

26.6. Managing custom tasks in Business Central

Custom tasks (work items) are tasks that you can customize and reuse across multiple business processes or across all projects in Business Central. Red Hat Process Automation Manager provides a set of custom tasks within the custom task repository in Business Central. You can enable or disable the default custom tasks and upload custom tasks into Business Central to implement the tasks in the relevant processes.

Note

Red Hat Process Automation Manager includes a limited set of supported custom tasks. Custom tasks that are not included in Red Hat Process Automation Manager are not supported.

Procedure

  1. In Business Central, click the gear icon in the upper-right corner and select Custom Tasks Administration.

    This page lists the custom task installation settings and available custom tasks for processes in projects throughout Business Central. The custom tasks that you enable on this page become available in the project-level settings where you can then install each custom task to be used in processes. The way in which the custom tasks are installed in a project is determined by the global settings that you enable or disable under Settings on this Custom Tasks Administration page.

  2. Under Settings, enable or disable each setting to determine how the available custom tasks are implemented when a user installs them at the project level.

    The following custom task settings are available:

    • Install as Maven artifact: Uploads the custom task JAR file to the Maven repository that is configured with Business Central, if the file is not already present.
    • Install custom task dependencies into project: Adds any custom task dependencies to the pom.xml file of the project where the task is installed.
    • Use version range when installing custom task into project: Uses a version range instead of a fixed version of a custom task that is added as a project dependency. Example: [7.16,) instead of 7.16.0.Final
  3. Enable or disable (set to ON or OFF) any available custom tasks as needed. Custom tasks that you enable are displayed in project-level settings for all projects in Business Central.

    Figure 26.3. Enable custom tasks and custom task settings

    Custom Tasks Administration page
  4. To add a custom task, click Add Custom Task, browse to the relevant JAR file, and click the Upload icon. The JAR file must contain work item handler implementations annotated with @Wid.
  5. Optionally, to remove a custom task, click remove on the row of the custom task you want to remove and click Ok to confirm removal.
  6. After you configure all required custom tasks, navigate to a project in Business Central and go to the project SettingsCustom Tasks page to view the available custom tasks that you enabled.
  7. For each custom task, click Install to make the task available to the processes in that project or click Uninstall to exclude the task from the processes in the project.
  8. If you are prompted for additional information when you install a custom task, enter the required information and click Install again.

    The required parameters for the custom task depend on the type of task. For example, rule and decision tasks require artifact GAV information (Group ID, Artifact ID, Version), email tasks require host and port access information, and REST tasks require API credentials. Other custom tasks might not require any additional parameters.

    Figure 26.4. Install custom tasks for use in processes

    Project-level custom task settings
  9. Click Save.
  10. Return to the project page, select or add a business process in the project, and in the process designer palette, select the Custom Tasks option to view the available custom tasks that you enabled and installed:

    Figure 26.5. Access installed custom tasks in process designer

    Custom tasks in process designer

26.7. User task administration

User tasks enable you to include human actions as input to the business processes that you create. User task administration provides methods to manipulate user and group task assignments, data handling, time-based automatic notifications, and reassignments.

The following user task operations are available in Business Central (a client API sketch follows this list):

  • add/remove potential owners - by task id: Adds or removes users and groups using the task ID.
  • add/remove excluded owners - by task id: Adds or removes excluded owners using the task ID.
  • add/remove business administrators - by task id: Adds or removes business administrators using the task ID.
  • add task inputs - by task id: Provides a way to modify task input content after a task is created using the task ID.
  • remove task inputs - by task id: Removes task input variables using the task ID.
  • remove task output - by task id: Removes task output variables using the task ID.
  • schedules new reassignment to given users/groups after given time elapses - by task id: Schedules automatic reassignment based on the time expression and the state of the task:

    • reassign if not started: Used if the task was not moved to the InProgress state.
    • reassign if not completed: Used if the task was not moved to the Completed state.
  • schedules new email notification to given users/groups after given time elapses - by task id: Schedules automatic email notification based on the time expression and the state of the task:

    • notify if not started: Used if the task was not moved to the InProgress state.
    • notify if not completed: Used if the task was not moved to the Completed state.
  • list scheduled task notifications - by task id: Returns all active task notifications using the task ID.
  • list scheduled task reassignments - by task id: Returns all active tasks reassignments using the task ID.
  • cancel task notification - by task id and notification id: Cancels and unschedules task notification using the task ID.
  • cancel task reassignment - by task id and reassignment id: Cancels and unschedules task reassignment using the task ID.
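
For illustration, the following minimal sketch performs a few of these operations through the KIE Server Java client. The UserTaskAdminServicesClient and OrgEntities names and the method signatures are assumptions that mirror the operations listed above, so verify them against your version of the client API; the container ID, task ID, user name, and time expression are placeholders.

import java.util.Arrays;
import java.util.Collection;

import org.kie.server.api.model.admin.OrgEntities;
import org.kie.server.api.model.admin.TaskReassignment;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.UserTaskAdminServicesClient;

public class UserTaskAdminExample {

    public static void administerTask(KieServicesClient client, String containerId, Long taskId) {
        UserTaskAdminServicesClient taskAdmin = client.getServicesClient(UserTaskAdminServicesClient.class);

        // add potential owners - by task id ("false" keeps the existing potential owners)
        OrgEntities owners = OrgEntities.builder().users(Arrays.asList("john")).build();
        taskAdmin.addPotentialOwners(containerId, taskId, false, owners);

        // schedule a reassignment to "john" if the task is not started within two days
        taskAdmin.reassignWhenNotStarted(containerId, taskId, "2d", owners);

        // list scheduled task reassignments - by task id ("true" returns only active ones)
        Collection<TaskReassignment> reassignments = taskAdmin.getTaskReassignments(containerId, taskId, true);
        System.out.println("Scheduled reassignments: " + reassignments.size());
    }
}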

26.8. Bulk actions on tasks

In the Tasks and Task Inbox pages in Business Central, you can perform bulk actions over multiple tasks in a single operation.

Note

If a specified bulk action is not permitted based on the task status, a notification is displayed and the operation is not executed on that particular task.

26.8.1. Claiming tasks in bulk

After you create tasks in Business Central, you can claim the available tasks in bulk.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To claim the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Claim.
  4. To confirm, click Claim on the Claim selected tasks window.

For each task selected, a notification is displayed showing the result.

26.8.2. Releasing tasks in bulk

You can release your owned tasks in bulk for others to claim.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To release the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Release.
  4. To confirm, click Release on the Release selected tasks window.

For each task selected, a notification is displayed showing the result.

26.8.3. Resuming tasks in bulk

If there are suspended tasks in Business Central, you can resume them in bulk.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To resume the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Resume.
  4. To confirm, click Resume on the Resume selected tasks window.

For each task selected, a notification is displayed showing the result.

26.8.4. Suspending tasks in bulk

After you create tasks in Business Central, you can suspend the tasks in bulk.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To suspend the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Suspend.
  4. To confirm, click Suspend on the Suspend selected tasks window.

For each task selected, a notification is displayed showing the result.

26.8.5. Reassigning tasks in bulk

After you create tasks in Business Central, you can reassign your tasks in bulk and delegate them to others.

Procedure

  1. In Business Central, complete one of the following steps:

    • To view the Task Inbox page, select MenuTrackTask Inbox.
    • To view the Tasks page, select MenuManageTasks.
  2. To reassign the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table.
  3. From the Bulk Actions drop-down list, select Bulk Reassign.
  4. In the Tasks reassignment window, enter the user ID of the user to whom you want to reassign the tasks.
  5. Click Delegate.

For each task selected, a notification is displayed showing the result.

Chapter 27. Execution error management

When an execution error occurs for a business process, the process stops and reverts to the most recent stable state (the closest safe point) and continues its execution. If an error of any kind is not handled by the process, the entire transaction rolls back, leaving the process instance in the previous wait state. Any trace of this is visible only in the logs, and it is usually displayed to the caller who sent the request to the process engine.

Users with process administrator (process-admin) or administrator (admin) roles are able to access error messages in Business Central. Execution error messaging provides the following primary benefits:

  • Better traceability
  • Visibility in case of critical processes
  • Reporting and analytics based on error situations
  • External system error handling and compensation

Configurable error handling is responsible for receiving any technical errors thrown throughout the process engine execution (including task service). The following technical exceptions apply:

  • Anything that extends java.lang.Throwable
  • Process level error handling and any other exceptions not previously handled

There are several components that make up the error handling mechanism and allow a pluggable approach to extend its capabilities.

The process engine entry point for error handling is the ExecutionErrorManager. This is integrated with RuntimeManager, which is then responsible for providing it to the underlying KieSession and TaskService.

From an API point of view, ExecutionErrorManager provides access to the following components (a client API sketch follows this list):

  • ExecutionErrorHandler: The primary mechanism for error handling
  • ExecutionErrorStorage: Pluggable storage for execution error information
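
In addition to the Business Central views described in the following sections, execution errors can be inspected and acknowledged through the KIE Server Java client. The following is a minimal sketch; the ProcessAdminServicesClient and ExecutionErrorInstance names and method signatures are assumptions to verify against your version of the client API, and the container ID is a placeholder.

import java.util.List;

import org.kie.server.api.model.admin.ExecutionErrorInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class ExecutionErrorExample {

    public static void acknowledgeErrors(KieServicesClient client, String containerId) {
        ProcessAdminServicesClient adminClient = client.getServicesClient(ProcessAdminServicesClient.class);

        // Retrieve the first page of unacknowledged errors stored for the container
        // ("false" excludes errors that were already acknowledged).
        List<ExecutionErrorInstance> errors = adminClient.getErrors(containerId, false, 0, 10);

        for (ExecutionErrorInstance error : errors) {
            // Acknowledging an error records the acknowledging user and a time stamp for traceability.
            adminClient.acknowledgeError(containerId, error.getErrorId());
        }
    }
}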

27.1. Viewing process execution errors in Business Central

You can view process errors in two locations in Business Central:

  • MenuManageProcess Instances
  • MenuManageExecution Errors

In the Manage Process Instances page, the Errors column displays the number of errors, if any, for the current process instance.

Prerequisites

  • An error has occurred while running a process in Business Central.

Procedure

  1. In Business Central, go to MenuManageProcess Instances and hover over the number shown in the Errors column.
  2. Click the number of errors shown in the Errors column to navigate to the Manage Execution Errors page.

    The Manage Execution Errors page shows a list of errors for all process instances.

27.2. Managing execution errors

By definition, every process error that is detected and stored is unacknowledged and must be handled by someone or something (in case of automatic error recovery). Errors are filtered on the basis of whether or not they have been acknowledged. Acknowledging an error saves the user information and time stamp for traceability. You can access the Error Management view at any time.

Procedure

  1. In Business Central, go to MenuManageExecution Errors.
  2. Select an error from the list to open the Details tab. This displays information about the error or errors.
  3. Click the Acknowledge button to acknowledge and clear the error. You can view the error later by selecting Yes on the Acknowledged filter in the Manage Execution Errors page.

    If the error was related to a task, a Go to Task button is displayed.

  4. Click the Go to Task button, if applicable, to view the associated task in the Manage Tasks page.

    In the Manage Tasks page, you can restart, reschedule, or retry the corresponding task.

27.3. Error filtering

For execution errors in MenuManageExecution Errors, you can use the Filters and Advanced Filters panels to sort errors as needed.

Figure 27.1. Filtering Errors - Default View

Filtering Errors

You can filter execution errors by the following attributes in the Filters panel:

Type

Filter errors by type. You can select multiple type filters. Removing the type filter displays all errors, regardless of type.

The following filter states are available:

  • DB
  • Task
  • Process
  • Job
Process Instance Id

Filter by process instance ID.

Input: Numeric

Job Id

Filter by job ID. The job id is created automatically when the job is created.

Input: Numeric

Id

Filter by process instance ID.

Input: Numeric

Acknowledged
Filter errors that have been or have not been acknowledged.
Error Date

Filtering by the date or time that the error occurred.

This filter has the following quick filter options:

  • Last Hour
  • Today
  • Last 24 Hours
  • Last 7 Days
  • Last 30 Days
  • Custom

    Selecting Custom date and time filtering opens a calendar tool for selecting a date and time range.

    Figure 27.2. Search by Date

    Search by Date Range

Chapter 28. Process instance migration

Process instance migration (PIM) is a standalone service containing a user interface and a back-end. It is packaged as a Thorntail uber-JAR. You can use the PIM service to define the migration between two different process definitions, known as a migration plan. The user can then apply the migration plan to the running process instance in a specific KIE Server.

For more information about the PIM service, see Process Instance Migration Service in KIE (Drools, OptaPlanner and jBPM).

28.1. Installing the process instance migration service

You can use the process instance migration (PIM) service to create, export, and execute migration plans. The PIM service is distributed in the Red Hat Process Automation Manager add-ons package. To install the PIM service, download and extract the service, run it, and access it in a web browser.

Prerequisites

  • You have defined processes in a backed-up Red Hat Process Automation Manager development environment.

Procedure

  1. Download the rhpam-7.9.1-add-ons.zip file from the Software Downloads page for Red Hat Process Automation Manager 7.9.
  2. Unzip the downloaded archive.
  3. Move the rhpam-7.9.1-process-migration-service-standalone.jar file from the add-ons archive to a desired location.
  4. In the location, create a YAML file containing the kieserver and Thorntail configuration, for example:

    thorntail:
      deployment:
        process-migration.war:
          jaxrs:
            application-path: /rest
          web:
            login-config:
              auth-method: BASIC
              security-domain: pim
            security-constraints:
              - url-pattern: /*
                roles: [ admin ]
              - url-pattern: /health/*
      datasources:
        data-sources:
          pimDS:
            driver-name: h2
            connection-url: jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
            user-name: DS_USERNAME
            password: DS_PASSWORD
      security:
        security-domains:
          pim:
            classic-authentication:
              login-modules:
                UsersRoles:
                  code: UsersRoles
                  flag: required
                  module-options:
                    usersProperties: application-users.properties
                    rolesProperties: application-roles.properties
    kieservers:
      - host: http://localhost:8080/kie-server/services/rest/server
        username: KIESERVER_USERNAME
        password: KIESERVER_PASSWORD
      - host: http://localhost:8280/kie-server/services/rest/server
        username: KIESERVER_USERNAME
        password: KIESERVER_PASSWORD1
  5. Start the PIM service:

    $ java -jar rhpam-7.9.1-process-migration-service-standalone.jar -s ./config.yml
  6. To enable auto-detection of a JDBC driver by Thorntail, add the JAR file of the JDBC driver to the thorntail.classpath system property. For example:

    $ java -Dthorntail.classpath=./h2-1.4.200.jar -jar rhpam-7.9.1-process-migration-service-standalone.jar -s ./config.yml
    Note

    The h2 JDBC driver is included by default. You can use different JDBC drivers to connect to different external databases.

  7. After the PIM service is up and running, enter http://localhost:8080 in a web browser.

28.2. Creating a migration plan

You can define the migration between two different process definitions, known as a migration plan, in the process instance migration (PIM) service web UI.

Prerequisites

  • You have defined processes in a backed-up Red Hat Process Automation Manager development environment.
  • The process instance migration service is running.

Procedure

  1. Enter http://localhost:8080 in a web browser.
  2. Log in to the PIM service.
  3. In the upper right corner of the Process Instance Migration page, from the KIE Service list select the KIE Service you want to add a migration plan for.
  4. Click Add Plan. The Add Migration Plan Wizard window opens.
  5. In the Name field, enter a name for the migration plan.
  6. Optional: In the Description field, enter a description for the migration plan.
  7. Click Next.
  8. In the Source ContainerID field, enter the source container ID.
  9. In the Source ProcessId field, enter the source process ID.
  10. Click Copy Source To Target.
  11. In the Target ContainerID field, update the target container ID.
  12. Click Retrieve Definition from backend and click Next.

    pim migration wizard
  13. From the Source Nodes list, select the source node you want to map.
  14. From the Target Nodes list, select the target node you want to map.
  15. If the Source Process Definition Diagram pane is not displayed, click Show Source Diagram.
  16. If the Target Process Definition Diagram pane is not displayed, click Show Target Diagram.
  17. Optional: To modify the view in the diagram panes, perform any of the following tasks:

    • To select text, select the selection icon.
    • To pan, select the pan icon.
    • To zoom in, select the zoom in icon.
    • To zoom out, select the zoom out icon.
    • To fit to viewer, select the fit to viewer icon.
  18. Click Map these two nodes.
  19. Click Next.
  20. Optional: To export as a JSON file, click Export.
  21. In the Review & Submit tab, review the plan and click Submit Plan.
  22. Optional: To export as a JSON file, click Export.
  23. Review the response and click Close.

28.3. Editing a migration plan

You can edit a migration plan in the process instance migration (PIM) service web UI. You can modify the migration plan name, description, specified nodes, and process instances.

Prerequisites

  • You have defined processes in a backed-up Red Hat Process Automation Manager development environment.
  • The PIM service is running.

Procedure

  1. Enter http://localhost:8080 in a web browser.
  2. Log in to the PIM service.
  3. On the Process Instance Migration page, select the Edit Migration Plan icon on the row of the migration plan you want to edit. The Edit Migration Plan window opens.
  4. On each tab, modify the details you want to change.
  5. Click Next.
  6. Optional: To export as a JSON file, click Export.
  7. In the Review & Submit tab, review the plan and click Submit Plan.
  8. Optional: To export as a JSON file, click Export.
  9. Review the response and click Close.

28.4. Exporting a migration plan

You can export migration plans as a JSON file using the process instance migration (PIM) service web UI.

Prerequisites

  • You have defined processes in a backed-up Red Hat Process Automation Manager development environment.
  • The PIM service is running.

Procedure

  1. Enter http://localhost:8080 in a web browser.
  2. Log in to the PIM service.
  3. On the Process Instance Migration page, select the Export Migration Plan icon on the row of the migration plan you want to export. The Export Migration Plan window opens.
  4. Review and click Export.

28.5. Executing a migration plan

You can execute the migration plan in the process instance migration (PIM) service web UI.

Prerequisites

  • You have defined processes in a backed-up Red Hat Process Automation Manager development environment.
  • The PIM service is running.

Procedure

  1. Enter http://localhost:8080 in a web browser.
  2. Log in to the PIM service.
  3. On the Process Instance Migration page, select the Execute Migration Plan icon on the row of the migration plan you want to execute. The Execute Migration Plan Wizard window opens.
  4. From the migration plan table, select the check box on the row of each running process instance you want to migrate, and click Next.
  5. In the Callback URL field, enter the callback URL.
  6. To the right of Run migration, perform one of the following tasks:

    • To execute the migration immediately, select Now.
    • To schedule the migration, select Schedule and in the text field, enter the date and time, for example 06/20/2019 10:00 PM.
  7. Click Next.
  8. Optional: To export as a JSON file, click Export.
  9. Click Execute Plan.
  10. Optional: To export as a JSON file, click Export.
  11. Check the response and click Close.

28.6. Deleting a migration plan

You can delete a migration plan in the process instance migration (PIM) service web UI.

Prerequisites

  • You have defined processes in a backed-up Red Hat Process Automation Manager development environment.
  • The PIM service is running.

Procedure

  1. Enter http://localhost:8080 in a web browser.
  2. Log in to the PIM service.
  3. On the Process Instance Migration page, select the Delete icon on the row of the migration plan you want to delete. The Delete Migration Plan window opens.
  4. Click Delete to confirm deletion.

Part IV. Designing and building cases for case management

As a developer, you can use Business Central to configure Red Hat Process Automation Manager assets for case management.

Case management differs from Business Process Management (BPM). It focuses more on the actual data being handled throughout the case rather than on the sequence of steps taken to complete a goal. Case data is the most important piece of information in automated case handling, while business context and decision-making are in the hands of the human case worker.

Red Hat Process Automation Manager includes the IT_Orders sample project in Business Central. This document refers to the sample project to explain case management concepts and provide examples.

The Getting started with case management tutorial describes how to create and test a new IT_Orders project in Business Central. After reviewing the concepts in this guide, follow the procedures in the tutorial to ensure that you are able to successfully create, deploy, and test your own case project.

Prerequisites

Chapter 29. Case management

Case management is an extension of Business Process Management (BPM) that enables you to manage adaptable business processes.

BPM is a management practice used to automate tasks that are repeatable and have a common pattern, with a focus on optimization by perfecting a process. Business processes are usually modeled with clearly defined paths leading to a business goal. This requires a lot of predictability, usually based on mass-production principles. However, many real-world applications cannot be described completely from start to finish (including all possible paths, deviations, and exceptions). Using a process-oriented approach in certain cases can lead to complex solutions that are hard to maintain.

Case management provides problem resolution for non-repeatable, unpredictable processes as opposed to the efficiency-oriented approach of BPM for routine, predictable tasks. It manages one-off situations when the process cannot be predicted in advance. A case definition usually consists of loosely coupled process fragments that can be connected directly or indirectly to lead to certain milestones and ultimately a business goal, while the process is managed dynamically in response to changes that occur during run time.

In Red Hat Process Automation Manager, case management includes the following core process engine features:

  • Case file instance
  • A per case runtime strategy
  • Case comments
  • Milestones
  • Stages
  • Ad hoc fragments
  • Dynamic tasks and processes
  • Case identifier (correlation key)
  • Case lifecycle (close, reopen, cancel, destroy)

A case definition is always an ad hoc process definition and does not require an explicit start node. The case definition is the main entry point for the business use case.

A process definition is introduced as a supporting construct of the case and can be invoked either as defined in the case definition or dynamically to bring in additional processing when required. A case definition defines the following new objects:

  • Activities (required)
  • Case file (required)
  • Milestones
  • Roles
  • Stages

Chapter 30. Case Management Model and Notation

You can use Business Central to import, view, and modify the content of Case Management Model and Notation (CMMN) files. When authoring a project, you can import your case management model and then select it from the asset list to view or modify it in a standard XML editor.

The following CMMN constructs are currently available:

  • Tasks (human task, process task, decision task, case task)
  • Discretionary tasks (same as above)
  • Stages
  • Milestones
  • Case file items
  • Sentries (entry and exit)

The following task properties are not supported:

  • Required
  • Repeat
  • Manual activation

Sentries for individual tasks are limited to entry criteria while entry and exit criteria are supported for stages and milestones. Decision tasks map by default to a DMN decision. Event listeners are not supported.

Red Hat Process Automation Manager does not provide any modeling capabilities for CMMN and focuses solely on the execution of the model.

Chapter 31. Case files

A case instance is a single instance of a case definition and encapsulates the business context. All case instance data is stored in the case file, which is accessible to all process instances that might participate in the particular case instance. Each case instance and its case file are completely isolated from the other cases. Only case instance participants can access the case file.

A case file is used in case management as a repository of data for the entire case instance. It contains all roles, data objects, the data map, and any other data. The case can be closed and reopened at a later date with the same case file attached. A case instance can be closed at any time and does not require a specific resolution to be completed.

The case file can also include embedded documentation, references, PDF attachments, web links, and other options.

31.1. Configuring case ID prefixes

The caseId parameter is a string value that is the identifier of the case instance. You can configure the Case ID Prefix in Red Hat Process Automation Manager designer to distinguish different types of cases.

The following procedure uses the IT_Orders sample project to demonstrate how to create unique case ID prefixes for specific business needs.

Prerequisites

  • The IT_Orders sample project is open in Business Central.

Procedure

  1. In Business Central, go to Menu → Design → Projects. If there are existing projects, you can access the samples by clicking the MySpace default space and selecting Try Samples from the Add Project drop-down menu. If there are no existing projects, click Try samples.
  2. Select IT_Orders and click Ok.
  3. In the Assets window, click the orderhardware business process to open the designer.
  4. Click on an empty space on the canvas and in the upper-right corner, click the Properties diagram properties icon.
  5. Scroll down and expand Case Management.
  6. In the Case ID Prefix field, enter an ID value. The ID format is internally defined as ID-XXXXXXXXXX, where XXXXXXXXXX is a generated number that provides a unique ID for the case instance.

    If a prefix is not provided, the default prefix is CASE with the following identifiers:

    CASE-0000000001

    CASE-0000000002

    CASE-0000000003

    You can specify any prefix. For example, if you specify the prefix IT, the following identifiers are generated:

    IT-0000000001

    IT-0000000002

    IT-0000000003

    Figure 31.1. Case ID Prefix field

    case prefix

31.2. Configuring case ID expressions

The following procedure uses the IT_Orders sample project to demonstrate how to set metadata attribute keys to customize expressions for generating the caseId.

Prerequisites

  • The IT_Orders sample project is open in Business Central.

Procedure

  1. In Business Central, go to Menu → Design → Projects. If there are existing projects, you can access the samples by clicking the MySpace default space and selecting Try Samples from the Add Project drop-down menu. If there are no existing projects, click Try samples.
  2. Select IT_Orders and click Ok.
  3. In the Assets window, click the orderhardware business process to open the designer.
  4. Click on an empty space on the canvas and in the upper-right corner, click the Properties diagram properties icon.
  5. Expand the Advanced menu to access the Metadata Attributes fields.
  6. Specify one of the following functions for the customCaseIdPrefix metadata attribute:

    • LPAD: Left padding
    • RPAD: Right padding
    • TRUNCATE: Truncate
    • UPPER: Upper case

    Figure 31.2. Setting the UPPER function for the customCaseIdPrefix metadata attribute

    expressions

    In this example, type is a variable set in the Case File Variables field, to which a user can assign the value type1 at run time. UPPER is a pre-built function that converts a variable to upper case, and IT- is a static prefix. The results are dynamic case IDs such as IT-TYPE1-0000000001, IT-TYPE1-0000000002, and IT-TYPE1-0000000003.

    Figure 31.3. Case File Variables

    case vars

    If the customCaseIdPrefixIsSequence case metadata attribute is set to false (the default value is true), the case instance does not create a numeric sequence and the caseIdPrefix expression becomes the case ID. Disabling the sequence is useful when the expression used for custom case IDs already contains a case file variable that expresses a unique business identifier, making the generic sequence values unnecessary. For example, when generating case IDs based on social security numbers, no specific sequence or instance identifiers are required. In the example below, SOCIAL_SECURITY_NUMBER is also a variable declared as a case file variable.

    Figure 31.4. customCaseIdPrefixIsSequence metadata attribute

    prefix false

    The IS_PREFIX_SEQUENCE case file variable can optionally be added as a run-time flag to enable or disable sequence generation for case IDs. For example, there is no need to create a sequence suffix for medical insurance coverage for an individual. For a multi-family insurance policy, the company might set the IS_PREFIX_SEQUENCE case variable to true to append a sequence number for each member of the family.

    Setting the customCaseIdPrefixIsSequence metadata attribute statically to false has the same result as setting the IS_PREFIX_SEQUENCE case file variable to false during run time.

    Figure 31.5. IS_PREFIX_SEQUENCE case variable

    prefix sequence

Chapter 32. Subcases

Subcases provide the flexibility to compose complex cases that consist of other cases. This means that you can split large and complex cases into multiple layers of abstraction and even multiple case projects. This is similar to splitting a process into multiple subprocesses.

A subcase is another case definition that is invoked from within another case instance or a regular process instance. It has all of the capabilities of a regular case instance:

  • It has a dedicated case file.
  • It is isolated from any other case instance.
  • It has its own set of case roles.
  • It has its own case prefix.

You can use the process designer to add subcases to your case definition. A subcase is a case within your case project, similar to having a subprocess within your process. Subcases can also be added to a regular business process. Doing this enables you to start a case from within a process instance.

For more information about adding a subcase to your case definition, see Getting started with case management.

The Sub Case Data I/O window supports the following set of input parameters that enable you to configure and start the subcase:

case management subcase dataio
Independent
Optional indicator that tells the process engine whether or not the case instance is independent. If it is independent, the main case instance does not wait for its completion. The value of this property is false by default.
GroupRole_XXX
Optional group to case role mapping. The role names belonging to this case instance can be referenced here, meaning that participants of the main case can be mapped to participants of the subcase. This means that the group assigned to the main case is automatically assigned to the subcase, where XXX is the role name and the value of the property is the value of the group role assignment.
DataAccess_XXX
Optional data access restrictions where XXX is the name of the data item and the value of the property is the access restrictions.
DestroyOnAbort
Optional indicator that tells the process engine whether to cancel or destroy the subcase when the subcase activity is aborted. The default value is true.
UserRole_XXX
Optional user to case role mapping. You can reference the case instance role names here, meaning that an owner of the main case can be mapped to an owner of the subcase. The person assigned to the main case is automatically assigned to the subcase, where XXX is the role name and the value of the property is the value of the user role assignment.
Data_XXX
Optional data mapping from this case instance or business process to a subcase, where XXX is the name of the data in the subcase being targeted. This parameter can be provided as many times as needed.
DeploymentId
Optional deployment ID (or container ID in the context of KIE Server) that indicates where the targeted case definition is located.
CaseDefinitionId
The mandatory case definition ID to be started.
CaseId
The case instance ID of the subcase after it is started.

Chapter 33. Ad hoc and dynamic tasks

You can use case management to carry out tasks ad hoc, rather than following a strict end-to-end process. You can also add tasks to a case dynamically during run time.

Ad hoc tasks are defined in the case modeling phase. Ad hoc tasks that are not configured as AdHoc Autostart are optional and might not be used during a case; they must be triggered explicitly, either by a signal event or through the Java API.

Dynamic tasks are defined during the case execution and are not present in the case definition model. Dynamic tasks address specific needs that arise during the case. They can be added to the case and worked on at any time using a case application, as demonstrated in the Red Hat Process Automation Manager Showcase application. Dynamic tasks can also be added by Java and Remote API calls.

Dynamic tasks can be user or service activities, while ad hoc tasks can be any type of task. For more information about task types, see "BPMN2 tasks in process designer" in Designing business processes in Business Central.

Dynamic processes are any reusable sub-process from a case project.

Ad hoc nodes with no incoming connections are configured in the node’s AdHoc Autostart property and are triggered automatically when the case instance is started.

Ad hoc tasks are optional tasks that are configured in a case definition. Because they are ad hoc, they must be triggered in some way, usually by a signal event or Java API call.
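You can issue the Java API trigger through the CaseService, the same service that the business rule in the rules chapter of this document uses. The following is a minimal sketch, assuming the code runs in the same JVM as the process engine (for example, inside a deployed KJAR or a custom KIE Server extension) and that a case instance with the ID IT-0000000001 is active; the class name is illustrative:

import org.jbpm.casemgmt.api.CaseService;
import org.jbpm.services.api.service.ServiceRegistry;

public class AdHocFragmentTrigger {

    /**
     * Triggers the optional "Prepare hardware spec" ad hoc task on an existing
     * case instance. The case ID used here is an assumption for illustration.
     */
    public void triggerPrepareHardwareSpec() {
        CaseService caseService = (CaseService) ServiceRegistry.get()
                .service(ServiceRegistry.CASE_SERVICE);

        // The third argument carries optional data for the triggered fragment; null means no extra data.
        caseService.triggerAdHocFragment("IT-0000000001", "Prepare hardware spec", null);
    }
}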

Chapter 34. Adding dynamic tasks and processes to a case using the KIE Server REST API

You can add dynamic tasks and processes to a case during run time to address unforeseen changes that can occur during the lifecycle of a case. Dynamic activities are not defined in the case definition and therefore they cannot be signaled the way that a defined ad hoc task or process can.

You can add the following dynamic activities to a case:

  • User tasks
  • Service tasks (any type that is implemented as a work item)
  • Reusable subprocesses

Dynamic user and service tasks are added to a case instance and immediately executed. Depending on the nature of a dynamic task, it might start and wait for completion (user task) or directly complete after execution (service task). For dynamic subprocesses, the process engine requires a KJAR containing the process definition for that dynamic process to locate the process by its ID and execute it. This subprocess belongs to the case and has access to all of the data in the case file.

You can use the Swagger REST API application to create dynamic tasks and subprocesses.

Prerequisites

Procedure

  1. In a web browser, open the following URL:

    http://localhost:8080/kie-server/docs

  2. Open the list of available endpoints under Case instances :: Case Management.
  3. Locate the POST method endpoints for creating dynamic activities.

    POST /server/containers/{id}/cases/instances/{caseId}/tasks

    Adds a dynamic task (user or service, depending on the payload) to a case instance.

    POST /server/containers/{id}/cases/instances/{caseId}/stages/{caseStageId}/tasks

    Adds a dynamic task (user or service, depending on the payload) to a specific stage within the case instance.

    POST /server/containers/{id}/cases/instances/{caseId}/processes/{pId}

    Adds a dynamic subprocess identified by the process ID to a case instance.

    POST /server/containers/{id}/cases/instances/{caseId}/stages/{caseStageId}/processes/{pId}

    Adds a dynamic subprocess identified by the process ID to a stage within a case instance.

    swagger case management dynamic
  4. To open the documentation, click the REST endpoint required to create the dynamic task or process.
  5. Click Try it out and enter the parameters and body required to create the dynamic activity.
  6. Click Execute to create the dynamic task or subprocess using the REST API.

34.1. Creating a dynamic user task using the KIE Server REST API

You can create a dynamic user task during case run time using the REST API. To create a dynamic user task, you must provide the following information:

  • Task name
  • Task subject (optional, but recommended)
  • Actors or groups (or both)
  • Input data

Use the following procedure to create a dynamic user task for the IT_Orders sample project available in Business Central using the Swagger REST API tool. The same endpoint can be used for REST API without Swagger.

Prerequisites

Procedure

  1. In a web browser, open the following URL:

    http://localhost:8080/kie-server/docs

  2. Open the list of available endpoints under Case instances :: Case Management.
  3. Click the following POST method endpoint to open the details:

    /server/containers/{id}/cases/instances/{caseId}/tasks

  4. Click Try it out and then input the following parameters:

    Table 34.1. Parameters

    Name      Description
    id        itorders
    caseId    IT-0000000001

    Request body

    {
     "name" : "RequestManagerApproval",
     "data" : {
       "reason" : "Fixed hardware spec",
       "caseFile_hwSpec" : "#{caseFile_hwSpec}"
      },
     "subject" : "Ask for manager approval again",
     "actors" : "manager",
     "groups" : ""
    }

  5. In the Swagger application, click Execute to create the dynamic task.

This procedure creates a new user task associated with case IT-0000000001. The task is assigned to the person assigned to the manager case role. This task has two input variables:

  • reason
  • caseFile_hwSpec: defined as an expression to capture process or case data at run time.

Some tasks include a form that provides a user-friendly UI for the task, which you can locate by task name. In the IT Orders case, the RequestManagerApproval task includes the form RequestManagerApproval-taskform.form in its KJAR.

After it is created, the task appears in the assignee’s Task Inbox in Business Central.
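If you are not using the Swagger UI, you can make the same call from any HTTP client. The following is a minimal Java sketch using the standard java.net.http.HttpClient (Java 11 or later). The URL combines the KIE Server REST base path http://host:port/kie-server/services/rest (also shown in the role authorization section later in this document) with the endpoint above; the credentials are placeholders that you must replace with a valid KIE Server user:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class AddDynamicUserTask {

    public static void main(String[] args) throws Exception {
        // Same endpoint as in the Swagger procedure: POST .../cases/instances/{caseId}/tasks
        String url = "http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks";

        // Request body from the table above
        String body = "{"
                + "\"name\" : \"RequestManagerApproval\","
                + "\"data\" : { \"reason\" : \"Fixed hardware spec\", \"caseFile_hwSpec\" : \"#{caseFile_hwSpec}\" },"
                + "\"subject\" : \"Ask for manager approval again\","
                + "\"actors\" : \"manager\","
                + "\"groups\" : \"\""
                + "}";

        // Placeholder credentials; replace with a valid KIE Server user
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP status: " + response.statusCode());
    }
}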

34.2. Creating a dynamic service task using the KIE Server REST API

Service tasks are usually less complex than user tasks, although they might need more data to execute properly. Service tasks require the following information:

  • name: The name of the activity
  • nodeType: The type of node that will be used to find the work item handler
  • data: The map of the data to properly deal with execution

During case run time, you can create a dynamic service task with the same endpoint as a user task, but with a different body payload.

Use the following procedure using the Swagger REST API to create a dynamic service task for the IT_Orders sample project available in Business Central. You can use the same endpoint for REST API without Swagger.

Prerequisites

Procedure

  1. In a web browser, open the following URL:

    http://localhost:8080/kie-server/docs

  2. Open the list of available endpoints under Case instances :: Case Management.
  3. Click the following POST method endpoint to open the details:

    /server/containers/{id}/cases/instances/{caseId}/tasks

  4. Click Try it out and then enter the following parameters:

    Table 34.2. Parameters

    Name      Description
    id        itorders
    caseId    IT-0000000001

    Request body

    {
     "name" : "InvokeService",
     "data" : {
       "Parameter" : "Fixed hardware spec",
       "Interface" : "org.jbpm.demo.itorders.services.ITOrderService",
       "Operation" : "printMessage",
       "ParameterType" : "java.lang.String"
      },
     "nodeType" : "Service Task"
    }

  5. In the Swagger application, click Execute to create the dynamic task.

In this example, a Java-based service is executed. It consists of an interface with the public class org.jbpm.demo.itorders.services.ITOrderService and the public printMessage method with a single String argument. When executed, the parameter value is passed to the method for execution.

Numbers, names, and other types of data given to create service tasks depend on the implementation of a service task’s handler. In the example provided, the org.jbpm.process.workitem.bpmn2.ServiceTaskHandler handler is used.

Note

For any custom service tasks, ensure the handler is registered in the deployment descriptor in the Work Item Handlers section, where the name is the same as the nodeType used for creating a dynamic service task.
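If you embed the process engine in your own application instead of running it in KIE Server, you can also register the handler programmatically on the KIE session. The following is a minimal sketch under that assumption; in KIE Server, use the deployment descriptor entry described in the note above:

import org.jbpm.process.workitem.bpmn2.ServiceTaskHandler;
import org.kie.api.runtime.KieSession;

public class ServiceTaskHandlerRegistration {

    // Registers the handler under the same name that is used as the nodeType
    // when creating the dynamic service task ("Service Task").
    public static void register(KieSession ksession) {
        ksession.getWorkItemManager()
                .registerWorkItemHandler("Service Task", new ServiceTaskHandler());
    }
}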

34.3. Creating a dynamic subprocess using the KIE Server REST API

When creating a dynamic subprocess, only optional data is provided. There are no special parameters as there are when creating dynamic tasks.

The following procedure describes how to use the Swagger REST API to create a dynamic subprocess task for the IT_Orders sample project available in Business Central. The same endpoint can be used for REST API without Swagger.

Prerequisites

Procedure

  1. In a web browser, open the following URL:

    http://localhost:8080/kie-server/docs

  2. Open the list of available endpoints under Case instances :: Case Management.
  3. Click the following POST method endpoint to open the details:

    /server/containers/{id}/cases/instances/{caseId}/processes/{pId}

  4. Click Try it out and enter the following parameters:

    Table 34.3. Parameters

    Name      Description
    id        itorders
    caseId    IT-0000000001
    pId       itorders-data.place-order

    The pId is the process ID of the subprocess to be created.

    Request body

    {
     "placedOrder" : "Manually"
    }

  5. In the Swagger application, click Execute to start the dynamic subprocess.

In this example, the place-order subprocess has been started in the IT Orders case with the case ID IT-0000000001. You can see this process in Business Central under Menu → Manage → Process Instances.

If the described example has executed correctly, the place-order process appears in the list of process instances. Open the details of the process and note that the correlation key for the process includes the IT Orders case instance ID, and the Process Variables list includes the variable placedOrder with the value Manually, as delivered in the REST API body.

Chapter 35. Comments

In case management, comments facilitate collaboration within the case instance, and allow case workers to easily communicate with each other to exchange information.

Comments are bound to the case instance. Case instances are part of the case file, so you can use comments to take action on the instances. Basic text-based comments can have a complete operations set, similar to CRUD (create, read, update, and delete).

Chapter 36. Case roles

Case roles provide an additional layer of abstraction for user participation in case handling. Roles, users, and groups are used for different purposes in case management.

Roles
Roles drive the authorization for a case instance and are used for user activity assignments. A user or one or more groups can be assigned to the owner role. The owner is whoever the case belongs to. Roles are not restricted to a single set of people or groups as part of a case definition. Use roles to specify task assignments instead of assigning a specific user or group to a task assignment to ensure that the case remains dynamic.
Groups
A group is a collection of users who are able to carry out a particular task or have a set of specified responsibilities. You can assign any number of people to a group and assign any group to a role. You can add or change members of a group at any time. Do not hard code a group to a particular task.
Users

A user is an individual who can be given a particular task when you assign them a role or add them to a group.

Note

Do not create a user called unknown in process engine or KIE Server. The unknown user account is a reserved system name with superuser access. The unknown user account performs tasks related to the SLA violation listener when there are no users logged in.

The following example illustrates how the preceding case management concepts apply to a hotel reservation with the following information:

  • Role: Guest
  • Group: Receptionist, Maid
  • User: Marilyn

The Guest role assignment affects the specific work of the associated case and is unique to each case instance. The number of users or groups that can be assigned to a role is limited by the case Cardinality, which is set during role creation in the process designer and case definition. For example, the hotel reservation case has only one guest while the IT_Orders sample project has two suppliers of IT hardware.

When roles are defined, ensure that roles are not hard-coded to a single set of people or groups as part of case definition and that they can differ for each case instance. This is why case role assignments are important.

Role assignments can be assigned or removed when a case starts or at any time when a case is active. Although roles are optional, use roles in case definitions to maintain an organized workflow.

Important

Always use roles for task assignments instead of actual user or group names. This ensures that the case remains dynamic and actual user or group assignments can be made as late as required.

Roles are assigned to users or groups and authorized to perform tasks when a case instance is started.

36.1. Creating case roles

You can create and define case roles in the case definition when you design the case in the process designer. Case roles are configured on the case definition level to keep them separate from the actors involved in handling the case instance. Roles can be assigned to user tasks or used as contact references throughout the case lifecycle, but they are not defined in the case as a specific user or group of users.

Case instances include the individuals that are actually handling the case work. Assign roles when starting a new case instance. In order to keep cases flexible, you can modify case role assignment during case run time, although doing this has no effect on tasks already created based on the previous role assignment. The actor assigned to a role is flexible but the role itself remains the same for each case.

Prerequisites

  • A case project that has a case definition exists in Business Central.
  • The case definition asset is open in the process designer.

Procedure

  1. To define the roles involved in the case, click on an empty space in the editor’s canvas, and click diagram properties to open the Properties menu.
  2. Expand Case Management to add a case role.

    The case role requires a name for the role and a case cardinality. Case cardinality is the number of actors that are assigned to the role in any case instance. For example, the IT_Orders sample case management project includes the following roles:

    Figure 36.1. ITOrders Case Roles

    Case Roles

    In this example, you can assign only one actor (a user or a group) as the case owner and assign only one actor to the manager role. The supplier role can have two actors assigned. Depending on the case, you can assign any number of actors to a particular role based on the configured case cardinality of the role.

36.2. Role authorization

Roles are authorized to perform specific case management tasks when starting a new case instance using the Showcase application or the REST API.

Use the following procedure to start a new IT Orders case using the REST API.

Prerequisites

  • The IT_Orders sample project has been imported in Business Central and deployed to the KIE Server.

Procedure

  1. Create a POST REST API call with the following endpoint:

    http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances

    • itorders: The container alias that has been deployed to the KIE Server.
    • itorders.orderhardware: The name of the case definition.
  2. Provide the following role configuration in the request body:

    {
      "case-data" : {  },
      "case-user-assignments" : {
        "owner" : "cami",
        "manager" : "cami"
      },
      "case-group-assignments" : {
        "supplier" : "IT"
     }
    }

    This starts a new case with defined roles, as well as autostart activities, which are started and ready to be worked on. Two of the roles are user assignments (owner and manager) and the third is a group assignment (supplier).

    After the case instance is successfully started, the case instance returns the IT-0000000001 case ID.

For information about how to start a new case instance using the Showcase application, see Using the Showcase application for case management.
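If you prefer to start the case programmatically rather than from Swagger or Showcase, the following minimal Java sketch posts the same role configuration to the endpoint above using java.net.http.HttpClient; host, port, and credentials are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class StartCaseWithRoles {

    public static void main(String[] args) throws Exception {
        // Container alias "itorders" and case definition "itorders.orderhardware", as described above
        String url = "http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances";

        // Role configuration from the procedure above
        String body = "{"
                + "\"case-data\" : { },"
                + "\"case-user-assignments\" : { \"owner\" : \"cami\", \"manager\" : \"cami\" },"
                + "\"case-group-assignments\" : { \"supplier\" : \"IT\" }"
                + "}";

        String auth = Base64.getEncoder().encodeToString("user:password".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // On success, the response body contains the new case ID, for example "IT-0000000001"
        System.out.println(response.body());
    }
}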

36.3. Assigning a task to a role

Case management processes need to be as flexible as possible to accommodate changes that can happen dynamically during run time. This includes changing user assignments for new case instances or for active cases. For this reason, ensure that you do not hard code roles to a single set of users or groups in the case definition. Instead, role assignments can be defined on the task nodes in the case definition, with users or groups assigned to the roles on case creation.

Red Hat Process Automation Manager contains a predefined selection of node types to simplify business process creation. The predefined node panel is located on the left side of the diagram editor.

node task panel

Prerequisites

  • A case definition has been created with case roles configured at the case definition level. For more information about creating case roles, see Creating case roles.

Procedure

  1. Open the Activities menu in the designer palette and drag the user or service task that you want to add to your case definition onto the process designer canvas.
  2. With the task node selected, click diagram properties to open the Properties panel on the right side of the designer.
  3. Expand Implementation/Execution, click Add below the Actors property and either select or type the name of the role to which the task will be assigned. You can use the Groups property in the same way for group assignments.

    For example, in the IT_Orders sample project, the Manager approval user task is assigned to the manager role:

    case management task assignment

    In this example, after the Prepare hardware spec user task has been completed, the user assigned to the manager role will receive the Manager approval task in their Task Inbox in Business Central.

The user assigned to the role can be changed during the case run time, but the task itself continues to have the same role assignment. For example, the person originally assigned to the manager role might need to take time off (if they become ill, for example), or they might unexpectedly leave the company. To respond to this change in circumstances, you can edit the manager role assignment so that someone else can be assigned the tasks associated with that role.

For information about how to change role assignments during case run time, see Modifying case role assignments during run time using Showcase or Modifying case role assignments during run time using REST API.

36.4. Modifying case role assignments during run time using Showcase

You can change case instance role assignments during case run time using the Showcase application. Roles are defined in the case definition and assigned to tasks in the case lifecycle. Roles cannot change during run time because they are predefined, but you can change the actors assigned to the roles to change who is responsible for carrying out case tasks.

Prerequisites

  • An active case instance exists, with users or groups already assigned to at least one case role.

Procedure

  1. In the Showcase application, click the case you want to work on in the Case list to open the case overview.
  2. Locate the role assignment that you want to change in the Roles box in the lower-right corner of the page.

    showcase role assignments
  3. To remove a single user or group from the role assignment, click the X next to the assignment. In the confirmation window, click Remove to remove the user or group from the role.
  4. To remove all role assignments from a role, click the three dots next to the role and select the Remove all assignments option. In the confirmation window, click Remove to remove all user and group assignments from the role.
  5. To change the role assignment from one user or group to another, click the three dots next to the role and select the Edit option.
  6. In the Edit role assignment window, delete the name of the assignee that you want to remove from the role assignment. Type the name of the user you want to assign to the role into the User field or the group you want to assign in the Group field.

    At least one user or group must be assigned when editing a role assignment.

  7. Click Assign to complete the role assignment.

36.5. Modifying case role assignments during run time using REST API

You can change case instance role assignments during case run time using the REST API or Swagger application. Roles are defined in the case definition and assigned to tasks in the case life cycle. Roles cannot change during run time because they are predefined, but you can change the actors assigned to the roles to change who is responsible for carrying out case tasks.

The following procedure includes examples based on the IT_Orders sample project. You can use the same REST API endpoints in the Swagger application, in any other REST API client, or with curl.

Prerequisites

  • An IT Orders case instance has been started with owner, manager, and supplier roles already assigned to actors.

Procedure

  1. Retrieve the list of current role assignments using a GET request on the following endpoint:

    http://localhost:8080/kie-server/services/rest/server/containers/{id}/cases/instances/{caseId}/roles

    Table 36.1. Parameters

    Name      Description
    id        itorders
    caseId    IT-0000000001

    This returns the following response:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <case-role-assignment-list>
          <role-assignments>
                <name>owner</name>
                <users>Aimee</users>
          </role-assignments>
          <role-assignments>
                <name>manager</name>
                <users>Katy</users>
          </role-assignments>
          <role-assignments>
                <name>supplier</name>
                <groups>Lenovo</groups>
          </role-assignments>
    </case-role-assignment-list>
  2. To change the user assigned to the manager role, you must first remove the role assignment from the user Katy using DELETE.

    /server/containers/{id}/cases/instances/{caseId}/roles/{caseRoleName}

    Include the following information in the Swagger client request:

    Table 36.2. Parameters

    Name          Description
    id            itorders
    caseId        IT-0000000001
    caseRoleName  manager
    user          Katy

    Click Execute.

  3. Execute the GET request from the first step again to check that the manager role no longer has a user assigned:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <case-role-assignment-list>
          <role-assignments>
                <name>owner</name>
                <users>Aimee</users>
          </role-assignments>
          <role-assignments>
                <name>manager</name>
          </role-assignments>
          <role-assignments>
                <name>supplier</name>
                <groups>Lenovo</groups>
          </role-assignments>
    </case-role-assignment-list>
  4. Assign the user Cami to the manager role using a PUT request on the following endpoint:

    /server/containers/{id}/cases/instances/{caseId}/roles/{caseRoleName}

    Include the following information in the Swagger client request:

    Table 36.3. Parameters

    Name          Description
    id            itorders
    caseId        IT-0000000001
    caseRoleName  manager
    user          Cami

    Click Execute.

  5. Execute the GET request from the first step again to check that the manager role is now assigned to Cami:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <case-role-assignment-list>
          <role-assignments>
                <name>owner</name>
                <users>Aimee</users>
          </role-assignments>
          <role-assignments>
                <name>manager</name>
                <users>Cami</users>
          </role-assignments>
          <role-assignments>
                <name>supplier</name>
                <groups>Lenovo</groups>
          </role-assignments>
    </case-role-assignment-list>
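You can also script the same reassignment outside of Swagger. The following Java sketch issues the DELETE and PUT calls shown above; it assumes that the user parameter from the tables is passed as a query parameter, which you should confirm against the parameter details in the Swagger UI, and the credentials are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ReassignManagerRole {

    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/roles/manager";
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes());
        HttpClient client = HttpClient.newHttpClient();

        // Remove the current assignment (Katy) from the manager role
        HttpRequest remove = HttpRequest.newBuilder(URI.create(base + "?user=Katy"))
                .header("Authorization", "Basic " + auth)
                .DELETE()
                .build();
        System.out.println("DELETE status: " + client.send(remove, HttpResponse.BodyHandlers.ofString()).statusCode());

        // Assign Cami to the manager role
        HttpRequest assign = HttpRequest.newBuilder(URI.create(base + "?user=Cami"))
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println("PUT status: " + client.send(assign, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}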

Chapter 37. Stages

A case management stage is a collection of tasks, modeled as an ad hoc subprocess that can be defined using the process designer and may include other case management nodes, such as a milestone. A milestone can also be configured as completed when a stage or a number of stages are completed. Therefore, a milestone may be activated or achieved by the completion of a stage, and a stage may include a milestone or a number of milestones.

For example, in a patient triage case, the first stage may consist of observing and noting any obvious physical symptoms or a description from the patient of what their symptoms are, followed by a second stage for tests, and a third for diagnosis and treatment.

There are three ways to complete a stage:

  • By completion condition.
  • By terminal end event.
  • By setting the Completion Condition to autocomplete, which will automatically complete the stage when there are no active tasks left in the stage.

37.1. Defining a stage

A stage can be modeled in BPMN2 using the process designer. Stages are a way of grouping related tasks in a way that clearly defines activities that, if the stage is activated, must complete before the next stage of the case commences. For example, the IT_Orders case definition can also be defined using stages in the following way:

Figure 37.1. IT_Orders project stages example

IT_Orders - stages

Procedure

  1. From the predefined node panel located on the left side of the diagram editor, drag and drop an Adhoc subprocess node onto the design canvas and provide a name for the stage node.
  2. Define how the stage is activated:

    • If the stage is being activated by an incoming node, connect the stage with a sequence flow line from the incoming node.
    • If the stage is instead being activated by a signal event, configure the SignalRef on the signal node with the name of the stage that you configured in the first step.
    • Alternatively, configure the AdHocActivationCondition property to activate the stage when the condition has been met.
  3. Re-size the node as required to provide room to add the task nodes for the stage.
  4. Add the relevant tasks to the stage and configure them as required.
  5. (Optional) Configure a completion condition for the stage. As an ad hoc subprocess, stages are configured as autocomplete by default, which means that the stage will automatically complete and trigger the next activity in the case definition once all instances in the stage are no longer active.

    To change the completion condition, select the stage node and open the Properties panel on the right, expand Implementation/Execution, and modify the AdHocCompletionCondition property field with a free-form Drools expression for the completion condition you require. For more information about stage completion conditions, see Section 37.2, “Configuring stage activation and completion conditions”.

  6. Once the stage has been configured, connect it to the next activity in the case definition using a sequence flow line.

37.2. Configuring stage activation and completion conditions

Stages can be triggered by a start node, intermediate node, or manually using an API call.

You can configure stages with both activation and completion conditions using free-form Drools rules, the same way that milestone completion conditions are configured. For example, in the IT_Orders sample project, the Milestone 2: Order shipped completion condition (org.kie.api.runtime.process.CaseData(data.get("shipped") == true)) can also be used as the completion condition for the Order delivery stage represented here:

Figure 37.2. IT_Orders project stages example

IT_Orders - stages

Activation conditions can also be configured using a free-form Drools rule to configure the AdHocActivationCondition property to activate a stage.

Prerequisites

  • You have created a case definition in the Business Central process designer.
  • You have added an ad hoc subprocess to the case definition that is to be used as a stage.

Procedure

  1. With the stage selected, click diagram properties to open the Properties panel on the right side of the designer.
  2. Expand Implementation/Execution and in the AdHocActivationCondition property editor define an activation condition for the start node. For example, set autostart: true to make the stage automatically activated when a new case instance is started.
  3. The AdHocCompletionCondition is set to autocomplete by default. To change this, input a completion condition using a free-form Drools expression. For example, set org.kie.api.runtime.process.CaseData(data.get("ordered") == true) to activate the second stage in the example shown previously.

For more examples and information about the conditions used in the IT_Orders sample project, see Getting started with case management.

37.3. Adding a dynamic task to a stage

Dynamic tasks can be added to a case stage during run time using a REST API request. This is similar to adding a dynamic task to a case instance, but you must also define the caseStageId of the stage to which the task is added.

Use the following procedure to add a dynamic task to a stage in the IT_Orders sample project available in Business Central using the Swagger REST API tool. The same endpoint can be used for the REST API without Swagger.

Prerequisites

  • The IT_Orders sample project BPMN2 case definition has been reconfigured to use stages instead of milestones, as demonstrated in the provided example. For information about configuring stages for case management, see Section 37.1, “Defining a stage”.

Procedure

  1. Start a new case using the Showcase application. For more information about using Showcase, see Using the Showcase application for case management.

    Because this case is designed using stages, the case details page shows stage tracking:

    case with stages showcase

    The first stage starts automatically when the case instance is created.

  2. As a manager user, approve the hardware specification in Business Central under Menu → Track → Task Inbox, then check the progress of the case.

    1. In Business Central, click Menu → Manage → Process Instances and open the active case instance IT-0000000001.
    2. Click Diagram to see the case progress.
  3. In a web browser, open the following URL:

    http://localhost:8080/kie-server/docs

  4. Open the list of available endpoints under Case instances :: Case Management.
  5. Click the following POST method endpoint to open the details:

    /server/containers/{id}/cases/instances/{caseId}/stages/{caseStageId}/tasks

  6. Click Try it out to complete the following parameters:

    Table 37.1. Parameters

    Name         Description
    id           itorders
    caseId       IT-0000000001
    caseStageId  Order delivery

    The caseStageId is the name of the stage in the case definition where the dynamic task is to be created. The request body can be any dynamic user task or service task payload. For examples, see Section 34.1, “Creating a dynamic user task using the KIE Server REST API” and Section 34.2, “Creating a dynamic service task using the KIE Server REST API”.

After the dynamic task has been added to the stage, it must be completed in order for the stage to complete and for the case process to move on to the next item in the case flow.

Chapter 38. Milestones

Milestones are a special service task that can be configured in the case definition designer by adding the milestone node to the process designer palette. When creating a new case definition, a milestone configured as AdHoc Autostart is included on the design palette by default. Newly created milestones are not set to AdHoc Autostart by default.

Case management milestones generally occur at the end of a stage, but they can also be the result of achieving other milestones. A milestone always requires a condition to be defined in order to track progress. Milestones react to case file data when data is added to a case. A milestone represents a single point of achievement within the case instance. It can be used to flag certain events, which can be useful for Key Performance Indicator (KPI) tracking or identifying the tasks that are still to be completed.
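For example, the Milestone 2: Order shipped milestone in the IT_Orders sample project (also referenced in the stages chapter of this document) uses the following Drools condition, which is met when the shipped case file item is set to true:

org.kie.api.runtime.process.CaseData(data.get("shipped") == true)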

Milestones can be in any of the following states during case execution:

  • Active: The condition has been defined on the milestone but it has not been met.
  • Completed: The milestone condition has been met, the milestone has been achieved, and the case can proceed to the next task.
  • Terminated: The milestone is no longer a part of the case process and is no longer required.

While a milestone is available or completed, it can be triggered manually by a signal or automatically if AdHoc Autostart is configured when a case instance starts. Milestones can be triggered as many times as required; however, a milestone is achieved as soon as its condition is met.

38.1. Configuring and triggering milestones

Case milestones can be configured to start automatically when a case instance starts, or they can be triggered by a signal, which is configured manually during case design.

Prerequisites

  • A case project has been created in Business Central.
  • A case definition has been created.

Procedure

  1. From the predefined node panel located on the left side of the diagram editor, drag and drop a Milestone object onto the palette.

    Milestone
  2. With the milestone selected, click diagram properties to open the Properties panel on the right side of the designer.
  3. Expand Data Assignments to add a completion condition. Milestones include a Condition parameter by default.
  4. To define the completion condition for the milestone, select Constant from the Source list. The condition must be provided using the Drools syntax.
  5. Expand Implementation/Execution to configure the AdHoc Autostart property.

    • Click the check box to set this property to true for milestones that are required to start automatically when a case instance starts.
    • Leave the check box empty to set this property to false for milestones that are to be triggered by a signal event.
  6. (Optional) Configure a signal event to trigger a milestone once a case goal has been reached.

    1. With the signal event selected in the case design palette, open the Properties panel on the right.
    2. Set the Signal Scope property to Process Instance.
    3. Open the SignalRef expression editor and type the name of the milestone to be triggered.

      Milestone trigger expression
  7. Click Save.

Chapter 39. Variable tags

Variables store data that is used during runtime. For greater control over variable behavior, you can tag case variables and local variables in the BPMN case file. Tags are simple string values that you add as metadata to a specific variable.

Red Hat Process Automation Manager supports the following tags for case and local variables:

  • required: Sets the variable as a requirement in order to start a case. If a case starts without the required variable, Red Hat Process Automation Manager generates a VariableViolationException error.
  • readonly: Indicates that the variable is for informational purposes only and can be set only once during case execution. If the value of a read-only variable is modified at any time, Red Hat Process Automation Manager generates a VariableViolationException error.
  • restricted: A tag that is used with the VariableGuardProcessEventListener to indicate that permission is granted to modify the variable based on the existing role. The restricted tag can be replaced by any other tag name if using the second constructor that passes the new tag name.

The VariableGuardProcessEventListener class is extended from the DefaultProcessEventListener class and supports two different constructors:

  • VariableGuardProcessEventListener

    public VariableGuardProcessEventListener(String requiredRole, IdentityProvider identityProvider) {
        this("restricted", requiredRole, identityProvider);
    }
  • VariableGuardProcessEventListener

    public VariableGuardProcessEventListener(String tag, String requiredRole, IdentityProvider identityProvider) {
        this.tag = tag;
        this.requiredRole = requiredRole;
        this.identityProvider = identityProvider;
    }

    Therefore, you must add an event listener to the session with the allowed role name and identity provider that returns the user role as shown in the following example:

    ksession.addEventListener(new VariableGuardProcessEventListener("AdminRole", myIdentityProvider));

    In the previous example, the VariableGuardProcessEventListener method verifies whether a variable is tagged with a security constraint tag (restricted). If the user does not have the required role (for example, AdminRole), Red Hat Process Automation Manager generates a VariableViolationException error.

    Note

    The variable tags that appear in the Business Central UI, for example internal, input, output, business-relevant, and tracked, are not supported in Red Hat Process Automation Manager.
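    If you use the second constructor, you can replace the restricted tag with any tag name of your own. The following one-line sketch assumes a hypothetical custom tag named confidential that you have applied to the variables you want to protect:

    // "confidential" is a hypothetical custom tag name; AdminRole is the role allowed to modify tagged variables
    ksession.addEventListener(new VariableGuardProcessEventListener("confidential", "AdminRole", myIdentityProvider));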

You can add the tag directly to the BPMN process source file as a customTags metadata property with the tag value defined in the format ![CDATA[TAG_NAME]].

For example, the following BPMN process applies the required tag to an approved process variable:

Figure 39.1. Example variable tagged in the BPMN modeler

Image of variable tags in BPMN modeler

Example variable tagged in a BPMN file

<bpmn2:property id="approved" itemSubjectRef="ItemDefinition_9" name="approved">
  <bpmn2:extensionElements>
    <tns:metaData name="customTags">
      <tns:metaValue><![CDATA[required]]></tns:metaValue>
    </tns:metaData>
  </bpmn2:extensionElements>
</bpmn2:property>

You can use more than one tag for a variable where applicable. You can also define custom variable tags in your BPMN files to make variable data available to Red Hat Process Automation Manager process event listeners. Custom tags do not influence the Red Hat Process Automation Manager runtime as the standard variable tags do and are for informational purposes only. You define custom variable tags in the same customTags metadata property format that you use for standard Red Hat Process Automation Manager variable tags.

Chapter 40. Case event listener

The CaseEventListener listener is used to initiate notifications for case-related events and operations that are invoked on a case instance. Implement the case event listener by overriding the methods as needed for your particular use case.

You can configure the listener using the deployment descriptors located in Business Central in Menu → Design → PROJECT_NAME → Settings → Deployments.

When a new project is created, a kie-deployment-descriptor.xml file is generated with default values.

CaseEventListener methods

public interface CaseEventListener extends EventListener {

    default void beforeCaseStarted(CaseStartEvent event) {
    };

    default void afterCaseStarted(CaseStartEvent event) {
    };

    default void beforeCaseClosed(CaseCloseEvent event) {
    };

    default void afterCaseClosed(CaseCloseEvent event) {
    };

    default void beforeCaseCancelled(CaseCancelEvent event) {
    };

    default void afterCaseCancelled(CaseCancelEvent event) {
    };

    default void beforeCaseDestroyed(CaseDestroyEvent event) {
    };

    default void afterCaseDestroyed(CaseDestroyEvent event) {
    };

    default void beforeCaseReopen(CaseReopenEvent event) {
    };

    default void afterCaseReopen(CaseReopenEvent event) {
    };

    default void beforeCaseCommentAdded(CaseCommentEvent event) {
    };

    default void afterCaseCommentAdded(CaseCommentEvent event) {
    };

    default void beforeCaseCommentUpdated(CaseCommentEvent event) {
    };

    default void afterCaseCommentUpdated(CaseCommentEvent event) {
    };

    default void beforeCaseCommentRemoved(CaseCommentEvent event) {
    };

    default void afterCaseCommentRemoved(CaseCommentEvent event) {
    };

    default void beforeCaseRoleAssignmentAdded(CaseRoleAssignmentEvent event) {
    };

    default void afterCaseRoleAssignmentAdded(CaseRoleAssignmentEvent event) {
    };

    default void beforeCaseRoleAssignmentRemoved(CaseRoleAssignmentEvent event) {
    };

    default void afterCaseRoleAssignmentRemoved(CaseRoleAssignmentEvent event) {
    };

    default void beforeCaseDataAdded(CaseDataEvent event) {
    };

    default void afterCaseDataAdded(CaseDataEvent event) {
    };

    default void beforeCaseDataRemoved(CaseDataEvent event) {
    };

    default void afterCaseDataRemoved(CaseDataEvent event) {
    };

    default void beforeDynamicTaskAdded(CaseDynamicTaskEvent event) {
    };

    default void afterDynamicTaskAdded(CaseDynamicTaskEvent event) {
    };

    default void beforeDynamicProcessAdded(CaseDynamicSubprocessEvent event) {
    };

    default void afterDynamicProcessAdded(CaseDynamicSubprocessEvent event) {
    };
}
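The following is a minimal sketch of a custom listener that overrides two of these callbacks to log case activity. The package names and the getCaseId() accessor are assumptions based on the jbpm-case-mgmt-api module; register the class through the deployment descriptor as described above:

import org.jbpm.casemgmt.api.event.CaseCommentEvent;
import org.jbpm.casemgmt.api.event.CaseEventListener;
import org.jbpm.casemgmt.api.event.CaseStartEvent;

public class LoggingCaseEventListener implements CaseEventListener {

    @Override
    public void afterCaseStarted(CaseStartEvent event) {
        // Called after the case instance has been started
        System.out.println("Case started: " + event.getCaseId());
    }

    @Override
    public void afterCaseCommentAdded(CaseCommentEvent event) {
        // Called after a comment is added to the case file
        System.out.println("Comment added to case: " + event.getCaseId());
    }
}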

Chapter 41. Rules in case management

Cases are data-driven, rather than following a sequential flow. The steps required to resolve a case rely on data, which is provided by people involved in the case, or the system can be configured to trigger further actions based on the data available. In the latter case, you can use business rules to decide what further actions are required for the case to continue or reach a resolution.

Data can be inserted into the case file at any point during the case. The decision engine constantly monitors case file data, meaning that rules react to data that is contained in the case file. Using rules to monitor and respond to changes in the case file data provides a level of automation that drives cases forward.

41.1. Using rules to drive cases

Refer to the case management IT_Orders sample project in Business Central.

Suppose that the particular hardware specification provided by the supplier is incorrect or invalid. The supplier needs to provide a new, valid order so that the case can continue. Rather than wait for the manager to reject the invalid specification and create a new request for the supplier, you can create a business rule that will react immediately when the case data indicates that the provided specification is invalid. It can then create a new hardware specification request for the supplier.

The following procedure demonstrates how to create and use a business rule to execute this scenario.

Prerequisites

  • The IT_Orders sample project is open in Business Central, but it is not deployed to the KIE Server.
  • The ServiceRegistry is part of the jbpm-services-api module, and must be available on the class path.

    Note

    If building the project outside of Business Central, the following dependencies must be added to the project:

    • org.jbpm:jbpm-services-api
    • org.jbpm:jbpm-case-mgmt-api

Procedure

  1. Create the following business rule file called validate-document.drl:

    package defaultPackage;
    
    import java.util.Map;
    import java.util.HashMap;
    import org.jbpm.casemgmt.api.CaseService;
    import org.jbpm.casemgmt.api.model.instance.CaseFileInstance;
    import org.jbpm.document.Document;
    import org.jbpm.services.api.service.ServiceRegistry;
    
    rule "Invalid document name - reupload"
    when
        $caseData : CaseFileInstance()
        Document(name == "invalid.pdf") from $caseData.getData("hwSpec")
    
    then
    
        System.out.println("Hardware specification is invalid");
        $caseData.remove("hwSpec");
        update($caseData);
        CaseService caseService = (CaseService) ServiceRegistry.get().service(ServiceRegistry.CASE_SERVICE);
        caseService.triggerAdHocFragment($caseData.getCaseId(), "Prepare hardware spec", null);
    end

    This business rule detects when a file named invalid.pdf is uploaded to the case file. It then removes the invalid.pdf document and creates a new instance of the Prepare hardware spec user task.

  2. Click Deploy to build the IT_Orders project and deploy it to a KIE Server.

    Note

    You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The next time you deploy or redeploy the built KJAR, the previous deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option is disabled and you can click Deploy only to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server.

    To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production. To configure the deployment behavior for a corresponding project in Business Central, go to project Settings → General Settings → Version and toggle the Development Mode option. By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode.

  3. Create a file called invalid.pdf and save it locally.
  4. Create a file called valid-spec.pdf and save it locally.
  5. In Business Central, go to Menu → Projects → IT_Orders to open the IT_Orders project.
  6. Click Import Asset in the upper-right corner of the page.
  7. Upload the validate-document.drl file to the default package (src/main/resources) and click Ok.

    case management validate document upload

    The validate-document.drl rule is shown in the rule editor. Click Save or close to exit the rule editor.

  8. Open the Showcase application by either clicking the Apps launcher (if it is installed), or go to http://localhost:8080/rhpam-case-mgmt-showcase/jbpm-cm.html.
  9. Click Start Case for the IT_Orders project.

    In this example, Aimee is the case owner, Katy is the manager, and the supplier group is supplier.

    showcase start case
  10. Log out of Business Central, and log back in as a user that belongs to the supplier group.
  11. Go to Menu → Track → Task Inbox.
  12. Open the Prepare hardware spec task and click Claim. This assigns the task to the logged in user.
  13. Click Start and click choose file to locate the invalid.pdf hardware specification file. Click the upload button to upload the file.

    case management invalid spec
  14. Click Complete.

    The status of the Prepare hardware spec task in the Task Inbox is Ready.

  15. In Showcase, click Refresh in the upper-right corner. Notice that a Prepare hardware spec task appears in the Completed column and another appears in the In Progress column.

    case management new spec task

    This is because the first Prepare hardware spec task has been completed with the specification file invalid.pdf. As a result, the business rule causes the task and file to be discarded, and a new user task created.

  16. In the Business Central Task Inbox, repeat the previous steps to upload the valid-spec.pdf file instead of invalid.pdf.

Chapter 42. Case management security

Cases are configured at the case definition level with case roles. These are generic participants that are involved in case handling. These roles can be assigned to user tasks or used as contact references. Roles are not hard-coded to specific users or groups, to keep the case definition independent of the actual actors involved in any given case instance. You can modify case role assignments at any time as long as the case instance is active, though modifying a role assignment does not affect tasks already created based on the previous role assignment.

Case instance security is enabled by default. The case definition prevents case data from being accessed by users who do not belong to the case. Unless a user has a case role assignment, either directly as a user or as a member of an assigned group, they cannot access the case instance.

Case security is one of the reasons why it is recommended that you assign case roles when starting a case instance, as this will prevent tasks being assigned to users who should not have access to the case.

42.1. Configuring security for case management

You can turn off case instance authorization by setting the following system property to false:

org.jbpm.cases.auth.enabled
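
For example, if KIE Server runs on Red Hat JBoss EAP, a minimal sketch of the corresponding entry in the server's standalone-full.xml file might look like the following (adjust for your installation, or pass the property with -D on the startup command line instead):

<system-properties>
  <property name="org.jbpm.cases.auth.enabled" value="false"/>
</system-properties>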

This system property is just one of the security components for case instances. In addition, you can configure case operations at the execution server level using the case-authorization.properties file, available at the root of the class path of the execution server application (kie-server.war/WEB-INF/classes).

Using a simple configuration file for all possible case definitions encourages you to think about case management as domain-specific. AuthorizationManager for case security is pluggable, which allows you to include custom code for specific security handling.

You can restrict the following case instance operations to case roles:

  • CANCEL_CASE
  • DESTROY_CASE
  • REOPEN_CASE
  • ADD_TASK_TO_CASE
  • ADD_PROCESS_TO_CASE
  • ADD_DATA
  • REMOVE_DATA
  • MODIFY_ROLE_ASSIGNMENT
  • MODIFY_COMMENT

Prerequisites

  • The Red Hat Process Automation Manager KIE Server is not running.

Procedure

  1. Open the JBOSS_HOME/standalone/deployments/kie-server.war/WEB-INF/classes/case-authorization.properties file in your preferred editor.

    By default, the file contains the following operation restrictions:

    CLOSE_CASE=owner,admin
    CANCEL_CASE=owner,admin
    DESTROY_CASE=owner,admin
    REOPEN_CASE=owner,admin
  2. Add or remove role permissions for these operations as needed:

    1. To remove permission for a role to perform an operation, remove it from the list of authorized roles for that operation in the case-authorization.properties file. For example, removing the admin role from the CLOSE_CASE operation restricts permission to close a case to the case owner for all cases.
    2. To give a role permission to perform a case operation, add it to the list of authorized roles for that operation in the case-authorization.properties file. For example, to allow anyone with the manager role to perform a CLOSE_CASE operation, add it to the list of roles, separated by a comma:

      CLOSE_CASE=owner,admin,manager

  3. To add role restrictions to other case operations listed in the file, remove the # from the line and list the role names in the following format:

    OPERATION=role1,role2,roleN

    Restrictions for operations that begin with # in the file are ignored, and those operations can be performed by anyone involved in the case.

  4. When you have finished assigning role permissions, save and close the case-authorization.properties file.
  5. Start the execution server.

    The case authorization settings apply to all cases on the execution server.

Chapter 43. Closing cases

A case instance can be completed when there are no more activities to be performed and the business goal is achieved, or it can be closed prematurely. Usually the case owner closes the case when all work is completed and the case goals have been met. When you close a case, consider adding a comment about why the case instance is being closed.

A closed case can be reopened later with the same case ID if required. When a case is reopened, stages that were active when the case was closed will be active when the case is reopened.

You can close case instances remotely using KIE Server REST API requests or directly in the Showcase application.

43.1. Closing a case using the KIE Server REST API

You can use a REST API request to close a case instance. Red Hat Process Automation Manager includes the Swagger client, which provides endpoints and documentation for REST API requests. Alternatively, you can use the same endpoints to make API calls using your preferred client or curl.

Prerequisites

  • A case instance has been started using Showcase.
  • You are able to authenticate API requests as a user with the admin role.

Procedure

  1. Open the Swagger REST API client in a web browser:

    http://localhost:8080/kie-server/docs

  2. Under Case Instances :: Case Management, open the POST request with the following endpoint:

    /server/containers/{id}/cases/instances/{caseId}

  3. Click Try it out and fill in the required parameters:

    Table 43.1. Parameters

    Name       Description
    id         The container ID, for example itorders.
    caseId     The case instance ID, for example IT-0000000001.

  4. (Optional) Include a comment to be included in the case file. To leave a comment, type it into the body text field as a String.
  5. Click Execute to close the case.
  6. To confirm the case is closed, open the Showcase application and change the case list status to Closed.
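
If you prefer to make the call outside Swagger, the same close operation can be performed with curl. The following sketch assumes the standard KIE Server REST base path, example credentials (wbadmin), and the container and case IDs shown above; the quoted string in the request body becomes the closing comment:

curl -u wbadmin:password \
  -X POST \
  -H "Content-Type: application/json" \
  -d '"Closing case: business goal achieved"' \
  "http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001"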

43.2. Closing a case in the Showcase application

A case instance is complete when no more activities need to be performed and the business goal has been achieved. After a case is complete, you can close the case to indicate that the case is complete and that no further work is required. When you close a case, consider adding a specific comment about why you are closing the case. If needed, you can reopen the case later with the same case ID.

You can use the Showcase application to close a case instance at any time. From Showcase, you can easily view the details of the case or leave a comment before closing it.

Prerequisites

  • You are logged in to the Showcase application and are the owner or administrator for a case instance that you want to close.

Procedure

  1. In the Showcase application, locate the case instance you want to close from the list of case instances.
  2. To close the case without viewing the details first, click Close.
  3. To close the case from the case details page, click the case in the list to open it.

    From the case overview page you can add comments to the case and verify that you are closing the correct case based on the case information.

  4. Click Close to close the case.
  5. Click Back to Case List in the upper-left corner of the page to return to the Showcase case list view.
  6. Click the drop-down list next to Status and select Canceled to view the list of closed and canceled cases.

Chapter 44. Canceling or destroying a case

Cases can be canceled if they are no longer required and do not require any case work to be performed. Cases that are canceled can be reopened later with the same case instance ID and case file data. In some cases, you might want to permanently destroy a case so that it cannot be reopened.

Cases can only be canceled or destroyed using an API request. Red Hat Process Automation Manager includes the Swagger client, which provides endpoints and documentation for REST API requests. Alternatively, you can use the same endpoints to make API calls using your preferred client or curl.

Prerequisites

  • A case instance has been started using Showcase.
  • You are able to authenticate API requests as a user with the admin role.

Procedure

  1. Open the Swagger REST API client in a web browser:

    http://localhost:8080/kie-server/docs

  2. Under Case Instances :: Case Management, open the DELETE request with the following endpoint:

    /server/containers/{id}/cases/instances/{caseId}

    You can cancel a case using the DELETE request. Optionally, you can also destroy the case using the destroy parameter.

  3. Click Try it out and fill in the required parameters:

    Table 44.1. Parameters

    Name       Description
    id         The container ID, for example itorders.
    caseId     The case instance ID, for example IT-0000000001.
    destroy    Optional. Set to true to permanently destroy the case. This parameter is false by default.

  4. Click Execute to cancel (or destroy) the case.
  5. To confirm the case is canceled, open the Showcase application and change the case list status to Canceled. If the case has been destroyed, it will no longer appear in any case list.
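
The same cancel or destroy operation can be performed with curl instead of Swagger. This sketch assumes the standard KIE Server REST base path and example credentials (wbadmin); omit the destroy parameter, or set it to false, to cancel the case so that it can be reopened later:

curl -u wbadmin:password \
  -X DELETE \
  "http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001?destroy=true"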

44.1. Case log removal from the database

Use the CaseLogCleanupCommand to clean up cases, such as canceled cases that are using up database space. The command contains logic to automatically clean up all or selected cases.

You can use the following configuration options with the CaseLogCleanupCommand command:

Table 44.2. CaseLogCleanupCommand parameters table

Name | Description | Is Exclusive

SkipProcessLog

Indicates whether or not the process and node instances, along with the process variable log clean-up will be skipped when the command runs. Default value: false

No, can be used with other parameters

SkipTaskLog

Indicates whether or not the task audit, the task event, and the task variable log clean-up will be skipped when the command runs. Default value: false

No, can be used with other parameters

SkipExecutorLog

Indicates if the Red Hat Process Automation Manager executor entries clean-up will be skipped when the command runs. Default value: false

No, can be used with other parameters

SingleRun

Indicates if the job routine will run only once. Default value: false

No, can be used with other parameters

NextRun

Schedules the next job execution. For example, set to 12h for jobs to be executed every 12 hours. The schedule is ignored if you set SingleRun to true, unless you set both SingleRun and NextRun. If both are set, the NextRun schedule takes priority. The ISO format can be used to set the precise date. Default value: 24h

No, can be used with other parameters

OlderThan

Logs older than the specified date are removed. The date format is YYYY-MM-DD. Usually, this parameter is used for single run jobs.

Yes, cannot be used when the OlderThanPeriod parameter is used

OlderThanPeriod

Logs older than the specified timer expression are removed. For example, set 30d to remove logs older than 30 days.

Yes, cannot be used when the OlderThan parameter is used

ForCaseDefId

Specifies the case definition ID of the logs that are removed.

No, can be used with other parameters

ForDeployment

Specifies the deployment ID of the logs that are removed.

No, can be used with other parameters

EmfName

The persistence unit name used to perform the delete operation. Default value: org.jbpm.domain

N/A

DateFormat

Specifies the date format for time-related parameters. Default value: yyyy-MM-dd

No, can be used with other parameters

Status

Status of the case instances of the logs that are removed.

No, can be used with other parameters
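
For example, you can schedule the clean-up job through the process engine executor service. The following sketch assumes that an org.kie.api.executor.ExecutorService instance is available from your application environment; the fully qualified command class name is also an assumption, so verify it against your Red Hat Process Automation Manager distribution before use:

import org.kie.api.executor.CommandContext;
import org.kie.api.executor.ExecutorService;

public class CaseLogCleanupScheduler {

    public void scheduleCleanup(ExecutorService executorService) {
        CommandContext ctx = new CommandContext();
        ctx.setData("SingleRun", "true");           // run the clean-up job only once
        ctx.setData("OlderThanPeriod", "30d");      // remove logs older than 30 days
        ctx.setData("EmfName", "org.jbpm.domain");  // default persistence unit

        // The fully qualified class name below is an assumption; confirm the
        // package in your distribution before scheduling the job.
        executorService.scheduleRequest(
                "org.jbpm.casemgmt.impl.audit.CaseLogCleanupCommand", ctx);
    }
}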

Chapter 45. Additional resources

Part V. Using the Showcase application for case management

As a case worker or process administrator, you can use the Showcase application to manage and monitor case management applications while case work is carried out in Business Central.

Case management differs from business process management (BPM) in that it focuses on the actual data being handled throughout the case and less on the sequence of steps taken to complete a goal. Case data is the most important piece of information in case handling, while business context and decision-making are in the hands of the human case worker.

Use this document to install the Showcase application and start a case instance using the IT_Orders sample case management project in Business Central. Use Business Central to complete the tasks required to complete an IT Orders case.

Prerequisites

Chapter 46. Case management

Case management is an extension of Business Process Management (BPM) that enables you to manage adaptable business processes.

BPM is a management practice used to automate tasks that are repeatable and have a common pattern, with a focus on optimization by perfecting a process. Business processes are usually modeled with clearly defined paths leading to a business goal. This requires a lot of predictability, usually based on mass-production principles. However, many real-world applications cannot be described completely from start to finish (including all possible paths, deviations, and exceptions). Using a process-oriented approach in certain cases can lead to complex solutions that are hard to maintain.

Case management provides problem resolution for non-repeatable, unpredictable processes as opposed to the efficiency-oriented approach of BPM for routine, predictable tasks. It manages one-off situations when the process cannot be predicted in advance. A case definition usually consists of loosely coupled process fragments that can be connected directly or indirectly to lead to certain milestones and ultimately a business goal, while the process is managed dynamically in response to changes that occur during run time.

In Red Hat Process Automation Manager, case management includes the following core process engine features:

  • Case file instance
  • A per case runtime strategy
  • Case comments
  • Milestones
  • Stages
  • Ad hoc fragments
  • Dynamic tasks and processes
  • Case identifier (correlation key)
  • Case lifecycle (close, reopen, cancel, destroy)

A case definition is always an ad hoc process definition and does not require an explicit start node. The case definition is the main entry point for the business use case.

A process definition is introduced as a supporting construct of the case and can be invoked either as defined in the case definition or dynamically to bring in additional processing when required. A case definition defines the following new objects:

  • Activities (required)
  • Case file (required)
  • Milestones
  • Roles
  • Stages

Chapter 47. Case management Showcase application

The Showcase application is included in the Red Hat Process Automation Manager distribution to demonstrate the capabilities of case management in an application environment. Showcase is intended to be used as a proof of concept that aims to show the interaction between business process management (BPM) and case management. You can use the application to start, close, monitor, and interact with cases.

Showcase must be installed in addition to the Business Central application and KIE Server. The Showcase application is required to start new case instances; however, the case work is still performed in Business Central.

After a case instance is created and is being worked on, you can monitor the case in the Showcase application by clicking the case in the Case List to open the case Overview page.

Showcase Support

The Showcase application is not an integral part of Red Hat Process Automation Manager and is intended for demonstration purposes for case management. Showcase is provided to encourage customers to adopt and modify it to work for their specific needs. The content of the application itself does not carry product-specific Service Level Agreements (SLAs). We encourage you to report issues, request enhancements, and provide any other feedback for consideration in Showcase updates.

Red Hat Support will provide guidance on the use of this template on a commercially reasonable basis for its intended use, excluding the example UI code provided within.

Note

Production support is limited to the Red Hat Process Automation Manager distribution.

Chapter 48. Installing and logging in to the Showcase application

The Showcase application is included with the Red Hat Process Automation Manager 7.9 distribution in the add-ons Zip file. The purpose of this application is to demonstrate the functionality of case management in Red Hat Process Automation Manager and enable you to interact with cases created in Business Central. You can install the Showcase application in a Red Hat JBoss Enterprise Application Platform instance or on OpenShift. This procedure describes how to install the Showcase application in Red Hat JBoss EAP.

Prerequisites

  • Business Central and KIE Server are installed in a Red Hat JBoss EAP instance.
  • You have created a user with kie-server and user roles. Only users with the user role are able to log in to the Showcase application. Users also require the kie-server role to perform remote operations on the running KIE Server.
  • Business Central is not running.

Procedure

  1. Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options:

    • Product: Red Hat Process Automation Manager
    • Version: 7.9
  2. Download Red Hat Process Automation Manager 7.9 Add Ons (rhpam-7.9.1-add-ons.zip).
  3. Extract the rhpam-7.9.1-add-ons.zip file. The rhpam-7.9-case-mgmt-showcase-eap7-deployable.zip file is in the unzipped directory.
  4. Extract the rhpam-7.9-case-mgmt-showcase-eap7-deployable.zip archive to a temporary directory. In the following examples this directory is called TEMP_DIR.
  5. Copy the contents of the TEMP_DIR/rhpam-7.9-case-mgmt-showcase-eap7-deployable/jboss-eap-7.3 directory to EAP_HOME.

    When asked to overwrite files or merge directories, select Yes.

    Warning

    Ensure the names of the Red Hat Process Automation Manager deployments you copy do not conflict with your existing deployments in the Red Hat JBoss EAP instance.

  6. Add the following system property to the <system-properties> element of your EAP_HOME/standalone/configuration/standalone-full.xml file:

    <property name="org.jbpm.casemgmt.showcase.url" value="/rhpam-case-mgmt-showcase"/>

  7. In a terminal application, navigate to EAP_HOME/bin and run the standalone configuration to start Business Central:

    ./standalone.sh -c standalone-full.xml

  8. In a web browser, enter localhost:8080/business-central.

    If Red Hat Process Automation Manager has been configured to run from a domain name, replace localhost with the domain name, for example:

    http://www.example.com:8080/business-central

  9. In the upper-right corner in Business Central, click the Apps launcher button to launch the Case Management Showcase in a new browser window.

    apps launcher showcase button
  10. Log in to the Showcase application using your Business Central user credentials.

Chapter 49. Case roles

Case roles provide an additional layer of abstraction for user participation in case handling. Roles, users, and groups are used for different purposes in case management.

Roles
Roles drive the authorization for a case instance and are used for user activity assignments. A user or one or more groups can be assigned to the owner role. The owner is whoever the case belongs to. Roles are not restricted to a single set of people or groups as part of a case definition. Use roles to specify task assignments instead of assigning a specific user or group to a task assignment to ensure that the case remains dynamic.
Groups
A group is a collection of users who are able to carry out a particular task or have a set of specified responsibilities. You can assign any number of people to a group and assign any group to a role. You can add or change members of a group at any time. Do not hard code a group to a particular task.
Users

A user is an individual who can be given a particular task when you assign them a role or add them to a group.

Note

Do not create a user called unknown in the process engine or KIE Server. The unknown user account is a reserved system name with superuser access. The unknown user account performs tasks related to the SLA violation listener when there are no users logged in.

The following example illustrates how the preceding case management concepts apply to a hotel reservation with the following information:

  • Role: Guest
  • Group: Receptionist, Maid
  • User: Marilyn

The Guest role assignment affects the specific work of the associated case and is unique to all case instances. The number of users or groups that can be assigned to a role is limited by the case Cardinality, which is set during role creation in the process designer and case definition. For example, the hotel reservation case has only one guest while the IT_Orders sample project has two suppliers of IT hardware.

When roles are defined, ensure that roles are not hard-coded to a single set of people or groups as part of case definition and that they can differ for each case instance. This is why case role assignments are important.

Roles can be assigned or removed when a case starts or at any time while a case is active. Although roles are optional, use roles in case definitions to maintain an organized workflow.

Important

Always use roles for task assignments instead of actual user or group names. This ensures that the case remains dynamic and actual user or group assignments can be made as late as required.

Roles are assigned to users or groups and authorized to perform tasks when a case instance is started.

Chapter 50. Starting dynamic tasks and processes

You can add dynamic tasks and processes to a case during run time. Dynamic actions are a way to address changing situations, where an unanticipated change during the case requires a new task or process to be incorporated into the case.

Use a case application to add a dynamic task during run time. For demonstration purposes, the Business Central distribution includes a Showcase application where you can start a new dynamic task or process for the IT Orders application.

Prerequisites

  • KIE Server is deployed and connected to Business Central.
  • The IT Orders project is deployed to KIE Server.
  • The Showcase application .war file has been deployed alongside Business Central.

Procedure

  1. With the IT_Orders_New project deployed and running in the KIE Server, in a web browser, navigate to the Showcase login page http://localhost:8080/rhpam-case-mgmt-showcase/.

    Alternatively, if you have configured Business Central to display the Apps launcher button, use it to open a new browser window with the Showcase login page.

    apps launcher showcase button
  2. Log in to the Showcase application using your Business Central login credentials.
  3. Select an active case instance from the list to open it.
  4. Under OverviewActionsAvailable, click the dotdotdotbutton button next to New user task or New process task to add a new task or process task.

    Figure 50.1. Showcase dynamic actions

    showcase dynamic actions
    • To create a dynamic user task, start a New user task and complete the required information:

      showcase dynamic user task
    • To create a dynamic process task, start a New process task and complete the required information:

      showcase dynamic process task
  5. To view a dynamic user task in Business Central, click MenuTrackTask Inbox. The user task that was added dynamically using the Showcase application appears in the Task Inbox of users assigned to the task during task creation.

    task inbox dynamic task
    1. Click the dynamic task in the Task Inbox to open the task. A number of action tabs are available from this page.
    2. Using the actions available under the task tabs, you can begin working on the task.
    3. In the Showcase application, click the refresh button in the upper-right corner. Case tasks and processes that are in progress appear under OverviewActionsIn progress.
    4. When you have completed working on the task, click the Complete button under the Work tab.
    5. In the Showcase application, click the refresh button in the upper-right corner. The completed task appears under OverviewActionsCompleted.
  6. To view a dynamic process task in Business Central, click MenuManageProcess Instances.

    dynamic process instance
    1. Click the dynamic process instance in the list of available process instances to view information about the process instance.
    2. In the Showcase application, click the refresh button in the upper-right corner. Case tasks and processes that are in progress appear under OverviewActionsIn progress.

Chapter 51. Starting an IT Orders case in the Showcase application

You can start a new case instance for the IT Orders sample case management project in the Showcase application.

The IT Orders sample case management project includes the following roles:

  • owner: The employee who is making the hardware order request. Only one user can be assigned to this role.
  • manager: The employee’s manager; the person who will approve or deny the requested hardware. There is only one manager in the IT Orders project.
  • supplier: The available suppliers of IT hardware in the system. There is usually more than one supplier.

These roles are configured at the case definition level:

Figure 51.1. ITOrders Case Roles

Case Roles

Assign users or groups to these roles when starting a new case file instance.

Prerequisites

Procedure

  1. In the Showcase application, start a new case instance by clicking the Start Case button.
  2. Select the Order for IT hardware case name from the list and complete the role information as shown:

    showcase start case

    In this example, Aimee is the case owner, Katy is the manager, and the supplier group is supplier.

  3. Click Start to start the case instance.
  4. Select the case from the Case List. The Overview page opens.

    From the Overview page, you can monitor the case progress, add comments, start new dynamic tasks and processes, and complete and close cases.

    case management showcase overview
Note

Cases can be started and closed using the Showcase application, but they cannot be reopened using this application. You can only reopen a case using a JMS or REST API call.

Chapter 52. Completing the IT_Orders case using Showcase and Business Central

When a case instance is started using the Showcase application, tasks that are configured as AdHoc Autostart in the case definition are automatically assigned and made available to users with the role assignment for each task. Case workers can then work on the tasks in Business Central and complete them to move the case forward.

In the IT_Orders case project, the following case definition nodes are configured with the AdHoc Autostart property:

  • Prepare hardware spec
  • Hardware spec ready
  • Manager decision
  • Milestone 1: Order placed

Of these, the only user task is Prepare hardware spec, which is assigned to the supplier group. This is the first human task to be completed in the IT Orders case. When this task is complete, the Manager approval task becomes available to the user assigned to the manager role, and after the rest of the case work is finished, the Customer satisfaction survey task is assigned to the case owner for completion.

Prerequisites

  • As the wbadmin user, you have started an IT_Orders case in the Showcase application.

Procedure

  1. Log out of Business Central and log back in as a user that belongs to the supplier group.
  2. Go to MenuTrackTask Inbox.
  3. Open the Prepare hardware spec task and click Claim. This assigns the task to the logged-in user.
  4. Click Start and click choose file to locate the hardware specification file. Click the upload button to upload the file.

    case management valid spec
  5. Click Complete.
  6. In Showcase, click Refresh in the upper-right corner. Notice that the Prepare hardware spec user task and the Hardware spec ready milestone appear in the Completed column.

    case management ordered
  7. In Business Central, go to MenuTrackTask Inbox. Open the Manager approval task for wbadmin.

    1. Click Claim and then click Start.
    2. Select the approve check box for the task that includes the valid-spec.pdf file, and then click Complete.
  8. Go to MenuManageProcess Instances and open the Order for IT hardware process instance.

    1. Open the Diagram tab. Note that the Place order task is complete.
    2. Refresh the Showcase page to see that the Manager approval task and the Manager decision milestone are in the Completed column. The Milestones pane in the lower-left corner of the Showcase overview page also shows the completed and pending milestones.

      showcase milestones ordered
  9. In Business Central, go to MenuManageTasks. Click the Place order task to open it.

    1. Click Claim and then click Start.
    2. Select the Is order placed check box and click Complete.

      itorders order placed

      The process instance diagram now shows the Milestone 2: Order shipped case progress:

    3. Refresh the Showcase page to view the case progress.
  10. Go to MenuManageProcess Instances and open the Order for IT hardware.

    1. Open the Process Variables tab. Locate the caseFile_shipped variable and click Edit.
    2. In the Edit window, type true and click Save.

      itorders shipped variable
    3. Refresh the Showcase page. Note that the Milestone 2: Order shipped milestone is shown as Completed.

      The final milestone, Milestone 3: Delivered to customer, is In progress.

  11. Go to MenuManageProcess Instances and open the Order for IT hardware.

    1. Open the Process Variables tab. Locate the caseFile_delivered variable and click Edit.
    2. In the Edit window, type true and click Save.
    3. Refresh the Showcase page. Note that the Milestone 3: Delivered to customer milestone is shown as Completed. All milestones under the Milestones pane in the lower-left corner are shown as complete.

      The final task of the IT Orders case, Customer satisfaction survey, is shown under In progress.

      itorders customer survey
  12. In Business Central, go to MenuTrackTask Inbox. Click the Customer satisfaction survey task to open it.

    This task is already reserved for wbadmin.

  13. Click Start and fill out the survey.

    itorders complete survey
  14. Click Complete.
  15. Go to MenuManageProcess Instances and open the Order for IT hardware process instance.

    1. Open the Diagram tab. This shows that all required case process nodes are complete and there is nothing left to do for this case instance.
    2. Refresh the Showcase page and note that there are no actions under In progress.
  16. In Showcase, type a comment into the field under Comments. Click the round plus button to add the comment to the case file.

    itorders comment
  17. Click Close in the upper-right corner of the Showcase page to complete and close the case.

Chapter 53. Additional resources

Part VI. Custom tasks and work item handlers in Business Central

As a business rules developer, you can create custom tasks and work item handlers in Business Central to execute custom code within your process flows and extend the operations available for use in Red Hat Process Automation Manager. You can use custom tasks to develop operations that Red Hat Process Automation Manager does not directly provide and include them in process diagrams.

In Business Central, each task in a process diagram has a WorkItem Java class with an associated WorkItemHandler Java class. The work item handler contains Java code registered with Business Central and implements org.kie.api.runtime.process.WorkItemHandler.

The Java code of the work item handler is executed when the task is triggered. You can customize and register a work item handler to execute your own Java code in custom tasks.

Prerequisites

  • Business Central is deployed and is running on a web or application server.
  • You are logged in to Business Central.
  • Maven is installed.
  • The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories.
  • Your system has access to the Red Hat Maven repository either locally or online.

Chapter 54. Managing custom tasks in Business Central

Custom tasks (work items) are tasks that you can customize and reuse across multiple business processes or across all projects in Business Central. Red Hat Process Automation Manager provides a set of custom tasks within the custom task repository in Business Central. You can enable or disable the default custom tasks and upload custom tasks into Business Central to implement the tasks in the relevant processes.

Note

Red Hat Process Automation Manager includes a limited set of supported custom tasks. Custom tasks that are not included in Red Hat Process Automation Manager are not supported.

Procedure

  1. In Business Central, click the gear icon in the upper-right corner and select Custom Tasks Administration.

    This page lists the custom task installation settings and available custom tasks for processes in projects throughout Business Central. The custom tasks that you enable on this page become available in the project-level settings where you can then install each custom task to be used in processes. The way in which the custom tasks are installed in a project is determined by the global settings that you enable or disable under Settings on this Custom Tasks Administration page.

  2. Under Settings, enable or disable each setting to determine how the available custom tasks are implemented when a user installs them at the project level.

    The following custom task settings are available:

    • Install as Maven artifact: Uploads the custom task JAR file to the Maven repository that is configured with Business Central, if the file is not already present.
    • Install custom task dependencies into project: Adds any custom task dependencies to the pom.xml file of the project where the task is installed.
    • Use version range when installing custom task into project: Uses a version range instead of a fixed version of a custom task that is added as a project dependency. Example: [7.16,) instead of 7.16.0.Final
  3. Enable or disable (set to ON or OFF) any available custom tasks as needed. Custom tasks that you enable are displayed in project-level settings for all projects in Business Central.

    Figure 54.1. Enable custom tasks and custom task settings

    Custom Tasks Administration page
  4. To add a custom task, click Add Custom Task, browse to the relevant JAR file, and click the Upload icon. The JAR file must contain work item handler implementations annotated with @Wid.
  5. Optionally, to remove a custom task, click remove on the row of the custom task you want to remove and click Ok to confirm removal.
  6. After you configure all required custom tasks, navigate to a project in Business Central and go to the project SettingsCustom Tasks page to view the available custom tasks that you enabled.
  7. For each custom task, click Install to make the task available to the processes in that project or click Uninstall to exclude the task from the processes in the project.
  8. If you are prompted for additional information when you install a custom task, enter the required information and click Install again.

    The required parameters for the custom task depend on the type of task. For example, rule and decision tasks require artifact GAV information (Group ID, Artifact ID, Version), email tasks require host and port access information, and REST tasks require API credentials. Other custom tasks might not require any additional parameters.

    Figure 54.2. Install custom tasks for use in processes

    Project-level custom task settings
  9. Click Save.
  10. Return to the project page, select or add a business process in the project, and in the process designer palette, select the Custom Tasks option to view the available custom tasks that you enabled and installed:

    Figure 54.3. Access installed custom tasks in process designer

    Custom tasks in process designer

Chapter 55. Creating work item handler projects

Create the software project to contain all configurations, mappings, and executable code for the custom task.

You can create a work item handler from scratch or use a Maven archetype to create an example project. Red Hat Process Automation Manager provides the jbpm-workitems-archetype from the Red Hat Maven repository for this purpose.

Procedure

  1. Open the command line and create a directory where you will build your work item handler, such as workitem-home:

    $ mkdir workitem-home
  2. Check the Maven settings.xml file and ensure that the Red Hat Maven repository is included in the repository list.

    Note

    Setting up Maven is outside the scope of this guide.

    For example, to add the online Red Hat Maven repository to your Maven settings.xml file:

    <settings>
      <profiles>
        <profile>
          <id>my-profile</id>
          <activation>
            <activeByDefault>true</activeByDefault>
          </activation>
          <repositories>
            <repository>
              <id>redhat-ga</id>
              <url>http://maven.repository.redhat.com/ga/</url>
              <snapshots>
                <enabled>false</enabled>
              </snapshots>
              <releases>
                <enabled>true</enabled>
              </releases>
            </repository>
            ...
          </repositories>
        </profile>
      </profiles>
      ...
    </settings>
  3. Find the Red Hat library version and perform one of the following tasks:

  4. In the workitem-home directory, execute the following command:

    $ mvn archetype:generate \
    -DarchetypeGroupId=org.jbpm \
    -DarchetypeArtifactId=jbpm-workitems-archetype \
    -DarchetypeVersion=<redhat-library-version> \
    -Dversion=1.0.0-SNAPSHOT \
    -DgroupId=com.redhat \
    -DartifactId=myworkitem \
    -DclassPrefix=MyWorkItem

    Table 55.1. Parameter descriptions

    Parameter | Description

    -DarchetypeGroupId

    Specific to the archetype and must remain unchanged.

    -DarchetypeArtifactId

    Specific to the archetype and must remain unchanged.

    -DarchetypeVersion

    Red Hat library version that is searched for when Maven attempts to download the jbpm-workitems-archetype artifact.

    -Dversion

    Version of your specific project. For example, 1.0.0-SNAPSHOT.

    -DgroupId

    Maven group of your specific project. For example, com.redhat.

    -DartifactId

    Maven ID of your specific project. For example, myworkitem.

    -DclassPrefix

    String added to the beginning of Java classes when Maven generates the classes for easier identification. For example, MyWorkItem.

    A myworkitem folder is created in the workitem-home directory. For example:

    assembly/
      assembly.xml
    src/
      main/
        java/
          com/
            redhat/
              MyWorkItemWorkItemHandler.java
        repository/
        resources/
      test/
        java/
          com/
            redhat/
              MyWorkItemWorkItemHandlerTest.java
              MyWorkItemWorkItemIntegrationTest.java
        resources/
          com/
            redhat/
    pom.xml
  5. Add any Maven dependencies required by the work item handler class to the pom.xml file.
  6. To create a deployable JAR for this project, in the parent project folder where the pom.xml file is located, execute the following command:

    $ mvn clean package

    Several files are created in the target/ directory, including the following two main files:

    Table 55.2. File descriptions

    File | Description

    myworkitems-<version>.jar

    Used for direct deployment to Red Hat Process Automation Manager.

    myworkitems-<version>.zip

    Used for deployment using a service repository.

Chapter 56. Work item handler project customization

You can customize the code of a work item handler project. A work item handler requires two Java methods: executeWorkItem and abortWorkItem.

Table 56.1. Java method descriptions

Java Method | Description

executeWorkItem(WorkItem workItem, WorkItemManager manager)

Executed by default when the work item handler is run.

abortWorkItem(WorkItem workItem, WorkItemManager manager)

Executed when the work item is aborted.

In both methods, the WorkItem parameter contains any of the parameters entered into the custom task through a GUI or API call, and the WorkItemManager parameter is responsible for tracking the state of the custom task.

Example code structure

public class MyWorkItemWorkItemHandler extends AbstractLogOrThrowWorkItemHandler {

  public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
    try {
      RequiredParameterValidator.validate(this.getClass(), workItem);

      // sample parameters
      String sampleParam = (String) workItem.getParameter("SampleParam");
      String sampleParamTwo = (String) workItem.getParameter("SampleParamTwo");

      // complete workitem impl...

      // return results
      String sampleResult = "sample result";
      Map<String, Object> results = new HashMap<String, Object>();
      results.put("SampleResult", sampleResult);
      manager.completeWorkItem(workItem.getId(), results);
    } catch (Throwable cause) {
      handleException(cause);
    }
  }

  @Override
  public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
    // similar
  }
}

Table 56.2. Parameter descriptions

Parameter | Description

RequiredParameterValidator.validate(this.getClass(), workItem);

Checks that all parameters marked “required” are present. If they are not, an IllegalArgumentException is thrown.

String sampleParam = (String) workItem.getParameter("SampleParam");

Example of getting a parameter from the WorkItem instance. The parameter name is always a string, for example SampleParam, but the object associated with it can be of many types and might require a cast to avoid errors.

// complete workitem impl…

Executes the custom Java code when a parameter is received.

results.put("SampleResult", sampleResult);

Passes results to the custom task. The results are placed in the data output areas of the custom task.

manager.completeWorkItem(workItem.getId(), results);

Marks the work item handler as complete. The WorkItemManager controls the state of the work item and is responsible for getting the WorkItem ID and associating the results with the correct custom task.

abortWorkItem()

Aborts the custom Java code. May be left blank if the work item is not designed to be aborted.
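
For reference, a minimal abortWorkItem implementation typically only needs to notify the process engine that the work item was aborted. The following sketch assumes no external work needs to be rolled back:

@Override
public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
    // Cancel any external work started in executeWorkItem here, then notify
    // the process engine that this work item was aborted.
    manager.abortWorkItem(workItem.getId());
}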

Note

Red Hat Process Automation Manager includes a limited set of supported custom tasks. Custom tasks that are not included in Red Hat Process Automation Manager are not supported.

Chapter 57. Work item definitions

Red Hat Process Automation Manager requires a work item definition (WID) file to identify the data fields to show in Business Central and accept API calls. The WID file is a mapping between user interactions with Red Hat Process Automation Manager and the data that is passed to the work item handler. The WID file also defines the UI details such as the name of the custom task, the category under which it is displayed in the Business Central palette, the icon used to designate the custom task, and the work item handler that the custom task maps to.

In Red Hat Process Automation Manager you can create a WID file in two ways:

  • Use a @Wid annotation when coding the work item handler.
  • Create a .wid text file. For example, definitions-example.wid.

57.1. @Wid Annotation

The @Wid annotation is automatically created when you generate a work item handler project using the Maven archetype. You can also add the annotation manually.

@Wid Example

@Wid(widfile="MyWorkItemDefinitions.wid",
    name="MyWorkItemDefinitions",
    displayName="MyWorkItemDefinitions",
    icon="",
    defaultHandler="mvel: new com.redhat.MyWorkItemWorkItemHandler()",
    documentation = "myworkitem/index.html",
    parameters={
      @WidParameter(name="SampleParam", required = true),
      @WidParameter(name="SampleParamTwo", required = true)
    },
    results={
      @WidResult(name="SampleResult")
    },
    mavenDepends={
      @WidMavenDepends(group="com.redhat",
      artifact="myworkitem",
      version="7.26.0.Final-example-00004")
    },
    serviceInfo={
      @WidService(category = "myworkitem",
      description = "${description}",
      keywords = "",
      action = @WidAction(title = "Sample Title"),
      authinfo = @WidAuth(required = true,
      params = {"SampleParam", "SampleParamTwo"},
      paramsdescription = {"SampleParam", "SampleParamTwo"},
      referencesite = "referenceSiteURL"))
    }
)

Table 57.1. @Wid descriptions

Name | Description

@Wid

Top-level annotation to auto-generate WID files.

widfile

Name of the file that is automatically created for the custom task when it is deployed in Red Hat Process Automation Manager.

name

Name of the custom task, used internally. This name must be unique to custom tasks deployed in Red Hat Process Automation Manager.

displayName

Displayed name of the custom task. This name is displayed in the palette in Business Central.

icon

Path from src/main/resources/ to an icon located in the current project. The icon is displayed in the palette in Business Central. The icon, if specified, must be a PNG or GIF file and 16x16 pixels. This value can be left blank to use a default “Service Task” icon.

description

Description of the custom task.

defaultHandler

The work item handler Java class that is linked to the custom task. This entry is in the format <language> : <class>. Red Hat Process Automation Manager recommends using mvel as the language value for this attribute, but java can also be used. For more information about mvel, see MVEL Documentation.

documentation

Path to an HTML file in the current project that contains a description of the custom task.

@WidParameter

Child annotation of @Wid. Specifies values that will be populated in the Business Central GUI or expected by API calls as data inputs for the custom task. More than one parameter can be specified:

name - A name for the parameter.

Note

Due to the possibility of this name being used in API calls over transfer methods such as REST or SOAP, this name should not contain spaces or special characters.

required - Boolean value indicating whether the parameter is required for the custom task to execute.

@WidResult

Child annotation of @Wid. Specifies values that will be populated in the Business Central GUI or expected by API calls as data outputs for the custom task. You can specify more than one result:

name - A name for the result.

Note

Due to the possibility of this name being used in API calls over transfer methods such as REST or SOAP, this name should not contain spaces or special characters.

@WidMavenDepends

Child annotation of @Wid. Specifies Maven dependencies that will be required for the correct functioning of the work item handler. You can specify more than one dependency:

group - Maven group ID of the dependency.

artifact - Maven artifact ID of the dependency.

version - Maven version number of the dependency.

@WidService

Child annotation of @Wid. Specifies values that will be populated in the service repository.

category - The UI palette category in which the handler will be placed. This value should match the category field of the @Wid annotation.

description - Description of the handler that will be displayed in the service repository.

keywords - Comma-separated list of keywords that apply to the handler. Note: Currently not used by the Business Central service repository.

action - The @WidAction object. Contains the fields title and description.

authinfo - The @WidAuth object. Optional. Contains the fields required, params, paramsdescription, referencesite.

@WidAction

Object of @WidService that describes the handler purpose.

title - The title for the handler action.

description - The description for the handler action.

@WidAuth

Object of @WidService that defines the authentication required by the handler.

required - The boolean value that determines whether authentication is required.

params - The array containing the authentication parameters required.

paramsdescription - The array containing the descriptions for each authentication parameter.

referencesite - The URL to where the handler documentation can be found. Note: Currently not used by the Business Central service repository.

57.2. Text File

A global WorkDefinitions WID text file is automatically generated by new projects when a business process is added. The WID text file is similar to the JSON format but is not a completely valid JSON file. You can open this file in Business Central. You can create additional WID files by selecting Add Asset > Work item definitions from an existing project.

Text file example

[
  [
    "name" : "MyWorkItemDefinitions",
    "displayName" : "MyWorkItemDefinitions",
    "category" : "",
    "description" : "",
    "defaultHandler" : "mvel: new com.redhat.MyWorkItemWorkItemHandler()",
    "documentation" : "myworkitem/index.html",
    "parameters" : [
      "SampleParam" : new StringDataType(),
      "SampleParamTwo" : new StringDataType()
    ],
    "results" : [
      "SampleResult" : new StringDataType()
    ],
    "mavenDependencies" : [
      "com.redhat:myworkitem:7.26.0.Final-example-00004"
    ],
    "icon" : ""
  ]
]

The file is structured as a plain-text file using a JSON-like structure. The filename extension is .wid.

Table 57.2. Text file descriptions

Name | Description

name

Name of the custom task, used internally. This name must be unique to custom tasks deployed in Red Hat Process Automation Manager.

displayName

Displayed name of the custom task. This name is displayed in the palette in Business Central.

icon

Path from src/main/resources/ to an icon located in the current project. The icon is displayed in the palette in Business Central. The icon, if specified, must be a PNG or GIF file and 16x16 pixels. This value can be left blank to use a default “Service Task” icon.

category

Name of a category within the Business Central palette under which this custom task is displayed.

description

Description of the custom task.

defaultHandler

The work item handler Java class that is linked to the custom task. This entry is in the format <language> : <class>. Red Hat Process Automation Manager recommends using mvel as the language value for this attribute, but java can also be used. For more information about mvel, see MVEL Documentation.

documentation

Path to an HTML file in the current project that contains a description of the custom task.

parameters

Specifies the values to be populated in the Business Central GUI or expected by API calls as data inputs for the custom task. Parameters use the <key> : <DataType> format. Accepted data types are StringDataType(), IntegerDataType(), and ObjectDataType(). More than one parameter can be specified.

results

Specifies the values to be populated in the Business Central GUI or expected by API calls as data outputs for the custom task. Results use the <key> : <DataType> format. Accepted data types are StringDataType(), IntegerDataType(), and ObjectDataType(). More than one result can be specified.

mavenDependencies

Optional: Specifies Maven dependencies required for the correct functioning of the work item handler. Dependencies can also be specified in the work item handler pom.xml file. Dependencies are in the format <group>:<artifact>:<version>. More than one dependency may be specified.

Red Hat Process Automation Manager tries to locate a *.wid file in two locations by default:

  • Within Business Central in the project’s top-level global/ directory. This is the location of the default WorkDefinitions.wid file that is created automatically when a project first adds a business process asset.
  • Within Business Central in the project’s src/main/resources/ directory. This is where WID files created within a project in Business Central will be placed. A WID file may be created at any level of a Java package, so a WID file created at a package location of <default> will be created directly inside src/main/resources/, while a WID file created at a package location of com.redhat will be created at src/main/resources/com/redhat/.
Warning

Red Hat Process Automation Manager does not validate that the value for the defaultHandler tag is executable or is a valid Java class. Specifying incorrect or invalid classes for this tag will return errors.

Chapter 58. Deploying custom tasks

Work item handlers, as custom code, are created outside of Red Hat Process Automation Manager. To use the code in your custom task, the code must be deployed to the server. Work item handler projects must be Java JAR files that can be placed into a Maven repository.

In Red Hat Process Automation Manager you can deploy custom tasks using three methods:

  • Within a Business Central custom task repository. For more information, see Chapter 54, Managing custom tasks in Business Central.
  • Within Business Central where you can use both the legacy and current editors to upload the work item handler JAR to the Business Central Maven repository as an artifact.
  • Outside of Business Central, you can manually copy the JAR files into the Maven repository.

58.1. Using a Business Central custom task repository

You can enable, disable, and deploy custom tasks within a Business Central custom task repository. For more information, see Chapter 54, Managing custom tasks in Business Central.

58.2. Uploading JAR Artifact to Business Central

You can upload the work item handler JAR to the Business Central Maven repository as an artifact by using the legacy and current editors.

Procedure

  1. In Business Central, select the Admin icon in the top-right corner of the screen and select Artifacts.
  2. Click Upload.
  3. In the Artifact Upload window, click the Choose File icon.
  4. Navigate to the location of the work item handler JAR, select the file and click Open.
  5. In the pop-up dialog, click the Upload icon.

    The artifact is uploaded and can now be viewed on the Artifacts page and referenced.

58.3. Manually copying work item definitions to Business Central Maven repository

Business Central automatically creates or reuses the Maven repository folder. By default, the location is based on the location from which the user launched Red Hat JBoss EAP. For example, the full default path would be <startup location>/repositories/kie/global. You can replicate a standard Maven repository folder layout of <groupId>/<artifactId>/<versionId>/ in this folder and copy work item handler JAR files to this location. For example:

<startup location>/repositories/kie/global/com/redhat/myworkitem/1.0.0-SNAPSHOT/myworkitems-1.0.0-SNAPSHOT.jar
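
For example, assuming your work item handler project produced myworkitems-1.0.0-SNAPSHOT.jar in its target/ directory, you might copy it into place as follows (adjust <startup location> for your environment):

$ mkdir -p "<startup location>/repositories/kie/global/com/redhat/myworkitem/1.0.0-SNAPSHOT"
$ cp target/myworkitems-1.0.0-SNAPSHOT.jar "<startup location>/repositories/kie/global/com/redhat/myworkitem/1.0.0-SNAPSHOT/"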

Any artifacts copied in this fashion are available to Red Hat Process Automation Manager without a server restart. Viewing the artifact in the Business Central Artifacts page requires clicking Refresh.

Chapter 59. Registering custom tasks

Red Hat Process Automation Manager must know how to associate a custom task work item with the code executed by the work item handler. The work item definition file links the custom task with the work item handler by name and Java class. The work item handler’s Java class has to be registered as usable in Red Hat Process Automation Manager.

Note

Service repositories contain domain-specific services that provide integration of your processes with different types of systems. Registering a custom task is not necessary when using a service repository because the import process registers the custom task.

Red Hat Process Automation Manager creates a WID file by default for projects that contain at least one business process. You can create a WID file when registering a work item handler or edit the default WID file. For more information about WID file locations or formatting, see Chapter 57, Work item definitions.

For non-service repository deployments, work item handlers can be registered in two ways:

  • Registering using the deployment descriptor.
  • Registering using Spring component registration.

59.1. Registering custom tasks using the deployment descriptor inside Business Central

You can register a custom task work item with the work item handler using the deployment descriptor in Business Central.

Procedure

  1. In Business Central, go to MenuDesignProjects and select the project name.
  2. In the project pane, select SettingsDeploymentsWork Item Handlers.
  3. Click Add Work Item Handler.
  4. In the Name field, enter the display name for the custom task.
  5. From the Resolver list, select MVEL, Reflection, or Spring.
  6. In the Value field, enter the value based on the resolver type:

    • For MVEL, use the format new <full Java package>.<Java work item handler class name>()

      Example: new com.redhat.MyWorkItemWorkItemHandler()

    • For Reflection, use the format <full Java package>.<Java work item handler class name>

      Example: com.redhat.MyWorkItemWorkItemHandler

    • For Spring, use the format <Spring bean identifier>

      Example: workItemSpringBean

    Note

    The value fields may be filled automatically.

  7. Click Save to save your changes.

59.2. Registering custom tasks using the deployment descriptor outside Business Central

You can register a custom task work item with the work item handler using the deployment descriptor outside Business Central.

Procedure

  1. Open the file src/main/resources/META-INF/kie-deployment-descriptor.xml.
  2. Add the following content based on the resolver type under <work-item-handlers>:

    • For MVEL, add the following:

      <work-item-handler>
        <resolver>mvel</resolver>
        <identifier>new com.redhat.MyWorkItemWorkItemHandler()</identifier>
        <parameters/>
        <name>MyWorkItem</name>
      </work-item-handler>
    • For Reflection, add the following:

      <work-item-handler>
        <resolver>reflection</resolver>
        <identifier>com.redhat.MyWorkItemWorkItemHandler</identifier>
        <parameters/>
        <name>MyWorkItem</name>
      </work-item-handler>
    • For Spring, add the following and ensure the identifier is the identifier of a Spring bean:

      <work-item-handler>
        <resolver>spring</resolver>
        <identifier>beanIdentifier</identifier>
        <parameters/>
        <name>MyWorkItem</name>
      </work-item-handler>
      Note

      If you are using Spring to discover and configure Spring beans, you can use the org.springframework.stereotype.Component annotation to automatically register work item handlers.

      Within a work item handler, add the annotation @Component("<Name>") before the declaration of the work item handler class. For example: @Component("MyWorkItem") public class MyWorkItemWorkItemHandler extends AbstractLogOrThrowWorkItemHandler {
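
      The following sketch shows what a complete annotated handler class might look like. It is a minimal example, not the definitive implementation: the package, class name, and the Message parameter are illustrative, and the sketch assumes that the Spring and jBPM work item libraries are available on the class path.

      package com.redhat;

      import org.jbpm.process.workitem.core.AbstractLogOrThrowWorkItemHandler;
      import org.kie.api.runtime.process.WorkItem;
      import org.kie.api.runtime.process.WorkItemManager;
      import org.springframework.stereotype.Component;

      // Registered under the name "MyWorkItem" when Spring component scanning is enabled
      @Component("MyWorkItem")
      public class MyWorkItemWorkItemHandler extends AbstractLogOrThrowWorkItemHandler {

          @Override
          public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
              try {
                  // Read an input parameter defined in the WID file ("Message" is an illustrative name)
                  String message = (String) workItem.getParameter("Message");
                  System.out.println("Executing custom task: " + message);

                  // Notify the process engine that the work item is complete; pass output data if any
                  manager.completeWorkItem(workItem.getId(), null);
              } catch (Exception e) {
                  // Provided by AbstractLogOrThrowWorkItemHandler: logs or rethrows the exception
                  handleException(e);
              }
          }

          @Override
          public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
              // No cleanup is needed in this sketch
          }
      }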

Chapter 60. Placing custom tasks

When a custom task is registered in Red Hat Process Automation Manager, it appears in the process designer palette. The custom task is named and categorized according to the entries in its corresponding WID file.

Prerequisites

Procedure

  1. In Business Central, go to Menu → Design → Projects and click a project.
  2. Select the business process that you want to add a custom task to.
  3. Select the custom task from the palette and drag it to the BPMN2 diagram.
  4. Optional: Change the custom task attributes. For example, change the data output and input from the corresponding WID file.
Note

If the WID file is not visible in your project and no Work Item Definition object is visible in the Others category of your project, you must register the custom task. For more information about registering a custom task, see Chapter 59, Registering custom tasks.

Part VII. Process engine in Red Hat Process Automation Manager

As a business process analyst or developer, your understanding of the process engine in Red Hat Process Automation Manager can help you design more effective business assets and a more scalable process management architecture. The process engine implements the Business Process Management (BPM) paradigm in Red Hat Process Automation Manager and manages and executes business assets that comprise processes. This document describes concepts and functions of the process engine to consider as you create your business process management system and process services in Red Hat Process Automation Manager.

Chapter 61. Process engine in Red Hat Process Automation Manager

The process engine implements the Business Process Management (BPM) paradigm in Red Hat Process Automation Manager. BPM is a business methodology that enables modeling, measuring, and optimizing processes within an enterprise.

In BPM, a repeatable business process is represented as a workflow diagram. The Business Process Model and Notation (BPMN) specification defines the available elements of this diagram. The process engine implements a large subset of the BPMN 2.0 specification.

With the process engine, business analysts can develop the diagram itself. Developers can implement the business logic of every element of the flow in code, making an executable business process. Users can execute the business process and interact with it as necessary. Analysts can generate metrics that reflect the efficiency of the process.

The workflow diagram consists of a number of nodes. The BPMN specification defines many kinds of nodes, including the following principal types:

  • Event: Nodes representing something happening in the process or outside of the process. Typical events are the start and the end of a process. An event can throw messages to other processes and catch such messages. Circles on the diagram represent events.
  • Activity: Nodes representing an action that must be taken (whether automatically or with user involvement). Typical activities are a task, which represents an action taken within the process, and a call to a subprocess. Rounded rectangles on the diagram represent activities.
  • Gateway: A branching or merging node. A typical gateway evaluates an expression and, depending on the result, continues to one of several execution paths. Diamond shapes on the diagram represent gateways.

When a user starts the process, a process instance is created. The process instance contains a set of data, or context, stored in process variables. The state of a process instance includes all the context data and also the current active node (or, in some cases, several active nodes).

Some of these variables can be initialized when a user starts the process. An activity can read from process variables and write to process variables. A gateway can evaluate process variables to determine the execution path.

For example, a purchase process in a shop can be a business process. The content of the user’s cart can be the initial process context. At the end of execution, the process context can contain the payment confirmation and shipment tracking details.

Optionally, you can use the BPMN data modeler in Business Central to design the model for the data in process variables.

The workflow diagram is represented in code by an XML business process definition. The logic of events, gateways, and subprocess calls is defined within the business process definition.

Some task types (for example, script tasks and the standard decision engine rule task) are implemented in the engine. For other task types, including all custom tasks, when the task must be executed, the process engine executes a call using the Work Item Handler API. Code external to the engine can implement this API, providing a flexible mechanism for implementing various tasks.

The process engine includes a number of predefined types of tasks. These types include a script task that runs user Java code, a service task that calls a Java method or a Web Service, a decision task that calls a decision engine service, and other custom tasks (for example, REST and database calls).

Another predefined type of task is a user task, which includes interaction with a user. User tasks in the process can be assigned to users and groups.

The process engine uses the KIE API to interact with other software components. You can run business processes as services on a KIE Server and interact with them using a REST implementation of the KIE API. Alternatively, you can embed business processes in your application and interact with them using KIE API Java calls. In this case, you can run the process engine in any Java environment.

Business Central includes a user interface for users executing human tasks and a form modeler for creating the web forms for human tasks. However, you can also implement a custom user interface that interacts with the process engine using the KIE API.

The process engine supports the following additional features:

  • Support for persistence of the process information using the JPA standard. Persistence preserves the state and context (data in process variables) of every process instance, so that they are not lost in case any components are restarted or taken offline for some time. You can use an SQL database engine to store the persistence information.
  • Pluggable support for transactional execution of process elements using the JTA standard. If you use a JTA transaction manager, every element of the business process starts as a transaction. If the element does not complete, the context of the process instance is restored to the state in which it was before the element started.
  • Support for custom extension code, including new node types and other process languages.
  • Support for custom listener classes that are notified about various events.
  • Support for migrating running process instances to a new version of their process definition.

The process engine can also be integrated with other independent core services:

  • The human task service can manage user tasks when human actors need to participate in the process. It is fully pluggable and the default implementation is based on the WS-HumanTask specification. The human task service manages the lifecycle of the tasks, task lists, task forms, and some more advanced features like escalation, delegation, and rule-based assignments.
  • The history log can store all information about the execution of all the processes in the process engine. While runtime persistence stores the current state of all active process instances, you need the history log to ensure access to historic information. The history log contains all current and historic states of all active and completed process instances. You can use the log to query for any information related to the execution of process instances for monitoring and analysis.

Chapter 62. Core engine API for the process engine

The process engine executes business processes. To define the processes, you create business assets, including process definitions and custom tasks.

You can use the Core Engine API to load, execute, and manage processes in the process engine.

Several levels of control are available:

  • At the lowest level, you can directly create a KIE base and a KIE session. A KIE base represents all the assets in a business process. A KIE session is an entity in the process engine that runs instances of a business process. This level provides fine-grained control, but requires explicit declaration and configuration of process instances, task handlers, event handlers, and other process engine entities in your code.
  • You can use the RuntimeManager class to manage sessions and processes. This class provides sessions for required process instances using a configurable strategy. It automatically configures the interaction between the KIE session and task services. It disposes of process engine entities that are no longer necessary, ensuring optimal use of resources. You can use a fluent API to instantiate RuntimeManager with the necessary business assets and to configure its environment.
  • You can use the Services API to manage the execution of processes. For example, the deployment service deploys business assets into the engine, forming a deployment unit. The process service runs a process from this deployment unit.

    If you want to embed the process engine in your application, the Services API is the most convenient option, because it hides the internal details of configuring and managing the engine.

  • Finally, you can deploy a KIE Server that loads business assets from KJAR files and runs processes. The KIE Server provides a REST API for loading and managing the processes. You can also use Business Central to manage a KIE Server.

    If you use a KIE Server, you do not need to use the Core Engine API. For information about deploying and managing processes on a KIE Server, see Packaging and deploying a Red Hat Process Automation Manager project.

For the full reference information for all public process engine API calls, see the Java documentation. Other API classes also exist in the code, but they are internal APIs that can be changed in later versions. Use public APIs in applications that you develop and maintain.

62.1. KIE base and KIE session

A KIE base contains a reference to all process definitions and other assets relevant for a process. The engine uses this KIE base to look up all information for the process, or for several processes, whenever necessary.

You can load assets into a KIE base from various sources, such as a class path, file system, or process repository. Creating a KIE base is a resource-heavy operation, as it involves loading and parsing assets from various sources. You can dynamically modify the KIE base to add or remove process definitions and other assets at run time.

After you create a KIE base, you can instantiate a KIE session based on this KIE base. Use this KIE session to run processes based on the definitions in the KIE base.

When you use the KIE session to start a process, a new process instance is created. This instance maintains a specific process state. Different instances in the same KIE session can use the same process definition but have different states.

Figure 62.1. KIE base and KIE session in the process engine

KnowledgeBaseAndSession

For example, if you develop an application to process sales orders, you can create one or more process definitions that determine how an order should be processed. When starting the application, you first need to create a KIE base that contains those process definitions. You can then create a session based on this KIE base. When a new sales order comes in, start a new process instance for the order. This process instance contains the state of the process for the specific sales request.

You can create many KIE sessions for the same KIE base and you can create many instances of the process within the same KIE session. Creating a KIE session, and also creating a process instance within the KIE session, uses far fewer resources than creating a KIE base. If you modify a KIE base, all the KIE sessions that use it can use the modifications automatically.

In most simple use cases, you can use a single KIE session to execute all processes. You can also use several sessions if needed. For example, if you want order processing for different customers to be completely independent, you can create a KIE session for each customer. You can also use multiple sessions for scalability reasons.

In typical applications you do not need to create a KIE base or KIE session directly. However, when you use other levels of the process engine API, you can interact with elements of the API that this level defines.

62.1.1. KIE base

The KIE base includes all process definitions and other assets that your application might need to execute a business process.

To create a KIE base, use a KieHelper instance to load processes from various resources, such as the class path or the file system, and to create a new KIE base.

The following code snippet shows how to create a KIE base consisting of only one process definition, which is loaded from the class path.

Creating a KIE base containing one process definition

  KieHelper kieHelper = new KieHelper();
  KieBase kieBase = kieHelper
    .addResource(ResourceFactory.newClassPathResource("MyProcess.bpmn"))
    .build();

The ResourceFactory class has similar methods to load resources from a file, a URL, an InputStream, a Reader, and other sources.
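
For example, a minimal sketch that loads the same process definition from the file system instead of the class path (the file path is illustrative):

  KieHelper kieHelper = new KieHelper();
  KieBase kieBase = kieHelper
    .addResource(ResourceFactory.newFileResource("/opt/processes/MyProcess.bpmn"))
    .build();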

Note

This "manual" process of creating a KIE base is simpler than other alternatives, but can make an application hard to maintain. Use other methods of creating a KIE base, such as the RuntimeManager class or the Services API, for applications that you expect to develop and maintain over long periods of time.

62.1.2. KIE session

After creating and loading the KIE base, you can create a KIE session to interact with the process engine. You can use this session to start and manage processes and to signal events.

The following code snippet creates a session based on the KIE base that you created previously and then starts a process instance, referencing the ID in the process definition.

Creating a KIE session and starting a process instance

KieSession ksession = kieBase.newKieSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess");

62.1.3. ProcessRuntime interface

The KieSession class exposes the ProcessRuntime interface, which defines all the session methods for interacting with processes, as the following definition shows.

Definition of the ProcessRuntime interface

  /**
	 * Start a new process instance.  Use the process (definition) that
	 * is referenced by the given process ID.
	 *
	 * @param processId  The ID of the process to start
	 * @return the ProcessInstance that represents the instance of the process that was started
	 */
    ProcessInstance startProcess(String processId);

    /**
	 * Start a new process instance.  Use the process (definition) that
	 * is referenced by the given process ID.  You can pass parameters
	 * to the process instance as name-value pairs, and these parameters set
	 * variables of the process instance.
   *
	 * @param processId  the ID of the process to start
   * @param parameters  the process variables to set when starting the process instance
	 * @return the ProcessInstance that represents the instance of the process that was started
     */
    ProcessInstance startProcess(String processId,
                                 Map<String, Object> parameters);

    /**
     * Signals the process engine that an event has occurred. The type parameter defines
     * the type of event and the event parameter can contain additional information
     * related to the event.  All process instances that are listening to this type
     * of (external) event will be notified.  For performance reasons, use this type of
     * event signaling only if one process instance must be able to notify
     * other process instances. For internal events within one process instance, use the
     * signalEvent method that also includes the processInstanceId of the process instance
     * in question.
     *
     * @param type the type of event
     * @param event the data associated with this event
     */
    void signalEvent(String type,
                     Object event);

    /**
     * Signals the process instance that an event has occurred. The type parameter defines
     * the type of event and the event parameter can contain additional information
     * related to the event.  All node instances inside the given process instance that
     * are listening to this type of (internal) event will be notified.  Note that the event
     * will only be processed inside the given process instance.  All other process instances
     * waiting for this type of event will not be notified.
     *
     * @param type the type of event
     * @param event the data associated with this event
     * @param processInstanceId the id of the process instance that should be signaled
     */
    void signalEvent(String type,
                     Object event,
                     long processInstanceId);

    /**
     * Returns a collection of currently active process instances.  Note that only process
     * instances that are currently loaded and active inside the process engine are returned.
     * When using persistence, it is likely not all running process instances are loaded
     * as their state is stored persistently.  It is best practice not to use this
     * method to collect information about the state of your process instances but to use
     * a history log for that purpose.
     *
     * @return a collection of process instances currently active in the session
     */
    Collection<ProcessInstance> getProcessInstances();

    /**
     * Returns the process instance with the given ID.  Note that only active process instances
     * are returned. If a process instance has been completed already, this method returns
     * null.
     *
     * @param id the ID of the process instance
     * @return the process instance with the given ID, or null if it cannot be found
     */
    ProcessInstance getProcessInstance(long processInstanceId);

    /**
     * Aborts the process instance with the given ID. If the process instance has been completed
     * (or aborted), or if the process instance cannot be found, this method will throw an
     * IllegalArgumentException.
     *
     * @param id the ID of the process instance
     */
    void abortProcessInstance(long processInstanceId);

    /**
     * Returns the WorkItemManager related to this session. This object can be used to
     * register new WorkItemHandlers or to complete (or abort) WorkItems.
     *
     * @return the WorkItemManager related to this session
     */
    WorkItemManager getWorkItemManager();
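
The following sketch illustrates how these methods are typically combined with the KIE session created earlier: it starts a process instance with input parameters and then signals an internal event to that specific instance. The process ID, variable names, and event type are illustrative.

Map<String, Object> parameters = new HashMap<>();
parameters.put("employee", "john.doe");          // illustrative process variable
parameters.put("reason", "annual evaluation");   // illustrative process variable

// Start the process with initial values for its process variables
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess", parameters);

// Signal an internal event to this process instance only
ksession.signalEvent("documentsReceived", null, processInstance.getId());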

62.1.4. Correlation Keys

When working with processes, you might need to assign a business identifier to a process instance and then use the identifier to reference the instance without storing the generated instance ID.

To provide such capabilities, the process engine uses the CorrelationKey interface, which can define CorrelationProperties. A class that implements CorrelationKey can have either a single property describing it or a multi-property set. The value of the property or a combination of values of several properties refers to a unique instance.

The KieSession class implements the CorrelationAwareProcessRuntime interface to support correlation capabilities. This interface exposes the following methods:

Methods of the CorrelationAwareProcessRuntime interface

      /**
      * Start a new process instance.  Use the process (definition) that
      * is referenced by the given process ID.  You can pass parameters
      * to the process instance (as name-value pairs), and these parameters set
      * variables of the process instance.
      *
      * @param processId  the ID of the process to start
      * @param correlationKey custom correlation key that can be used to identify the process instance
      * @param parameters  the process variables to set when starting the process instance
      * @return the ProcessInstance that represents the instance of the process that was started
      */
      ProcessInstance startProcess(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);

      /**
      * Create a new process instance (but do not yet start it).  Use the process
      * (definition) that is referenced by the given process ID.
      * You can pass parameters to the process instance (as name-value pairs),
      * and these parameters set variables of the process instance.
      * Use this method if you need a reference to the process instance before actually
      * starting it.  Otherwise, use startProcess.
      *
      * @param processId  the ID of the process to start
      * @param correlationKey custom correlation key that can be used to identify the process instance
      * @param parameters  the process variables to set when creating the process instance
      * @return the ProcessInstance that represents the instance of the process that was created (but not yet started)
      */
      ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);

      /**
      * Returns the process instance with the given correlationKey.  Note that only active process instances
      * are returned.  If a process instance has been completed already, this method will return
      * null.
      *
      * @param correlationKey the custom correlation key assigned when the process instance was created
      * @return the process instance identified by the key or null if it cannot be found
      */
      ProcessInstance getProcessInstance(CorrelationKey correlationKey);

Correlation is usually used with long-running processes. You must enable persistence if you want to store correlation information permanently.
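
For example, the following sketch starts a process instance with a business key and later retrieves the same instance by that key. It assumes the KIE session created earlier; the CorrelationKeyFactory comes from the KieInternalServices entry point, and the key value order-12345 is illustrative.

CorrelationKeyFactory keyFactory = KieInternalServices.Factory.get().newCorrelationKeyFactory();

// Build a single-property correlation key from a business identifier
CorrelationKey key = keyFactory.newCorrelationKey("order-12345");

// Start the process instance with the correlation key attached
ProcessInstance processInstance = ((CorrelationAwareProcessRuntime) ksession)
    .startProcess("com.sample.MyProcess", key, new HashMap<String, Object>());

// Later, look up the active process instance by its business key instead of its generated ID
ProcessInstance found = ((CorrelationAwareProcessRuntime) ksession).getProcessInstance(key);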

62.2. Runtime manager

The RuntimeManager class provides a layer in the process engine API that simplifies and empowers its usage. This class encapsulates and manages the KIE base and KIE session, as well as the task service that provides handlers for all tasks in the process. The KIE session and the task service within the runtime manager are already configured to work with each other and you do not need to provide such configuration. For example, you do not need to register a human task handler and to ensure that it is connected to the required service.

The runtime manager manages the KIE session according to a predefined strategy. The following strategies are available:

  • Singleton: The runtime manager maintains a single KieSession and uses it for all the requested processes.
  • Per Request: The runtime manager creates a new KieSession for every request.
  • Per Process Instance: The runtime manager maintains a mapping between process instances and KieSession instances and always provides the same KieSession whenever working with a given process instance.

Regardless of the strategy, the RuntimeManager class ensures the same capabilities in initialization and configuration of the process engine components:

  • KieSession instances are loaded with the same factories (either in memory or JPA based).
  • Work item handlers are registered on every KieSession instance (either loaded from the database or newly created).
  • Event listeners (Process, Agenda, WorkingMemory) are registered on every KIE session, whether the session is loaded from the database or newly created.
  • The task service is configured with the following required components:

    • The JTA transaction manager
    • The same entity manager factory as the one used for KieSession instances
    • The UserGroupCallback instance that can be configured in the environment

The runtime manager also enables disposing the process engine cleanly. It provides dedicated methods to dispose a RuntimeEngine instance when it is no longer needed, releasing any resources it might have acquired.

The following code shows the definition of the RuntimeManager interface:

Definition of the RuntimeManager interface

public interface RuntimeManager {

	/**
	 * Returns a <code>RuntimeEngine</code> instance that is fully initialized:
	 * <ul>
	 * 	<li>KieSession is created or loaded depending on the strategy</li>
	 * 	<li>TaskService is initialized and attached to the KIE session (through a listener)</li>
	 * 	<li>WorkItemHandlers are initialized and registered on the KIE session</li>
	 * 	<li>EventListeners (process, agenda, working memory) are initialized and added to the KIE session</li>
	 * </ul>
	 * @param context the concrete implementation of the context that is supported by given <code>RuntimeManager</code>
	 * @return instance of the <code>RuntimeEngine</code>
	 */
    RuntimeEngine getRuntimeEngine(Context<?> context);

    /**
     * Unique identifier of the <code>RuntimeManager</code>
     * @return
     */
    String getIdentifier();

    /**
     * Disposes <code>RuntimeEngine</code> and notifies all listeners about that fact.
     * This method should always be used to dispose <code>RuntimeEngine</code> that is not needed
     * anymore. <br/>
     * Do not use KieSession.dispose() when working with RuntimeManager, as it will break the internal
     * mechanisms of the manager responsible for clear and efficient disposal.<br/>
     * Disposing is not needed if <code>RuntimeEngine</code> was obtained within an active JTA transaction:
     * if the getRuntimeEngine method was invoked during an active JTA transaction, then disposing of
     * the runtime engine happens automatically on transaction completion.
     * @param runtime
     */
    void disposeRuntimeEngine(RuntimeEngine runtime);

    /**
     * Closes <code>RuntimeManager</code> and releases its resources. Call this method when
     * a runtime manager is not needed anymore. Otherwise it will still be active and operational.
     */
    void close();

}

The RuntimeManager class also provides the RuntimeEngine class, which includes methods to get access to underlying process engine components:

Definition of the RuntimeEngine interface

public interface RuntimeEngine {

	/**
	 * Returns the <code>KieSession</code> configured for this <code>RuntimeEngine</code>
	 * @return
	 */
    KieSession getKieSession();

    /**
	 * Returns the <code>TaskService</code> configured for this <code>RuntimeEngine</code>
	 * @return
	 */
    TaskService getTaskService();
}

Note

An identifier of the RuntimeManager class is used as deploymentId during runtime execution. For example, the identifier is persisted as the deploymentId of a Task when the Task is persisted. The deploymentId of a Task associates it with the RuntimeManager when the Task is completed and the process instance is resumed.

The same deploymentId is also persisted as externalId in history log tables.

If you do not specify an identifier when creating a RuntimeManager instance, a default value is applied, depending on the strategy (for example, default-per-pinstance for PerProcessInstanceRuntimeManager). That means that your application uses the same deployment of the RuntimeManager class for its entire lifecycle.

If you maintain multiple runtime managers in your application, you must specify a unique identifier for every RuntimeManager instance.

For example, the deployment service maintains multiple runtime managers and uses the GAV value of the KJAR file as an identifier. The same logic is used in Business Central and in KIE Server, because they depend on the deployment service.
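
For example, the following sketch creates two runtime managers with explicit identifiers in an embedded application. The orderEnvironment and invoiceEnvironment objects and the GAV-style identifier strings are assumptions made for illustration.

RuntimeManager orderManager = RuntimeManagerFactory.Factory.get()
        .newPerProcessInstanceRuntimeManager(orderEnvironment, "com.sample:order-project:1.0.0");

RuntimeManager invoiceManager = RuntimeManagerFactory.Factory.get()
        .newPerProcessInstanceRuntimeManager(invoiceEnvironment, "com.sample:invoice-project:1.0.0");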

Note

When you need to interact with the process engine or task service from within a handler or a listener, you can use the RuntimeManager interface to retrieve the RuntimeEngine instance for the given process instance, and then use the RuntimeEngine instance to retrieve the KieSession or TaskService instance. This approach ensures that the proper state of the engine, managed according to the selected strategy, is preserved.

62.2.1. Runtime manager strategies

The RuntimeManager class supports the following strategies for managing KIE sessions.

Singleton strategy

This strategy instructs the runtime manager to maintain a single RuntimeEngine instance (and in turn single KieSession and TaskService instances). Access to the runtime engine is synchronized and, therefore, thread safe, although it comes with a performance penalty due to synchronization.

Use this strategy for simple use cases.

This strategy has the following characteristics:

  • It has a small memory footprint, with single instances of the runtime engine and the task service.
  • It is simple and compact in design and usage.
  • It is a good fit for low-to-medium load on the process engine because of synchronized access.
  • In this strategy, because of the single KieSession instance, all state objects (such as facts) are directly visible to all process instances and vice versa.
  • The strategy is not contextual. When you retrieve instances of RuntimeEngine from a singleton RuntimeManager, you do not need to take the Context instance into account. Usually, you can use EmptyContext.get() as the context, although a null argument is acceptable as well.
  • In this strategy, the runtime manager keeps track of the ID of the KieSession, so that the same session remains in use after a RuntimeManager restart. The ID is stored as a serialized file in a temporary location in the file system that, depending on the environment, can be one of the following directories:

    • The value of the jbpm.data.dir system property
    • The value of the jboss.server.data.dir system property
    • The value of the java.io.tmpdir system property
Warning

A combination of the Singleton strategy and the EJB Timer Scheduler might raise Hibernate issues under load. Do not use this combination in production applications. The EJB Timer Scheduler is the default scheduler in the KIE Server.

Per request strategy

This strategy instructs the runtime manager to provide a new instance of RuntimeEngine for every request. One or more invocations of the process engine within a single transaction are considered a single request.

The same instance of RuntimeEngine must be used within a single transaction to ensure correctness of state. Otherwise, an operation completed in one call would not be visible in the next call.

This strategy is stateless, as process state is preserved only within the request. When a request is completed, the RuntimeEngine instance is permanently destroyed. If persistence is used, information related to the KIE session is removed from the persistence database as well.

This strategy has the following characteristics:

  • It provides completely isolated process engine and task service operations for every request.
  • It is completely stateless, because facts are stored only for the duration of the request.
  • It is a good fit for high-load, stateless processes, where no facts or timers must be preserved between requests.
  • In this strategy, the KIE session is only available during the life of a request and is destroyed at the end of the request.
  • The strategy is not contextual. When you retrieve instances of RuntimeEngine from a per-request RuntimeManager, you do not need to take the Context instance into account. Usually, you can use EmptyContext.get() as the context, although a null argument is acceptable as well.
Per process instance strategy

This strategy instructs RuntimeManager to maintain a strict relationship between a KIE session and a process instance. Each KieSession is available as long as the ProcessInstance to which it belongs is active.

This strategy provides the most flexible approach for using advanced capabilities of the process engine, such as rule evaluation and isolation between process instances. It maximizes performance and reduces potential bottlenecks introduced by synchronization. At the same time, unlike the per-request strategy, it reduces the number of KIE sessions to the actual number of process instances, rather than the total number of requests.

This strategy has the following characteristics:

  • It provides isolation for every process instance.
  • It maintains a strict relationship between KieSession and ProcessInstance to ensure that it always delivers the same KieSession for a given ProcessInstance.
  • It merges the lifecycle of KieSession with ProcessInstance, and both are disposed when the process instance completes or aborts.
  • It enables maintenance of data, such as facts and timers, in the scope of the process instance. Only the process instance has access to the data.
  • It introduces some overhead because of the need to look up and load the KieSession for the process instance.
  • It validates every usage of a KieSession so it cannot be used for other process instances. An exception is thrown if another process instance uses the same KieSession.
  • The strategy is contextual and accepts the following context instances:

    • EmptyContext or null: Used when starting a process instance because no process instance ID is available yet
    • ProcessInstanceIdContext: Used after the process instance is created
    • CorrelationKeyContext: Used as an alternative to ProcessInstanceIdContext to use a custom (business) key instead of the process instance ID
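
For example, with the per-process-instance strategy, obtaining runtime engines for a new process instance and for an existing one might look like the following sketch. It assumes that manager is a RuntimeManager created with the per-process-instance strategy; the process ID and event type are illustrative.

// Starting a new process instance: no process instance ID exists yet, so use an empty context
RuntimeEngine runtimeEngine = manager.getRuntimeEngine(ProcessInstanceIdContext.get());
ProcessInstance processInstance = runtimeEngine.getKieSession().startProcess("com.sample.MyProcess");
manager.disposeRuntimeEngine(runtimeEngine);

// Working with the existing instance later: look up its dedicated KIE session by process instance ID
RuntimeEngine existingEngine = manager.getRuntimeEngine(ProcessInstanceIdContext.get(processInstance.getId()));
existingEngine.getKieSession().signalEvent("documentsReceived", null, processInstance.getId());
manager.disposeRuntimeEngine(existingEngine);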

62.2.2. Typical usage scenario for the runtime manager

The typical usage scenario for the runtime manager consists of the following stages:

  • At application startup time, complete the following stage:

    • Build a RuntimeManager instance and keep it for the entire lifetime of the application, as it is thread-safe and can be accessed concurrently.
  • At request time, complete the following stages:

    • Get RuntimeEngine from the RuntimeManager, using the proper context instance as determined by the strategy that you configured for the RuntimeManager class.
    • Get the KieSession and TaskService objects from the RuntimeEngine.
    • Use the KieSession and TaskService objects for operations such as startProcess or completeTask.
    • After completing processing, dispose RuntimeEngine using the RuntimeManager.disposeRuntimeEngine method.
  • At application shutdown time, complete the following stage:

    • Close the RuntimeManager instance.
Note

When RuntimeEngine is obtained from RuntimeManager within an active JTA transaction, you do not need to dispose RuntimeEngine at the end, as RuntimeManager automatically disposes the RuntimeEngine on transaction completion (regardless of the completion status: commit or rollback).

The following example shows how you can build a RuntimeManager instance and get a RuntimeEngine instance (that encapsulates KieSession and TaskService classes) from it:

Building a RuntimeManager instance and then getting RuntimeEngine and KieSession

    // First, configure the environment to be used by RuntimeManager
    RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultInMemoryBuilder()
    .addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
    .get();

    // Next, create the RuntimeManager - in this case the singleton strategy is chosen
    RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);

    // Then get RuntimeEngine from the runtime manager, using an empty context because singleton does not keep track
    // of runtime engine as there is only one
    RuntimeEngine runtimeEngine = manager.getRuntimeEngine(EmptyContext.get());

    // Get the KieSession from the RuntimeEngine - already initialized with all handlers, listeners, and other requirements
    // configured on the environment
    KieSession ksession = runtimeEngine.getKieSession();

    // Add invocations of the process engine here,
    // for example, ksession.startProcess(processId);

    // Finally, dispose the runtime engine
    manager.disposeRuntimeEngine(runtimeEngine);

This example provides the simplest, or minimal, way of using RuntimeManager and RuntimeEngine classes. It has the following characteristics:

  • The KieSession instance is created in memory, using the newDefaultInMemoryBuilder builder.
  • A single process, which is added as an asset, is available for execution.
  • The TaskService class is configured and attached to the KieSession instance through the LocalHTWorkItemHandler class to support user task capabilities within processes.

62.2.3. Runtime environment configuration object

The RuntimeManager class encapsulates internal process engine complexity, such as creating, disposing, and registering handlers.

It also provides fine-grained control over process engine configuration. To set this configuration, you must create a RuntimeEnvironment object and then use it to create the RuntimeManager object.

The following definition shows the methods available in the RuntimeEnvironment interface:

Methods in the RuntimeEnvironment interface

  public interface RuntimeEnvironment {

	/**
	 * Returns <code>KieBase</code> that is to be used by the manager
	 * @return
	 */
    KieBase getKieBase();

    /**
     * KieSession environment that is to be used to create instances of <code>KieSession</code>
     * @return
     */
    Environment getEnvironment();

    /**
     * KieSession configuration that is to be used to create instances of <code>KieSession</code>
     * @return
     */
    KieSessionConfiguration getConfiguration();

    /**
     * Indicates if persistence is to be used for the KieSession instances
     * @return
     */
    boolean usePersistence();

    /**
     * Delivers a concrete implementation of <code>RegisterableItemsFactory</code> to obtain handlers and listeners
     * that is to be registered on instances of <code>KieSession</code>
     * @return
     */
    RegisterableItemsFactory getRegisterableItemsFactory();

    /**
     * Delivers a concrete implementation of <code>UserGroupCallback</code> that is to be registered on instances
     * of <code>TaskService</code> for managing users and groups.
     * @return
     */
    UserGroupCallback getUserGroupCallback();

    /**
     * Delivers a custom class loader that is to be used by the process engine and task service instances
     * @return
     */
    ClassLoader getClassLoader();

    /**
     * Closes the environment, permitting closing of all dependent components such as ksession factories
     */
    void close();

  }

62.2.4. Runtime environment builder

To create an instance of RuntimeEnvironment that contains the required data, use the RuntimeEnvironmentBuilder class. This class provides a fluent API to configure a RuntimeEnvironment instance with predefined settings.

The following definition shows the methods in the RuntimeEnvironmentBuilder interface:

Methods in the RuntimeEnvironmentBuilder interface

public interface RuntimeEnvironmentBuilder {

	public RuntimeEnvironmentBuilder persistence(boolean persistenceEnabled);

	public RuntimeEnvironmentBuilder entityManagerFactory(Object emf);

	public RuntimeEnvironmentBuilder addAsset(Resource asset, ResourceType type);

	public RuntimeEnvironmentBuilder addEnvironmentEntry(String name, Object value);

	public RuntimeEnvironmentBuilder addConfiguration(String name, String value);

	public RuntimeEnvironmentBuilder knowledgeBase(KieBase kbase);

	public RuntimeEnvironmentBuilder userGroupCallback(UserGroupCallback callback);

	public RuntimeEnvironmentBuilder registerableItemsFactory(RegisterableItemsFactory factory);

	public RuntimeEnvironment get();

	public RuntimeEnvironmentBuilder classLoader(ClassLoader cl);

	public RuntimeEnvironmentBuilder schedulerService(Object globalScheduler);

}

Use the RuntimeEnvironmentBuilderFactory class to obtain instances of RuntimeEnvironmentBuilder. Along with empty instances with no settings, you can get builders with several preconfigured sets of configuration options for the runtime manager.
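
For example, the following sketch obtains a default (persistence-enabled) builder and configures it. The emf and userGroupCallback objects are assumed to be created elsewhere in the application, and the process file name is illustrative.

RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
        .newDefaultBuilder()
        .entityManagerFactory(emf)
        .userGroupCallback(userGroupCallback)
        .addAsset(ResourceFactory.newClassPathResource("com/sample/MyProcess.bpmn2"), ResourceType.BPMN2)
        .get();

RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);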

The following definition shows the methods in the RuntimeEnvironmentBuilderFactory interface:

Methods in the RuntimeEnvironmentBuilderFactory interface

public interface RuntimeEnvironmentBuilderFactory {

	/**
     * Provides a completely empty <code>RuntimeEnvironmentBuilder</code> instance to manually
     * set all required components instead of relying on any defaults.
     * @return new instance of <code>RuntimeEnvironmentBuilder</code>
     */
    public RuntimeEnvironmentBuilder newEmptyBuilder();

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder();

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * but does not have persistence for the process engine configured so it will only store process instances in memory
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultInMemoryBuilder();

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This method is tailored to work smoothly with KJAR files
     * @param groupId group id of kjar
     * @param artifactId artifact id of kjar
     * @param version version number of kjar